What you’re describing is an issue with all of social media. While your concerns are valid, I don’t see your arguments as a privacy issue. I honestly prefer post and comment history being transparent and accessible. It’s much like Reddit, and that format fits an open-forum style of platform much better.
Don’t post private information and it’s a non-issue.
Also, can’t you just delete posts and comments like on Reddit?
Nothing ever dies on the Internet. With the federated nature of Lemmy, deletes may not sync across instances, especially if defederation happens.
But why? Is privacy being compromised in favour of visibility and mass adoption of whatever fediverse client people are using? I don’t understand this, especially since some of the strongest advocates for privacy are right here.
A lot of Lemmy adopters joined with rose-tinted glasses and brought a lot of good ideas: getting data out of the hands of big companies, making it easy to access (as Reddit locked down its APIs), and so on. That’s all good, but a subset of them believe “not officially belonging to one company” is good enough. As for how your data is handled online, a subset believe nothing can be improved, and another subset believe it shouldn’t be improved because your data shouldn’t belong to you at all.
And Lemmy is made up of all sorts, so there’s overlap between Reddit refugees and diehard fediverse fans. That friction is mostly implicit here, but it’s a lot more visible on sites like Mastodon, where similar privacy discussions have been happening.
I’ve not seen any of these arguments. Though it may all be downvoted to hell and back.
My main gripe with adding privacy features to Lemmy is that the whole point of Lemmy is that all data is already publicly available, and for Lemmy to keep working the way it does, it will need to remain that way. Because of that, there’s nothing that can be done to stop bad actors from setting up an instance and selling all the data they collect.
At least in the EU (and to a lesser extent the UK), no major corporation would be able to get away with selling that data, so the man-hours spent on privacy settings would be wasted.
It doesn’t necessarily need to remain that way. For example, we should have the option to make our profiles private. We should also be able to create pseudonyms for the content we submit. The content would still be federated, but not necessarily linked to a single user ID.
The admin of Blahaj is openly interested in exposing trans people’s alt accounts and outing them on their mains. And somehow it’s the biggest trans instance. We need a community and admin reaction in favour of defederating people who do that.
The only privacy practice I can encourage on any social media site is this: don’t share private stuff about yourself, and never link to your account from other accounts.
That is part of the problem, though. Proper privacy allows you to express what you want without self-censorship. The issue is not “don’t speak about X”, but rather “speak about it and feel comfortable that you can do so in a safe environment”. I fully agree about the account linking, though.
Those didn’t completely break federation; they just had some issues with a few services besides Lemmy. They’re addressed now, but federation compatibility will always be an ongoing task as new services get added and existing ones change their ActivityPub responses.
As far as I’m aware, the most widely accepted standard for responsible disclosure is 90 days. This is a little different, since that window normally applies between businesses and includes the time needed to develop a fix; it’s not typically aimed at federated or self-hosted applications rolling out an already-created patch. On the one hand, granting instance admins that extra time to upgrade seems reasonable. On the other, wouldn’t anyone wanting to exploit a vulnerability be able to reverse-engineer it pretty easily by reading the git history?
The 90-day disclosure window you’re referencing, which I believe was primarily popularized by Google’s Project Zero process, is the time from when someone discovers and reports a vulnerability to the time the reporter will publish it themselves if the vendor hasn’t disclosed it by then.
The disclosure by the vendor to their users (people running Lemmy instances in this case) is a completely separate topic, and, depending on the context, tends to happen quite differently from vendor to vendor.
As an example, GitLab publishes security advisories the day the fixed version is released, e.g. …gitlab.com/…/critical-security-release-gitlab-16….
Some vendors will choose to release a new version, wait a few weeks or so, then publish a security advisory about issues addressed in the previous release. One company I’ve frequently seen this with is Atlassian. This is also what happened with Lemmy in this case.
As Lemmy is an open source project, anyone could review all commits for potential security impact and determine whether something may be exploitable. The same applies to any other open source project, regardless of whether a commit is pushed some time between releases or just before one. If someone is determined enough and spends the time, they’ll be able to find vulnerabilities in various projects before an advisory is published.
The “responsible” alternative would have been to publish an advisory at the time it was privately disclosed to the admins of larger instances, which was right around the Christmas holidays, when many people would already be preoccupied with other things in their lives.
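To illustrate how low the bar for that kind of commit review is, here’s a minimal sketch (the release tags are hypothetical; run it inside a clone of whatever project you want to audit):

```python
import subprocess

# Hypothetical release tags; this lists every commit between the two releases along
# with the files each one touched, which is all a determined reader needs to start
# hunting for quietly fixed bugs.
result = subprocess.run(
    ["git", "log", "--stat", "v0.19.1..v0.19.2"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```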
Just to get it out there… I checked this out about a year ago. It’s not completely open source. The project consists of many executables and “pre-compiled dependencies” that don’t appear to have matching checksums, which may indicate modifications of some sort. It looks like a great tool, but I’m extremely skeptical of what’s going on under the hood.
Hopefully they do truly open-source it and prove me wrong; I’d love to give it a try some day.
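For anyone who wants to repeat that kind of check, a minimal sketch (the file paths are placeholders; the idea is to hash a vendored pre-compiled file and the official upstream release of the same version and see whether they match):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the dependency bundled with the project vs. the official upstream build.
vendored = sha256_of("project/deps/libfoo.so")
upstream = sha256_of("downloads/libfoo-1.2.3/libfoo.so")
print("match" if vendored == upstream else "mismatch:\n  " + vendored + "\n  " + upstream)
```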
This looks interesting, but I don’t understand what it’s for. I read through the readme, but came out none the wiser. What exactly is a compose sequence?
A compose key (sometimes called multi key) is a key on a computer keyboard that indicates that the following (usually 2 or more) keystrokes trigger the insertion of an alternate character, typically a precomposed character or a symbol.
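As a concrete illustration, sequences in the standard X11 .XCompose file are declared like this (this particular tool’s own configuration syntax may differ; the lines below just show the conventional notation):

```
# Press the Compose key, then the two listed characters, to insert the quoted result.
<Multi_key> <quotedbl> <e>    : "ë" ediaeresis
<Multi_key> <apostrophe> <e>  : "é" eacute
```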
It’s a method of combining several keystrokes to produce a special character that isn’t on the keyboard. For example, “ followed by e produces ë. This tool allows you to configure those combinations.
But thanks for the feedback. I’ll update the readme to add some more context.
This community is full of people who simply “don’t like certain things”. They may say “it’s overkill”, disregarding the fact that it solves their use case perfectly. Or it could be written in a language they don’t like. Or maybe they heard somebody else complain about it on a forum once and now think it’s bad.
I think Flatpaks are good in principle, but the performance penalty of containerized software is much more noticeable when you’re not on a good CPU. So containers don’t “solve” my use case.
Yeah, it’s also the same group of people who are always complaining about how much RAM a desktop environment or app uses, where the app in question is whichever one they happen to be using right now.
I did it with blob storage, and it ended up being much cleaner and cheaper. You’ll need to toy with it a bit, but doing it from scratch will be a lot easier than the migration I had to do. You’ll easily eat up 100+ GB in pictures, which is a fair chunk of money if it’s sitting on a cloud VM’s drive. Object storage is pennies.
Yup yup! I’ve got it uploading objects; the issue seems to be with fetching them. Either the hash is mismatched or it’s not correctly trying to grab from the sled repo, so I get a 500 error in the store response. Not really sure how to fix it.
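One way to narrow that down is to fetch a failing object directly from the bucket and hash it, bypassing the application entirely; a minimal sketch, assuming an S3-compatible store (the endpoint, credentials, bucket, and key below are all placeholders):

```python
import hashlib
import boto3  # third-party S3 client: pip install boto3

# Endpoint, credentials, bucket, and key are placeholders for your own setup.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Fetch one of the objects the app fails to serve and hash it, to see whether the
# store returns the expected bytes independently of the application's own lookup.
obj = s3.get_object(Bucket="my-bucket", Key="some/object/key")
body = obj["Body"].read()
print(len(body), hashlib.sha256(body).hexdigest())
```

If the direct fetch returns the expected bytes, the problem is more likely in the app’s own repo/store lookup than in the object storage itself.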