Yup: either the official build through an Ubuntu/Debian container, or mess up your local system with the openSUSE repo, or just use the Flatpak, which just works.
This is super helpful; I may post this to infosec.exchange. Flathub makes it so much more difficult to find the cause of what looks like a real breach. I don’t use Flathub for security reasons, so I don’t know if you can even isolate the PID. Anyone know?
I don’t want you to have to spend a lot of time troubleshooting over the web, but if you see anything that stands out as “wow, that shouldn’t be there/running” when you run these commands, come back to us:
I advise you to stop using Signal Desktop immediately: they keep the database key in plaintext. This was reported over 5 years ago and still isn’t fixed. Frankly, I find that pretty pathetic. Making it safer could be as simple as encrypting such files with something like age, and perhaps regenerating the keys frequently (yes, I know full-disk encryption is a viable mitigation against unwanted physical access). But instead, they’d rather pursue security by network effect, adding shiny UX features instead of fixing infrastructural stuff: improving trust through decentralization, not requiring phone numbers to join, or supporting an app passphrase (which Molly offers, along with regular wiping of RAM data, which makes attacks like cold boot or memory corruption harder).
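A minimal sketch of that mitigation, assuming the age CLI is installed. The key-file path is an assumption based on the stock .deb build, so adjust for your packaging:

```typescript
// Hypothetical wrapper: seal Signal Desktop's plaintext key file behind an
// age passphrase. You would decrypt it back into place before launching
// Signal and re-encrypt it afterwards.
import { execFileSync } from "node:child_process";
import { homedir } from "node:os";
import { join } from "node:path";

// Assumed location for the stock Linux build; Flatpak installs keep their
// config under ~/.var/app/org.signal.Signal instead.
const keyFile = join(homedir(), ".config", "Signal", "config.json");

// `age -p` prompts for a passphrase and writes an encrypted copy; stdio is
// inherited so the interactive prompt reaches your terminal.
execFileSync("age", ["-p", "-o", `${keyFile}.age`, keyFile], { stdio: "inherit" });
```

Decrypting is the reverse (`age -d -o config.json config.json.age`), and actually regenerating the key would also mean re-keying the SQLCipher database, which only Signal itself could do cleanly.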
Maybe try setting up a Matrix bridge, if you feel confident you can secure it properly. On one hand it increases the attack surface (use only servers and bridges with end-to-bridge encryption), but what’s an attack surface on software that’s already this ridiculously compromised? You could also try an alternative client such as Flare. YMMV, though: the last time I used it, it was quite rough around the edges, but I’m happy to see it’s actively maintained, so it might be worth checking out.
Also, no, Flatpak doesn’t fix this issue. Yes, it provides some isolation, which can be tightened further with Flatseal and other defense-in-depth methods, but unless you’re willing to face the trade-offs of Qubes, you won’t compartmentalize your entire system. The key file in question lives in your home directory (~/.config/Signal/config.json in the stock build), readable by anything running as your user. I’m not denying that userland applications have vulnerabilities, but the most common attack vector is the core system or networked software, thanks to its wide reach, often massive codebases, and use of unsafe languages like C, and that doesn’t ship, and never will ship, via Flatpak.
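To make that concrete, here’s how little it takes for any process running as your user to lift the key; the path and the `key` field name are assumptions based on write-ups of the original report:

```typescript
// Any same-user process can read the SQLCipher key in a few lines; no
// sandbox escape needed if the process was never sandboxed to begin with.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const cfg = JSON.parse(
  readFileSync(join(homedir(), ".config", "Signal", "config.json"), "utf8"),
);
console.log(cfg.key); // the plaintext database key described above
```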
The most obvious way this is exploitable is directory traversal, but not only that: just look up “Electron $VULNERABILITY”, be it CSRF, XSS, or RCE. Sandbox escape is much easier with this crap than with any major browser, since contextIsolation is often intentionally disabled so apps can reach Node.js primitives directly instead of using Electron’s safer replacements. By the way, Signal Desktop is also an Electron app.
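For reference, a sketch of the Electron misconfiguration being described; the option names are Electron’s real webPreferences flags, while the window setup around them is illustrative:

```typescript
import { app, BrowserWindow } from "electron";
import { join } from "node:path";

app.whenReady().then(() => {
  // The risky pattern: page scripts share a JS context with Node.js, so any
  // XSS in rendered content can reach require("fs"), child_process, etc.
  new BrowserWindow({
    webPreferences: {
      contextIsolation: false, // page content shares the preload's JS world
      nodeIntegration: true,   // Node primitives exposed to page scripts
    },
  });

  // The hardened direction Electron's defaults have moved in: isolated
  // contexts, no Node in the renderer, the Chromium sandbox enabled, and a
  // preload script (hypothetical file here) exposing a narrow API via
  // contextBridge instead.
  new BrowserWindow({
    webPreferences: {
      contextIsolation: true,
      nodeIntegration: false,
      sandbox: true,
      preload: join(__dirname, "preload.js"),
    },
  });
});
```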
Also, there’s Signal’s centralization, the sussy shenanigans with MobileCoin, their not updating the server repo for over a year (they did eventually resume publishing, iirc, but it was still very detrimental to trust, especially since rewriting git history is ridiculously easy), and their dependence on proprietary libraries and network services (for the libraries there are thankfully at least a couple of forks without such dependencies). Plus, most of their servers that aren’t CDN are located in glowieland…
The huge red flag to me is that Signal is no longer decried as the devil by Western intelligence.
Frank Figliuzzi (former FBI counterintelligence) and Chuck Rosenberg (former DEA administrator) used to rail on about all the dangers posed by Signal, but I haven’t heard an unkind word in a couple of years now.
French authorities consider it a “terrorist app”. Louis Rossmann made a video about it. It came up in some court case, but at this point I don’t remember whether it was a local court or a higher one, and frankly I don’t care enough to check.
We each make a choice, according to our level of comfort with privacy or the lack thereof, about how we conduct ourselves with the solutions we use and the rituals we observe.
Remember, privacy can never be enforced or guaranteed, only encouraged. Best practices, as available, as it were.
No worries, it seems like you understand perfectly - I was just reflecting on the downvotes above.
I like it here because the people often seem real, and the voting generally seems (to me, anyway) to follow more of a meritocratic pattern than whatever the fuck has been going on at the other place for the last ten or more years.
We should probably try to really understand these differences so we might get better at designing communities that are actually sustainable. Maybe I am just getting old - I’m tired of starting over, I’m tired of watching great communities self-destruct.
Likely because, while SimpleX looks great and is very promising, it doesn’t add much to the conversation here. Signal is primarily a replacement for SMS/MMS, which means people generally want their contacts readily available and discoverable to minimize the friction of securely messaging friends and family. Additionally, it’s dangerous to recommend a service that hasn’t been audited or proven itself secure over time.
A text engine trained on publicly available text may contain snippets of that text. Which is publicly available. Which is how the engine was trained on it in the first place.
And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?
I consented to my post being federated and displayed on Lemmy.
Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?
The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.
Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both irrevocable and valid for any act for the rest of time.
While you make a valid point here, mine was simply that once something is out there, it’s nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn’t have a say. Even though you initially consented, you can’t guarantee that any site will fulfill a request to delete.
Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built on properly licensed works. But there’s something really tricky about these AI systems: the training data isn’t discrete once the model is built. You can’t just remove bits and pieces; the data is abstracted. The company would have to (and probably should have to) build a whole new model with only properly licensed works, and they’d have to rebuild it every time a license agreement changed.
That technological design makes it all the more difficult, both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got its information, but it isn’t simple or easy. And it’s even more difficult with visual works.
There’s an opportunity for the industry to legitimize itself here by creating a method to manage data within a model, but they won’t do it without an incentive like millions of dollars in copyright lawsuits.
We’re entitled to a reasonable amount of privacy, such as locks on our doors and curtains on our windows; why shouldn’t reasonable privacy also apply to our lives online?
I find them a pain to use, and I only have one out of social pressure; privacy or not, I’m constantly confused about why they’re so popular.
I just use a throwaway account and have a rule of not putting in any data that I don’t want read - which is barely anything anyway, because I do all my computing on my Linux laptop. I figure that if they’re collecting location data and recording me, they’re just associating it with “random guy x”, because I’ve never given it anything else. I should look into one of the de-Googled Android distributions, but I have so little interest and energy for anything to do with it; even if it could be made totally private, I would still rarely use it.
A lot of folks here are making the “nothing to hide? Great, show me your browsing history” type of argument.
I think this isn’t really arguing in good faith. There’s a big difference between a personal friend knowing something about you, and a faceless algorithm knowing something about you. The two cases are different; it’s fair to argue about how one is better or worse, but they are different.
I usually ask them to hand me their phone while it’s unlocked, and that really makes some people think. It’s funny, because at the same time I have so little to hide that the only reason I have a password on my phone is that it makes stealing it harder. But I’m not gonna hand my data to some random company just to watch braindead 30-second videos.
I guess they think they have nothing to hide, because they don’t know, or don’t care about, how their own information can be used against them.
Because it doesn’t happen in an obviously invasive manner, they don’t think it’s a big deal. It’s harder to connect an abstract concept to actual value.