When you type a message and send it to your counterpart, WhatsApp says it encrypts it and the recipient decrypts it on their side with WhatsApp. However, WhatsApp is closed source. That means you trust WhatsApp to do what it says.
It’s like going to a contractor, telling them your message, and handing them a key. The contractor says they’ll deliver it to the other party in such a way that nobody else will be able to read that message. You can ask them to provide the tools they use, explain how they do it, and show you how it’s done, but they say “no can do, trade secret”. Do you trust them?
Alright, let’s say you do trust them; they really do make the message unreadable to anybody but the other party. But every time you want to send a message, you have to go to their building, write down the message on a notepad, and then hand it and the key to the messenger. Suppose you told them “Just to be sure, I’d like to verify that nobody else is here possibly looking at the message while I write, nor reading it when you go into the backroom to render it unreadable” and asked “Can I check for other people here?”, to which they respond “no can do, trade secret”. Do you trust them?
Alright, alright, so you still trust them. They won’t let you check anything, but you still trust them. The messenger is employed by the one and only Sauron Inc. The owner has been caught lying about stuff before, but you trust them. No problem.
Let’s say the messenger says “hey, you know, all the communications you have when you go into the small room there, we can make copies for you! If the messages were ever misplaced, or this building burned down, or anything, you could always have the communication history”. You find it a great idea! Wow, it’s so convenient. They even suggest putting copies in a building in another city, and that building is owned by Darth Vader Inc. You’re ecstatic! To get the process started, WhatsApp walks into your room with a bunch of blank papers and a chest, asks you to hand over your key, and closes the door behind them. You are escorted out of the building and wait for the process to be over.
A few months later, the city is bombarded by Megatron. The WhatsApp building is destroyed and your communications are gone! The key you had for the messenger to render your communications unreadable? Gone too! Well, luckily you can just go to another WhatsApp building. You enter, say your name, fill in your details, and you are escorted to a room that looks just like the one in the building Megatron destroyed!
The elation is great! … until you notice that all your messages are readable. Not only that, but the key WhatsApp uses to make them unreadable is sitting there on the desk, pristine and undamaged as it ever was.
Wait a moment… how did the unreadable messages and the key get restored? What exactly did Darth Vader Inc. get from WhatsApp?
Must just be a coincidence, right? You probably had the key in your pocket the whole time and gave it to WhatsApp while you were at the reception filling in your contact details. Your trust is unwavering, the security unrattled, and your communication unscathed.
You are right, we don’t and can’t know if any of what Meta says is true, but at least on the surface it seems to check out. If they are stealing your private key and unlocking all your chats in secret, then they are doing a bloody good job, since no one has leaked anything yet.
Just to clear things up a bit: in your analogy you don’t hand the courier both the chest and the key. The chest has a special keypad that accepts two keys, one is your key, the other is the recipient’s key. What you do is lock the chest with your key and then give it to the courier, who delivers the chest to the other party, who then opens it with their key. In theory the courier never has access to either key.
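To make the two-key chest a bit more concrete, here is a minimal sketch using PyNaCl’s `Box` (my own illustration, not WhatsApp’s actual code; WhatsApp uses the Signal protocol, which is considerably more elaborate). The only point is that the courier relays a ciphertext it cannot open without one of the private keys:

```python
# Minimal sketch of the "two-key chest" idea using PyNaCl's Box construction.
# NOT WhatsApp's actual protocol (that is the Signal protocol with X3DH and a
# double ratchet); it only shows that the courier/server relays ciphertext it
# cannot open without one of the private keys.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are ever shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# Sender "locks the chest": their private key + the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet me at noon")

# The courier only ever sees `ciphertext` and cannot decrypt it.

# Recipient "opens the chest": their private key + the sender's public key.
receiving_box = Box(recipient_key, sender_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet me at noon"
```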
Now, the issues are that you are indeed writing your message from within the WhatsApp building and you can never know whether there are cameras watching you. You also cannot know whether WhatsApp has made a copy of your key, or of the recipient’s key, without your knowledge.
As for how you can recover all your chat history even after you destroy your phone, it’s quite easy and WhatsApp doesn’t need to know anything in particular. The functionality allows you to make a backup and store it on Google Drive. That backup gets encrypted with your password, and it’s probably the most secure thing of all, if nothing else because Meta would gain nothing from the backup having poor security (they would already have all the data if they wanted it), while it would only make them lose face, plus it would allow anyone else to gain access to all ~~your ~~their data. After you restore the backup on a new device, a new key+padlock pair gets created and the lock gets shared with all your contacts (who will see the yellow box telling them your padlock has changed).
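For what it’s worth, the general principle behind a password-protected backup fits in a few lines. This is a generic sketch, not WhatsApp’s actual implementation (as I understand it, WhatsApp escrows a random backup key behind your password rather than deriving the key directly): derive a key from the password, encrypt, and let the storage host see only ciphertext.

```python
# Rough illustration of password-protected backups: derive a key from the
# password, encrypt the backup, upload only ciphertext. WhatsApp's real scheme
# differs (it escrows a random backup key behind your password); this is just
# the basic idea that the cloud host never sees plaintext or the password.
import os, base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.fernet import Fernet

def encrypt_backup(backup_bytes: bytes, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
    # Google Drive (or any host) only ever stores salt + ciphertext.
    return salt, Fernet(key).encrypt(backup_bytes)
```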
I’m not claiming it doesn’t have privacy issues, mind you, I’m just saying that you can’t be sure either way, unfortunately. Still, better than Telegram, which doesn’t even end-to-end encrypt most of your chats by default.
Maybe that’s a new feature? Does WhatsApp require a password when backing up now? I haven’t used it in a few years, but back when I had it, the backup to Google didn’t require anything besides your phone number and access to the Google Drive on your account; it was only retrievable from WhatsApp and not visible in the Google Drive interface or API.
They will if I don’t sound paranoid and can give rational answers backed up with real articles that aren’t conspiracy sites. Much of my family are teachers, everyone has at least one university degree, and is capable of rational thought and critical thinking. They just don’t see a reason to switch. I need to put forward a reason that is worth their time.
How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?
It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.
In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.
The contents of the chat messages are e2e encrypted, so Meta can’t see what you are sending.
Even if we assume correct e2ee is used (which we have no way of knowing), Meta can still see what you are sending and receiving, because they control the endpoints. It’s their app, after all.
Even if they do, you can’t know whether they can access the encryption keys. It’s all just layers of “but this, but that” and at the very bottom a layer of “trust me, bro”.
There’s an appealing notion to me that an evil upon an evil sometimes weighs out closer to the good, as a form of karmic retribution that can play out beneficially.
I’m glad we live in a time where something so groundbreaking and revolutionary is set to become freely accessible to all. Just gotta regulate the regulators so everyone gets a fair shake when all is said and done
Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.
And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?
I consented to my post being federated and displayed on Lemmy.
Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?
The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.
Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both unrevokable and valid for any act for the rest of time.
While you make a valid point here, mine was simply that once something is out there, it’s nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn’t have a say. Even though you initially consented, you can’t guarantee that any site will fulfill a request to delete.
Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built upon properly licensed works. But there’s something really tricky about these AI systems. The training data isn’t discrete once the model is built. You can’t just remove bits and pieces. The data is abstracted. The company would have to (and probably should have to) build a whole new model with only properly licensed works. And they’d have to rebuild it every time a license agreement changed.
That technological design makes it all the more difficult, both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got its information, but it isn’t simple or easy. And it’s even more difficult with visual works.
There’s an opportunity for the industry to legitimize here by creating a method to manage data within a model but they won’t do it without incentive like millions of dollars in copyright lawsuits.
While it should have been opt-in, it is not that dramatic. The server owner can see what is played anyway. And since the primary use case is a home-and-friends setup, it is vastly different from a Netflix-scale privacy break.
Are you saying that this information isn’t collected by Plex for a use case that doesn’t obviously require it? Because if it is the case, then it’s a big fucking deal.
Yes, a server owner can see what is played. But this is about sending friends email summaries of what I am watching on my own server, even if that friend is not invited to my particular server, and even for libraries that I haven’t shared with anyone.
It doesn’t even matter if I’m embarrassed by what it sends. That information is private. Period.
Actually, compared to most of the image-generation stuff, which often generates very recognizable images once you develop an eye for it, LLMs seem to have the most promise to actually become useful beyond the toy level.
Yeah, I use ChatGPT to help me write code for Google Apps Script, and as long as you don’t rely on it super heavily and/or know how to read and fix the code, it’s a great tool for saving time, especially when you’re new to coding like me.
Model collapse is likely to kill them in the medium term future. We’re rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don’t fully understand, this kind of training data poisons the model.
It's not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection often it turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.
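A toy way to see the feedback loop (my own illustration under simplified assumptions, not actual LLM training): each “generation” is fitted only to the previous generation’s output, with a bias toward typical-looking samples, and the diversity of the “training data” drains away.

```python
# Toy illustration of model collapse: fit a model to data, generate the next
# "generation" of training data from the model, repeat. Tail/rare content
# disappears and the output narrows every round.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # "human-written" data

for generation in range(1, 6):
    mu, sigma = data.mean(), data.std()        # "train" a model on the data
    data = rng.normal(mu, sigma, size=10_000)  # next gen trains on model output
    # Drop the tails, mimicking the model's bias toward "typical" output.
    data = data[np.abs(data - mu) < 2 * sigma]
    print(f"gen {generation}: std = {data.std():.3f}")
# The spread shrinks every generation: variety in the data is steadily lost.
```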
Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And why would these specific inputs trigger such a response? Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even an expected thing. After all, we as humans also have the ability to recite pieces of “training data” if we deem them interesting enough.
They mentioned this was patched in chatgpt but also exists in llama. Since llama 1 is open source and still widely available, I’d bet someone could do the research to back into the weights.
Yup, with 50B parameters or whatever it is these days there is a lot of room for encoding latent linguistic space where it starts to just look like attention-based compression. Which is itself an incredibly fascinating premise. Universal Approximation Theorem, via dynamic, contextual manifold quantization. Absolutely bonkers, but it also feels so obvious.
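To put a rough number on how much “room” there is: every figure below is an assumption I picked for illustration (parameter count, fp16 storage, corpus size), not a published spec, but the orders of magnitude make the point.

```python
# Back-of-envelope numbers (all assumed, just to size the intuition):
# a 50B-parameter model in fp16 vs. a multi-terabyte text corpus.
params = 50e9
bytes_per_param = 2                  # fp16
model_size_gb = params * bytes_per_param / 1e9
corpus_size_gb = 5e3                 # assume ~5 TB of training text

print(f"model weights: ~{model_size_gb:.0f} GB")         # ~100 GB
print(f"training text: ~{corpus_size_gb:.0f} GB")        # ~5000 GB
print(f"ratio: ~{corpus_size_gb / model_size_gb:.0f}x")  # ~50x
# Verbatim storage of the whole corpus is implausible, but ~100 GB of
# "capacity" leaves plenty of room for heavily repeated snippets to stick.
```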
In a way it makes perfect sense. Human cognition is clearly doing more than just storing and recalling information. “Memory” is imperfect, as if it is sampling some latent space, and then reconstructing some approximate perception. LLMs genuinely seem to be doing something similar.
I bet these are instances of overtraining, where the data has been input too many times and the phrases stick.
Models can do some really obscure behavior after overtraining. Like I have one model that has been heavily trained on some roleplaying scenarios that will full on convince the user there is an entire hidden system context with amazing persistence of bot names and story line props. It can totally override system context in very unusual ways too.
I’ve seen models that almost always error into The Great Gatsby too.
This is not the case for language models. While computer vision models train over multiple epochs, sometimes hundreds (an epoch being one pass over all training samples), a language model is often trained for just one epoch, or in some instances up to 2-5 epochs. Seeing so many tokens so few times is quite impressive, actually. Then again, language models are great learners, and some studies show that they are in fact compression algorithms scaled to the extreme, so in that regard it might not be that impressive after all.
How many times do you think the same data appears once a model draws on as many datasets as OpenAI is using now? Even unintentionally, there will be some inevitable overlap. I expect something like data related to OpenAI researchers to recur many times. If nothing else, redundant overlap across foreign-language versions of the same text could cause overtraining. Most data is likely machine curated at best.
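For a sense of how that overlap would be measured, here’s a crude sketch of my own (real training pipelines use fuzzier deduplication such as MinHash or suffix-array matching): normalize each document, hash it, and count repeats across the merged datasets. Anything that survives deduplication many times effectively gets seen many times even in “one” epoch.

```python
# Crude overlap check across merged datasets: normalize each document,
# hash it, count exact repeats. Real pipelines use fuzzier methods
# (MinHash, suffix arrays), but the idea is the same.
import hashlib
from collections import Counter

def doc_fingerprint(text: str) -> str:
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def duplicate_counts(documents: list[str]) -> Counter:
    counts = Counter(doc_fingerprint(d) for d in documents)
    return Counter({h: n for h, n in counts.items() if n > 1})

# Example: the same boilerplate appears in two "different" datasets.
docs = ["OpenAI researchers wrote this post.",
        "openai researchers   wrote this post.",
        "Something unique."]
print(duplicate_counts(docs))  # the first two collapse to one fingerprint
```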
Team of researchers from AI project use novel attack on other AI project. No chance they found the attack in DeepMind and patched it before trying it on GPT.