privacy

JackGreenEarth, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments

Those are all publicly available sources. It’s not telling you anything you couldn’t already find without it.

stolid_agnostic,

I think the point is that it doesn’t matter how you got it, you still have an ethical responsibility to protect PII/PHI.

Stephen304, in Plex starts narcing on its own users' anime and X-rated habits with an opt-out service, and it's going terribly

Stuff like this really makes me want to switch to Jellyfin, but I watch stuff from my own and my friend group’s libraries, and Plex lets me search for shows across my entire friend group at once. I’m afraid I’ll be waiting forever for Jellyfin to allow federating servers so that bob@red.instance can share a library with alice@blue.instance, letting Alice browse red + blue instance content from her home instance UI instead of requiring an account with every instance.

phase, in noyb files GDPR complaint against Meta over “Pay or Okay”
@phase@lemmy.8th.world avatar

Given that the average phone has 35 apps installed, keeping your phone private could soon cost around € 8,815 a year.

Nice argument they found.

Gooey0210,

Is it only me who has like 100+ apps installed? 🫣

amio, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

fandom wikis [...] random internet comments

Well, that explains a lot.

TurboHarbinger, in Plex starts narcing on its own users' anime and X-rated habits with an opt-out service, and it's going terribly

I opted out of Plex as soon as it asked me to create an online account.

You don’t need that for Jellyfin.

s7ryph, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Team of researchers from one AI project uses a novel attack on another AI project. No chance they found the attack on DeepMind’s own models and patched it there before trying it on GPT.

gerryflap, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
@gerryflap@feddit.nl avatar

Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And why would these specific inputs trigger such a response? Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even an expected thing. After all, we as humans also have the ability to recite pieces of “training data” if we deem them interesting enough.

Cheers,

They mentioned this was patched in ChatGPT but also exists in LLaMA. Since LLaMA 1 is open source and still widely available, I’d bet someone could do the research to back the data out of the weights.

Socsa,

Yup, with 50B parameters or whatever it is these days there is a lot of room for encoding latent linguistic space where it starts to just look like attention-based compression. Which is itself an incredibly fascinating premise. Universal Approximation Theorem, via dynamic, contextual manifold quantization. Absolutely bonkers, but it also feels so obvious.

In a way it makes perfect sense. Human cognition is clearly doing more than just storing and recalling information. “Memory” is imperfect, as if it is sampling some latent space, and then reconstructing some approximate perception. LLMs genuinely seem to be doing something similar.

j4k3,
@j4k3@lemmy.world avatar

I bet these are instances of overtraining, where the data has been input too many times and the phrases stick.

Models can do some really obscure behavior after overtraining. Like I have one model that has been heavily trained on some roleplaying scenarios that will full-on convince the user there is an entire hidden system context, with amazing persistence of bot names and storyline props. It can totally override system context in very unusual ways too.

I’ve seen models that almost always error into The Great Gatsby too.

TheHobbyist,

This is not the case for language models. While computer vision models train over multiple epochs, sometimes in the hundreds (an epoch being one pass over all training samples), a language model is often trained on just one epoch, or in some instances up to 2-5 epochs. Seeing so many tokens so few times is quite impressive, actually. Language models are great learners, and some studies show that language models are in fact compression algorithms scaled to the extreme, so in that regard it might not be that impressive after all.
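
To make the epoch comparison concrete, here’s a minimal sketch (a toy of my own, not any real training pipeline) of what “one epoch” versus “hundreds of epochs” means in terms of how often each sample is seen:

```python
# Toy sketch of the "epoch" idea: one epoch = one full pass over every sample.
def train(step, samples, num_epochs):
    updates = 0
    for _ in range(num_epochs):
        for sample in samples:
            step(sample)      # one parameter update per sample, in this toy
            updates += 1
    return updates

docs = ["doc A", "doc B", "doc C"]
noop = lambda s: None

print(train(noop, docs, num_epochs=1))    # 3   -- LLM-style: each doc seen once
print(train(noop, docs, num_epochs=300))  # 900 -- vision-style: each doc seen 300 times
```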

j4k3, (edited )
@j4k3@lemmy.world avatar

How many times do you think the same data appears across as many datasets as OpenAI is using now? Even unintentionally, there will be some inevitable overlap. I expect something like data related to OpenAI researchers to recur many times. If nothing else, redundant copies of the same data in different languages could cause overtraining. Most data is likely machine-curated at best.

TootSweet, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

LLMs were always a bad idea. Let’s just agree to can them all and go back to a better timeline.

taladar,

Actually, compared to most of the image generation stuff, which often produces very recognizable images once you develop an eye for it, LLMs seem to have the most promise to actually become useful beyond the toy level.

bAZtARd,

I’m a programmer and use LLMs every day on my job to get faster results and save on research time. LLMs are a great tool already.

Bluefruit,

Yeah, I use ChatGPT to help me write code for Google Apps Script, and as long as you don’t rely on it too heavily and know how to read and fix the code, it’s a great tool for saving time, especially when you’re new to coding like me.

samus12345,
@samus12345@lemmy.world avatar

Back into the bottle you go, genie!

Ultraviolet,

Model collapse is likely to kill them in the medium-term future. We’re rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don’t fully understand, this kind of training data poisons the model.
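
A toy way to see why that feedback loop is destructive (my own sketch with made-up numbers, not from the article): fit a distribution to data, sample from the fit, fit the next “generation” only to those samples, and repeat. The estimate drifts and the spread of the original data gets lost:

```python
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0                 # generation 0: the "real" data distribution
for gen in range(1, 11):
    samples = [random.gauss(mu, sigma) for _ in range(20)]  # previous model's output
    mu = statistics.mean(samples)    # next "model" is fit only to that output
    sigma = statistics.stdev(samples)
    print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# Over enough generations the fitted sigma tends to drift (and eventually
# shrink toward 0): later "models" lose the diversity of the original data.
```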

leftzero,

Photocopy of a photocopy.

Or, in more modern terms, JPEG of a JPEG.

CalamityBalls,
@CalamityBalls@kbin.social avatar

Like incest for computers. A random fault goes in, multiplies, and is passed down.

kpw,

It's not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection it often turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.

Gooey0210, in Any automated method to check for basic OPSEC mistakes whilst posting content online?

I believe the Umbrella app kinda has something like what you want.

Aurix, in Plex starts narcing on its own users' anime and X-rated habits with an opt-out service, and it's going terribly

While it should have been opt-in, it’s not that dramatic. The server owner can see what is played anyway. And since the primary use case is a home-and-friends setup, it’s vastly different from a Netflix-scale privacy breach.

okamiueru,

Are you saying that this information isn’t collected by Plex for a use case that doesn’t obviously require it? Because if it is, then it’s a big fucking deal.

greater_potater, (edited )

Yes, a server owner can see what is played. But this is sending my friends email summaries about what I am watching on my own server, even when that friend is not invited to my particular server, and even for libraries that I haven’t shared with anyone.

It doesn’t even matter if I’m embarrassed by what it sends. That information is private. Period.

mindbleach, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.

Oh no.

PoliticalAgitator,

Now delete your posts from ChatGPT’s memory.

mindbleach,

Deleting this comment won’t erase it from your memory.

Deleting this comment won’t mean there’s no copies elsewhere.

archomrade,

Deleting a file from your computer doesn’t even mean the file isn’t still stored in memory.

Deleting isn’t really a thing in computer science; at best there’s “destroy” or “encrypt”.
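
A small POSIX-flavoured demo of that point (just a sketch; on Windows the remove call would fail while the handle is open): unlinking a file only removes its name, not the bytes.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"supposedly deleted secret")
os.close(fd)

handle = open(path, "rb")    # keep a reference to the underlying data
os.remove(path)              # "delete" the file, i.e. unlink the name

print(os.path.exists(path))  # False: the name is gone...
print(handle.read())         # ...but the bytes are still readable via the handle
handle.close()
```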

mindbleach,

Yes, that’s the point.

You can’t delete public training data. Obviously. It is far too late. It’s an absurd thing to ask, and cannot possibly be relevant.

PoliticalAgitator,

And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?

DontMakeMoreBabies,

Or maybe folks should think before putting something into the world they can't control?

joshcodes,
@joshcodes@programming.dev avatar

User name checks out

PoliticalAgitator,

Yeah it’s their fault for daring to communicate online without first considering a technology that didn’t exist.

DarkDarkHouse,
@DarkDarkHouse@lemmy.sdf.org avatar

Sooner or later these models will be trained with breached data, accidentally or otherwise.

JonEFive,

This whole internet thing was a mistake because it can’t be controlled.

JonEFive,

Delete that comment you just posted from every Lemmy instance it was federated to.

PoliticalAgitator,

I consented to my post being federated and displayed on Lemmy.

Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?

The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.

Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both irrevocable and valid for any act for the rest of time.

JonEFive, (edited )

While you make a valid point here, mine was simply that once something is out there, it’s nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn’t have a say. Even though you initially consented, you can’t guarantee that any site will fulfill a request to delete.

Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built upon properly licensed works. But there’s something really tricky about these AI systems. The training data isn’t discrete once the model is built. You can’t just remove bits and pieces. The data is abstracted. The company would have to (and probably should have to) build a whole new model with only properly licensed works. And they’d have to rebuild it every time a license agreement changed.

That technological design makes it all the more difficult, both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got its information, but it isn’t simple or easy. And it’s even more difficult with visual works.

There’s an opportunity for the industry to legitimize here by creating a method to manage data within a model but they won’t do it without incentive like millions of dollars in copyright lawsuits.

cheese_greater, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Finally Google not being evil

PotatoKat,

Don’t doubt that they’re doing this for evil reasons

cheese_greater,

There’s an appealing notion to me that an evil upon an evil sometimes weighs out closer to the good, as a form of karmic retribution that can play out beneficially.

reksas,

Google is probably trying to take out competing AI.

cheese_greater,

I’m glad we live in a time where something so groundbreaking and revolutionary is set to become freely accessible to all. Just gotta regulate the regulators so everyone gets a fair shake when all is said and done

GiM, in A question about secure chats

The contents of the chat messages are e2e encrypted, so meta can’t see what you are sending.

But they can see all of the metadata, i.e. how often you chat with someone, how often you send pictures/videos/voice messages, etc.

That is more than enough to know everything about you and your friends.
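
To make that concrete, here’s a tiny sketch (invented log format and data, nothing to do with Meta’s actual systems) of the kind of profile metadata alone supports, without reading a single message:

```python
from collections import Counter
from datetime import datetime

events = [  # (contact, timestamp, kind) -- hypothetical metadata, no contents
    ("alice", "2024-01-03T23:41", "text"),
    ("alice", "2024-01-03T23:55", "photo"),
    ("alice", "2024-01-04T00:12", "voice"),
    ("boss",  "2024-01-04T09:01", "text"),
]

per_contact = Counter(contact for contact, _, _ in events)
late_night = Counter(
    contact
    for contact, ts, _ in events
    if datetime.fromisoformat(ts).hour >= 22 or datetime.fromisoformat(ts).hour < 5
)

print(per_contact.most_common())  # who you message most
print(late_night.most_common())   # who you message late at night
```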

ono, (edited )

The contents of the chat messages are e2e encrypted, so meta can’t see what you are sending.

Even if we assume correct e2ee is used (which we have no way of knowing), Meta can still see what you are sending and receiving, because they control the endpoints. It’s their app, after all.

Rose,

They use the Signal protocol for e2ee.

min_fapper,

Or so they claim. We can’t really verify their implementation though.

rmuk,

Even if they do, you can’t know whether they can access the encryption keys. It’s all just layers of “but this, but that” and at the very bottom a layer of “trust me, bro”.

GarytheSnail, in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
@GarytheSnail@programming.dev avatar

How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

Asifall,

It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that, so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

far_university1990,

Nobody wants to be the one to say these models are illegal.

But they obviously are. Quick money by fining the crap out of them. Everyone is about short term gains these days, no?

library_napper,
@library_napper@monyet.cc avatar

Are they illegal if they were entirely free tho?

infreq, in A question about secure chats

They will not switch anyway…

Thisfox,

They will if I don’t sound paranoid and can give rational answers backed up with real articles that aren’t from conspiracy sites. Much of my family are teachers, everyone has at least one university degree, and they’re all capable of rational thought and critical thinking. They just don’t see a reason to switch. I need to put forward a reason that is worth their time.

infreq,

I like your (ungrounded) optimism
