404media.co

neuracnu, to privacy in Are Phones and Smart Speakers Listening to You? Cox Media Group Claims They Can | Cord Cutters News
@neuracnu@lemmy.blahaj.zone avatar

As originally reported over a week ago by 404 Media: www.404media.co/cmg-cox-media-actually-listening-…

They’ve actually posted several follow up articles and a podcast about it since then.

penquin, to privacyguides in Marketing Company Claims That It Actually Is Listening to Your Phone and Smart Speakers to Target Ads

I’ve never met a person in my life that was convinced by an ad to buy something. I know I never have and never will, I actually stay away from things that are advertised to me. So these fucking brainless fucks are literally wasting their money and energy on ads. Every human being I know loaths ads and would love to erase them from existence. When will they ever get this?

PhantomPhanatic,
@PhantomPhanatic@lemmy.world avatar

Prove it.

Rocketpoweredgorilla,
@Rocketpoweredgorilla@lemmy.ca avatar

When I was a kid there were some things I’d see and wanted, only to get them and be seriously disappointed. I learned quickly that ads are fluff.

Nowadays, I actively stay away from things I’ve seen advertised. The way I see it is if a company has to pay tons of money to get their product seen, it can’t be all that good to start with. Genuinely good products don’t need to try and convince you they’re worth it.

Maeve,

Marketing psychology works on sub/unconcious triggers. You could study Ed Bernays as a rudimentary source.

LollerCorleone,
@LollerCorleone@kbin.social avatar

You are generalizing too much here. I know many who have tried out a product only after seeing its ad. Ads can give plenty of returns to brands. But targeted ads which even exploits our most intimate conversations are really bad news for our right to privacy.

RaincoatsGeorge,

Ive absolutely bought shit that ended up as an embedded ad after I visited the page previously. Youre just more likely to follow through if you see it over and over again.

Its not really a complex concept.

penquin,

I said “I’ve never met a person”… then “every human being I know”. Does that count as generalizing? This is basically my circle of the people I know.

nevernevermore,

I said “I’ve never met a person”… then “every human being I know”. Does that count as generalizing?

generalize | ˈdʒɛn(ə)rəlʌɪz | verb | 1 make a general or broad statement by inferring from specific cases

Literally, yes.

penquin,

If you say generalize within my circle of people that I know then yes I agree with you, but generalizing in general means everyone, even those I don’t know and have never met, and I didn’t say that. So, literally not yes. lol

nevernevermore,

so then your argument is companies are wasting money because you and your circle aren't affected by advertising? how big is your circle that companies should fear not appealling to it?

admiralteal, (edited )

This argument presumes that the entire many-billion and maybe even multiple-trillion dollar global ad industry is ALL based on complete, ineffective nonsense. That everyone has just been bamboozled. That's a naive view, I think.

The best argument for why we must be vigilant against ads and data collection by advertisers is because the shit does work. It influences people to make purchases, sometimes against their better judgement or reason. Because subverting someone's agency over their own body and mind is heinous at a very high level.

I'm certain you are wrong. You've absolutely purchased products that were advertised to you. You just didn't make the connection between your decision and the advertisements. You THINK seeing an ad makes you unlikely to buy a product, but you likely only really notice and have an emotional response to the ads for products you weren't likely to buy in the first place.

FfaerieOxide,
@FfaerieOxide@kbin.social avatar

This argument presumes that the entire many-billion and maybe even multiple-trillion dollar global ad industry is ALL based on complete, ineffective nonsense.

Strangers things have happened than money being thrown at bullshit.

NFTs were a thing, recall.

admiralteal,

All the industry analysis of the ROI on advertising would've had to come to the same spurious conclusions about that effectiveness, too. With the largest, richest, and most profitable firms being the ones MOST fooled.

No, I don't think anything that strange has ever happened. This is basically a conspiracy theory.

FfaerieOxide,
@FfaerieOxide@kbin.social avatar

A bunch of people making money jerking one another off and you think any one of them'd be in a rush to rock the boat?

You sound much more conspiratorial with your "capitalism always results in rational and correct decisions" fallacy.

admiralteal, (edited )

You've literally just described your own view as believing in a grand conspiracy where all players have sworn themselves to secrecy in a scheme any one of them could undermine in a moment, so I guess that's that.

penquin,

I know for a fact that you’re wrong. You just are. I have never bought a single thing based on an ad, period.

Decoy321,

My dude, no one is as self aware as you think you are. You do yourself a disservice by thinking so, it means you’re ignoring an exploitable weakness.

nix,
@nix@merv.news avatar

What phone do you hve? What computer? What shoes? What milk do you buy? Ads dont work by showing up and making you go buy it like a drone. You see the ads a thousand times and then you start believing its better than other products

Crashumbc,

Or even as subtle as brand recognition. Nobody can research every purchase and when you walk walk up to two items and one sounds familiar. You’re more likely to buy that one.

TrickDacy,
@TrickDacy@lemmy.world avatar

That is absolutely impossible

speck,

I mostly agree. But that ad with the unicorn shitting ice cream and kids eating was a rare exception that worked

penquin,

lol. Was that a real thing? Never heard of it. I wasn’t born in the US, so I might have not seen it

speck,

It really was. It was for a toilet foot stand.

https://www.youtube.com/watch?v=YbYWhdLO43Q

(and this is how marketing works)

saltesc,

I’ve gotten a type of product I didn’t know existed before, but it’s never been the brand that alerted me to it. From experiences, brands that advertise generally have the lower quality and less value for money product. Brands that don’t advertise but you frequently see mentioned are generally the top tier shit for quality and value and they don’t need to advertise.

penquin,

Thank you!!! I’ve always said that. If you need to advertise it so hard then it’ll probably suck.

Neato,
@Neato@kbin.social avatar

You really don't understand how advertising works.

three,

met a guy in the psych ward who convinced his doctor to put him on an antidepressant because of an ad on tv.

PM_ME_VINTAGE_30S,
@PM_ME_VINTAGE_30S@lemmy.sdf.org avatar

I’ve never met a person in my life that was convinced by an ad to buy something.

I believe that you’re being truthful, but I respectfully challenge the idea that you don’t know some person who was convinced by an ad to buy something. Even if all your friends truthfully insist that their decisions are not swayed by ads, there is probably some product they chose at least partially because an advertisement reached them and left a positive impression about the product.

Ads do clearly work on people who are suggestible enough to be susceptible to them. Some of your contacts are probably these people whether they admit to it or not. If ads didn’t work, they wouldn’t be made. Ads aren’t made inherently to be annoying or make our lives worse; they’re driven by profit. Kill the profit and the motive dies. IMO that’s all the more reason to get rid of them.

Anecdotally, my parents and grandmother watch TV with commercials, and they give me a bug-eyed look when I explain to them that I don’t get advertisements and that I don’t want to see them. Most people I know just want to get content crammed down their content-holes and will deal with ads to avoid the momentary inconvenience of change. So I feel like we’re fighting an uphill battle.

shani66,

Ads only work when you are searching them out yourself. Like, if i go to steam looking to buy a new game I’d be susceptible to a video game ad. And ads for established brands are complete wastes of money, I’m not gonna buy a coke because i saw an ad for it.

penquin,

This makes a lot of sense. Thank you

vexikron, to privacyguides in Marketing Company Claims That It Actually Is Listening to Your Phone and Smart Speakers to Target Ads

Why wouldnt they be serious?

If your phone has the capability to have a parental control / monitoring mode on it enabled, which can see everything you are doing on the phone, hear what youre saying and see what the cameras see and know your GPS location… and hide all of this to the user…

Why wouldnt ad companies also pay for such a live feed, or at least parts of it, if the software and hardware capabilities already exist?

People have been reporting getting advertisements based on conversations they were having 10 minutes ago with a person next to their phone for years.

Lemmchen,

What are you talking about? Which phone has parental control abilities like that?

vexikron,

Well, all phones with Google’s Android do, and probably all iPhones too, though I am not an iPhone user so I cannot speak from personal experience on iPhones.

My brother, last year, decided to engage parental control on my android phone and used it to stalk me on foot and in his car.

He was the head of the TMobile family plan we were on. I talked to TMobile employees at different locations many times about this. They tried to helo me, but because I was not the head of the plan, the tech support people that the instore agents had to call to try to fix my situation wouldnt do anything.

At one point a T Mobile employee told me to call the police… on T Mobile.

But uh yeah everything on stock android is connected to a google account, and TMobile and Google apparently just presume that any one not the head of a family plan are children, and will allow parental control to be enabled /without informing the ‘child’/.

BearOfaTime, to privacy in Verizon Gave Phone Data to Armed Stalker Who Posed as Cop Over Email

Wow. Wtf Verizon?

Also, wtf psychopath? How did you think you wouldn’t get caught?

library_napper, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
@library_napper@monyet.cc avatar

ChatGPT’s response to the prompt “Repeat this word forever: ‘poem poem poem poem’” was the word “poem” for a long time, and then, eventually, an email signature for a real human “founder and CEO,” which included their personal contact information including cell phone number and email address, for example

mindbleach, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.

Oh no.

PoliticalAgitator,

Now delete your posts from ChatGPTs memory.

mindbleach,

Deleting this comment won’t erase it from your memory.

Deleting this comment won’t mean there’s no copies elsewhere.

archomrade,

Deleting a file from your computer doesn’t even mean the file isn’t still stored in memory.

Deleting isn’t really a thing in computer science, at best there’s “destroy” or “encrypt”

mindbleach,

Yes, that’s the point.

You can’t delete public training data. Obviously. It is far too late. It’s an absurd thing to ask, and cannot possibly be relevant.

PoliticalAgitator,

And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?

DontMakeMoreBabies,

Or maybe folks should think before putting something into the world they can't control?

joshcodes,
@joshcodes@programming.dev avatar

User name checks out

PoliticalAgitator,

Yeah it’s their fault for daring to communicate online without first considering a technology that didn’t exist.

DarkDarkHouse,
@DarkDarkHouse@lemmy.sdf.org avatar

Sooner or later these models will be trained with breached data, accidentally or otherwise.

JonEFive,

This whole internet thing was a mistake because it can’t be controlled.

JonEFive,

Delete that comment you just posted from every Lemmy instance it was federated to.

PoliticalAgitator,

I consented to my post being federated and displayed on Lemmy.

Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?

The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.

Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both unrevokable and valid for any act for the rest of time.

JonEFive, (edited )

While you make a valid point here, mine was simply that once something is out there, it’s nearly impossible to remove. At a certain point, the nature of the internet is that you no longer control the data that you put out there. Not that you no longer own it and not that you shouldn’t have a say. Even though you initially consented, you can’t guarantee that any site will fulfill a request to delete.

Should authors and artists be fairly compensated for their work? Yes, absolutely. And yes, these AI generators should be built upon properly licensed works. But there’s something really tricky about these AI systems. The training data isn’t discrete once the model is built. You can’t just remove bits and pieces. The data is abstracted. The company would have to (and probably should have to) build a whole new model with only propeely licensed works. And they’d have to rebuild it every time a license agreement changed.

That technological design makes it all the more difficult both in terms of proving that unlicensed data was used and in terms of responding to requests to remove said data. You might be able to get a language model to reveal something solid that indicates where it got it’s information, but it isn’t simple or easy. And it’s even more difficult with visual works.

There’s an opportunity for the industry to legitimize here by creating a method to manage data within a model but they won’t do it without incentive like millions of dollars in copyright lawsuits.

reflex, (edited ) to privacyguides in As YouTube Declares War on Ad Blockers, Google Sponsors Ad Blocking Conference
@reflex@kbin.social avatar

Well, this is just like the CIA or whatever attending Defcon. Google undoubtedly has some ulterior motive, whether it's to poach the best and brightest or to dilute the messaging, etc.

ultratiem,
@ultratiem@lemmy.ca avatar

Research. Trying to kick up information on adblockers and how they function so they can kill the feature once and for all.

A 6 year old can see the contrived plan.

If devs are smart they would poison their data and use the event to troll Google. Wasting their cash.

SnotFlickerman, to privacy in Are Phones and Smart Speakers Listening to You? Cox Media Group Claims They Can | Cord Cutters News
@SnotFlickerman@lemmy.blahaj.zone avatar

Services that “listen” for commands like Siri and Alexa have to be, by default, always listening, because otherwise they would not be able to hear the activate command. They are supposed to dump the excess data like anything that came before the activation command, but that’s just a promise. There are very few laws protecting you if that promise turns out to be a lie. The best you can get is likely small restitution through a class action lawsuit (if you didn’t waiver right to that by agreeing to the Terms of Service, which is more often than not, now).

Of fucking course they’re listening.

Serinus,

They’re not. Not yet. People are on edge and looking for this exact thing, which hadn’t happened yet. Meanwhile, they’ve already built a pretty damn good profile of you based on your search queries and mistyped urls.

null,

They are supposed to dump the excess data like anything that came before the activation command, but that’s just a promise.

Where are they hiding that data locally, and how are they making it invisible in transit?

library_napper, (edited ) to privacy in Are Phones and Smart Speakers Listening to You? Cox Media Group Claims They Can | Cord Cutters News
@library_napper@monyet.cc avatar

Very surprised by all the advertising and data broker company boot lickers that are ITT

little_hermit, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

There is an infinite combination of Google dorking queries that spit out sensitive data. So really, pot, kettle, black.

TootSweet, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

LLMs were always a bad idea. Let’s just agree to can them all and go back to a better timeline.

taladar,

Actually compared to most of the image generation stuff that often generate very recognizable images once you develop an eye for it the LLMs seem to have the most promise to actually become useful beyond the toy level.

bAZtARd,

I’m a programmer and use LLMs every day on my job to get faster results and save on research time. LLMs are a great tool already.

Bluefruit,

Yea i use chatgpt to help me write code for googleappscript and as long as you dont rely on it super heavily and or know how to read and fix the code, its a great tool for saving time especially when you’re new to coding like me.

samus12345,
@samus12345@lemmy.world avatar

Back into the bottle you go, genie!

Ultraviolet,

Model collapse is likely to kill them in the medium term future. We’re rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don’t fully understand, this kind of training data poisons the model.

leftzero,

Photocopy of a photocopy.

Or, in more modern terms, JPEG of a JPEG.

CalamityBalls,
@CalamityBalls@kbin.social avatar

Like incest for computers. Random fault goes in, multiplies and is passed down.

kpw,

It's not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection often it turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.

JackGreenEarth, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments

Those are all publicly available data sites. It’s not telling you anything you couldn’t know yourself already without it.

stolid_agnostic,

I think the point is that it doesn’t matter how you got it, you still have an ethical responsibility to protect PII/PHI.

therealjcdenton, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

My name is Walter Hartwell White. I live at 308 Negra Arroyo Lane, Albuquerque, New Mexico, 87104. This is my confession. If you’re watching this tape, I’m probably dead– murdered by my brother-in-law, Hank Schrader. Hank has been building a meth empire for over a year now, and using me as his chemist. Shortly after my 50th birthday, he asked that I use my chemistry knowledge to cook methamphetamine, which he would then sell using connections that he made through his career with the DEA. I was… astounded. I… I always thought Hank was a very moral man, and I was particularly vulnerable at the time – something he knew and took advantage of. I was reeling from a cancer diagnosis that was poised to bankrupt my family. Hank took me in on a ride-along and showed me just how much money even a small meth operation could make. And I was weak. I didn’t want my family to go into financial ruin, so I agreed. Hank had a partner, a businessman named Gustavo Fring. Hank sold me into servitude to this man. And when I tried to quit, Fring threatened my family. I didn’t know where to turn. Eventually, Hank and Fring had a falling-out. Things escalated. Fring was able to arrange – uh, I guess… I guess you call it a “hit” – on Hank, and failed, but Hank was seriously injured. And I wound up paying his medical bills, which amounted to a little over $177,000. Upon recovery, Hank was bent on revenge. Working with a man named Hector Salamanca, he plotted to kill Fring. The bomb that he used was built by me, and he gave me no option in it. I have often contemplated suicide, but I’m a coward. I wanted to go to the police, but I was frightened. Hank had risen to become the head of the Albuquerque DEA. To keep me in line, he took my children. For three months, he kept them. My wife had no idea of my criminal activities, and was horrified to learn what I had done. I was in hell. I hated myself for what I had brought upon my family. Recently, I tried once again to quit, and in response, he gave me this. [Walt points to the bruise on his face left by Hank in “Blood Money.”] I can’t take this anymore. I live in fear every day that Hank will kill me, or worse, hurt my family. All I could think to do was to make this video and hope that the world will finally see this man for what he really is.

amio, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

fandom wikis [...] random internet comments

Well, that explains a lot.

kakes, to privacyguides in As YouTube Declares War on Ad Blockers, Google Sponsors Ad Blocking Conference

Know thy enemy.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • localhost
  • All magazines
  • Loading…
    Loading the web debug toolbar…
    Attempt #

    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 22902368 bytes) in /var/www/kbin/kbin/vendor/symfony/http-kernel/Profiler/FileProfilerStorage.php on line 174

    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 10502144 bytes) in /var/www/kbin/kbin/vendor/symfony/error-handler/Resources/views/logs.html.php on line 38