This is just PR. Apple fights something silently, then when it’s apparent they will lose they go “We endorse this”… and braindead fans will go “see, Apple invented right to repair”. Then the rest of us get numb trying to explain to them how Apple just says things and rarely does anything to benefit users, but naah…
Microsoft just started selling spare parts for an Xbox controller.
Fixing a drifting thumbstick is 80% the cost of a new controller in parts alone. You can fix it for $5 if you go aftermarket and are happy desoldering over 10 points to remove it.
Which, as I understand it, is kinda the point of the bills too. As in, if there is documentation and it’s reasonably easy to dis- and re-assemble, there can be a (bigger) market for spare parts.
The problem is that the thumbsticks are soldered onto the motherboard. Microsoft’s “fix” is replacing the whole motherboard, when the sticks should really be swappable.
In a Nintendo Switch, the sticks are held in by screws and connect via a ZIF connector.
Except the device is already in your home, and most people leave their account logged in. That’s basically like you inviting someone into your house, they hang out in your spare bedroom… and they’re still there. So there’s no need to re-grant consent to a situation that hasn’t changed. Unless you mean it auto-logs out (or you log out) and you have to re-grant consent then? Most devices do require consent on logging in, and the average consumer would hate having to log in every time and would probably use weak passwords because of it.
But, you can at least kick them out (revoke consent).
I just don’t see how a proper law/regulation would fix/restrict this, except to make certain personalization attempts (targeted ads) illegal.
Except the device is already in your home, and most people leave their account logged in.
People buy products to serve a purpose to themselves and their family, so yes, the device is in their home FOR THEIR USE.
Being logged in isn’t an open invitation to be spied on, so laws need to address that.
That’s basically like you inviting someone into your house, they hang out in your spare bedroom…and they’re still there.
The invite, in this case, is not for a company to spy on you and your family. I don’t think anyone would actually want that, especially not for the purpose of targeting them with ads.
People use voice-activated devices, which do record and react to voice prompts, but the permission here is given only for that use. A company shouldn’t be able to say “hey, you can use the service you’ve paid for, and by agreeing to use that service, you also agree to give us permission to digitally invade your home and privacy.”
I just don’t see how a proper law/regulation would fix/restrict this, except to make certain personalization attempts (targeted ads) illegal.
Yes, make it illegal. And make everything opt-in, without strings attached (i.e. no more “if you agree to use the service you paid for, you agree to being spied on”).
I will personally continue to use my wallet to wield power. I won’t buy devices from, or support, companies that are evil, and will support companies that respect privacy and data freedom. The whole enshittification of the digital landscape is incredibly sad to see, TBH.
It’s probably some watered-down right to repair bill, and the only reason Apple supported it is so they can claim that there’s already an RTR bill when someone wants a proper one.
They’re probably going to try to do the same thing to this bill that was done to the NY Right to Repair bill: Gavin Newsom will alter it slightly just before signing, leaving a big gaping loophole for companies like Apple.
…a Verizon representative told Poppy that the corporation was a victim too.
Fuck off. You’re all a bunch of idiots who didn’t do an extremely quick search online to check whether an officer of that name exists in that area. Or at the very least call the police in that area to confirm said person isn’t a fraudster! Large corporations need to stop gaslighting us into thinking that when they fuck up, they’re the victims!
I don’t know how other people see it, but the way I see it is: if a company makes as much money as Verizon does, then there is no excuse for this to happen. They have more than enough money to prevent this from happening tenfold, but instead of investing that money into the company, CEOs get paid. With that being said, I believe that if there are any issues in a company, the CEO should be 100% responsible. If they are going to get paid more money than anyone else, then they should be doing more work than anyone else, and if bad things are happening below them, that means they’re not doing their due diligence.
Does the bill have any provision mandating that parts and repairs be fairly priced (with some reasonable legal definition of “fairly”)? Or is Apple going to charge $2000 for a replacement iPhone screen part?
The bill would define the following terms: “documentation,” “electronic or appliance product,” “product,” “fair and reasonable terms,” “service dealer,” and “trade secret.”
Following an investigation by Bloomberg, the company admitted that it had been employing third-party contractors to transcribe the audio messages that users exchanged on its Messenger app.
So not your IRL conversations.
There is no indication that Facebook has used the information it collected to sell ads.
Companies DO analyze what you say to smart speakers, but only after you have said "ok google, siri, alexa, etc." (or if they mistake something like "ok to go" as "ok google"). I am not aware of a single reputable source claiming smart speakers are always listening.
The reality is that analyzing a constant stream of audio is way less efficient and accurate than simply profiling users based on information such as internet usage, purchase history, political leanings, etc. If you’re interested in online privacy, device fingerprinting is a fascinating topic for starting to understand how companies can determine exactly who you are based solely on information about your device (a toy example is sketched below). Then they use web tracking to determine what your interests are, who you associate with, how you spend your time, what your beliefs are, how you can be influenced, etc.
Your smart speaker isn't constantly listening because it doesn't need to. There are far easier ways to build a more accurate profile on you.
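To make the fingerprinting point above concrete, here is a minimal toy sketch in Python of the general idea: a handful of individually boring device attributes get combined and hashed into a stable identifier. The attribute names and values here are made up for illustration and are not any specific tracker’s implementation.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Combine ordinary device attributes into a single identifier.
    No single field identifies you, but together they are often
    close to unique."""
    # Sort keys so the same device always produces the same hash.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical values a browser or app could read without asking.
fingerprint = device_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/118.0",
    "screen": "2560x1440x24",
    "timezone": "America/Los_Angeles",
    "language": "en-US",
    "installed_fonts": "Arial,DejaVu Sans,Noto Color Emoji",
    "canvas_hash": "9f2c",  # canvas rendering quirks differ per GPU/driver
})
print(fingerprint)  # a stable ID that follows the device across sites, no audio needed
```

No microphone required: the same identifier shows up on every site that runs the same script, which is the whole appeal compared to processing audio.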
So, you and your friend were talking about a subject you obviously are interested in, likely spend heaps of time online searching about, commenting and following on social media and you’re surprised you got an ad for it? Bonkers.
It’s been published by multiple sources at this point that this happens because of detected proximity. Basically, they know who you hang out with based on where your phones are, and they know the entire search history of everyone you interact with. Based on this, they can build models to detect how likely you are to be interested in something your friend has looked at before.
Yup. For companies it’s much safer to connect the dots with the giant amount of available metadata in the background than risk facing a huge backlash when people analyze what data you’re actively collecting.
Which is why people need to call out the tracking that’s actually happening in the real world a lot more, because I don’t really want my search-history leaked by proxy to people in my proximity either.
On mobile, so I can’t find the recent one based on conversations that was floating around Lemmy recently.
This one finds high levels of inconsistent misactivation from TV shows. Some shows caused more than 4 misactivations per hour (a rate of more than 80 per day) …northeastern.edu/…/smart-speakers-study-pets20/
It’s literally impossible for them to not be “analyzing” all the sounds they (perhaps briefly) record.
[Sound] --> [Record] --> [Analyze for keyword] --> [Perform keyword action] OR [Delete recording]
Literally all sounds, literally all the time. And we just trust that they delete them and don’t send them “anonymized” to be used for training the audio recognition algorithms or LLMs.
The way that “Hey Alexa” or “Hey Google” works is, like you said, by constantly analysing the sounds they hear. However, the audio is only analyzed locally for the specific phrase, and is stored in a circular buffer of a few seconds so the device can keep your whole request in memory (roughly sketched below). If the phrase is not detected, the buffer is constantly overwritten, and nothing is sent to the server. If the phrase is detected, then the whole request is sent to the server where more advanced voice recognition can be done.
You can very easily monitor the traffic from your smart speaker to see if this is true. So far I’ve seen no evidence that this has stopped being the common practice, though I’ll admit to not reading the article, so maybe it has changed recently.
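A minimal Python sketch of the local wake-word loop described above, not any vendor’s actual code: the microphone source, the `detect_wake_word` model, and `send_to_server` are hypothetical stand-ins passed in as parameters.

```python
from collections import deque

BUFFER_SECONDS = 3
CHUNKS_PER_SECOND = 10  # e.g. 100 ms audio chunks

# Circular buffer: old audio falls out automatically, nothing is persisted.
ring = deque(maxlen=BUFFER_SECONDS * CHUNKS_PER_SECOND)

def listen_forever(mic, detect_wake_word, send_to_server):
    """Everything before the wake phrase lives only in `ring` and is
    continuously overwritten; audio leaves the device only after a match."""
    while True:
        chunk = mic.read_chunk()            # hypothetical microphone API
        ring.append(chunk)
        if detect_wake_word(ring):          # small on-device model
            request = mic.record_until_silence()  # capture the actual request
            send_to_server(list(ring) + [request])
            ring.clear()
        # no match: the chunk simply ages out of the buffer
```

Under this design the only outbound traffic is the post-wake-word request, which is exactly what you would expect to see when watching the device’s network traffic.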
If they were to listen for a set of predefined product-related keywords as well, they could take note of any matches and send that info inconspicuously to their servers without sending any audio recordings. It wouldn’t have to be as precise as voice command recognition either; it’s just ad targeting.
Not saying they do that, but I believe they could.
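Continuing that hypothetical, and to be clear this is not confirmed vendor behaviour, a local keyword scan could upload nothing but a tiny interest tag. The keyword list, the functions, and the payload shape here are all invented for the example.

```python
# Hypothetical extension of the local keyword-spotting idea above;
# nothing here is confirmed vendor behaviour.
AD_KEYWORDS = {"vacation", "mortgage", "sneakers", "baby formula"}

def scan_transcript_locally(words):
    """Return only the matched interest tags, never the audio or transcript."""
    return {w for w in words if w in AD_KEYWORDS}

def report(tags, upload):
    if tags:
        # A payload like {"interests": ["sneakers"]} is a few bytes and
        # would be hard to tell apart from routine telemetry in a capture.
        upload({"interests": sorted(tags)})

# Example: words produced by a hypothetical on-device recognizer.
report(scan_transcript_locally(["need", "new", "sneakers"]), upload=print)
```

The point of the sketch is only that such a scheme would be cheap and hard to spot from traffic alone; it says nothing about whether anyone actually does it.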
Always has been. Just yesterday I was explaining AI image generation to a coworker. I said the program looks at a ton of images and uses that info to blend them together. Like it knows what a soviet propaganda poster looks like, and it knows what artwork of Santa looks like so it can make a Santa themed propaganda poster.
Same with text I assume. It knows the Mario wiki and fanfics, and it knows a bunch of books about zombies so it blends it to make a gritty story about Mario fending off zombies. But yeah it’s all other works just melded together.
My question is: would a human author be any different? We absorb ideas and stories we read and hear and blend them into new or reimagined ideas. AI just knows its original sources.
“Blending together” isn’t accurate, since it implies that the original images are used in the process of creating the output. The AI doesn’t have access to the original data (unless an image was erroneously repeated so many times in the training dataset that it got memorized).
My question is would a human author be any different?
Humans don’t remember the exact source material; it gets abstracted into concepts before being saved as an engram. That’s how we’re able to create new works of art, while AI can only do Photoshop on its training data. Humans will forget the text but remember the soul; AI only has access to the exact work and cannot replicate the soul of a work (at least with its current implementation; if these systems were made to be anything more than glorified IP theft, we could see systems that actually do art like humans, but we don’t live in that world).
Google is totally into blocking ads. That was the whole catchline for selling WEI: “We’ll block all the random ads for you and keep you safe.” What they didn’t say was that they would replace the blocked ads with Google-bought ads.
How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?
It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that, so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.
In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.