This assumes the only source these companies collect from is your internet traffic. It’s not.
And even if it was, VPNs don’t protect against fingerprinting.
For the past few months I’ve been using Kanary, a service that searches hundreds of data-broker sources for your information and submits deletion requests on your behalf.
I started with ~225 exposures, and while the number has gone down over time, I’m still sitting at ~50 and it seems to have plateaued.
The exposed data included things like who I’d married and when, past and current addresses, family members, etc. None of it was gleaned from internet traffic.
Right, but you’re talking about two distinctly different things. The ISP doesn’t own the websites you visit; it only has a record of your traffic. The individual websites you visit can bust your privacy through third-party cookies, browser fingerprinting, cross-site tracking, and a bunch of other methods created to circumvent the user security features built into the browser. Nobody shares that information back to the ISP for free.

The real issue is that huge companies like Google, Amazon, and Facebook have scripts running on millions of websites, so they can track you everywhere you go. But they’re still just single companies. The linchpin is that they then sell that information to Big Data brokers like Cambridge Analytica and Informatica. Those companies combine literally everything you do online, everything you submit, all your history, all your data points, and build complete, accurate pictures of you. You need to take proactive measures against this sort of data harvesting that go well beyond a VPN.

But your ISP doesn’t have these systems in place. So unless the ISP is buying your profile from Big Data and then selling it to the NSA, a VPN is enough to thwart your ISP, and the issue identified in the article. You still have to take a bunch of other precautions to address the larger issue if you truly want any anonymity, and they’ll probably figure you out anyway.
Apple will now require a court order or search warrant to give push notification data to law enforcement in a shift from the previous practice of accepting a subpoena to hand over data.
A subpoena is a court order, so that’s clear as mud.
Signal sends notifications via Apple’s push notification servers, so I’m still not quite clear what you’re suggesting. That apps run continuously in the background, each doing real-time polling of their respective servers for notifications? Because your battery ain’t going to last long.
That sounds like a cracking idea. The suggestion is that something in Apple’s ToS prevents this generally - but is that the case, if Signal manages it?
They’re lying about many things: their respect for privacy, right to repair, sustainability, and more. Oh, and they’ve lied about their use of slave labor, if I recall correctly.
No, the article is clear evidence that they are imperfect - not that they don’t generally care about user privacy. In general the work they have done on privacy has been pretty good. Mandating end-to-end encryption for notifications might be something Apple should have done - and that’s a reasonable criticism - but it looks like it is possible for individual app makers to encrypt their notifications. There’s still the metadata, of course.
If I am being paid to shill for Apple, they are being particularly tardy with their payments. But to answer your question, no - I’m a user who is privacy conscious and thinks Apple does a reasonable job.
I am, however, always interested in knowing where they are falling down so I can mitigate. Generic hand-wavy accusations don’t really help me practically - or indeed anyone.
Sorry for the delay. In this case they were lying that they have improved their process for handling such orders, implying that they will now comply with fewer orders - only the ones they can’t (yet) deny.
I’m tired of constantly running into the basic lack of understanding that LLMs are not knowledge systems. They emulate language and produce plausible sentences. This journalist is using the output of an LLM as a source of knowledge… What a fucking disgrace this should be for Forbes.
Imagine a journalist just quoting a conversation with their 10-year-old, where they played a game of “whatever you do, you have to pretend like you really know what you’re talking about. Do not be unsure about anything, ok?”, and used the output as a source of actual facts.
If you use ChatGPT, or Bard, or any LLM for anything beyond creative output, or with the required comprehension to vet the output, just stop. Don’t use tools you don’t understand the function or limitations of.
I’ve already had to spend hours correcting a fundamental misconception someone got from ChatGPT, which had made its way into a safety mechanism of medical software. I’ve also had the displeasure of finding self-contradicting documentation someone placed in a README, copy-pasted straight from ChatGPT.
It’s such a powerful tool and utility if you know what it can help with. But it requires a basic understanding that too many people are either too lazy to put in the effort for, or simply lacking the critical thinking for, and “it sounded really plausible” (the full extent of what it’s designed to do) fools them completely.
LMAO I opened the link expecting an article, and I got a steady flow of quotations, but nothing to indicate who is being quoted. At the very end, the sentence “For its part, Bard states…” is used, and I can think of no clearer way to display your fundamental misunderstanding of AI. Bard can’t “state” shit in any official capacity. Bard is the same caliber of LLM as GPT, and both have a documented tendency to hallucinate.
Not all AI is bad. In the healthcare sector it could improve decision-making, produce personalised treatment plans, and so forth.
Obviously, the healthcare professionals will have final say, but it’s a good tool to have. AI will not replace them. Though it will streamline cumbersome processes.
I am all for AI as long as it’s used in a non-dystopian manner.
Isn’t that what the doctor is already supposed to be doing? Looking at our history, charts, treatment attempts and customizing? Because if not I could just WebMD and pay a doc to write a scrip.
There are a lot of things doctors record in evaluations, and these AIs are fed that information; instead of spitting out a “diagnosis,” they calculate risk of harm vs. risk of further investigation.
In things like pediatrics, this reduces the need for unpleasant and dangerous procedures - a CT scan impacts a 6 year old way more than a 30 year old.
AI is extremely useful for problem domains with a ton of input, medicine being one. Doctors can only do so much and rely on algorithms just like the AI does. The AI has the benefit of being able to do it a fuckload faster, more accurately, and compare it to more relevant things.
Don’t confuse it with generative AI. This is a very different system from that.
There are already talks of military use, reading all your texts, eliminating jobs with no plan to support those who lose them, AI-driven cars killing people, taking all creative work from humans and leaving them the menial tasks… that’s nowhere near a complete list, and it’s already dystopian.
The thing is, when private companies are the ones that hold the tech and monetize it, shit is going to get dystopian before you can say “artichoke.” Capitalism is dystopian. Late stage capitalism even more so. And we are fast approaching a new frontier in which these same evil tech companies will wield this unbelievable power. I get it. There are good uses. But when the end goal is profit, our best interest comes second, if not last.
It’s a tool like anything else. It’ll be used for everything like anything else. It cannot be stopped. All we can hope for are tools to mitigate the damage and applications to outweigh what bad it’s capable of. Trying to slow it down is like trying to stop a flood with buckets. Build a boat, it will only keep rising.
Maybe I’m just old and stuck in my ways, but I don’t see the upside here. Why would I need Google’s AI to be able to tell the tone of my messages and respond appropriately? They’re my messages. People send me messages to talk to me. In what world do I want to remove myself from that process?
If my wife texts me and says “when are you going to be home from work?” I don’t want an AI looking at my chat history and making a guess. I want to tell her what I have going on right now and respond. If a friend asks me if I wanna hang out this weekend, I don’t want AI checking my calendar and seeing I’m free and then agreeing to plans. I want to think about it and come to a decision myself.
Can someone smarter than me point to an actual good use case for this?
This is almost like when people traded baubles to unknowing natives for valuable things. Corporations make these inane features and expect us to pay for them by giving them all our information. They don’t bother to even ask; they just assume we’re OK with it by having all this crap on by default.
Suppressing repeated notifications has been a thing since messaging has been a thing. If your service doesn’t offer it, find a better one. Also, this is a hypothetical, and a bad one. Why would you not simply read the texts?
You don’t have to share your number to get spam messages - I get weekly spam texts for “Susan” (not my name), which I never interact with but have been coming from random numbers for years.
Once your # is on a list, whether you put it there or not, it never leaves.
I’m the only one who has ever had this phone no., but if I were to swap now, 99% chance I’d get a reused number, which would probably come already loaded on a million different spam lists. There’s no winning.
They don’t care if you don’t answer. It costs them $0 to text you.
My profession makes me a target for a wide variety of advertising and my phone number is required by law to be listed publicly.
I already don’t. Mostly because Google Messages filters them in a way that I never even see them unless I’m actively looking. It was only when I got an iPhone that I realized exactly how horrific it is.
It’s been hacked; the light bulb is likely part of some botnet or under an attacker’s direct control, which is why it’s sending that much data continuously. IoT/smart devices don’t normally send anywhere near this volume of data: most of the time they’re idle, and maybe send a heartbeat or status update every once in a while to prove they’re alive.
This is what’s called an indicator of compromise, or IoC: some behavior or pattern that can be used to determine what is happening, or who is doing the attacking.
OP would likely need to do some analysis to get attribution, unless it’s a very well-known botnet actor, in which case attribution is fairly straightforward.
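To make the “do some analysis” part concrete, here’s a rough first-pass triage sketch: pull per-device traffic counts off the router and flag anything whose sustained outbound rate is way above an idle-IoT baseline. Everything here - the CSV export, the column names, the thresholds - is hypothetical, not tied to any particular router or tool.

```python
# Hypothetical sketch: flag devices whose sustained outbound volume is far
# above an idle-IoT baseline. Column names and thresholds are illustrative.
import csv

IDLE_BASELINE_BYTES_PER_HOUR = 50_000  # generous ceiling for heartbeats/status pings
SUSPECT_FACTOR = 100                   # flag anything 100x over the baseline

def flag_chatty_devices(log_path: str) -> list[tuple[str, int]]:
    """Return (device, bytes_per_hour) pairs that blow past the idle baseline."""
    suspects = []
    with open(log_path, newline="") as f:
        # Assumed export format: one row per device with columns
        # "device", "bytes_out", "hours" covering the observation window.
        for row in csv.DictReader(f):
            rate = int(row["bytes_out"]) / max(float(row["hours"]), 1e-9)
            if rate > IDLE_BASELINE_BYTES_PER_HOUR * SUSPECT_FACTOR:
                suspects.append((row["device"], int(rate)))
    return suspects

if __name__ == "__main__":
    for device, rate in flag_chatty_devices("traffic.csv"):
        print(f"{device}: ~{rate} bytes/hour outbound - investigate")
```

A light bulb showing up in that list, with gigabytes of sustained upload, is exactly the kind of IoC described above; the next step would be looking at *where* the traffic goes.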
You’re aware that you can send whatever traffic you want over any port, right? Using 123/udp for NTP is just convention. A light bulb that is updating its time over Tor is suspect; TP-Link would have their own infrastructure, or use public pools, to update the device’s time.
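For illustration, here’s a bare-bones SNTP query in Python; the point is that the port is literally just an argument. 123/udp is the convention, and nothing stops a compromised device from tunneling something else entirely over that (or any other) port. The server and port defaults below are only examples, not anything from the article.

```python
# Minimal SNTP client sketch: the port is a parameter, not a property of
# the protocol. Server/port defaults here are illustrative only.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server: str = "pool.ntp.org", port: int = 123) -> float:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    # Transmit timestamp (integer seconds) lives at bytes 40-43 of the reply.
    transmit_ts = struct.unpack("!I", data[40:44])[0]
    return transmit_ts - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(time.ctime(sntp_time()))  # identical code would run against any port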
This alone would make me more likely to switch back to iPhone, as much as I hate the walled garden. “Just switch to a private messenger app” doesn’t really work when no one else uses them. I’ve even gotten all of my family to try Signal, but they dropped it in favor of going back to imessage. It’s extremely frustrating, far from ideal, but it is what it is.
Google reading my messages at all, even if it’s “oPt OuT”, is a complete non starter.
They dropped it because people assumed their texts were secure just because they were using Signal, when SMS is never secure - you can assume pretty much anyone could be reading your texts anyway.
There are three big reasons why we’re removing SMS support for the Android app now: prioritizing security and privacy, ensuring people aren’t hit with unexpected messaging bills, and creating a clear and intelligible user experience for anyone sending messages on Signal.
To me, all of those reasons are BS and easily gotten around. “Unexpected messaging bills?” Have a popup warning that this user doesn’t have an account and that you’re about to send an SMS, potentially incurring a cost, as an example.
Having a unified app that supports your message protocol with SMS fallback is legitimately great. I’m still bitter signal canned that feature.
But it isn’t that big of a deal to just use two apps. It’s what I’ve had to do for a while now. Anyone I actually know goes into signal, and I use SMS for my boss, my dad, and various companies.
Squawker seems interesting to me. Unfortunately it’s only available on Android, and there are some issues that are probably out of the developers’ reach. But since I use it for something basic (literally just viewing images from some profiles I follow), it serves me well.
This is such a weird thing I’ve noticed on this community. There was a guy not too long ago that would make new accounts like daily so he wasn’t posting under the same username and it’s like… why?
I get you want privacy, but there’s a line where it just stops making sense, and your personal info isn’t that valuable. Anyway
Actually, you don’t need perfect privacy. You just need good enough privacy, and here’s why:
If you’re a low-value target - i.e. a random internet user, that’s you and me - always remember that your value is low: Google, Microsoft, Amazon, Facebook… expend a certain amount of resources to fish for enough of your data to earn them a return on their investment. We’re low-value targets, so they first and foremost go for the low hanging fruits: the people who don’t know, don’t care, wallow in social media without any restraint and make it particularly easy to gather data from.
All you have to do is make it hard enough and expensive enough for the corporate surveillance collective to lose money on you: create accounts full of fake data and don’t post personal information - or make up fake personal information in your posts - to poison their wells. Don’t post photos of you or your family. Use throwaway email addresses. Use a deGoogled phone. Don’t browse without an ad blocker set on reasonably high. Use a browser with anti-fingerprinting. Don’t fill out Costco membership cards. Pay with cash stuff that you don’t want anybody to know about. Etc etc.
In other words, adopt a reasonable-enough privacy hygiene so that you’re not part of the low hanging fruits. It doesn’t have to be drastic, just good enough to make you not worth the sonsabitches’ time and effort.
If you’re a high-value target however, a Snowden or an Assange, that’s a different proposition. But for the rest of us, private enough is good enough.
Not a single friend or family member gives two shits about privacy. When I tell them about what companies know about them and what they do with that information, it’s kind of like a vegan telling a meat eater where their meat comes from. Like “wow that sounds bad but I’m not willing to make any changes”. The only difference is that instead of animals being a product, this time they are the product.
The effective way to combat this is to pull their information from data brokers and tell them everything you know. Then they feel violated, as they should.
They’ll blame you and never put two and two together though.
And it comes down to a fundamental question: Will the European Commission follow through with its intent to right-size Apple’s abuse of power? Or will the DMA be nice in theory, but in practice, have no substantive meaning for most developers?
They already do that. They can say whatever they want; they’ll be isolated with their shitty Apple products, and if they want decent browsers then they’ll need to use decent systems. Apple can’t abuse their power and force us to follow their abusive rules - they can’t even make a decent desktop UI. They’re such bad programmers.
Android was a victim of the NSO’s Pegasus because of WhatsApp, and possibly that only worked because Facebook negotiated with phone manufacturers to bundle dodgy pre-installed system apps outside the Google Play Store.
Apple’s iOS was a victim of the NSO’s Pegasus because of iMessages.
For me, that’s enough to completely steer clear of iOS altogether. I mean, the lack of customisation and control over my device was already enough, but that kind of vindicated it for me.
Yeah, my Android doesn’t have WhatsApp, and I don’t have Google apps either. It’s a de-Googled OS. I feel free and things work; even my default web browser on Android has NoScript (a JavaScript blocker) to make it safer. With Apple… you are sold.
Ditto! No Google needed, and Facebook apps are prohibited on my phone. I can even get banking apps working with a bit of Magisk, working in Zygisk domain with a deny list hiding it from the apps. Apparently proper SafetyNet checks aren’t that common anymore.
For browsers, I’d recommend Mull and Mulch. Mull is a privacy fork of Firefox, Mulch is a hardened version of Android System Webview (the backend browser that lots of apps use). Both come pre-installed with DivestOS.
Mozilla doesn’t have the sort of leverage to make an impact by abandoning apple devices. Firefox has an incredibly low market share and this could push people to other browsers. People tend to use the same browser for stuff like bookmark and password syncing, so abandoning ios could have larger consequences.
Yeah, I understand, but if Apple is fucking over development - not only for Firefox, but for any developer that makes apps for phones - why keep following their abusive rules? When I say “stop developing apps for Apple,” I mean any developer that dislikes Apple’s abusive rules and fees. If we abandon the platform, iOS users will need to move to Android or other systems that are friendlier to developers.
Oh yeah, generally I’d agree. With Firefox I just think it’d be better to do whatever pushes the fewest people away, for as long as it’s possible to maintain development.
Google has just unveiled a game-changing AI upgrade for Android. But it has a darker side. Google’s AI will start to read and analyze your private messages, going back forever. So what does this mean for you, how do you maintain your privacy, and when does it begin?
Smartphone privacy is about to change forever
Google’s AI to begin analyzing private messages on Android smartphones
There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”
But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”
And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.
There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans - albeit anonymized. This data will be stored for 18 months, and will persist for a few days even if you disable the AI, albeit manual deletion is available.
Such requests fall outside Google Messages’ newly default end-to-end encryption - you’re literally messaging Google itself. While this is non-contentious, it’s worth bearing in mind. Just as with all generative AI chatbots, including ChatGPT, you need to assume anything you ask is non-private and could come back to haunt you.
But message analysis is different. This is content that does (now) fall inside that end-to-end encryption shield, in a world where such private messaging is the new normal. Here the push should be for on-device AI analysis, with data never leaving your phone, rather than content uploaded to the cloud, where more processing can be put to work.
This is where the Android vs. iPhone battlefield may well come into play. Apple has historically been much stronger on on-device analysis than Google, which has defaulted to the cloud to analyze user content.
Unsurprisingly, Apple’s own moves to bring generative AI to iPhone users will take that approach—on-device analysis as the default when it comes to user content, albeit with a carve-out for its request architecture. And there’s building excitement as to what might be on offer with this fall’s iOS 18.
“Apple is quietly increasing its capabilities,” The FT reported this week, “to bring AI to its next generation of iPhones… Apple’s goal appears to be operating generative AI through mobile devices, to allow AI chatbots and apps to run on the phone’s own hardware and software rather than be powered by cloud services in data centres.”
For its part, Bard says that “Google has assured that all Bard analysis would happen on your device, meaning your messages wouldn’t be sent to any servers. Additionally, you would have complete control over what data Bard analyzes and how it uses it.”
You will have to judge whether this gives you comfort enough to let Bard loose on your private content. A word of caution. There’s a difference between what can’t be done, such as breaching end-to-end encryption, and what isn’t being done, such as policies as to where content analysis takes place. I would urge strong caution on opening up your content too freely, unless and until we have seen proper safeguards.
Bard agrees. “While Google assures on-device analysis,” it says, “any data accessed by Bard is technically collected, even temporarily. Concerns arise about potential leaks, misuse, or hidden data sharing practices. The extent of Bard’s analysis and how it uses your data should be transparent. Users deserve granular control over what data is analyzed, for what purposes, and how long it’s stored.”
Bard also warns that such data analysis might bias its results. “AI algorithms can perpetuate biases present in the data they’re trained on. Analyzing messages could lead to unintended profiling based on language, demographics, or social circles.”
This integration of generative AI chat and messaging will transform texting platforms forever, and it will quickly open up a new competitive angle between Google, Apple and Meta, whose smartphone ecosystems and apps run our lives.
“While an exact date is still unknown,” Bard says, “all signs point towards Bard’s arrival in Google Messages sometime in 2024. It could be a matter of weeks or months, but it’s definitely coming.” Meanwhile, what we’ve seen thus far remains buried deep inside a beta release and subject to change before release.
When it is live, think carefully before you unlock your Messages privacy settings. “Ultimately,” says Bard, “the decision of whether to use message analysis rests with you. Carefully weigh the potential benefits against the privacy concerns and make an informed choice based on your own comfort level and expectations.”
The analysis of your message history isn’t the only word of caution here. This deployment of Bard is just part of the shift from browser-based to directed search, and you will need to be increasingly cautious as to the quality of the results you’re being given. Bard isn’t a chat with a friend. It’s a UI sitting across the world’s most powerful and valuable advertising and tracking machine.
On which note, Bard left me with a final thought that might be better directed at its creators than its users: “Remember, you have the right to demand clarity, control, and responsible AI development from the companies you trust with your data.”
We need an antitrust law that defines a monopoly statically, by a company’s size or revenue as a percent of US GDP, US wealth, or revenue in a particular industry. Not something that allows the “well it feels fine” kind of defense these companies can pull.