Public service announcement: this article reads as if the author’s only source was asking Bard some questions. If you trust Bard enough to tell you Google’s plans, you may as well ask it when the second coming is happening, because it’ll be just as confident when it hallucinates that answer too.
I’m tired of constantly running into the basic lack of understanding that LLMs are not knowledge systems. They emulate language and produce plausible sentences. This journalist is using the output of an LLM as a source of knowledge… What a fucking disgrace this should be for Forbes.
Imagine a journalist just quoting a conversation with their 10-year-old, where they played a game of “whatever you do, you have to pretend like you really know what you’re talking about. Do not be unsure about anything, ok?”, and used the output as a source of actual facts.
If you use ChatGPT, Bard, or any other LLM for anything beyond creative output, without the comprehension required to vet what it gives you, just stop. Don’t use tools whose function and limitations you don’t understand.
I’ve already had to spend hours correcting a fundamental misconception someone picked up from ChatGPT about a safety mechanism in medical software. I’ve also had the displeasure of finding self-contradicting documentation in a README that someone had copy-pasted straight from ChatGPT.
It’s such a powerful tool if you know what it can help with. But it requires a basic understanding that too many people are either too lazy to make the effort for or simply lack the critical thinking for, and “it sounded really plausible” (the full extent of what it’s designed to do) fools them completely.
LMAO I opened the link expecting an article, and I got a steady flow of quotations with nothing to indicate who was being quoted. At the very end comes the sentence “For its part, Bard states…”, and I can think of no clearer way to display a fundamental misunderstanding of AI. Bard can’t “state” shit in any official capacity. Bard is the same caliber of LLM as GPT, and both have a documented tendency to hallucinate.
You don’t have to share your number to get spam messages - I get weekly spam texts for “Susan” (not my name), which I never interact with but which have been coming from random numbers for years.
Once your # is on a list, whether you put it there or not, it never leaves.
I’m the only one who has ever had this phone number, but if I were to swap now, there’s a 99% chance I’d get a reused number, probably already loaded onto a million different spam lists. There’s no winning.
They don’t care if you don’t answer. It costs them $0 to text you.
My profession makes me a target for a wide variety of advertising and my phone number is required by law to be listed publicly.
I already don’t. Mostly because Google Messages filters them in a way that I never even see them unless I’m actively looking. It was only when I got an iPhone that I realized exactly how horrific it is.
This alone would make me more likely to switch back to iPhone, as much as I hate the walled garden. “Just switch to a private messenger app” doesn’t really work when no one else uses them. I even got all of my family to try Signal, but they dropped it in favor of going back to iMessage. It’s extremely frustrating and far from ideal, but it is what it is.
Google reading my messages at all, even if it’s “oPt OuT”, is a complete non-starter.
They dropped it because people assumed their texts were secure just because they were using Signal, when SMS is never secure and you should assume pretty much anyone could be reading your texts anyway.
There are three big reasons why we’re removing SMS support for the Android app now: prioritizing security and privacy, ensuring people aren’t hit with unexpected messaging bills, and creating a clear and intelligible user experience for anyone sending messages on Signal.
To me, all of those reasons are BS and easily worked around. “Unexpected messaging bills?” Show a popup warning that this contact doesn’t have a Signal account and the message is about to go out as SMS, potentially incurring a cost - something like the sketch below.
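A toy sketch of that confirmation flow, in Kotlin since we’re talking about the Android app. To be clear, none of these names or types come from Signal’s actual codebase; this is just the logic I mean:

```kotlin
// Hypothetical SMS-fallback warning. Everything here is invented for
// illustration; Signal's real code looks nothing like this.

enum class Transport { SIGNAL, SMS }

data class Contact(val name: String, val hasSignalAccount: Boolean)

// Decide how a message to this contact would be delivered.
fun transportFor(contact: Contact): Transport =
    if (contact.hasSignalAccount) Transport.SIGNAL else Transport.SMS

// Ask for confirmation only when falling back to SMS,
// so the user is never billed without an explicit OK.
fun send(contact: Contact, body: String, confirmSmsFallback: () -> Boolean) {
    when (transportFor(contact)) {
        Transport.SIGNAL ->
            println("Sending \"$body\" to ${contact.name} over Signal")
        Transport.SMS ->
            if (confirmSmsFallback())
                println("Sending \"$body\" to ${contact.name} as SMS (may incur carrier charges)")
            else
                println("Send cancelled")
    }
}

fun main() {
    val dad = Contact("Dad", hasSignalAccount = false)
    // In a real app this would be a confirmation dialog; here we auto-confirm.
    send(dad, "Home by 6", confirmSmsFallback = { true })
}
```

A one-time “don’t warn me again for this contact” checkbox would handle the annoyance factor; the point is that the billing problem is a UI problem, not a reason to rip the feature out.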
Having a unified app that supports your message protocol with SMS fallback is legitimately great. I’m still bitter Signal canned that feature.
But it isn’t that big of a deal to just use two apps. It’s what I’ve had to do for a while now. Anyone I actually know goes into Signal, and I use SMS for my boss, my dad, and various companies.
Not all AI is bad. In the healthcare sector it could improve decision-making, produce personalised treatment plans, and so forth.
Obviously, the healthcare professionals will have the final say, but it’s a good tool to have. AI will not replace them, though it will streamline cumbersome processes.
I am all for AI as long as it’s used in a non-dystopian manner.
Isn’t that what the doctor is already supposed to be doing? Looking at our history, charts, and treatment attempts, and customizing? Because if not, I could just use WebMD and pay a doc to write a scrip.
There are a lot of things doctors record in evaluations, and they feed these AIs that information; instead of spitting out a “diagnosis”, the models weigh the risk of harm against the risk of skipping further investigation.
In fields like pediatrics, this reduces the need for unpleasant and dangerous procedures - a CT scan impacts a 6-year-old way more than a 30-year-old. Roughly the comparison sketched below.
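A toy sketch of that risk-vs-risk comparison; every name and number here is invented for illustration, not taken from any real clinical decision tool:

```kotlin
// Hypothetical expected-harm comparison. All fields and values are made up.

data class Assessment(
    val probSeriousInjury: Double, // model's estimate that something is actually wrong
    val harmIfMissed: Double,      // expected harm of not investigating, on a 0..1 scale
    val harmOfProcedure: Double    // expected harm of the scan itself, same scale
)

// Recommend the scan only when the expected harm of missing an injury
// outweighs the expected harm of the procedure itself.
fun recommendCtScan(a: Assessment): Boolean =
    a.probSeriousInjury * a.harmIfMissed > a.harmOfProcedure

fun main() {
    // Same symptoms, but radiation harms the child far more than the adult,
    // so the comparison comes out differently.
    val child = Assessment(probSeriousInjury = 0.02, harmIfMissed = 0.9, harmOfProcedure = 0.05)
    val adult = Assessment(probSeriousInjury = 0.02, harmIfMissed = 0.9, harmOfProcedure = 0.01)
    println("Scan the child? ${recommendCtScan(child)}") // false: 0.018 < 0.05
    println("Scan the adult? ${recommendCtScan(adult)}") // true:  0.018 > 0.01
}
```

The output isn’t a diagnosis; it’s a recommendation about whether investigating further is worth the harm of the investigation itself, which is exactly the trade-off a pediatrician is already making in their head.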
AI is extremely useful for problem domains with a ton of input, medicine being one. Doctors can only do so much and rely on algorithms just like the AI does; the AI has the benefit of doing it a fuckload faster and more accurately, and of comparing against far more relevant cases.
Don’t confuse it with generative AI; this is a very different kind of system.
There are already talks of military use, of reading all your texts, of eliminating jobs with no plan to support those who lose them, of AI-driven cars killing people, of taking all creative work from humans and leaving us the menial tasks… and that’s nowhere near a complete list. It’s already dystopian.
The thing is, when private companies are the ones that hold the tech and monetize it, shit is going to get dystopian before you can say “artichoke.” Capitalism is dystopian. Late stage capitalism even more so. And we are fast approaching a new frontier in which these same evil tech companies will wield this unbelievable power. I get it. There are good uses. But when the end goal is profit, our best interest comes second, if not last.
It’s a tool like anything else, and it’ll be used for everything like anything else. It cannot be stopped. All we can hope for are tools to mitigate the damage, and applications that outweigh the bad it’s capable of. Trying to slow it down is like trying to stop a flood with buckets. Build a boat; it will only keep rising.
Maybe I’m just old and stuck in my ways, but I don’t see the upside here. Why would I need Google’s AI to read the tone of my messages and respond appropriately? They’re my messages. People send me messages to talk to me. In what world do I want to remove myself from that process?
If my wife texts me and says “when are you going to be home from work?” I don’t want an AI looking at my chat history and making a guess. I want to tell her what I have going on right now and respond. If a friend asks me if I wanna hang out this weekend, I don’t want AI checking my calendar and seeing I’m free and then agreeing to plans. I want to think about it and come to a decision myself.
Can someone smarter than me point to an actual good use case for this?
This is almost like when people traded baubles to unknowing natives for valuable things. Corporations make these inane features and expect us to pay for them by handing over all our information. They don’t even bother to ask; they just assume we’re OK with it by having all this crap on by default.
Suppressing repeated notifications has been a thing since messaging has been a thing; if your service doesn’t offer it, find a better one. Also, this is a hypothetical, and a bad one: why would you not simply read the texts?