Used as training data, or used as prompts to give further context? The former would be very troubling since it’d then be available to anyone able to engineer the right prompt. But I suspect they’re looking at doing the latter.
Read the article! The change isn’t live yet…and you can likely disable it once it drops.
“While an exact date is still unknown,” Bard says, “all signs point towards Bard’s arrival in Google Messages sometime in 2024. It could be a matter of weeks or months, but it’s definitely coming.” Meanwhile, what we’ve seen thus far remains buried deep inside a beta release and subject to change before release.
Public service announcement: this article seems to be written like their only source was asking Bard some questions. If you trust Bard enough to tell you Google’s plans, you may as well be asking it when the second coming is happening, because it’ll be just as confident when it hallucinates that answer too.
I’m tired of constantly running into the basic lack of understanding that LLMs are not knowledge systems. They emulate language and produce plausible sentences. This journalist is using the output of an LLM as a source of knowledge… What a fucking disgrace this should be for Forbes.
Imagine a journalist just quoting a conversation with their 10-year-old, where they played a game of “whatever you do, you have to pretend like you really know what you’re talking about. Do not be unsure about anything, ok?”, and then used the output as a source of actual facts.
If you use ChatGPT, Bard, or any other LLM for anything beyond creative output, without the comprehension required to vet what it produces, just stop. Don’t use tools whose function and limitations you don’t understand.
I’ve already had to spend hours correcting a fundamental misconception someone got from ChatGPT, which was part of a safety mechanism of medical software. I’ve also had the displeasure of finding self-contradicting documentation someone placed in a README, which was a copy-paste from ChatGPT.
It’s such a powerful tool and utility if you know what it can help with. But it requires a basic understanding that too many people are either too lazy to put in the effort for, or simply lack the critical thinking for, and “it sounded really plausible” (the full extent of what it’s designed to do) fools them completely.
LMAO I opened the link expecting an article, and I got a steady flow of quotations, but nothing to indicate who is being quoted. At the very end, the sentence “For its part, Bard states…” is used, and I can think of no clearer way to display your fundamental misunderstanding of AI. Bard can’t “state” shit in any official capacity. Bard is the same caliber of LLM as GPT, and both have a documented tendency to hallucinate.
You don’t have to share your number to get spam messages - I get weekly spam texts for “Susan” (not my name), which I never interact with but have been coming from random numbers for years.
Once your # is on a list, whether you put it there or not, it never leaves.
I’m the only one who has ever had this phone no., but if I were to swap now, 99% chance I’d get a reused number, which would probably come already loaded on a million different spam lists. There’s no winning.
They don’t care if you don’t answer. It costs them $0 to text you.
My profession makes me a target for a wide variety of advertising and my phone number is required by law to be listed publicly.
I already don’t. Mostly because Google Messages filters them in a way that I never even see them unless I’m actively looking. It was only when I got an iPhone that I realized exactly how horrific it is.