AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
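To be concrete about what a “smaller Markov-chain model” means here, a minimal sketch of a word-level bigram text generator is below; the corpus and variable names are invented purely for illustration:

```python
import random
from collections import defaultdict

# Tiny word-level Markov chain: the kind of "AI" text generator
# that long predates LLMs. The corpus is made up for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build a bigram transition table: which words follow each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate text by repeatedly sampling an observed next word.
word = "the"
output = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```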
What would a “real AGI” be able to do that an LLM cannot?
edit: again, the smartest men in the room loudly proclaiming their smartness, until someone asks them the simplest possible question about what they’re claiming
GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which require understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).
Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t carry over across a variety of fields. Your self-driving car can’t help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence; we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.
With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates. Once it’s even slightly better than humans at writing its code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.
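The arithmetic behind that chain-reaction worry is easy to sketch. This is only a toy model, and every number in it is invented:

```python
# Toy model of recursive self-improvement (all numbers are invented).
# Each generation rewrites its successor as well as its current
# capability allows, so the improvement compounds on itself.
capability = 1.05  # starts out just 5% better than humans at writing its code
for generation in range(1, 11):
    capability *= capability  # the improver improves its own improving
    print(f"generation {generation}: {capability:.3g}x human level")
```

Whether real systems would compound like this is exactly what’s disputed, but it shows why “even slightly better” is the part people worry about.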
Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.
You’re not the only one but I don’t really get this pedantry, and a lot of pedantry I do get. You’ll never get your average person to switch to the term LLM. Even for me, a techie person, it’s a goofy term.
Sometimes you just have to use terms that everyone already knows. I suspect that for decades we will have things that function in every way like “AI” but technically aren’t. Not saying that’s the current scenario, just looking ahead to what the improved versions of ChatGPT will be like, and other future developments that probably cannot be predicted.
I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, which I think is still the case today. So, me, no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.
Thank you for reminding me about NPCs; we have indeed been calling them AI for years, even though they are not capable of reasoning on their own. Perhaps we need a new term, e.g. AC (Artificial Consciousness), which does not exist yet.

The term AI still agitates me though, since most of these are not intelligent. For example, earlier this week I saw a post on Lemmy where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro. Or my co-workers, who have put development questions I had to the LLMs they use, which have yet to generate anything useful for me / anything that actually works.

To me it feels like they are pushing their bad beta products onto us, in the hopes that we pay to use them, so they can use our feedback to improve them.
I would argue that humans also frequently give bad advice and incorrect information. We regurgitate the information we read, and we’re notoriously bad at recognizing false and misleading info.
More important to keep in mind is that the vast, vast majority of intelligence in our world is much dumber than people. If you’re expecting greater than human intelligence as your baseline, you’re going to have a wildly different definition than the rest of the world.
> For example, earlier this week I saw a post on Lemmy where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.
Colleagues of mine have also recommended that I uninstall required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?
In my first AI lecture at uni, my lecturer started off by asking us to spend 5 minutes in groups defining “intelligence”. No group had the same definition. “So if you can’t agree on what intelligence is, how can we possibly define artificial intelligence?”
AI has historically just described cutting edge computer science at the time, and I imagine it will continue to do so.