I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, and I think it still is today. So no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.
Thank you for reminding me about NPCs; we have indeed been calling them AI for years, even though they are not capable of reasoning on their own. Perhaps we need a new term, e.g. AC (Artificial Consciousness), for something which does not exist yet. The term AI still agitates me though, since most of these systems are not intelligent.

For example, earlier this week I saw a post on Lemmy where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro. Or my co-workers, who asked development questions I had to the LLMs they use, which have yet to generate anything useful for me / anything that actually works. To me it feels like they are pushing their bad beta products upon us in the hope that we pay to use them, so they can use our feedback to improve them.
I would argue that humans also frequently give bad advice and incorrect information. We regurgitate the information we read, and we’re notoriously bad at recognizing false and misleading info.
More important to keep in mind is that the vast, vast majority of intelligence in our world is much dumber than people. If you’re expecting greater than human intelligence as your baseline, you’re going to have a wildly different definition than the rest of the world.
For example, earlier this week I saw a post on Lemmy where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.
Colleagues of mine have also recommended that I uninstall required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?
You’re not the only one, but I don’t really get this pedantry, and I do get a lot of pedantry. You’ll never get your average person to switch to the term LLM. Even for me, a techie person, it’s a goofy term.
Sometimes you just have to use terms that everyone already knows. I suspect that for decades we will have things that function in every way like “AI” but technically aren’t. Not saying that’s the current scenario, just looking ahead to what the improved versions of ChatGPT will be like, and other future developments that probably cannot be predicted.
What would a “real AGI” be able to do that an LLM cannot?
Edit: again, the smartest men in the room loudly proclaiming their smartness, until someone asks them the simplest possible question about what they’re claiming
GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which requires understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).
Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t carry over across a variety of fields. Your self-driving car can’t help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence: we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.
With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. That also includes writing its own code. This is where the worry about an intelligence explosion originates. Once it’s even slightly better than humans at writing its own code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.
Edit: Another feature that an AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.
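To make the chain-reaction point concrete, here is a toy simulation in Python. The growth rule and every number in it are made up purely for illustration; this shows the shape of the feedback loop, not a claim about any real system:

```python
# Toy chain-reaction model: each generation rewrites its own code, and
# the improvement it achieves scales with how capable it already is.
# All numbers are invented for illustration.
capability = 1.0                      # assume 1.0 = human level at coding
for generation in range(1, 11):
    improvement = 0.1 * capability    # smarter versions improve more
    capability += improvement
    print(f"gen {generation:2d}: {capability:5.2f}x human level")
```

If each generation’s improvement is proportional to how capable it already is, you get compounding growth, which is exactly why the “we might not be able to stop it” worry exists.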
AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
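For anyone curious what a Markov-chain text model actually looks like, here is a minimal sketch in Python (the corpus and the function names are my own illustrative choices, not any particular library’s API):

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order Markov chain: map each word to the list of
    words observed immediately after it in the training text."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=12):
    """Walk the chain from a start word, sampling each next word."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:          # dead end: no observed successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```

Each word only “knows” the words that directly followed it in the training text, which is why these models ramble. LLMs condition on far more context, but the generate-one-token-at-a-time flavour is similar.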
Have you looked into the one-time expense of buying an air fryer? You can make your own chips/fries/etc., which are both cheaper and healthier. Obviously you have to buy the appliance, but it pays off in terms of health and groceries eventually. Like, crackers are usually loaded with crap ingredients. You could air fry some potatoes in a little spray of healthy oil for a dollar or two and do your wallet and your heart a solid, AND you’re still getting your daily allotment of potatoes lol
You don’t need the gadget. You can make these things with a normal stove and oven. As someone who cooks a lot, I can say this: someone gave me one of these for Xmas, and it’s a damn convection oven. A tiny one worth way too much money. Learn to use the appliances you have and stop with the useless gadgets.
It is a convection oven, but most people don’t have a fancy oven with a convection feature. Yeah, you can make it in the oven, but it comes out better in the air fryer, and mine heats up in literally one minute. I can use it in summer because it doesn’t add nearly as much heat to my house, etc. It’s way more convenient than using the massive oven for a plate of fries or something, and I can even cook an entire pizza in the air fryer I got using the bake setting, which again is just much easier and more convenient for me.