Am I the only one getting agitated by the word AI?

Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

KpntAutismus,

wait for the next buzzword to come out, it’ll pass.

used gpt3 once, but haven’t had a use case for it since.

i’ll use an """AI""" assistant when they are legitimately useful.

BeigeAgenda,
@BeigeAgenda@lemmy.ca avatar

It’s still good to start training one’s AI prompt muscles and to learn what an LLM can and can’t do.

Nemo,

AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
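
(For anyone unfamiliar, here’s a toy sketch of the kind of Markov-chain text model I mean, purely my own illustration and not any particular library: it just counts which word follows which and samples forward.)

```python
# Toy bigram Markov-chain text model (illustrative only).
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": remember which word follows which.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

# "Inference": walk the chain forward from a random starting word.
word = random.choice(corpus)
generated = [word]
for _ in range(10):
    options = followers.get(word)
    if not options:
        break
    word = random.choice(options)
    generated.append(word)

print(" ".join(generated))
```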

Thorny_Insight,

Real AGI does not exist yet. AI has existed for decades.

intensely_human, (edited )

What would a “real AGI” be able to do that an LLM cannot?

edit: again, the smartest men in the room loudly proclaiming their smartness, until someone asks them the simplest possible question about what they’re claiming

Pipoca,

One low-hanging-fruit example that comes to mind is that LLMs are terrible at board games like chess, checkers, or go.

ChatGPT is a giant cheater.

Hotzilla,

GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which requires understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).

Thorny_Insight, (edited )

Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. With artificial general intelligence, however, it does. Humans possess general intelligence; we can do math, speak different languages, know how to navigate social situations, know how to throw a ball, can interpret sights, sounds, etc.

With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates from. Once it’s even slightly better than humans at writing its code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.

Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

esserstein,

Be generally intelligent ffs, are you really going to argue that LLMs posit original insight in anything?

blanketswithsmallpox,
Thorny_Insight,

Have I claimed it has changed?

TrickDacy,
@TrickDacy@lemmy.world avatar

You’re not the only one but I don’t really get this pedantry, and a lot of pedantry I do get. You’ll never get your average person to switch to the term LLM. Even for me, a techie person, it’s a goofy term.

Sometimes you just have to use terms that everyone already knows. I suspect that for decades we’ll have things that function in every way like “AI” but technically aren’t. Not saying that’s the current scenario, just looking ahead to what the improved versions of ChatGPT will be like, and other future developments that probably cannot be predicted.

Silentiea,

I don’t think the real problem is the fact that we call it AI or not, I think it’s just the level of hype and prevalence in the media.

viralJ,

I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, which I think is still the usage today. So, me, no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.

Rikj000,
@Rikj000@discuss.tchncs.de avatar

Thank you for reminding me about NPCs,
we have indeed been calling them AI for years,
even though they are not capable of reasoning on their own.

Perhaps we need a new term,
e.g. AC (Artificial Consciousness),
which does not exist yet.

The term AI still agitates me though,
since most of these are not intelligent.

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Or my co-workers,
who asked the LLMs they use the development questions I had, which have yet to generate anything useful for me / anything that actually works.

To me it feels like they are pushing their bad beta products upon us,
in the hopes that we pay to use them,
so they can use our feedback to improve them.

To me they don’t feel intelligent nor conscious.

Blueberrydreamer,

I would argue that humans also frequently give bad advice and incorrect information. We regurgitate the information we read, and we’re notoriously bad at recognizing false and misleading info.

More important to keep in mind is that the vast, vast majority of intelligence in our world is much dumber than people. If you’re expecting greater than human intelligence as your baseline, you’re going to have a wildly different definition than the rest of the world.

FooBarrington,

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Colleagues of mine have also recommended that I uninstall required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?

Rikj000,
@Rikj000@discuss.tchncs.de avatar

That humans in general aren’t?

After working 2 years on an open source ML project, I can confidently say that yes, on average, the lights ain’t that bright, sadly.

OceanSoap,

My coworker just gave me this rant the other day about AI.

the_stat_man,

In my first AI lecture at uni, my lecturer started off by asking us to spend 5 minutes in groups defining “intelligence”. No group had the same definition. “So if you can’t agree on what intelligence is, how can we possibly define artificial intelligence?”

AI has historically just described cutting edge computer science at the time, and I imagine it will continue to do so.

PrinceWith999Enemies,

I’d like to offer a different perspective. I’m a grey beard who remembers the AI Winter, when the field had so over-promised and under-delivered (think expert systems and some of Minsky’s work) that using the term was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

Rikj000,
@Rikj000@discuss.tchncs.de avatar

But what do you call a robot that teaches itself how to walk

In its current state,
I’d call it ML (Machine Learning).

A human defines the desired outcome,
and the technology “learns itself” to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
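
Something in this spirit (a made-up minimal sketch, not any real robotics code; the reward function here is a hypothetical stand-in for the human-defined “desired outcome”):

```python
# Minimal sketch of "learning" as brute-force trial and error (illustrative only).
import random

def reward(params):
    # Hypothetical human-defined outcome: pretend parameters close to the
    # target make the robot walk further.
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [random.uniform(-5.0, 5.0) for _ in range(3)]
best = reward(params)

for iteration in range(1_000_000):           # many failed attempts...
    candidate = [p + random.gauss(0.0, 0.1) for p in params]
    score = reward(candidate)
    if score > best:                          # ...keeping each slight improvement
        params, best = candidate, score
    if best > -1e-3:                          # stop once the defined outcome is met
        break

print(iteration, [round(p, 3) for p in params], best)
```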

Blueberrydreamer,

That definition would also apply to teaching a baby to walk.

PrinceWith999Enemies,

So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

NABDad,

My AI professor back in the early 90’s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

Nemo,

I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

Fedizen, (edited )

On the other hand, calculators can do things more quickly than humans; this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation; it’s just making it very fast, based on a model, and it’s prompt-driven.

Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

PrinceWith999Enemies,

I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

Fedizen,

What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

PrinceWith999Enemies,

Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

Fedizen,

It’s the ‘why’. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions even operate in a separate area of the brain from executive function, and I’d argue that defining tasks to accomplish and weighing risks is the intelligent part. Humans do all of that for the robot.

Everything we call “AI” now should be called “EI”, or “extended intelligence”, because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

PrinceWith999Enemies,

Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think the complex behaviors of an ant colony don’t involve weighing risks when deciding to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one is learning evolutionarily and the other is, at least theoretically, able to learn.

Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.

Pipoca,

Exactly.

AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

Anti_Face_Weapon,

I saw a streamer call a procedurally generated level “ai generated” and I wanted to pull my hair out

infinitepcg,

I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data, and it does inference. These are all common aspects of modern “AI” techniques.

FooBarrington,

I thought “Wavefunction Collapse” is just misnamed Monte Carlo. Where does it use training data?

Feathercrown, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.
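
That “training” step can be sketched roughly like this (my own toy illustration, not the code from the repo): slide an N×N kernel over a sample grid and count pattern frequencies, which become the weights used when collapsing each cell.

```python
# Toy sketch of WFC's pattern-counting step (illustrative, not the original implementation).
from collections import Counter

sample = [            # tiny "training" grid: 0 = sea, 1 = coast, 2 = land
    [0, 0, 1, 2],
    [0, 1, 2, 2],
    [1, 2, 2, 2],
    [2, 2, 2, 2],
]
N = 2                 # kernel size, the main model parameter

def patterns(grid, n):
    """Yield every n x n window of the grid as a tuple of tuples."""
    height, width = len(grid), len(grid[0])
    for y in range(height - n + 1):
        for x in range(width - n + 1):
            yield tuple(tuple(grid[y + dy][x + dx] for dx in range(n))
                        for dy in range(n))

weights = Counter(patterns(sample, N))
total = sum(weights.values())
for pattern, count in weights.items():
    print(pattern, count / total)   # relative frequency = collapse probability
```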

FooBarrington, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

MC is a statistical method, it doesn’t have anything to do with map generation. If you apply it to map generation, you get a “full method of map generation”, and as far as I know that is what WFC is.

To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.

Could you share the paper? Everything I read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.

Feathercrown, (edited )

Ok, so you are just talking about MC the statistical method. That doesn’t really make sense to me. Every random method will need to “roll the dice” and choose a random outcome like an MC simulation. The statement “this method of map generation is the same as Monte Carlo” (or anything similar, I know you didn’t say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (they randomly choose results) or trivially not MC (they do anything more than that).

The original GitHub repo, with examples of how the rules are generated from a “training set”: github.com/mxgmn/WaveFunctionCollapse

A paper referencing this repo as “the original WFC algorithm” (ref. 22): long google link to a PDF

Note that I don’t think the comparison to AI is particularly useful-- only technically correct that they share some similarities.

infinitepcg,

I don’t think WFC can be described as an example of a Monte Carlo method.

In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don’t have a closed form. The more you sample, the more accurate the result.
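
(A toy example of that property, assuming nothing beyond the standard library: estimating π by random sampling, where the estimate tightens as the sample count grows.)

```python
# Monte Carlo estimate of pi: the fraction of random points inside the unit
# quarter-circle approximates pi/4, and more samples give a better estimate.
import random

def estimate_pi(samples):
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

for n in (1_000, 100_000, 10_000_000):
    print(n, estimate_pi(n))
```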

In WFC, the number of random experiments depends on your map size and is not variable.

FooBarrington,

Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?

infinitepcg,

I’m not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov chain Monte Carlo, this doesn’t fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can’t be changed.

FooBarrington,

I’m not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.

infinitepcg,

it doesn’t train or self-improve like ML does

I think the training (or fitting) process is comparable to how a support vector machine is trained. It’s not iterative like SGD in deep learning, it’s closer to the traditional machine learning techniques.

But I agree that this is a pretty academic discussion, it doesn’t matter much in practice.

topperharlie,

“somewhat old” person opinion warning ⚠️

When I was in university (2002 or so) we had an “AI” lecture and it was mostly "if"s and path finding algorithms like A*.
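
(Something in that spirit, a from-memory sketch of A* on a grid with a Manhattan-distance heuristic, not the actual lecture code:)

```python
# Rough A* pathfinding sketch (illustrative only). grid[y][x] == 1 is a wall.
import heapq

def astar(grid, start, goal):
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, position, path)
    best_cost = {}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in best_cost and best_cost[pos] <= cost:
            continue
        best_cost[pos] = cost
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                heapq.heappush(frontier, (cost + 1 + heuristic((nx, ny)),
                                          cost + 1, (nx, ny), path + [(nx, ny)]))
    return None  # no path found

maze = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(astar(maze, start=(0, 0), goal=(3, 2)))
```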

So I would argue that we engineers have been using the term for a wider set of use cases since long before the LLM, CEO, and marketing people did. And I think that’s fine, as categorising algorithms/solutions as AI helps us understand what they will be used for, and we (at least the engineers) don’t tend to assume an actual self-aware machine when we hear that name.

nowadays they call that AGI, but it wasn’t always like that, back in my time it was called science fiction 😉

liwott,

@Rikj000

which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

How do you know that?

oce,
@oce@jlai.lu avatar

Yes, the AI term is used for marketing, though it didn’t start with LLMs; a couple of years before, any ML algorithm was called AI, together with the trendy data scientist job.

However, I do think LLMs are very useful, just try them for your daily tasks, you’ll see. I’m pretty sure they will become as common as a web search in the future.

Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?

LainTrain,

The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.

fidodo,

What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.

bilboswaggings, (edited )

This has been a thing for a long time

Clippy was an assistant, Cortana was an intelligent assistant and Copilot is AI

None of these are accurate, it’s always like a generation behind

Clippy just was, Cortana was an assistant, and Copilot is an intelligent assistant

The next one they make could actually be AI

alien,

It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that can solve arithmetic problems “AI”, and now here we are. Feels like the goalposts get moved further every time we get close, so I guess we’ll never have “true” AI?

So, the point is, what is AI for you?

vodkasolution,

Adobe Illustrator

alien,

hahaha couldn’t resist huh?
