Am I the only one getting agitated by the word AI?

Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet;
at the moment we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(fool humans into thinking that they can think).

IMO, AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who have already invested in LLM stocks,
and are now looking for a profit.

the_stat_man,

In my first AI lecture at uni, my lecturer started off by asking us to spend 5 minutes in groups defining “intelligence”. No group had the same definition. “So if you can’t agree on what intelligence is, how can we possibly define artificial intelligence?”

AI has historically just described cutting edge computer science at the time, and I imagine it will continue to do so.

topperharlie,

“somewhat old” person opinion warning ⚠️

When I was in university (2002 or so) we had an “AI” lecture and it was mostly "if"s and path finding algorithms like A*.
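
For a flavour of what those assignments looked like, here's a minimal A* sketch on a toy grid; the grid and heuristic are made up for illustration, not taken from any actual coursework:

```python
# A toy A* pathfinding example of the sort a 2000s "AI" course might assign.
# The grid, start/goal and Manhattan heuristic are invented for illustration.
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 4-connected grid where 0 = free and 1 = wall."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Each heap entry is (f = g + h, g, node, path-so-far).
    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(step), g + 1, step, path + [step]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # walks around the wall in the middle row
```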

So I would argue that we engineers have been using the term for a wider range of use cases since long before the LLM, CEO and marketing people did. And I think that’s fine, as categorising algorithms/solutions as AI helps us understand what they will be used for, and we (at least the engineers) don’t tend to assume an actual self-aware machine when we hear that name.

Nowadays they call that AGI, but it wasn’t always like that; back in my time it was called science fiction 😉

aulin,

LLMs are AI. Lots of things are. They’re just not AGI.

platypus_plumba,

I have no idea what makes them say LLMs are not AI. They are definitely built on simulated neurons in the background.

VR20X6,

Right? Computer opponents in Starcraft are AI. Nobody sane is arguing they aren’t. They just aren’t AGI, nor are they even based on neural networks. But they’re still AI.

paddirn,

I think we’ll be so desensitized to the term “A.I.” that when it actually does happen, we won’t realize what’s happened until after the fact. It’ll happen so gradually that we’ll just be like, “Wait… I think it’s actually thinking real thoughts.”

viralJ,

I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, which I think is still the case today. So, me, no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.

Rikj000,
@Rikj000@discuss.tchncs.de

Thank you for reminding me about NPCs,
we have indeed been calling them AI for years,
even though they are not capable of reasoning on their own.

Perhaps we need a new term,
e.g. AC (Artificial Consciousness),
which does not exist yet.

The term AI still agitates me though,
since most of these systems are not intelligent.

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Or my co-workers,
who put the development questions I had to the LLMs they use, which have yet to generate anything useful for me / anything that actually works.

To me it feels like they are pushing their bad beta products upon us,
in the hopes that we pay to use them,
so they can use our feedback to improve them.

To me they feel neither intelligent nor conscious.

Blueberrydreamer,

I would argue that humans also frequently give bad advice and incorrect information. We regurgitate the information we read, and we’re notoriously bad at recognizing false and misleading info.

More important to keep in mind is that the vast, vast majority of intelligence in our world is much dumber than people. If you’re expecting greater than human intelligence as your baseline, you’re going to have a wildly different definition than the rest of the world.

FooBarrington,

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Colleagues of mine have also recommended uninstalling required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?

Rikj000,
@Rikj000@discuss.tchncs.de

That humans in general aren’t?

After working 2 years on an open-source ML project, I can confidently say that yes, on average, the lights ain’t that bright, sadly.

suodrazah,

I call it a probability box.

31337,

AI is simply a broad field of research and a broad class of algorithms. It is annoying that the media keeps using the most general term possible to describe chatbots and image generators, though. Like, we typically don’t call Spotify playlist generators AI, even though they use recommendation algorithms, which are a subclass of AI algorithms.

kandoh,

Businesses always do this. AI is popular? Insert that word into every page of the deck. It sucks.

swordsmanluke,

AI is a forever-in-the-future technology. When I was in school, fuzzy logic controllers were an active area of “AI” research. Now they are everywhere and you’d be laughed at for calling them AI.

The thing is, as soon as AI researchers solve a problem, that solution no longer counts as AI. Somehow it’s suddenly statistics or “just if-then statements”, as though using those techniques makes something not artificial intelligence.

For context, I’m of the opinion that my washing machine - which uses sensors and fuzzy logic to determine when to shut off - is a robot containing AI. It contains sensors, makes judgements based on its understanding of “the world” and then takes actions to achieve its goals. Insofar as it can “want” anything, it wants to separate the small masses from the large masses inside itself and does its best to make that happen. As tech goes, it’s not sexy, it’s very single purpose and I’m not really worried that it’s gonna go rogue.
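
To make that concrete, here’s a toy fuzzy controller in the same spirit; the membership ranges, rules and spin times are invented for illustration, not taken from any real appliance:

```python
# A toy fuzzy-logic controller in the spirit of that washing machine.
# Membership ranges, rules and spin times are invented for illustration.

def falling(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, and linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def rising(x, lo, hi):
    """Mirror image of falling(): 0 below lo, 1 above hi."""
    return 1.0 - falling(x, lo, hi)

def spin_minutes(load_kg, turbidity):
    """Map fuzzy load-size and water-dirtiness readings to a spin time."""
    light = falling(load_kg, 2, 6)        # degree to which the load is "light"
    heavy = rising(load_kg, 2, 6)         # degree to which it is "heavy"
    clean = falling(turbidity, 0.2, 0.8)  # water looks clean to the sensor
    dirty = rising(turbidity, 0.2, 0.8)   # water still looks dirty

    # Each rule fires with strength min(...) (fuzzy AND) and votes for a spin time.
    rules = [
        (min(light, clean), 5),    # light load, clean water  -> short spin
        (min(heavy, clean), 10),   # heavy load, clean water  -> medium spin
        (min(light, dirty), 12),   # light load, dirty water  -> longer spin
        (min(heavy, dirty), 18),   # heavy load, dirty water  -> longest spin
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 10  # fallback if no rule fires at all
    # Weighted average of the votes (a simple centroid-style defuzzification).
    return sum(strength * minutes for strength, minutes in rules) / total

print(spin_minutes(load_kg=5, turbidity=0.7))  # heavy-ish, still dirty -> ~14 minutes
```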

We are surrounded by (boring) robots all day long. Robots that help us control our cars and do our laundry. Not to mention all the intelligent, disembodied agents that do things like organize our email, play games with us, and make trillions of little decisions that affect our lives in ways large and small.

Somehow, though, once the mystery has yielded to math, society doesn’t believe these decision-making machines are AI any longer.

Dasnap,
@Dasnap@lemmy.world

I assume you’re referring to the sci-fi kind of self-aware AI because we’ve had ‘artificial intelligence’ in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it’s still all a facade. In those cases, people only really cared about the outputs and weren’t trying to argue they were alive or anything.

But yeah, I get what you mean.

Phoonzang,

Part of my work is to evaluate proposals for research topics and their funding, and as soon as “AI” is mentioned, I’m already annoyed. In the vast majority of cases, justifiably so. It’s a buzzword to make things sound cutting-edge, and it very rarely carries any meaning or actually adds anything to the research proposal. A few years ago the buzzword was “machine learning”, and before that “big data”; same story. Those, however, quickly either went away or people started to use them properly. With AI, I’m unfortunately not seeing that.

Despair,
@Despair@lemmy.world

A lot of the comments I’ve seen promoting AI sound very similar to the ones made around the time GME was relevant, or around cryptocurrency. Often the conversations sounded very artificial, and the person just ended up repeating buzzwords and echo-chamber talking points instead of actually demonstrating that they have an understanding of what the technology is or its limitations.

Kedly,

People keep saying this, but AI has been used for subroutines nowhere near actual artificial intelligence for at LEAST as long as video games have existed.

Skyhighatrist,

Much, much longer than that. The term has been used since AI began as a field of study in the 50s, and it has never referred to human-level intelligence. Sure, that was the goal, but all of the different sub-branches of AI are still AI, whether it’s expert systems, LLMs, decision trees, etc., etc., etc. AI is a broad term that covers the entire spectrum, and always has been. People who complain about it just want AI to refer only to AGI, which already has a term: AGI.

oce,
@oce@jlai.lu

Yes, the term AI is used for marketing, though it didn’t start with LLMs; a couple of years before, any ML algorithm was being called AI, along with the trendy data scientist job title.

However, I do think LLMs are very useful, just try them for your daily tasks, you’ll see. I’m pretty sure they will become as common as a web search in the future.

Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?

pl_woah,

I’m pissed that large corps are working hard on propaganda saying that LLMs and copyright theft are good as long as they’re the ones doing it.
