Am I the only one getting agitated by the word AI?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalist a-holes,
who already invested in LLM stocks,
and are now looking for a profit.

PonyOfWar,

The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

jimmy90,

it does not “think”

Jakdracula,

AI is 100% a marketing term.

Meowoem,

It’s a computer science term that’s been used for this field of study for decades; it’s like saying that calling a tomato a fruit is a marketing decision.

Yes, it’s somewhat common outside computer science to expect an artificial intelligence to be sentient, because that’s how movies use it. John McCarthy’s proposal, which coined the term in 1956, is available online if you want to read it.

jimmy90,

yep and it has always been a leading misnomer like most marketing terms

Kedly,

People keep saying this, but AI has been used for subroutines nowhere near actual artificial intelligence for at LEAST as long as video games have existed.

Skyhighatrist,

Much much longer than that. The term has been used since AI began as a field of study in the 50s. And it’s never referred to human level intelligence. Sure, that was the goal, but all of the different sub branches of AI are still AI. Whether it’s expert systems, LLMs, decision trees, etc, etc, etc. AI is a broad term that covers the entire spectrum, and always has been. People that complain about it just want AI to only refer to AGI, which already has a term. AGI.

aulin,

LLMs are AI. Lots of things are. They’re just not AGI.

platypus_plumba,

I have no idea what makes them say LLMs are not AIs. There are definitely simulated neurons in the background.
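
To make that concrete: a single simulated neuron is just a weighted sum pushed through a squashing function, and LLMs stack billions of them. A toy Python sketch (the numbers are made up for illustration):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid activation into the range (0, 1).
    return 1 / (1 + math.exp(-z))

# Example values are arbitrary; real networks learn these weights.
print(neuron([0.5, -1.2], [0.8, 0.3], 0.1))
```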

VR20X6,

Right? Computer opponents in Starcraft are AI. Nobody sane is arguing it isn’t. It just isn’t GAI nor is it even based on neural networking. But it’s still AI.

LucidNightmare,

I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people. To me, it’s as useful as a toy is. Fun for a minute or two, and then you’re just reminded how awful it is and drop it in the bin to play with when you’re bored enough to.

kameecoding,

I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people.

i.imgflip.com/2p3dw0.jpg?a473976

This is nothing but the latest craze: it was drones, then crypto, then the Metaverse, now it’s AI.

PraiseTheSoup,

Metaverse was never a craze. Facebook would like you to believe it has more than a dozen users, but it doesn’t.

evranch,

To me, it’s as useful as a toy is.

This used to be my opinion; then I started using local models to help me write code. It’s very useful for that: to automate rote work like writing header files, function descriptions, etc., or even to spit out algorithms so that I don’t have to look them up.
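
A minimal sketch of that workflow, assuming an Ollama-style local server; the URL, model name, and prompt are just examples for illustration:

```python
import json
import urllib.request

# Ask a locally hosted model to draft a rote header comment.
# Endpoint and model name are assumptions - adjust for your own setup.
payload = {
    "model": "codellama",
    "prompt": "Write a C header comment for: int clamp(int v, int lo, int hi);",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```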

However there are indeed many applications that AI is completely useless for, or is simply the wrong tool.

While a diagnostic AI onboard in my car would be “useful”, what is more useful is a well-documented industry standard protocol like OBD-II, and even better would be displaying the fault right on the dashboard instead of requiring a scan tool.

Conveniently none of these require a GPU in the car.
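
To illustrate the point, here’s roughly what a fault readout looks like with the python-obd library and a standard ELM327 scan tool (the serial port name is an assumption; obd.OBD() can also auto-detect):

```python
import obd  # pip install obd

# OBD-II is a plain request/response protocol - no GPU required.
connection = obd.OBD("/dev/ttyUSB0")  # port name is an assumption

# Query the stored diagnostic trouble codes.
response = connection.query(obd.commands.GET_DTC)
for code, description in response.value:  # e.g. ("P0301", "Cylinder 1 Misfire Detected")
    print(code, description)
```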

Thorny_Insight,

Real AGI does not exist yet. AI has existed for decades.

intensely_human, (edited )

What would a “real AGI” be able to do that an LLM cannot?

edit: again, the smartest men in the room loudly proclaiming their smartness, until someone asks them the simplest possible question about what they’re claiming

Pipoca,

One low hanging fruit thing that comes to mind is that LLMs are terrible at board games like chess, checkers or go.

ChatGPT is a giant cheater.
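
The cheating is easy to catch mechanically, by the way. A sketch using the python-chess library, with a hypothetical model suggestion:

```python
import chess  # pip install python-chess

board = chess.Board()
board.push_san("e4")
board.push_san("e5")

llm_move = "Nxe5"  # hypothetical LLM suggestion - no knight can reach e5 here
try:
    board.push_san(llm_move)
    print("legal move")
except ValueError:
    print(llm_move, "is illegal here - the model lost track of the board")
```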

Hotzilla,

GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (in positions not found on the internet, which require understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).

Thorny_Insight, (edited )

Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. With artificial general intelligence, however, it does. Humans possess general intelligence; we can do math, speak different languages, know how to navigate social situations, know how to throw a ball, can interpret sights, sounds, etc.

With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates from. Once it’s even slightly better than humans at writing its code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.

Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

esserstein,

Be generally intelligent, ffs. Are you really going to argue that LLMs posit original insight in anything?

Thorny_Insight,

Have I claimed it has changed?

intensely_human,

Of course we have “real” AI. We can literally be surprised while talking to these things.

People who claim it’s not general AI consistently, 100% of the time, fail to answer this question: what can a human mind do that these cannot?

In precise terms. You say “a human mind can understand”; then I need a precise technical definition of “understand”. Because the people making this claim that “it’s not general AI” are always trying to wave their own flag of technical expertise. So, in technical terms, what can a general AI do that an LLM cannot?

Vlyn, (edited )

Go and tell your LLM to click a button, or log into your Amazon account, or send an email, or do literally anything that’s an action. I’m waiting.

A 4-year-old has more agency than your “AI” nowadays. LLMs are awesome at spitting out text, but they aren’t true AI.

Edit: I should add, LLMs only work with input. If there’s no input there is no output. So whatever you put in there, it will just sit there forever doing nothing until you give it an input again. It’s much closer to a mathematical function than any kind of intelligence that has its own motivation and can act on its own.
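
A toy sketch of what I mean (everything here is a made-up stand-in): the “model” is a pure function from text to text, and any apparent agency is an outer loop that a human wrote:

```python
def llm(prompt: str) -> str:
    # Stand-in for a model call: text in, text out, nothing else.
    # Between calls it does nothing at all - it has no loop of its own.
    canned = {"What is 2+2?": "4"}  # a real model derives this from its weights
    return canned.get(prompt, "...")

# Any apparent agency is an outer loop that a human wrote:
done = False
steps = 0
while not done and steps < 3:
    reply = llm("What is 2+2?")
    done = reply == "4"  # the human defined success, not the model
    steps += 1
print("finished after", steps, "call(s)")
```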

intensely_human,

Go tell a Kalahari Bushman to click a button, or log into your Amazon account, or send an email, or literally anything you don’t place in front of him as an option.

Is your whole point just that it would be AGI if it weren’t for those darned shackles, but it’s not AGI because we give it restrictions on sending POST requests?

Vlyn,

Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid. You could take a human from 1000 years ago and after they stop flipping out about computers and modern technology you’d be able to teach them to click a button in seconds to minutes (depending on how complex you make the task).

General AI can take actions on its own (unprompted) and it can learn, basically modifying its own code. If anyone ever comes up with a real AI we’d go towards the Singularity in no time (as the only limit would be processing power and the AI could then invest time into improving the hardware it runs on).

There are no “shackles” on ChatGPT; it’s literally an input-output machine. A really damn good one, but nothing more than that. It can’t even send a POST request. Sure, you could sit a programmer down, parse the output, then do a request whenever ChatGPT mentions certain keywords with a payload. Of course that works, but then what? You have a dumb chatbot firing random requests, and if you try to feed the result of those requests back in, it’s going to get jumbled up with the text input you made beforehand. Every single action you want an LLM to take, you’d have to manually program.

intensely_human,

Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid

Oh you bastard. You actually tried to reframe my words into exactly the opposite of what I was saying.

I did not use a Kalahari Bushman as an example of a stupid person. I used a Kalahari Bushman as an example of a general intelligence as smart as you or I, who can’t press buttons or buy things on Amazon for reasons of access not capability.

I need to cool down before I read the rest of your comment. Not cool dude, trying to twist what I said into some kind of racist thing. Not cool.

liwott,

@Vlyn
@intensely_human

send an email

ChatGPT can explain to me what to do in a CLI to send an e-mail. Give it access to a CLI and an internet connection and it will be able to do it itself.

intensely_human,

Exactly. Someone demonstrated an “AI that can turn on your lights” and then had a script checking for output like {turnOnLights} and translating that to API calls
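
Something like this minimal sketch; the token format and the lights “API” are invented for illustration:

```python
import re

def turn_on_lights():
    print("POST /lights/on")  # stand-in for the real API call

def handle(model_output: str):
    # The model only ever emits text; human-written code scans that text
    # for an agreed-upon token and performs the actual action.
    if re.search(r"\{turnOnLights\}", model_output):
        turn_on_lights()

handle("Sure, let me brighten the room. {turnOnLights}")
```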

Vlyn,

Which again is literally just text and nothing more.

No matter how sophisticated ChatGPT gets, it will never be able to send the email itself. Of course you could pipe the output of ChatGPT into a CLI, then tell ChatGPT to only write bash commands (or whatever you use) with every single detail involved, and then it could possibly send an email (if you’re lucky and it only uses valid commands and literally no other text in the output).

But you can never just tell it: Send an email about x, here is my login and password, send it to whatever@email.com with the subject y.

Not going to work.

uriel238, (edited )

AI has, for a long time, been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy). I wrote a 2019 blog piece on what it means when we talk about AI stuff.

Here are some alternative terms you can use in place of AI when people are actually talking about something else:

  • AGI (Artificial General Intelligence): The big kahuna that doesn’t exist yet, which many projects are striving for, yet which remains as elusive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee, or of assembling your flat-packed furniture from the visual IKEA instructions. Since we still can’t define sentience, we don’t know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder their creator or end humanity, but probably not.
  • LLM (Large Language Model): This is the engine behind digital assistants like Siri or Alexa, and it still suffers from nuance problems. I’m used to having to ask them several times to get the results I want (say, the Starbucks or Peets that requires the least deviation from the next hundred kilometers of my route. Siri can’t do that.) This is an application of learning systems (see below), but it isn’t smart enough for your household servant bot to replace your hired help.
  • Learning Systems: The fundamental programmatic magic that powers all this other stuff, from simple data scrapers to neural networks. These are used in a whole lot of modern applications, and have been since the 1970s. But they’re very small compared to the things we’re trying to build with them. Most of the time we don’t actually call them AI, even for marketing. It’s just the capacity for a program to get better at doing its thing from experience.
  • Gaming AI: Not really AI (necessarily), but a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It’s often really simple, except in strategy games, which can feature robust enough computational power to challenge major international chess guns.
  • Generative AI: A term for LLMs that create content: say, drawing pictures or writing essays, or doing other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired art to specifications. They’re commonly confused by nuance. They infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet Generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle Generative AI, and Lemmy has a few communities devoted to techs honing their picture-generation skills and stress-testing the nuance-interpretation capacity of Generative AI (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
  • Technological Singularity: A fair bit off, since it requires AGI that is capable of designing its successor; lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize this is how human beings (happily) go extinct, either left to retire in a luxurious paradise, or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won’t happen thanks to all the crises featuring global catastrophic risk.
  • AI Snake Oil: There’s not yet an official name for it, but it’s a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to enhance their own revenues, typically by replacing some of their workers with generative AI systems, so that instead of having a development team, they have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters trying to suggest their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs and no longer do, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we’ll have industrial quackery being sold to executive management, promising to replace human workers or force them to work harder for less or something.
  • Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill, and then kill it). A large coalition of technology ethicists wants to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to their human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don’t need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws, or a really precise cutting laser) and can use it, and to realize that turning its weapon on its commanding officer might expedite completing its mission. Friendly AI would choose not to do that. Unfriendly AI will consider its less loyal options more thoroughly.

That’s a bit of a list, but I hope it clears things up.

ipkpjersi,

I remember when OpenAI were talking like they had discovered AGI, or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that was not true, and honestly we may never get there, but we might.

Good list tbh.

Personally I’m excited and cautious about the future of AI because of the ethical implications of it and how it could affect society as a whole.

Kolanaki, (edited )

The only time I was agitated by it was with the George Carlin thing.

It pissed me off that it was done without permission. It annoyed me that “AI” also kinda looks like “AL” with a lowercase L and, when next to another name, makes it read like AL CARLIN or AL GEORGE. And it divided me somewhat, because I watched the damn special and it was mostly funny and did feel like Carlin’s style (though it certainly didn’t sound right and it had timing issues). So, like… It wasn’t shit in and of itself, but the nature of what it is and the fact it was done without permission or consent is concerning. Shame on Will Sasso for that. He could have just done his own impersonation and written his own jokes in the style of Carlin; it would have been a far better display of respect and appreciation than having an AI do it.

I don’t think he’s a sick and disgusting person for this; even before it all blew up, it seemed more like a tribute to a comedian he adored. Just a poorly thought out way of doing one that may have some pretty hard consequences.

iarigby,

Despite the presentation as an AI creation, there was a good deal of evidence that the Dudesy podcast and the special itself were not actually written by an AI, as Ars laid out in detail this week. And in the wake of this lawsuit, a representative for Dudesy host Will Sasso admitted as much to The New York Times.

arstechnica.com/…/george-carlins-heirs-sue-comedy…

Kolanaki,

Just further evidence Sasso could have done the impersonation himself and it would have been a fine tribute (and had better timing and delivery), but he used an AI to replicate Carlin’s voice and mannerisms instead. Sure, I don’t think he could have done a great job of impersonating how Carlin sounds, but the mannerisms and delivery would have been enough, and that’s something he should be pretty good at considering his time on MadTV, where he did a lot of impersonation stuff (such as his Steven Seagal character).

PrinceWith999Enemies,

I’d like to offer a different perspective. I’m a greybeard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

Rikj000,

But what do you call a robot that teaches itself how to walk

In its current state,
I’d call it ML (Machine Learning).

A human defines the desired outcome,
and the technology “learns itself” to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
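
A toy sketch of that loop, with a made-up numeric goal standing in for the human-defined outcome:

```python
import random

target = 42.0  # the human defines the desired outcome
guess = 0.0

for epoch in range(10_000):  # "millions of failed attempts", scaled down
    candidate = guess + random.uniform(-1, 1)
    # Keep the change only if it slightly improves on the last attempt.
    if abs(candidate - target) < abs(guess - target):
        guess = candidate

print(round(guess, 3))  # converges on the outcome the human chose
```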

Blueberrydreamer,

That definition would also apply to teaching a baby to walk.

PrinceWith999Enemies,

So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

NABDad,

My AI professor back in the early 90’s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

Nemo,

I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

Fedizen, (edited )

On the other hand, calculators can do things more quickly than humans; this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation; it’s just making it very fast, based on a model, and it’s prompt-driven.

Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

PrinceWith999Enemies,

I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

Fedizen,

What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

PrinceWith999Enemies,

Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

Fedizen,

It’s the ‘why’. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions operate in a separate area of the brain from executive function, and I’d argue that defining tasks to accomplish and weighing risks is the intelligent part. Humans do all of that for the robot.

Everything we call “AI” now should be called “EI”, or “extended intelligence”, because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

PrinceWith999Enemies,

Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think the complex behaviors of an ant colony aren’t weighing risks when deciding to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one is learning evolutionarily and the other, at least theoretically, is able to learn within its own lifetime.

Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.

Pipoca,

Exactly.

AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

hperrin, (edited )

I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.

What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, be able to tell the difference?

As far as your question though, I’m agitated too, but more about things being marketed as AI that either shouldn’t have AI or don’t have AI.

okamiueru,

Maybe I’m just a little bit too familiar with it, but I don’t find LLMs particularly convincing as anything I would call “real AI”. But I suppose that entirely depends on what you mean by “real”. Their flaws are painfully obvious. I even use ChatGPT 4 in hopes of it being better.

usualsuspect191,

The only thing I really hate about “AI” is how many damn fonts barely differentiate between a capital “i” and lowercase “L” so it just looks like everyone is talking about some guy named Al.

“Al improves efficiency in…” Oh, good for him

KammicRelief,

Right! Now I need to add extra clarification when I talk about Weird Al…

swordsmanluke,

To be fair, writing parody songs with weird AI is 100% a thing you can do online now.

intensely_human,

Sam sung something for Al I heard

pearsaltchocolatebar,

I got Proton to change their font for their password manager because of this.

I just happen to have a few generated passwords that contain both, plus the pipe symbol, and some of them I occasionally have to type manually.
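
For anyone fighting the same fonts, a quick sketch that simply generates passwords without the confusable glyphs (the exclusion set is just my own pick):

```python
import secrets
import string

AMBIGUOUS = set("Il1|O0")  # glyphs many fonts render near-identically
alphabet = [c for c in string.ascii_letters + string.digits + string.punctuation
            if c not in AMBIGUOUS]

password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)
```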

TheIllustrativeMan,

Don’t they use different colors for capital vs lowercase vs number vs symbol?

pearsaltchocolatebar,

Nope to the cases, but yes to the rest.

viralJ,

I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, which I think is still the case today. So, me, no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.

Rikj000,

Thank you for reminding me about NPCs,
we have indeed been calling them AI for years,
even though they are not capable of reasoning on their own.

Perhaps we need a new term,
e.g. AC (Artificial Consciousness),
which does not exist yet.

The term AI still agitates me though,
since most of these are not intelligent.

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Or my co-workers,
who put development questions I had to the LLMs they use, which have yet to generate me something useful / something that actually works.

To me it feels like they are pushing their bad beta products upon us,
in the hopes that we pay to use them,
so they can use our feedback to improve them.

To me they feel neither intelligent nor conscious.

Blueberrydreamer,

I would argue that humans also frequently give bad advice and incorrect information. We regurgitate the information we read, and we’re notoriously bad at recognizing false and misleading info.

More important to keep in mind is that the vast, vast majority of intelligence in our world is much dumber than people. If you’re expecting greater than human intelligence as your baseline, you’re going to have a wildly different definition than the rest of the world.

FooBarrington,

For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.

Colleagues of mine have also recommended that I uninstall required system packages. Does that mean my colleagues aren’t intelligent/conscious? That humans in general aren’t?

Rikj000,

That humans in general aren’t?

After working 2 years on an open source ML project, I can confidently say that yes, on average, the lights ain’t that bright, sadly.

alien,

It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that can solve arithmetic problems “AI”, and now here we are. Feels like the goalpost gets moved further every time we get close, so I guess we’ll never have “true” AI?

So, the point is, what is AI for you?

vodkasolution,

Adobe Illustrator

alien,

hahaha couldn’t resist huh?

angstylittlecatboy,

I’m agitated that people got the impression “AI” referred specifically to human-level intelligence.

Like, before the LLM boom it was uncontroversial to refer to the bots in video games as “AI.” Now it gets comments like this.

Paradachshund,

I’ve seen that confusion, too. I saw someone saying AI shouldn’t be controversial because we’ve already had AI in video games for years. It’s a broad, blanket term encompassing many different technologies, but people act like it all means the same thing.
