asklemmy


PrinceWith999Enemies, in Am I the only one getting agitated by the word AI?

I’d like to offer a different perspective. I’m a grey beard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

Rikj000,
@Rikj000@discuss.tchncs.de avatar

But what do you call a robot that teaches itself how to walk

In its current state,
I’d call it ML (Machine Learning)

A human defines the desired outcome,
and the technology “learns itself” to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
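That loop can be sketched in a few lines. This is a hypothetical toy example (the objective function, step size, and epoch count are all made up for illustration): a human hard-codes the desired outcome as a score, and the system blindly perturbs its parameters, keeping whatever scores better.

```python
import random

# Human-defined "desired outcome": get both parameters close to 10.
def score(params):
    return -((params[0] - 10) ** 2 + (params[1] - 10) ** 2)

def learn(epochs=10_000, step=0.5, seed=0):
    """Brute-force hill climbing: try a random tweak each epoch and
    keep it only if it moves closer to the human-defined goal."""
    rng = random.Random(seed)
    best = [rng.uniform(-50, 50), rng.uniform(-50, 50)]
    for _ in range(epochs):
        candidate = [p + rng.gauss(0, step) for p in best]
        if score(candidate) > score(best):  # slight improvement -> keep it
            best = candidate
    return best

print(learn())  # ends up near [10, 10]
```

The machine never knows *why* it should reach [10, 10]; the goal lives entirely in the human-written `score` function.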

Blueberrydreamer,

That definition would also apply to teaching a baby to walk.

PrinceWith999Enemies,

So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

NABDad,

My AI professor back in the early 90’s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

Nemo,

I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

Fedizen, (edited )

On the other hand, calculators can do things more quickly than humans, but this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation, just very fast, based on a model, and prompt-driven.

Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

PrinceWith999Enemies,

I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

Fedizen,

What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

PrinceWith999Enemies,

Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

Fedizen,

It’s the ‘why’. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions even operate in a separate area of the brain from executive function, and I’d argue that defining the tasks to accomplish and weighing the risks is the intelligent part. Humans do all of that for the robot.

Everything we call “AI” now should be called “EI”, or “extended intelligence”, because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

PrinceWith999Enemies,

Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think the complex behaviors of an ant colony aren’t weighing risks when deciding to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one learns evolutionarily while the other, at least in principle, can learn within its own lifetime.

Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.

Pipoca,

Exactly.

AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

daddyjones, in What songs do you like that are in a language which you don't speak?
@daddyjones@lemmy.world avatar

I really like the French national anthem.

Anti_Face_Weapon, in Am I the only one getting agitated by the word AI?

I saw a streamer call a procedurally generated level “ai generated” and I wanted to pull my hair out

infinitepcg,

I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data, and it does inference. These are all common aspects of modern “AI” techniques.

FooBarrington,

I thought “Wavefunction Collapse” is just misnamed Monte Carlo. Where does it use training data?

Feathercrown, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.
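That “training” scan can be illustrated with a minimal sketch of the simple tiled variant: read a hand-made sample grid, then record tile frequencies (the probability weights) and which tiles are allowed to sit next to which. The sample grid and tile names here are made up for illustration.

```python
from collections import Counter

# Hand-made "training data": S = sand, L = land, M = mountain.
sample = [
    "SSSLL",
    "SSLLL",
    "SLLLM",
    "LLLMM",
]

weights = Counter()   # tile -> frequency (the probability model)
right_of = {}         # tile -> tiles allowed to its right
below = {}            # tile -> tiles allowed below it

for y, row in enumerate(sample):
    for x, tile in enumerate(row):
        weights[tile] += 1
        if x + 1 < len(row):
            right_of.setdefault(tile, set()).add(row[x + 1])
        if y + 1 < len(sample):
            below.setdefault(tile, set()).add(sample[y + 1][x])

print(weights)        # Counter({'L': 11, 'S': 6, 'M': 3})
print(right_of["S"])  # sand may border sand or land, never mountain
```

The generator then repeatedly picks the most constrained cell, collapses it to a random tile weighted by `weights`, and propagates the adjacency constraints - that’s the inference step.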

FooBarrington, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

MC is a statistical method, it doesn’t have anything to do with map generation. If you apply it to map generation, you get a “full method of map generation”, and as far as I know that is what WFC is.

To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.

Could you share the paper? Everything I read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.

Feathercrown, (edited )

Ok so you are just talking about MC the statistical method. That doesn’t really make sense to me. Every random method will need to “roll the dice” and choose a random outcome like a MC simulation. The statement “this method of map generation is the same as Monte Carlo” (or anything similar, ik you didn’t say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).

The original GitHub repo, with examples of how the rules are generated from a “training set”: github.com/mxgmn/WaveFunctionCollapse

A paper referencing this repo as “the original WFC algorithm” (ref. 22): long google link to a PDF

Note that I don’t think the comparison to AI is particularly useful-- only technically correct that they share some similarities.

infinitepcg,

I don’t think WFC can be described as an example of a Monte Carlo method.

In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don’t have a closed form. The more you sample, the more accurate the result.

In WFC, the number of random experiments depends on your map size and is not variable.
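To make the distinction concrete, here is the variable-sample-count behavior a Monte Carlo experiment has (a toy example; the integrand is chosen arbitrarily): approximating the integral of x² over [0, 1] (exact value 1/3) by averaging random samples, where more samples give a more accurate answer.

```python
import random

def mc_integral(n, seed=42):
    """Monte Carlo estimate of the integral of x^2 over [0, 1]."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

# Accuracy improves as the sample count grows; the exact value is 0.3333...
for n in (100, 10_000, 1_000_000):
    print(n, mc_integral(n))
```

In WFC there is no such dial to turn: once the map is fixed, so is the number of random choices.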

FooBarrington,

Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?

infinitepcg,

I’m not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn’t fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can’t be changed.

FooBarrington,

I’m not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.

infinitepcg,

it doesn’t train or self-improve like ML does

I think the training (or fitting) process is comparable to how a support vector machine is trained. It’s not iterative like SGD in deep learning, it’s closer to the traditional machine learning techniques.

But I agree that this is a pretty academic discussion, it doesn’t matter much in practice.

topperharlie, in Am I the only one getting agitated by the word AI?

“somewhat old” person opinion warning ⚠️

When I was in university (2002 or so) we had an “AI” lecture and it was mostly "if"s and path finding algorithms like A*.

So I would argue that we engineers have been using the term to cover a wider set of use cases since long before LLMs, CEOs, and marketing people did. And I think that’s fine, as categorising algorithms/solutions as AI helps us understand what they will be used for, and we (at least the engineers) don’t tend to assume an actual self-aware machine when we hear that name.

nowadays they call that AGI, but it wasn’t always like that, back in my time it was called science fiction 😉
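For anyone who never took that lecture, the A* mentioned above fits in a couple dozen lines. A toy sketch (the grid, maze, and coordinates are made up for illustration): find the shortest path on a grid using Manhattan distance as the heuristic.

```python
import heapq

def astar(grid, start, goal):
    """Return the shortest path length from start to goal on a grid of
    strings ('#' = wall, 4-way movement), or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (estimated total, cost so far, cell)
    best_g = {start: 0}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#"
                    and g + 1 < best_g.get((nr, nc), float("inf"))):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None

maze = [
    "....",
    ".##.",
    "....",
]
print(astar(maze, (0, 0), (2, 3)))  # → 5
```

Plenty of “if”s and a priority queue, no learning anywhere - and for decades that genuinely was the AI curriculum.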

actual_patience, in How are you all making it right now with grocery store prices?

Bulk buys

cordlesslamp,

The fact that the way to battle spending lots of money on groceries is to spend even more money on groceries. I hate that you’re right and that we’re doomed.

lagomorphlecture,

Right, this is the worst part. The people who most desperately need to get cheaper groceries can’t afford to save money on groceries by buying in bulk. It’s shitty and sad.

idunnololz, in How are you all making it right now with grocery store prices?
@idunnololz@lemmy.world avatar

Not sure if it’s just me, but my grocery spending hasn’t changed in the last year. It’s definitely more expensive than, say, 2 years ago, but it seems like prices have stabilized.

I cook often, so most of what I buy is produce, and it’s generally cheaper than other stuff.

Socsa,

Yeah, if you actually cook food it’s not that bad. It’s frozen food and junk food that have exploded in price.

lightnsfw,

Same. Junk food seems higher, but that stuff is garbage anyway. My grocery bills have been level for a long time buying staples.

viralJ,

I agree. On one hand I look at prices of stuff and think “damn is it really this much now? Was half this price last year”. But on the other hand, my shopping receipts really haven’t doubled since a year ago, I don’t feel like they increased at all… But I also buy produce and cook for myself most of the time.

RaoulDook,

I haven’t changed my shopping habits, but I definitely notice the ripoffs of significantly higher prices on some of the same food items I’ve been buying for years. Overall it’s still much cheaper to buy groceries and make your own food than the vast majority of restaurants and such.

Fast food prices have gotten more noticeably higher than groceries have in my area. So I assume that most of the people I hear complaining the loudest about “Inflation” are the ones who eat fast food as a staple of their diet.

hark,
@hark@lemmy.world avatar

Nah, I remember when I could fill an entire cart with food and it’d be about $75 way back in the ancient days of 2019. Now I’d have to pay double to do that and even then I might end up with less food.

Perhapsjustsniffit,

We are in Canada. I scratch-cook everything and we grow the vast majority of our own food. Most grocery shopping is staple stuff like flour and sugar. Our grocery bill has tripled in the past 2 years and it’s still rising. Our gardens have gotten considerably bigger to make up for it.

liwott, in Am I the only one getting agitated by the word AI?

@Rikj000

which do not think on their own,
but pass turing tests
(fool humans into thinking that they can think).

How do you know that?

oce, in Am I the only one getting agitated by the word AI?
@oce@jlai.lu avatar

Yes, the term AI is used for marketing, though that didn’t start with LLMs; a couple of years before, any ML algorithm was being called AI, along with the trendy data scientist job.

However, I do think LLMs are very useful, just try them for your daily tasks, you’ll see. I’m pretty sure they will become as common as a web search in the future.

Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?

LainTrain, in Am I the only one getting agitated by the word AI?

The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.

fidodo,

What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.

bilboswaggings, (edited ) in Am I the only one getting agitated by the word AI?

This has been a thing for a long time

Clippy was an assistant, Cortana was an intelligent assistant and Copilot is AI

None of these are accurate, it’s always like a generation behind

Clippy just was, Cortana was an assistant, And copilot is an intelligent assistant

The next one they make could actually be AI

alien, in Am I the only one getting agitated by the word AI?

It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that can solve arithmetic problems “AI”, and now here we are. Feels like the goalposts get moved further every time we get close, so I guess we’ll never have “true” AI?

So, the point is, what is AI for you?

vodkasolution,

Adobe Illustrator

alien,

hahaha couldn’t resist huh?

Dasnap, in Am I the only one getting agitated by the word AI?
@Dasnap@lemmy.world avatar

I assume you’re referring to the sci-fi kind of self-aware AI because we’ve had ‘artificial intelligence’ in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it’s still all a facade. In those cases, people only really cared about the outputs and weren’t trying to argue they were alive or anything.

But yeah, I get what you mean.

PonyOfWar, in Am I the only one getting agitated by the word AI?

The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

jimmy90,

it does not “think”

slurp, in Am I the only one getting agitated by the word AI?

I’ve ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current “AI” is so far from doing.

Markimus,

Yep, it says things though it has no understanding of what it is saying: much like strolling through a pet shop, passing the parrot enclosure, and recoiling at the little-kid swear words it cheeps out.

sxan, in Anyone shop on Temu? What are your best finds?
@sxan@midwest.social avatar

I’ve bought only one thing, because it came up in a product search; I liked the design, and couldn’t find it elsewhere, so I ordered it. I wasn’t looking for a deal - I literally couldn’t find this thing elsewhere. While it was inexpensive, it was also cheap, and the quality was not worth even what I paid.

That was my introduction to Temu. Since then, I’ve looked for other things which I’d been browsing on Amazon, and which I’m pretty sure were made in China anyway. The price difference has been negligible, the options fewer, and the shipping on that one thing took so long that now I doubt I’d buy anything else from Temu.

I’ve bought stuff directly from Chinese manufacturers and been very satisfied, but never because of cost. Quality stuff from China (e.g.) is – IME – of comparable cost to what you find from US companies.
