Am I the only one getting agitated by the word AI?

Real AI does not exist yet;
atm we only have LLMs (Large Language Models),
which do not think on their own
but can pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

Anti_Face_Weapon,

I saw a streamer call a procedurally generated level “ai generated” and I wanted to pull my hair out

infinitepcg,

I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data, and it does inference. These are all common aspects of modern “AI” techniques.

FooBarrington,

I thought “Wavefunction Collapse” was just misnamed Monte Carlo. Where does it use training data?

Feathercrown, (edited)

WFC is a full method of map generation. Monte Carlo is not afaik.

Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve the way ML does.
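That “scan with a kernel, build a probability model” step can be sketched roughly like this (an illustrative Python sketch of pattern counting, not the original WFC implementation; the function and variable names are made up):

```python
from collections import Counter

def extract_patterns(grid, k):
    """Scan a 2D grid with a k x k kernel and count how often each
    pattern occurs. The resulting frequencies are what WFC-style
    generators use as weights when collapsing cells."""
    h, w = len(grid), len(grid[0])
    counts = Counter()
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            # Capture the k x k window as a hashable tuple-of-tuples.
            pattern = tuple(
                tuple(grid[y + dy][x + dx] for dx in range(k))
                for dy in range(k)
            )
            counts[pattern] += 1
    return counts  # pattern -> frequency
```

So the sample image plays the role of training data, and the kernel size is the knob you tune, which is why the ML vocabulary maps onto it at all.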

FooBarrington, (edited)

WFC is a full method of map generation. Monte Carlo is not afaik.

MC is a statistical method, it doesn’t have anything to do with map generation. If you apply it to map generation, you get a “full method of map generation”, and as far as I know that is what WFC is.

To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve the way ML does.

Could you share the paper? Everything I read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.

Feathercrown, (edited)

Ok, so you are just talking about MC the statistical method. That doesn’t really make sense to me. Every random method will need to “roll the dice” and choose a random outcome, like a MC simulation does. The statement “this method of map generation is the same as Monte Carlo” (or anything similar, ik you didn’t say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).

The original Github repo, with examples of how the rules are generated from a “training set”: github.com/mxgmn/WaveFunctionCollapse
A paper referencing this repo as “the original WFC algorithm” (ref. 22): long google link to a PDF

Note that I don’t think the comparison to AI is particularly useful; it’s only technically correct that they share some similarities.

infinitepcg,

I don’t think WFC can be described as an example of a Monte Carlo method.

In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don’t have a closed form. The more you sample, the more accurate the result.
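As a toy illustration of that “more samples, more accuracy” property (a generic textbook example, not tied to WFC), estimating π by sampling random points in the unit square:

```python
import random

def estimate_pi(n_samples: int) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that land inside the quarter circle of radius 1
    approaches pi/4 as n_samples grows."""
    inside = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples
```

The key point is that n_samples is a free parameter: crank it up and the estimate converges. That dial is exactly what a fixed-size map generator doesn’t give you.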

In WFC, the number of random choices is fixed by your map size; you can’t take more samples to get a more accurate result.

FooBarrington,

Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?
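For concreteness, “define a chain and randomly evaluate it until you’re done” looks roughly like this minimal Metropolis sampler (a generic MCMC sketch under my own assumptions, nothing WFC-specific; all names are made up):

```python
import math
import random

def metropolis(log_prob, start, steps, step_size=2.0):
    """Minimal Metropolis MCMC: propose a random-walk step, then accept
    or reject it based on the probability ratio of new vs. current state.
    Returns the chain of visited states."""
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + random.uniform(-step_size, step_size)
        # Accept with probability min(1, p(proposal) / p(x)).
        if random.random() < math.exp(min(0.0, log_prob(proposal) - log_prob(x))):
            x = proposal
        samples.append(x)
    return samples
```

For example, with log_prob of a standard normal, the chain’s sample mean drifts toward 0 as the number of steps grows, which is the “more steps, better answer” behavior being debated here.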

infinitepcg,

I’m not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn’t fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can’t be changed.

FooBarrington,

I’m not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.

infinitepcg,

it doesn’t train or self-improve like ML does

I think the training (or fitting) process is comparable to how a support vector machine is trained. It’s not iterative like SGD in deep learning; it’s closer to traditional machine learning techniques.

But I agree that this is a pretty academic discussion, it doesn’t matter much in practice.

KpntAutismus,

wait for the next buzzword to come out, it’ll pass.

used gpt3 once, but haven’t had a use case for it since.

i’ll use an """AI""" assistant when they are legitimately useful.

BeigeAgenda,

It’s still good to start training one’s AI prompt muscles and to learn what an LLM can and can’t do.

TrickDacy,

You’re not the only one, but I don’t really get this pedantry, and there’s a lot of pedantry I do get. You’ll never get your average person to switch to the term LLM. Even for me, a techie person, it’s a goofy term.

Sometimes you just have to use terms that everyone already knows. I suspect that for decades we’ll have things that function in every way like “AI” but technically aren’t. Not saying that’s the current scenario, just looking ahead to what the improved versions of ChatGPT will be like, and to other future developments that probably can’t be predicted.

Silentiea,

I don’t think the real problem is whether or not we call it AI; I think it’s the level of hype and its prevalence in the media.

LainTrain,

The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.

fidodo,

What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.

slurp,

I’ve ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current “AI” is so far from doing.

Markimus,

Yep, it says things but has no understanding of what it is saying: much like strolling through a pet shop, passing the parrot enclosure, and recoiling at the little-kid swear words the parrot cheeps out.

wabafee, (edited)

To be fair, it’s still AI. If I remember correctly what I learned at uni, LLMs fall in the category we call expert systems. We could call them that, but then again, LLMs did not exist back then, and most of the public does not know all these techno mumbo-jumbo words. So here we are: AI it is.

pl_woah,

I’m pissed that large corps are working hard on propaganda saying that LLMs and copyright theft are good as long as they’re the ones doing it.

BigTrout75,

LOL, ask anyone in IT marketing how they feel about AI.

31337,

AI is simply a broad field of research and a broad class of algorithms. It is annoying that the media keeps using the most general term possible to describe chatbots and image generators, though. Like, we typically don’t call Spotify playlist generators AI, even though they use recommendation algorithms, which are a subclass of AI algorithms.

curiousaur,

You’re a fool if you think your own mind is any more than a large language model.

AlmightySnoo, (edited )

When I was doing my applied math PhD, the vast majority of people in my discipline said “machine learning”, “statistical learning”, or “deep learning”, but almost never “AI” (at least not in a paper or at a conference). Once I finished my PhD and took my first quant job at a bank, management insisted that I use the word AI more in my communications. I make a neural network that simply interpolates between prices? That’s AI.

The point is that top management and shareholders don’t want the accurate terminology, they want to hear that you’re implementing AI and that the company is investing in it, because that’s what pumps the company’s stock as long as we’re in the current AI bubble.

kaffiene,

It’s been an established term in the field since the 1950s.

Phoonzang,

Part of my work is to evaluate proposals for research topics and their funding, and as soon as “AI” is mentioned, I’m already annoyed. In the vast majority of cases, justifiably so. It’s a buzzword to make things sound cutting-edge, and it very rarely carries any meaning or actually adds anything to the research proposal. A few years ago the buzzword was “machine learning”, and before that “big data”; same story. Those, however, either quickly went away or people started to use them properly. With AI, I’m unfortunately not seeing that.

Seudo, (edited)

We have to work out what intelligence is before we can develop AI. Sentient AI? Forget about it!

Abucketofpuppies,

You are misunderstanding what AI means, probably due to its overuse in pop culture. What you are thinking of is a subcategory of AI. It goes: AI > Machine Learning > Artificial Life
