Am I the only one getting agitated by the word AI?

Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet;
at the moment we only have LLMs (Large Language Models),
which do not think on their own
but pass Turing tests
(fooling humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes
who already invested in LLM stocks
and are now looking for a profit.

intensely_human,

Of course we have “real” AI. We can literally be surprised while talking to these things.

People who claim it’s not general AI consistently, 100% of the time, fail to answer this question: what can a human mind do that these cannot?

In precise terms. You say “a human mind can understand”, then I need a precise technical definition of “understand”. Because the people making this claim that “it’s not general AI” are always trying to wave their own flag of technical expertise. So, in technical terms: what can a general AI do that an LLM cannot?

Vlyn, (edited )

Go and tell your LLM to click a button, or log into your Amazon account, or send an email, or do literally anything that’s an action. I’m waiting.

A 4-year-old has more agency than your “AI” nowadays. LLMs are awesome at spitting out text, but they aren’t true AI.

Edit: I should add, LLMs only work with input. If there’s no input, there is no output. Whatever you put in there will just sit there forever, doing nothing, until you give it an input again. It’s much closer to a mathematical function than to any kind of intelligence that has its own motivation and can act on its own.
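That “mathematical function” framing can be made concrete with a toy sketch (the `toy_model` below is a made-up stand-in, not a real LLM): the only state a chat has is the transcript you resend every turn, and without a new input the function is simply never called.

```python
def toy_model(transcript: str) -> str:
    """Stand-in for an LLM: a deterministic function of its input text."""
    last_line = transcript.splitlines()[-1]
    return f"echo: {last_line}"

def chat_turn(history: list[str], user_input: str) -> list[str]:
    """One 'chat' turn: append the input, call the pure function once."""
    history = history + [f"user: {user_input}"]
    reply = toy_model("\n".join(history))  # all the "thinking" happens here
    return history + [f"model: {reply}"]
```

The “memory” of such a system is just the growing transcript passed back in; between calls, nothing runs at all.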

intensely_human,

Go tell a Kalahari bushman to click a button, or log into your Amazon account, or send an email, or do literally anything you don’t place in front of him as an option.

Is your whole point just that it would be AGI if it weren’t for those darned shackles, but it’s not AGI because we give it restrictions on sending POST requests?

Vlyn,

Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid. You could take a human from 1,000 years ago and, once they stopped flipping out about computers and modern technology, teach them to click a button in seconds to minutes (depending on how complex you make the task).

General AI can take actions on its own (unprompted) and it can learn, basically modifying its own code. If anyone ever comes up with a real AI we’d go towards the Singularity in no time (as the only limit would be processing power and the AI could then invest time into improving the hardware it runs on).

There are no “shackles” on ChatGPT; it’s literally an input-output machine. A really damn good one, but nothing more than that. It can’t even send a POST request. Sure, you could sit a programmer down, parse the output, and fire a request whenever ChatGPT mentions certain keywords, with a payload. Of course that works, but then what? You have a dumb chatbot firing random requests, and if you try to feed the results of those requests back in, they get jumbled up with the text input you made beforehand. Every single action you want an LLM to take, you’d have to program manually.

intensely_human,

Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid

Oh you bastard. You actually tried to reframe my words into exactly the opposite of what I was saying.

I did not use a Kalahari Bushman as an example of a stupid person. I used a Kalahari Bushman as an example of a general intelligence as smart as you or I, who can’t press buttons or buy things on Amazon for reasons of access not capability.

I need to cool down before I read the rest of your comment. Not cool dude, trying to twist what I said into some kind of racist thing. Not cool.

liwott,

@Vlyn
@intensely_human

send an email

ChatGPT can explain to me what to do in a CLI to send an e-mail. Give it access to a CLI and an internet connection and it will be able to do it itself.

intensely_human,

Exactly. Someone demonstrated an “AI that can turn on your lights”, and then had a script checking for output like {turnOnLights} and translating that to API calls.
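A hypothetical version of that glue script, assuming a `{turnOnLights}`-style token convention like the one described above (the handler functions here are placeholders standing in for real API calls):

```python
import re

# Placeholder handlers; in a real setup these would make HTTP calls.
ACTIONS = {
    "turnOnLights": lambda: print("POST /lights/on"),
    "turnOffLights": lambda: print("POST /lights/off"),
}

def dispatch(llm_output: str) -> list[str]:
    """Scan the model's text for {token} markers and run each known handler."""
    fired = []
    for token in re.findall(r"\{(\w+)\}", llm_output):
        if token in ACTIONS:
            ACTIONS[token]()
            fired.append(token)
    return fired
```

The model never performs the action itself; the wrapper does, which is exactly the division of labor both commenters are describing.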

Vlyn,

Which again is literally just text and nothing more.

No matter how sophisticated ChatGPT gets, it will never be able to send the email itself. Of course you could pipe the output of ChatGPT into a CLI, tell ChatGPT to only write bash commands (or whatever you use) with every single detail involved, and then it could possibly send an email (if you’re lucky and it emits only valid commands and literally no other text in the output).

But you can never just tell it: Send an email about x, here is my login and password, send it to whatever@email.com with the subject y.

Not going to work.
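To make the division of labor being argued about concrete: the model only ever emits text (the body), and hand-written code performs the action. A minimal sketch using Python’s standard-library smtplib; the host, login, and addresses are placeholders, not anything from this thread:

```python
import smtplib
from email.message import EmailMessage

def build_email(body: str, to: str, subject: str, sender: str) -> EmailMessage:
    """Turn plain text (e.g. an LLM's output) into an email message."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, to, subject
    msg.set_content(body)
    return msg

def send(msg: EmailMessage, host: str, user: str, password: str) -> None:
    """The action itself: ordinary code, not the model, speaks SMTP."""
    with smtplib.SMTP(host, 587) as server:  # placeholder host and port
        server.starttls()
        server.login(user, password)
        server.send_message(msg)
```

Whether you count this kind of harness as the system “sending the email itself” is precisely the disagreement here.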

Anti_Face_Weapon,

I saw a streamer call a procedurally generated level “ai generated” and I wanted to pull my hair out

infinitepcg,

I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data, and it does inference. These are all common aspects of modern “AI” techniques.

FooBarrington,

I thought “Wavefunction Collapse” was just misnamed Monte Carlo. Where does it use training data?

Feathercrown, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.
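A heavily simplified sketch of that structure (the simple-tile variant, not the original overlapping-pixel-pattern algorithm): the “training” step counts tile frequencies and legal adjacencies in a sample grid, and “inference” repeatedly collapses the lowest-entropy cell and propagates constraints. Contradiction handling (restarting) is omitted for brevity.

```python
import random
from collections import Counter

def learn(sample):
    """'Training': count tile frequencies and record which tile pairs
    appear next to each other horizontally and vertically."""
    freq = Counter(t for row in sample for t in row)
    allowed = {"h": set(), "v": set()}
    for y, row in enumerate(sample):
        for x, t in enumerate(row):
            if x + 1 < len(row):
                allowed["h"].add((t, row[x + 1]))
            if y + 1 < len(sample):
                allowed["v"].add((t, sample[y + 1][x]))
    return freq, allowed

def generate(freq, allowed, w, h, seed=0):
    """'Inference': every cell starts as a superposition of all tiles;
    repeatedly collapse the lowest-entropy cell, then propagate.
    (Real WFC restarts on contradiction; that is omitted here.)"""
    rng = random.Random(seed)
    grid = [[set(freq) for _ in range(w)] for _ in range(h)]

    def propagate():
        changed = True
        while changed:
            changed = False
            for y in range(h):
                for x in range(w):
                    for dx, dy, key in ((1, 0, "h"), (0, 1, "v")):
                        nx, ny = x + dx, y + dy
                        if nx >= w or ny >= h:
                            continue
                        # Keep only tiles compatible with some neighbor tile.
                        right = {b for b in grid[ny][nx]
                                 if any((a, b) in allowed[key] for a in grid[y][x])}
                        left = {a for a in grid[y][x]
                                if any((a, b) in allowed[key] for b in grid[ny][nx])}
                        if right != grid[ny][nx] or left != grid[y][x]:
                            grid[ny][nx], grid[y][x] = right, left
                            changed = True

    while True:
        open_cells = [(len(c), y, x) for y, row in enumerate(grid)
                      for x, c in enumerate(row) if len(c) > 1]
        if not open_cells:
            break
        _, y, x = min(open_cells)  # lowest entropy first
        choices = sorted(grid[y][x])
        grid[y][x] = {rng.choices(choices, [freq[t] for t in choices])[0]}
        propagate()
    return [[next(iter(c)) for c in row] for row in grid]
```

Feeding it a checkerboard sample, for example, yields only valid checkerboard outputs: the adjacency rules and frequency weights are the learned “model”.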

FooBarrington, (edited )

WFC is a full method of map generation. Monte Carlo is not afaik.

MC is a statistical method; by itself it doesn’t have anything to do with map generation. If you apply it to map generation, you get a “full method of map generation”, and as far as I know that is what WFC is.

To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.

Could you share the paper? Everything I read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.

Feathercrown, (edited )

Ok, so you are just talking about MC the statistical method. That doesn’t really make sense to me. Every random method needs to “roll the dice” and choose a random outcome, like an MC simulation. The statement “this method of map generation is the same as Monte Carlo” (or anything similar; I know you didn’t say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).

The original GitHub repo, with examples of how the rules are generated from a “training set”: github.com/mxgmn/WaveFunctionCollapse

A paper referencing this repo as “the original WFC algorithm” (ref. 22): long google link to a PDF

Note that I don’t think the comparison to AI is particularly useful; it’s only technically correct that they share some similarities.

infinitepcg,

I don’t think WFC can be described as an example of a Monte Carlo method.

In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don’t have a closed form. The more you sample, the more accurate the result.

In WFC, the number of random experiments is fixed by your map size; it is not variable.
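The contrast is easy to show with the textbook Monte Carlo example, estimating pi by random sampling: the sample count `n` is a free accuracy knob (error shrinks roughly like 1/sqrt(n)), whereas WFC’s number of collapse steps is dictated by the map size.

```python
import random

def mc_pi(n: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: sample n points in the unit square
    and count how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * inside / n
```

Doubling `n` buys accuracy with no change to the problem itself; that knob is what has no analogue in WFC.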

FooBarrington,

Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?

infinitepcg,

I’m not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn’t fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can’t be changed.

FooBarrington,

I’m not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.

infinitepcg,

it doesn’t train or self-improve like ML does

I think the training (or fitting) process is comparable to how a support vector machine is trained. It’s not iterative like SGD in deep learning; it’s closer to traditional machine learning techniques.

But I agree that this is a pretty academic discussion, it doesn’t matter much in practice.

alien,

It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that solves arithmetic problems “AI”, and now here we are. It feels like the goalposts get moved further every time we get close, so I guess we’ll never have “true” AI?

So, the point is, what is AI for you?

vodkasolution,

Adobe Illustrator

alien,

hahaha couldn’t resist huh?

slurp,

I’ve ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current “AI” is so far from doing.

Markimus,

Yep, it says things but has no understanding of what it is saying: much like strolling through a pet shop, passing the parrot enclosure, and recoiling at the little-kid swear words it cheeps out.

kaffiene,

It’s been an established term in the field since the 1950s.

Smoogs, (edited )

It really bugs me that it’s become a catch-all buzzword that sweeps any art made on a computer into “AI”, when there’s a very hard line between digital art physically drawn by a human and what defines AI. It really annoys me that the whole actors’ guild cannot seem to understand what VFX stands for versus what AI is. VFX involves hundreds of humans with strong intention and artistic talent doing literal back-breaking work. The other is one wanky human with strong intention, speaking loudly in a room, making shitty graphics that pale in comparison. That still isn’t “AI”. That’s an asshole with too much power who thinks they’re as good as an artist.

Someone sketching in Photoshop is making a human-generated image. That has nothing to do with AI, yet so many idiots sweep it into the same bin simply because the paintbrush, which is still physically used by a human, was made from 1s and 0s.

It also disturbs me that people don’t hold anyone accountable for fake “AI-generated” news stories or deep fakes, and just shrug their shoulders calling it AI, like “oops, Skynet is taking over”. No. That’s a human. A shitty, horrible human, again, on a computer, given too much power. No machine has intention. Only humans do.

If a mob boss orders a hit on someone, the mob boss goes to jail for just as much damage as the murderer. Probably even more, because of the intention. Meanwhile everyone pretends the computer itself is coming up with all this junk, as if no human with terrible intentions is at the wheel.

We gotta go back to naming names.

curiousaur,

You’re a fool if you think your own mind is any more than a large language model.

Kolanaki, (edited )
@Kolanaki@yiffit.net avatar

The only time I was agitated by it was with the George Carlin thing.

It pissed me off that it was done without permission. It annoyed me that “AI” also kinda looks like “AL” with a lowercase L, and when next to another name makes it read like AL CARLIN or AL GEORGE. And it divided me somewhat, because I watched the damn special and it was mostly funny and did feel like Carlin’s style (though it certainly didn’t sound right, and it had timing issues). So, like… it wasn’t shit in and of itself, but the nature of what it is and the fact it was done without permission or consent is concerning. Shame on Will Sasso for that. He could have just done his own impersonation and written his own jokes in the style of Carlin; it would have been a far better display of respect and appreciation than having an AI do it.

I don’t think he’s a sick and disgusting person for this; even before it all blew up, it seemed more like a tribute to a comedian he adored. Just a poorly thought out way of doing one that may have some pretty hard consequences.

iarigby,

Despite the presentation as an AI creation, there was a good deal of evidence that the Dudesy podcast and the special itself were not actually written by an AI, as Ars laid out in detail this week. And in the wake of this lawsuit, a representative for Dudesy host Will Sasso admitted as much to The New York Times.

arstechnica.com/…/george-carlins-heirs-sue-comedy…

Kolanaki,
@Kolanaki@yiffit.net avatar

Just further evidence Sasso could have done the impersonation himself and it would have been a fine tribute (with better timing and delivery), but he used an AI to replicate the voice and mannerisms instead. Sure, I don’t think he could have done a great job of impersonating how Carlin sounds, but the mannerisms and delivery would have been enough, and something he should be pretty good at considering his time on MADtv, where he did a lot of impersonation work (such as his Steven Seagal character).

Seudo, (edited )

We have to work out what intelligence is before we can develop AI. Sentient AI? Forget about it!

KeefChief13,

“Get the new Samsung blah blah with the new Galaxy AI!!!” ENOUGH.

KpntAutismus,

Wait for the next buzzword to come out; it’ll pass.

Used GPT-3 once, but I haven’t had a use case for it since.

I’ll use an “AI” assistant when they are legitimately useful.

BeigeAgenda,
@BeigeAgenda@lemmy.ca avatar

It’s still good to start training one’s AI prompt muscles and to learn what an LLM can and can’t do.

OceanSoap,

My coworker just gave me this rant the other day about AI.

liwott,

@Rikj000

which do not think on their own
but pass Turing tests
(fooling humans into thinking that they can think).

How do you know that?

BigTrout75,

LOL, ask anyone in IT marketing how they feel about AI.

dog_,

“AI” is the new “innovate”: every time someone uses “innovate” in 2024, they’re just talking about how they’re stripping away our rights to things we owned.
