It’s really bugging me that “AI” has become a catch-all buzzword that lumps any art made on a computer in with AI, when there’s a very hard line between digital art physically drawn by a human and what defines AI. It also annoys me that the whole actors’ guild can’t seem to understand what VFX is versus what AI is. VFX involves hundreds of humans with strong intention and artistic talent doing literal back-breaking work. The other is one wanky human with strong intention, speaking loudly in a room, making shitty graphics that pale in comparison. That still isn’t ‘AI’. That’s an asshole with too much power who thinks they’re as good as an artist.
Someone sketching in Photoshop is making a human-generated image. That has nothing to do with AI, yet so many idiots sweep it into the same bin simply because the paintbrush, which is still physically wielded by a human, is made of 1s and 0s.
It also disturbs me that people don’t hold anyone accountable for fake ‘AI-generated’ news stories or deepfakes; they just shrug their shoulders and call it AI, like “oops, Skynet is taking over.” No. That’s a human. A shitty, horrible human, again, on a computer, given too much power. No machine has intention. Only humans do.
If a mob boss orders a hit on someone, the mob boss goes to jail for just as much damage as the murderer. Probably more, because of the intention. Meanwhile, everyone pretends the computer itself is coming up with all this junk, as if no human with terrible intentions is at the wheel.
I’ve seen that confusion, too. I saw someone saying AI shouldn’t be controversial because we’ve already had AI in video games for years. It’s a broad blanket term encompassing many different technologies, but people act like it all means the same thing.
Of course we have “real” AI. We can literally be surprised while talking to these things.
People who claim it’s not general AI consistently, 100% of the time, fail to answer this question: what can a human mind do that these cannot?
In precise terms. If you say “a human mind can understand,” then I need a precise technical definition of “understand.” Because the people making the claim that “it’s not general AI” are always waving their own flag of technical expertise. So, in technical terms, what can a general AI do that an LLM cannot?
Go tell your LLM to click a button, or log into your Amazon account, or send an email, or do literally anything that’s an action. I’m waiting.
A 4 year old has more agency than your “AI” nowadays. LLMs are awesome at spitting out text, but they aren’t true AI.
Edit: I should add, LLMs only work with input. If there’s no input, there is no output. Whatever you put in, once it has responded it will just sit there forever doing nothing until you give it input again. It’s much closer to a mathematical function than to any kind of intelligence that has its own motivation and can act on its own.
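That “mathematical function” framing can be made literal with a toy stand-in (a deliberately silly one-liner, nothing like how a real LLM computes its output, just the shape of the interface):

```python
def toy_llm(prompt: str) -> str:
    # A pure function of its input: same prompt in, same text out,
    # and nothing at all happens between calls. No input, no output.
    return "You said: " + prompt

print(toy_llm("hello"))  # -> You said: hello
```

The real model is vastly more complicated inside, but the contract is the same: it maps text to text when called, and does nothing on its own in between.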
Go tell a Kalahari Bushman to click a button, or log into your Amazon account, or send an email, or do literally anything you don’t place in front of him as an option.
Is your whole point just that it would be AGI if it weren’t for those darned shackles, but it’s not AGI because we give it restrictions on sending POST requests?
Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid. You could take a human from 1,000 years ago and, after they stopped flipping out about computers and modern technology, you’d be able to teach them to click a button in seconds to minutes (depending on how complex you make the task).
General AI can take actions on its own (unprompted) and it can learn, essentially modifying its own code. If anyone ever comes up with a real AI, we’d head toward the Singularity in no time (the only limit would be processing power, and the AI could invest time into improving the hardware it runs on).
There are no “shackles” on ChatGPT; it’s literally an input-output machine. A really damn good one, but nothing more than that. It can’t even send a POST request. Sure, you could sit a programmer down, parse the output, and fire a request whenever ChatGPT mentions certain keywords, with a payload. Of course that works, but then what? You have a dumb chatbot firing random requests, and if you try to feed the results of those requests back in, they get jumbled up with the text input you gave beforehand. Every single action you want an LLM to take, you’d have to program manually.
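The “sit a programmer down and parse the output” setup can be sketched in a few lines. Everything here is hypothetical for illustration (the keyword names, the endpoint URLs); the point is that the action table is hand-written by a human, not something the model does itself:

```python
# Scan the model's raw text for agreed-upon keywords and fire a
# pre-programmed request for each one found.
def dispatch(llm_output: str) -> list:
    # Hypothetical keyword -> endpoint table, written by a programmer.
    actions = {
        "{turnOnLights}": "http://192.168.0.10/lights/on",
        "{sendEmail}": "http://localhost:8000/email/send",
    }
    fired = []
    for keyword, url in actions.items():
        if keyword in llm_output:
            # A real version would POST to `url` here; omitted so the
            # sketch runs without a server listening on the other end.
            fired.append(keyword)
    return fired

print(dispatch("Sure thing! {turnOnLights} The lights are on."))  # -> ['{turnOnLights}']
```

Note that the model only ever produced the string `{turnOnLights}`; deciding that this string means anything, and wiring it to an actual effect, is entirely the programmer’s doing.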
Besides the detail that even Kalahari Bushmen have mobile phones now, primitive humans (or our ancestors) weren’t stupid
Oh you bastard. You actually tried to reframe my words into exactly the opposite of what I was saying.
I did not use a Kalahari Bushman as an example of a stupid person. I used a Kalahari Bushman as an example of a general intelligence as smart as you or I, who can’t press buttons or buy things on Amazon for reasons of access not capability.
I need to cool down before I read the rest of your comment. Not cool dude, trying to twist what I said into some kind of racist thing. Not cool.
Exactly. Someone demonstrated an “AI that can turn on your lights,” and then had a script checking the output for something like {turnOnLights} and translating that to API calls.
Which again is literally just text and nothing more.
No matter how sophisticated ChatGPT gets, it will never be able to send the email itself. Of course, you could pipe ChatGPT’s output into a CLI, then tell ChatGPT to write only bash commands (or whatever you use) with every single detail included, and then it could possibly send an email (if you’re lucky and it emits only valid commands and literally no other text).
But you can never just tell it: Send an email about x, here is my login and password, send it to whatever@email.com with the subject y.
I saw your post the other day and didn’t have anything constructive to add (my instinct was to say “just see where it goes, but don’t force it to be romantic,” but I know so little about the situation that it’s hollow advice). But I came across this article in the NY Times that might speak to your situation. It talks about limerence, which is a new word for me. I say “might” because it may not be what you’re feeling, but it’s worth a read regardless, and the article’s tips on how to overcome it seem useful (and are backed by several researchers, so they’re bound to have more material on the subject potentially related to what you’re going through).
Anything by Jam Hsiao works, but I really like his song A Love Song For You. I also found out about a group called 製燥者StoryMaker, and I absolutely love their song Sin of Sloth. I have so many different songs I like, mostly Vocaloid and other related software, in Chinese, Japanese, some in Korean, and even one in Ukrainian, but I ain’t doing a super list of songs.
The only thing I really hate about “AI” is how many damn fonts barely differentiate between a capital “i” and lowercase “L” so it just looks like everyone is talking about some guy named Al.
asklemmy