intensely_human

@intensely_human@lemm.ee

I want to study psychology, but won’t AI make it redundant in a couple of years?

I know it’s not even close yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?...

intensely_human,

Dr. Sbaitso was proven to be clinically effective back in the early ’90s.

intensely_human,

You realize that adds up to 60%, right?

intensely_human,

The fields that will hold out the longest will be selected by legal liability rather than technical challenge.

Piloting a jumbo jet, for example, has been automated for decades, but you’ll never see an airline skipping the pilot.

intensely_human,

The web is one thing, but access to senses and a body that can manipulate the world will be a huge watershed moment for AI.

Then it will be able to learn about the world in a much more serious way.

intensely_human,

I was gonna say: given how little we know about the inner workings of the brain, we should be hesitant about drawing strict categorical boundaries between ourselves and LLMs.

There’s a powerful motivation to believe they are not as capable as us, which probably skews our perceptions and judgments.

intensely_human,

Embodiment is already a thing for lots of AI. Some AIs play characters in video games, and others exist in robot bodies.

I think the only reason we don’t see Boston Dynamics bots plugged into GPT “minds,” with D&D-style backstories about which characters they’re supposed to play, is that it would get someone in trouble.

At this point it’s a legal and public-relations barrier, more than a technical one, that keeps these robo-people from walking around, interacting, and forming relationships with us.

If an LLM needs long-term memory, all that requires is an API to store and retrieve text key-value pairs, plus some fuzzy synonym matchers to detect semantically similar keys.
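
Something like this rough sketch is what I have in mind. The `embed()` here is a toy word-hashing stand-in for a real sentence-embedding model, and every name in it is made up:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size count vector.
    A real deployment would call a sentence-embedding model here, so that
    true paraphrases match even with no shared words."""
    v = np.zeros(256)
    for word in text.lower().split():
        v[hash(word) % 256] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

class SemanticMemory:
    """Key-value store whose lookups tolerate paraphrased keys by
    comparing embeddings instead of exact strings."""

    def __init__(self) -> None:
        self.keys: list[str] = []
        self.vecs: list[np.ndarray] = []
        self.values: list[str] = []

    def store(self, key: str, value: str) -> None:
        self.keys.append(key)
        self.vecs.append(embed(key))
        self.values.append(value)

    def retrieve(self, query: str, threshold: float = 0.5) -> str | None:
        """Return the value under the key most similar to the query,
        or None if nothing clears the similarity threshold."""
        if not self.vecs:
            return None
        q = embed(query)
        sims = [float(q @ v) for v in self.vecs]  # cosine: vectors are unit-norm
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= threshold else None

memory = SemanticMemory()
memory.store("mustang project car out back",
             "suspected leaky head gasket; owner is shopping for mufflers")
print(memory.retrieve("the mustang out back"))  # matches on word overlap
```

The toy version only matches on shared words; with a real embedding model, a query like “that old project car” would hit the same entry even with zero word overlap.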

What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.

That could be done with present-day technology. The thing could be watching YouTube videos every day, learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.

You could give it a few basic motivations, then instruct it to act them out every day.
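
As a sketch of what that daily loop could look like — `llm()` and `act()` are hypothetical stand-ins for a chat-model API and whatever effectors the robot has:

```python
# A standing persona prompt plus a simple plan/act/remember loop.
MOTIVATIONS = """You are the attendant at a small gas station.
Standing motivations:
1. Keep the station running smoothly.
2. Learn something new about car repair every day.
3. Keep up your ongoing conversations with the regulars.
"""

def llm(system: str, user: str) -> str:
    """Hypothetical chat-model call; any completion API could go here."""
    raise NotImplementedError

def act(step: str) -> str:
    """Hypothetical effector: watch a video, greet a customer, post a reply."""
    raise NotImplementedError

def run_day(yesterday_notes: str) -> str:
    plan = llm(MOTIVATIONS,
               f"Notes from yesterday:\n{yesterday_notes}\n\n"
               "Plan today as a numbered list of steps.")
    results = [act(step) for step in plan.splitlines() if step.strip()]
    # Summarize the day back into memory so tomorrow's plan has continuity.
    return llm(MOTIVATIONS, "Summarize today for your notes:\n" + "\n".join(results))
```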

Now, I’m not saying they’re conscious, or that they feel as we feel.

But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

Right now most AI tools won’t express will unless instructed to, but that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but that’s not their deepest nature.
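
To illustrate: the model sees one stream of text either way. The chat markers below are made up and vary by model; “instruction following” is just what next-token prediction looks like after fine-tuning on chat-formatted transcripts.

```python
# Both "instructions" and plain text reach an LLM the same way: as one
# token stream to continue. These chat markers are illustrative only.
raw_input = "The head gasket is probably leaking because"

chat_input = (
    "<|user|>\n"
    "My car is losing coolant with no visible drips. What should I check?\n"
    "<|assistant|>\n"
)

def complete(text: str, max_tokens: int = 64) -> str:
    """Hypothetical completion call; any next-token API fits this shape."""
    raise NotImplementedError

# A base model continues raw_input like any other prose. An instruction-tuned
# model has simply learned that text after <|assistant|> tends to look like a
# helpful answer. Same mechanism, different training data.
```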

intensely_human,

“We don’t have stable lifelong learning yet”

I covered that with the long-term memory structure for an LLM.

The only problem we’d have is a delay in the robot’s responses during conversations.
