intensely_human

@intensely_human@lemm.ee

This profile is from a federated server and may be incomplete. Browse more on the original instance.

intensely_human,

“So you’re saying if I openly announced I was gonna have sex with people, everyone would have sex with me?”

(no)

intensely_human,

No, it’s more like someone claiming that the sun is shining and me asking for a picture of the sun shining.

intensely_human,

An artist with a message is a propagandist

intensely_human,

Yeah Dave’s gotten preachy lately and it’s 🥱

intensely_human,

I mean, I can easily quote anti-semitic things that Hitler said, and published.

Nobody has ever made the claim that Hitler shot Jews. But people are making the claim that Chappelle says anti-trans things. In fact, this entire story is about things he has said.

I’ve seen his specials, and I’ve yet to hear him say anything anti-trans. So I don’t believe those things exist, though I’m open to being proved wrong.

Paleolithic humans may have understood the properties of rocks for making stone tools (phys.org)

A research group led by the Nagoya University Museum and Graduate School of Environmental Studies in Japan has clarified differences in the physical characteristics of rocks used by early humans during the Paleolithic. They found that humans selected rock for a variety of reasons and not just because of how easy it was to break...

intensely_human,

Is there any theory that predicts these people would not understand the properties of different rocks?

intensely_human,

Yes, but how society responds to those challenges is really what matters

One of the key ways American society changed its response to those challenges is it stopped enslaving young men to fight wars involuntarily.

intensely_human,

That’s the situation for all of us: we are products of our environment, then we get to make some choices.

intensely_human,

It’s Rainbownomics!

intensely_human,

There’s no shame in being shameless

intensely_human,

It’s fucking terrifying to hear someone so opinionated about huge groups of people say the words “my patients”.

What do you do?

intensely_human,

You really think OP’s dad is running the show?

intensely_human,

I’m autistic myself. Unwritten rules are generally far more complex than their written form, and the translation into words loses a lot of information. I’d encourage all other autistics to develop their attention and working memory, and then the unwritten rules will start to become apparent.

I want to study psychology but won't AI make it redundant in a couple of years?

I know it’s not even close to there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?...

intensely_human,

The fields that will hold out the longest will be selected by legal liability rather than technical challenge.

Piloting a jumbo jet, for example, has been automated for decades, but you’ll never see an airline skip the pilot.

intensely_human,

The web is one thing, but access to senses and a body that can manipulate the world will be a huge watershed moment for AI.

Then it will be able to learn about the world in a much more serious way.

intensely_human,

We don’t have stable lifelong learning yet

I covered that with the long term memory structure of an LLM.

The only problem we’d have is a delay in response on the part of the robot during conversations.

intensely_human,

I was gonna say: given how little we know about the inner workings of the brain, we need to be hesitant about drawing strict categorical boundaries between ourselves and LLMs.

There’s a powerful motivation to believe they are not as capable as us, which probably skews our perceptions and judgments.

intensely_human,

Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.

I think the only reason we don’t see Boston Dynamics bots plugged into GPT “minds”, with D&D-style backstories about which character they’re supposed to play, is that it would get someone in trouble.

It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.

If an LLM needs a long-term memory, all that requires is an API to store and retrieve text key-value pairs, plus some fuzzy synonym matchers to detect semantically similar keys.
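That key-value memory idea can be sketched in a few lines. This is a minimal illustration, not any real product’s design: the `MemoryStore` class and its methods are invented names, and `difflib` string similarity stands in for the semantic matching a real system would get from embeddings.

```python
import difflib

class MemoryStore:
    """Minimal long-term memory: text key-value pairs with fuzzy key
    matching. Illustrative sketch only; a real system would compare
    embeddings rather than raw strings."""

    def __init__(self, cutoff=0.6):
        self.pairs = {}      # key text -> value text
        self.cutoff = cutoff

    def remember(self, key, value):
        self.pairs[key] = value

    def recall(self, query):
        # Pick the stored key most similar to the query, if any clears
        # the cutoff; difflib stands in for a semantic matcher here.
        hits = difflib.get_close_matches(query, self.pairs, n=1,
                                         cutoff=self.cutoff)
        return self.pairs[hits[0]] if hits else None

mem = MemoryStore()
mem.remember("project car muffler", "owner is shopping for a quieter muffler")
print(mem.recall("the project car muffler"))  # fuzzy hit on the stored key
```

The point is just that nothing exotic is needed: a store, a retrieve, and a tolerant matcher gets you a workable “remembered context” layer to prepend to the model’s input.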

What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.

That could be done with present-day technology. The thing could be watching YouTube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.

You could give it a few basic motivations then instruct it to act that out every day.
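A loop like that is simple to sketch. Everything below is hypothetical: `query_llm` is a placeholder standing in for whatever chat-completion API the robot would actually call, so the sketch runs on its own.

```python
# Hypothetical daily loop: a few fixed motivations get rotated into the
# prompt each day. `query_llm` is a stand-in for a real model call.
MOTIVATIONS = [
    "keep the gas station tidy",
    "learn one new thing about mufflers or head gaskets",
    "check in with the regulars about the project car",
]

def query_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "Acting on -> " + prompt.splitlines()[-1]

def daily_cycle(day: int) -> str:
    motivation = MOTIVATIONS[day % len(MOTIVATIONS)]
    prompt = ("You are a robot running a gas station.\n"
              f"Today's motivation: {motivation}")
    return query_llm(prompt)

print(daily_cycle(0))
```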

Now I’m not saying that they’re conscious, that they feel as we feel.

But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but that’s not their deepest nature.
