hperrin, (edited)

I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.

What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, tell the difference?

As for your question, though: I’m agitated too, but more about things being marketed as AI that either shouldn’t have AI or don’t actually have AI.
