My AI professor back in the early '90s made the point that what we now think of as fairly routine was considered the realm of AI just a few years earlier.
I think that's always the way. The things that seem impossible to do with computers get labeled AI; then, once the problems are solved, we don't figure we've created AI, just that we solved that particular problem, so it doesn't seem like such a big deal anymore.
LLMs got hyped up, but I still think there's a good chance they'll just become another thing we use, and the AI goalposts will move again.
Way back in the olde tymes, I was having trouble with the NIC driver in my Linux install. I posted a question about it on USENET, and got a reply from the guy who wrote the drivers. He asked for some info about the card, then updated the driver to support it.
That's the problem with cultists. They believe everything without proof, to the point that they think proof itself is somehow wrong. It's as if the only valid beliefs are the ones taken on faith that run counter to all logic and reason.
After all, if everything you believe isn’t totally bullshit, how can you claim you have faith?
My wife's born-again idiot ex-friend, who came to visit her and tell her she's a baby killer (my wife has never had an abortion, but she's pro-choice), insisted that people are aborting babies at nine months.
I was only commenting on the concept of free will. Doesn’t matter where you apply it, we’re all just following our programming.
Obviously, the program is incredibly complex; otherwise, the illusion of free will wouldn't be so easy to believe.
However, there are many examples where the programming becomes apparent.
The best example of this is a Radiolab episode about a woman with transient global amnesia. Her memory reset every 90 seconds, and she kept repeating the same conversation over and over for hours, like a program stuck in a loop.
She couldn’t choose to say something else. Given the same input, she would repeat the same response every time. She didn’t have the ability to realize she had already said it, so she just kept looping.
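Just to spell out the analogy (this is obviously a toy illustration I made up, not a model of her actual cognition): a purely deterministic responder with no memory of previous turns will repeat itself forever, exactly like that looping conversation.

```python
# Toy sketch of the "stuck in a loop" analogy: a deterministic
# responder that carries no state between turns (all names here
# are hypothetical, just for illustration).
def respond(prompt: str) -> str:
    # Same input always maps to the same output.
    canned = {
        "Where am I?": "You're in the hospital. Everything is fine.",
        "What happened?": "You had a memory lapse, but you're okay.",
    }
    return canned.get(prompt, "I'm not sure what you mean.")

# With no memory of having answered before, the exchange just repeats.
for _ in range(3):  # each pass standing in for one 90-second reset
    print(respond("Where am I?"))
```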