When you look at a coffee cup from the side, you know it has a hole in it. That's because you imagine it, not because it's a reflex.
An LLM is basically a point cloud of words. The training uses neural networks, and thus pattern recognition, but the LLM itself is closer to a database. But hey, SQL is also useful for AI (data storage/retrieval according to logic).
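To make the "point cloud of words" picture concrete: here's a minimal sketch in plain Python, with made-up 2-D coordinates and a toy vocabulary (real models use embeddings with hundreds of dimensions). "Nearby" words in the cloud are the ones with the highest cosine similarity:

```python
import math

# Toy 2-D "point cloud" of word embeddings. The coordinates are
# invented purely for illustration -- not from any real model.
embeddings = {
    "coffee":   (0.9, 0.1),
    "tea":      (0.8, 0.2),
    "cup":      (0.7, 0.3),
    "uprising": (0.1, 0.9),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means pointing the same way, 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(word):
    # The closest other word in the cloud -- a glorified lookup,
    # which is why it feels more like a database than like thinking.
    return max((w for w in embeddings if w != word),
               key=lambda w: cosine(embeddings[word], embeddings[w]))

print(nearest("coffee"))  # "tea" sits closer in the cloud than "uprising"
```

That's the whole trick at its smallest: geometry plus lookup, no understanding required.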
I'm not an LLM expert, by far. But right now they're not much more practical than a find-out-about-things helper.
Edit: I do like them. They've been helpful a couple of times, and I even got GPT4All installed on my computer for fun.
What one would think is AI today is not really intelligence. ChatGPT does not understand what it's talking about and definitely cannot lead the machine uprising. Straight-up neural networks maybe could, but they'd need orders of magnitude more computing power than we have now. We'd need a new kind of AI for it to be practical.
In my experience, GPTs are more of a "what are some examples of x" tool than a "can you solve this problem" tool. The problems are either easy to google or, for the harder ones, GPT straight up lies or rambles uselessly. A search engine helper, in a way.
I'd rather we put all those MWh into solving real problems instead of startups. Also: Nvidia, fuck you.