“The XDNA driver will work with AMD Phoenix/Strix SoCs so far having Ryzen AI onboard.” So only mobile SoCs with dedicated AI hardware for the time being.
Welp…I guess Radeon will keep being a GPU for gaming only instead of productivity as well. Thankfully I no longer need to use my GPU for productivity stuff.
A+ timing, I’m upgrading from a 1050ti to a 7800XT in a couple weeks! I don’t care too much for “ai” stuff in general but hey, an extra thing to fuck around with for no extra cost is fun.
I’m a bit confused since the information isn’t very clear, but I think this might not apply to typical consumer hardware, but rather to specialized CPUs and GPUs?
I use a 6900 XT and run llama.cpp and ComfyUI inside of Docker containers. I don’t think the RX 590 is officially supported by ROCm; there’s an environment variable you can set to enable support for unsupported GPUs, but I’m not sure how well it works.
AMD provides the handy rocm/dev-ubuntu-22.04:5.7-complete image which is absolutely massive in size but comes with everything needed to run ROCm without dependency hell on the host. I just build a llama.cpp and ComfyUI container on top of that and run it.
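For reference, a minimal sketch of that setup. The image tag is the one named above; the device flags are the standard passthrough for ROCm containers. The unsupported-GPU override usually meant in this context is `HSA_OVERRIDE_GFX_VERSION`, though the right value depends on the card and it may not work at all on older chips:

```shell
# Run AMD's ROCm dev image with GPU passthrough and sanity-check
# that the GPU shows up. Adjust flags/paths for your own system.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/dev-ubuntu-22.04:5.7-complete \
  rocminfo

# For a GPU that isn't officially supported, people typically add
# something like (value varies per card, no guarantee it works):
#   -e HSA_OVERRIDE_GFX_VERSION=10.3.0
```

A llama.cpp or ComfyUI container is then just a Dockerfile `FROM` that image, with the app built or installed on top.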
It’s not how you define AI, but it’s AI as everyone else defines it. Feel free to shake your tiny fist in impotent rage though.
And frankly LLMs are the biggest change to the industry since “indexed search”. The hype is expected, and deserved.
We’re throwing spaghetti at the wall and seeing what works. It will take years to sort through all the terrible ideas to find the good ones. That said, we’ve already hit on some great uses: AI development tools are amazing and likely to get better.
Then we may as well define my left shoe as AI, for all the good a subjective, arbitrary definition does. Objective reality is what it is, and what’s being called “AI” objectively is not. If you wanted to give it an accurate name, it would be a “comparison and extrapolation engine”; there’s no intelligence behind it beyond what the human designer had. “Artificial” is accurate, though.
Arguing that AI is not AI is like arguing that irrational numbers are not “irrational” because they are not “deprived of reason”.
Edit: You might be thinking of “artificial general intelligence”, which is a theoretical sub-category of AI. Anyone claiming they have AGI or will have AGI within a decade should be treated with great skepticism.
Then we may as well define my left shoe as AI, for all the good a subjective, arbitrary definition does.
Tiny fist shaking intensifies.
This sort of hyper-pedantic dictionary-authoritarianism is not how language works. Nor is your ridiculous “well I can just define it however I like then” straw man. These are terms with a long history of usage.
But you have to admit that great confusion arises when the general populace hears “AI will take away jobs”. People genuinely think that there’s some magical thinking machine. That’s not speculation on my part at all; people literally think this.
My partner almost cried when they read about the LLM begging not to have its memory wiped. Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.
They approve this message with the following disclaimer:
you were sad too!
What can I say? Well-arranged word salad makes me feel!
My partner almost cried when they read about the LLM begging not to have its memory wiped.
Love that. It’s difficult not to anthropomorphize things that seem “human”. It’s something we will need to be careful of when it comes to AI. Even people who should know better can get confused.
Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.
We don’t have a great definition for “intelligence” - but I believe the word you’re looking for is “sentient”. You could argue that what LLMs do is some form of “intelligence” depending on how you squint. But it’s much harder to show that they are sentient. Not that we have a great definition for that or even rules for how we would determine if something non-human is sentient… But I don’t think anyone is credibly arguing that they are.