The only areas of machine learning that I expect to live up to the hype are those where somewhat noisy input and output doesn’t ruin the usability, like image and audio processing and generation, or where you have to validate the output anyway, like the automated copy-paste from Stack Exchange. Anything that requires actual specificity and factuality straight from the output, like the language models attempting to replace search engines (or worse, professional analysis), will for the foreseeable future be tainted with hallucinations and misinformation.
The ol’ “philosophy is just applied history, history is just applied psychology, psychology is just applied biology, biology is just applied chemistry, chemistry is just applied physics, physics is just applied mathematics, mathematics is just applied philosophy.”
Which I saw as graffiti in a university bathroom back in Ye Olden Days and has stuck with me ever since.
Actually less than that because only around 10% of the gold created this way (assuming a natural distribution of Hg isotopes) would be stable, so you’d get a bunch of β particles too. I don’t even know how Au-201 would act, and it would comprise 30% of the output.
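For anyone who wants to check those numbers, here’s a minimal sketch. It assumes the transmutation knocks a single proton out of each nucleus (Hg-A → Au-(A-1)) and uses the standard natural abundances of mercury’s isotopes; the reaction assumption is mine, not from the comment above:

```python
# Rough sketch: assume each Hg-A nucleus loses one proton, becoming Au-(A-1).
# Abundances (percent) are the standard natural values for mercury.
hg_abundance = {196: 0.15, 198: 9.97, 199: 16.87, 200: 23.10,
                201: 13.18, 202: 29.86, 204: 6.87}

for a, pct in sorted(hg_abundance.items()):
    au = a - 1  # product mass number after losing one proton
    stable = "stable" if au == 197 else "radioactive"  # Au-197 is gold's only stable isotope
    print(f"Hg-{a} ({pct:5.2f}%) -> Au-{au} ({stable})")
```

Under that assumption, Hg-198 (~10%) is the only isotope that lands on stable Au-197, and Hg-202 (~30%) is where all that Au-201 comes from, which matches the figures above.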
To state the obvious: what’s fascinating is that those networks are small, their members are among the most intelligent people available, and they meet each other regularly in person at conferences.
They may be intelligent in their fields, but that doesn’t mean they think things through in every aspect of their lives. The status quo is the easiest thing to deal with, so they can devote more time to their careers/research.
Unless their field is social engineering; then yeah, why are they going along with it?