The Fountain. It’s one of those movies that absolutely blew me away and I had to own it, but I’m never quite in the right emotional state to revisit it, yet I still recommend it. It’s breathtakingly beautiful and devastating. What Dreams May Come is a very close second; after Robin Williams’ death it hits even harder, but the movie itself is just so beautifully done and heart-wrenching.
I loved The Fountain, too. It is so incredibly beautiful, and I’m glad to own it for the same reasons. I also can’t imagine when I’ll want to watch it again…
Grave of the Fireflies by Studio Ghibli. I love Ghibli movies, and this one was very moving, but if you’ve seen it you know why I won’t watch it again. Very powerful.
I mostly buy ingredients and cook bulk batches of food. Before, we were splurging on Instacart, but they got crazy expensive with their upcharges: a minimum 15% increase in item cost, plus a service charge, plus a delivery fee (or the annual delivery pass), plus a tip (and it started to feel like 15% was too low for the tip, on top of the 15% grocery upcharge).
We stopped that, and we actually spend less now even after all this inflation.
The only time I was agitated by it was with the George Carlin thing.
It pissed me off that it was done without permission. It annoyed me that “AI” also kinda looks like “AL” with a lowercase L, so next to another name it reads like AL CARLIN or AL GEORGE. And it left me divided, because I watched the damn special and it was mostly funny and did feel like Carlin’s style (though it certainly didn’t sound right and it had timing issues). So, like… it wasn’t shit in and of itself, but the nature of what it is and the fact that it was done without permission or consent is concerning. Shame on Will Sasso for that. He could have just done his own impersonation and written his own jokes in the style of Carlin; it would have been a far better display of respect and appreciation than having an AI do it.
I don’t think he’s a sick and disgusting person for this; even before it all blew up, it seemed more like a tribute to a comedian he adored. Just a poorly thought out way of doing one that may have some pretty hard consequences.
Despite the presentation as an AI creation, there was a good deal of evidence that the Dudesy podcast and the special itself were not actually written by an AI, as Ars laid out in detail this week. And in the wake of this lawsuit, a representative for Dudesy host Will Sasso admitted as much to The New York Times.
Just further evidence Sasso could have done the impersonation himself and it would have been a fine tribute (and had better timing and delivery), but he used an AI to replicate Carlin’s voice and mannerisms instead. Sure, I don’t think he could have done a great job of impersonating how Carlin sounds, but the mannerisms and delivery would have been enough, and that’s something he should be pretty good at considering his time on MADtv, where he did a lot of impression work (such as his Steven Seagal character).
AI has, for a long time, been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy). I wrote a blog piece in 2019 on what it means when we talk about AI.
Here are some alternative terms you can use in place of AI when people are actually talking about something else:
AGI (Artificial General Intelligence): The big kahuna that doesn’t exist yet, that many projects are striving for, yet remains as elusive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee, or of assembling your flat-packed furniture from the visual IKEA instructions. Since we still can’t define sentience, we don’t know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder its creator or end humanity, but probably not.
LLM (Large Language Model): This is the engine behind digital assistants like Siri or Alexa, and they still suffer from nuance problems. I’m used to having to ask them several times to get the results I want (say, the Starbucks or Peet’s that requires the least deviation from my route over the next hundred kilometers; Siri can’t do that). This is an application of learning systems (see below), but it isn’t smart enough for your household servant bot to replace your hired help.
Learning Systems: The fundamental programming magic that powers all this other stuff, from simple data scrapers to neural networks (there’s a toy sketch of the idea after this list). These are used in a whole lot of modern applications, and have been since the 1970s. But they’re very small compared to the things we’re trying to build with them. Most of the time we don’t actually call them AI, even for marketing. It’s just the capacity for a program to get better at doing its thing from experience.
Gaming AI: Not really AI (necessarily), but a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It’s often really simple (also sketched after the list), except in strategy games, which can bring enough computational power to bear to challenge the big guns of international chess.
Generative AI: A term for models that create content: draw pictures, write essays, or do other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired output to specification. They’re commonly confused by nuance, and they infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet Generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle Generative AI, and Lemmy has a few communities devoted to techs honing their picture-generation skills and stress-testing the nuance-interpretation capacity of these systems (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
Technological Singularity: Still a ways off, since it requires an AGI capable of designing its successor; lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize that this is how human beings (happily) go extinct, either left to retire in a luxurious paradise or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won’t happen, thanks to all the crises featuring global catastrophic risk.
AI Snake Oil: There’s not yet an official name for it, but it’s a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to enhance their own revenues, typically by replacing some of their workers with generative AI systems; instead of having a development team, they’d have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters suggesting their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs and no longer do, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we’ll have industrial quackery being sold to executive management, promising to replace human workers or force them to work harder for less or something.
Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill, and then kill it). A large coalition of technology ethicists wants to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to their human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don’t need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws, or a really precise cutting laser) and can use it, and to realize that turning its weapon on its commanding officer might expedite completing its mission. Friendly AI would choose not to do that. Unfriendly AI will consider its less loyal options more thoroughly.
That’s a bit of a list, but I hope it clears things up.
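To make the “learning systems” entry concrete, here’s a toy sketch in Python of the whole idea: a program that gets better at its job purely from experience. It’s an epsilon-greedy bandit learning which of three slot machines pays out best; every name and number in it is invented for illustration, not taken from any real library.

```python
import random

# Toy "learning system": an epsilon-greedy bandit that learns, purely
# from experience, which of three slot machines pays best.
# All payout numbers here are made up for illustration.

TRUE_PAYOUTS = [0.3, 0.5, 0.8]   # hidden from the learner
EPSILON = 0.1                    # how often we explore at random

estimates = [0.0, 0.0, 0.0]      # learner's running guess per machine
pulls = [0, 0, 0]                # how many times each was tried

def pull(machine: int) -> float:
    """Simulate one play: pays 1 with the machine's hidden probability."""
    return 1.0 if random.random() < TRUE_PAYOUTS[machine] else 0.0

for step in range(10_000):
    # Mostly exploit the best-looking machine, sometimes explore.
    if random.random() < EPSILON:
        choice = random.randrange(3)
    else:
        choice = max(range(3), key=lambda m: estimates[m])

    reward = pull(choice)
    pulls[choice] += 1
    # Incremental average: nudge the estimate toward what we just saw.
    estimates[choice] += (reward - estimates[choice]) / pulls[choice]

print("learned payout estimates:", [round(e, 2) for e in estimates])
# After enough pulls the estimates approach [0.3, 0.5, 0.8]: the
# program got better at its job without anyone reprogramming it.
```

That incremental-average update is the whole trick at the bottom of the stack; neural networks are the same “nudge the guess toward what you just saw” idea with vastly more knobs.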
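And for the “gaming AI” entry, here’s how simple a typical enemy AI can be: a three-state machine, sketched in the same toy Python. Again, all the states and distances are made up, not from any real game engine.

```python
from dataclasses import dataclass

# Toy "enemy AI": a three-state machine of the sort countless games
# actually ship. Everything here is invented for illustration.

@dataclass
class Enemy:
    state: str = "patrol"

    def update(self, distance_to_player: float) -> str:
        # Dumb rules, evaluated once per game tick.
        if self.state == "patrol" and distance_to_player < 10:
            self.state = "chase"
        elif self.state == "chase":
            if distance_to_player < 2:
                self.state = "attack"
            elif distance_to_player > 15:
                self.state = "patrol"   # lost the player, give up
        elif self.state == "attack" and distance_to_player >= 2:
            self.state = "chase"
        return self.state

enemy = Enemy()
for d in [20, 8, 1.5, 5, 30]:           # player's distance over five ticks
    print(d, "->", enemy.update(d))
# patrol -> chase -> attack -> chase -> patrol: that's the entire "AI".
```

Patrol, chase, attack, and back again; that’s the whole brain of many a video-game goon.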
I remember when OpenAI was talking like they had discovered AGI, or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that wasn’t true, and honestly we may never get there. But we might.
Good list tbh.
Personally, I’m both excited and cautious about the future of AI, because of its ethical implications and how it could affect society as a whole.
Part of my work is to evaluate proposals for research topics and their funding, and as soon as “AI” is mentioned, I’m already annoyed; in the vast majority of cases, justifiably so. It’s a buzzword to make things sound cutting-edge, and it very rarely carries any meaning or actually adds anything to the research proposal. A few years ago the buzzword was “machine learning”, and before that “big data”; same story. Those, however, quickly either went away or people started to use them properly. With AI, I’m unfortunately not seeing that.