The old chef forged documents to take over the restaurant for himself, stole from the protagonist, and took advantage of the Gusteau name to sell cheap food for profit.
He didn’t forge documents. He just didn’t want to tell the protagonist that he was the heir. I believe the letter also told him not to tell. So, in a way, he was fulfilling the dead mother’s wishes on that.
Gusteau’s was failing. Selling cheap food for profit might have been the only way to keep the business afloat. Yes, he tarnishes the name, but sometimes things have to be done, and he might have had to make that hard decision.
Idk, if you had intelligent talking people-rats like in the movie, then being excluded from whole industries just because people think they’re icky would actually be pretty bigoted.
They aren’t icky simply because they’re rats; they’re icky because they piss and shit everywhere.
Bigotry and such is unjustified because the people it’s targeted at aren’t actually the things bigots assume about them. They’re normal, same as any of us.
Rats are unsanitary. Even pets shouldn’t be in a food prep area. I’m not sorry for any rat furries out there.
I didn’t. Intelligent talking people-rats are still rats; the rats in the movie behave entirely like normal biological rats, they just can “talk” and stuff too.
So intelligent talking people-rats can run a five-star restaurant but can’t understand the concept of hygiene? Why? Because “Rats are unsanitary”, apparently. Sounds pretty bigoted. Sounds like you found a big-worded sciency way of calling them icky. You’re a scientific ratcist.
If they have to live in the walls and don’t have sanitary living conditions, that’s a societal issue, also due to deeply entrenched anti-rodent bigotry.
I can’t wait for Ben Shapiro to clip this and use it as evidence of the crazy woke left.
Is an LLM machine learning? In ML you are usually predicting a value based on values in the training set. That’s not really what an LLM does, it seems. Maybe it uses some ML under the hood.
In ML you are usually predicting a value based on values in the training set
No, that’s just a small part of ML: supervised learning. There’s also unsupervised learning, reinforcement learning, and a whole bunch of other things in machine learning; it’s a way bigger field than just that.
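To make the “predicting a value based on values in the training set” part concrete, here’s a tiny supervised-learning sketch. I’m just picking scikit-learn’s LinearRegression as one arbitrary example; any supervised method follows the same fit-then-predict pattern:

```python
# Minimal supervised-learning sketch: fit a model on known (input, value) pairs,
# then predict the value for a new input.
from sklearn.linear_model import LinearRegression

X_train = [[1], [2], [3], [4]]   # inputs in the training set
y_train = [2, 4, 6, 8]           # values we want to learn to predict

model = LinearRegression()
model.fit(X_train, y_train)      # "learn" from the training set

print(model.predict([[5]]))      # -> approximately [10.]
```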
And about your question: yeah, LLMs are a prime example of machine learning. Very simplified, they use a kind of neural network (a transformer) that takes inputs of arbitrary length and gives outputs. They are trained on huge amounts of data (text) to auto-complete that data (so that they get e.g. a sentence as input and output a second sentence that’s likely to come next in the data). E.g. “Today I went” as input could generate “to school.” as output.
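If you want to see that auto-complete behaviour yourself, a minimal sketch with the Hugging Face transformers library and the small GPT-2 model (my choice purely for illustration, nothing GPT-4-sized) looks roughly like this:

```python
# Minimal auto-complete sketch with a small pretrained LLM (GPT-2).
# Assumes the `transformers` and `torch` packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today I went", return_tensors="pt")
# The model repeatedly predicts a likely next token and appends it.
outputs = model.generate(**inputs, max_new_tokens=5)

print(tokenizer.decode(outputs[0]))  # e.g. "Today I went to the ..."
```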
ChatGPT is built on top of LLMs like GPT-4: the start of the input is a set of instructions in human language telling the bot how to behave (e.g. “You are called ChatGPT. You are not allowed to […]. You are helpful and friendly.”), followed by the user’s input. The LLM then generates what a chatbot with those characteristics would likely say, based on its training data, and that’s returned as ChatGPT’s output.
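Very roughly, the chatbot layer just glues a fixed instruction text and the user’s message into one big input for the LLM. Something like this sketch, where generate_with_llm() is a made-up placeholder for whatever actually runs the model:

```python
# Rough sketch of the "chatbot on top of an LLM" idea. `generate_with_llm` is a
# hypothetical placeholder, not a real API.
SYSTEM_INSTRUCTIONS = (
    "You are called ChatGPT. You are not allowed to [...]. "
    "You are helpful and friendly.\n"
)

def generate_with_llm(prompt: str) -> str:
    # Placeholder: a real implementation would run the language model here.
    return "(model output would go here)"

def chatbot_reply(user_message: str) -> str:
    # The LLM only ever sees one long text: instructions + the user's message.
    prompt = SYSTEM_INSTRUCTIONS + "User: " + user_message + "\nAssistant:"
    return generate_with_llm(prompt)

print(chatbot_reply("Explain what an LLM is in one sentence."))
```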
I remember Gilbert Gottfried at a Friars Club roast. Can’t remember what the actual joke was, but I remember he lost the whole audience, and then won them back with a spontaneous telling of “The Aristocrats”.
Kudos to Carlin, who made fun of government propaganda. Maybe not so much to Joan Rivers, who made fun of FDNY widows.
(I’m not a boomer, though. Or a millennial. Or really that edgy anymore, if I ever was…)