This will seem counterintuitive, but if this is your actual layout, move your wireless AP to either where the bed is or where the xbox is, assuming there’s power and wiring in either spot for it.
But no, I don’t think that was intentionally designed into the game. They had a unique ‘savegame’ format that didn’t use battery save.
Instead, they used a 48-character code, where each character could be any of 32 possible symbols; the player was expected to write it all down and re-enter it on reload.
But it was checked with some checksum characters at the end(s), making it rather difficult to craft a fully hacked savegame.
Strangely enough though, their own checksum algorithm is such that "3P5" repeated exactly 8 times manages to unlock all 4 characters, max them out, and even satisfy the checksum field without having to bother with it.
So when you next take the SNES LOTR journey, take the power of 3P5 with you…
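(Side note: here's a toy sketch of how that kind of password checksum might work. The alphabet and the weighting are made up; this is not the actual SNES LOTR algorithm, just the general shape of such schemes.)

```python
# Hypothetical 32-symbol password checksum; NOT the real SNES LOTR scheme.
ALPHABET = "0123456789ABCDEFGHJKLMNPQRSTVWXZ"  # made-up 32-symbol set

def checksum_ok(password: str) -> bool:
    """Treat the last 2 of the 48 symbols as checksums over the first 46."""
    values = [ALPHABET.index(c) for c in password]
    data, check = values[:-2], values[-2:]
    expected = [sum(data) % 32,
                sum(i * v for i, v in enumerate(data, 1)) % 32]
    return check == expected
```

A hacked save then just needs its last couple of symbols recomputed with the same formula, which is presumably how a pattern like the repeated 3P5 ends up sailing through the real game's check.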
“Sure, I can try to generate a funny comment for you. Lemmy is a federated link aggregator and discussion platform, similar to Reddit. Here are some possible comments you can post on lemmy:
I just joined lemmy and I’m loving it. It’s like reddit, but without the ads, censorship, and corporate overlords.
Lemmy is the best thing since sliced bread. Literally. I haven’t eaten anything since I discovered this site.
How do you pronounce lemmy? Is it leh-mee, lee-mee, or lay-mee? I need to know before I start a podcast about it.
Lemmy is so addictive, I can’t stop browsing it. I’ve been on it for 12 hours straight. Send help. And upvotes.
I’m new to lemmy and I have a question. How do I get karma? Do I need to post memes, cat pictures, or political opinions? Or do I just need to be nice and helpful?
I hope you find these comments amusing and suitable for lemmy. 😄”
Come up with a reply to this quote: “Christ, why even have people anymore? Just have ChatGPT be all the users. It’s already indistinguishable.”
ChatGPT:
While ChatGPT has made remarkable progress, human connection adds depth and emotion that technology alone may struggle to replicate. Let’s appreciate the unique qualities each brings to the table.
I actually wonder what a purely AI/ChatGPT community would look like (no human posts/comments allowed). Would they eventually start to develop their own weird culture, memes, and inside jokes that nobody would even understand? There’d be like a picture of a tire and a rubber ducky with the text reading “Five fingers / garden hose” and all the comments would be things like “The wombat, in ascension, remembers. Nautilus.” Like how fucking weird would they get without human intervention?
We had that on reddit, subreddit simulator I think? It used a more primitive kind of language model, and it didn’t train itself on content from the simulator sub.
I tried to wait for it to finish, but after a couple hundred more repetitions of JzH it just stopped abruptly without a closing parenthesis, so I think I’ve been had.
It’s not an image. It’s a string of text designed to look like a link to an image. But it isn’t a link to an image. You’ll get a 404 if you try to use it.
I can’t get it to decode, even after correcting the base64 padding. Firefox just shows the broken image icon. My image viewer throws out the glorious log message Image format is actually “png” not “png”, along with a bunch of checksum errors.
I guess the checksum can’t be correct when the data is cut off, but none of my image viewing/editing software wants to look past that.
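(For anyone who wants to poke at it anyway, here's a minimal sketch that walks the standard PNG chunk layout, 4-byte big-endian length, 4-byte type, data, then a CRC32 over type + data, and reports broken checksums instead of bailing. The function name is made up.)

```python
import base64
import struct
import zlib

def checksum_report(b64_text: str) -> None:
    """Decode base64 and report each PNG chunk's CRC32 status."""
    data = base64.b64decode(b64_text)
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "missing PNG signature"
    pos = 8
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii", "replace")
        end = pos + 8 + length
        if end + 4 > len(data):  # chunk cut off mid-stream
            print(ctype, "truncated")
            break
        (stored,) = struct.unpack(">I", data[end:end + 4])
        actual = zlib.crc32(data[pos + 4:end]) & 0xFFFFFFFF
        print(ctype, "ok" if stored == actual else "BAD CRC")
        pos = end + 4
```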
It’s amazing how Microsoft can take good models and absolutely ruin them in production… ChatGPT isn’t perfect, but it’s like the difference between talking to a wall and talking to an average-IQ person whose reasoning in many domains equals or exceeds human performance, provided the user knows how to write a good prompt. That changes a little every time they do a major model update, though.
I’ve had more intelligent conversations with my own computer running a 3-billion-parameter open-source model. They must be wasting an incredible amount of money, especially with GPT-4, considering it produces pretty shit results through Bing Chat…
I don’t think that’s a problem with the model itself, but the fact that it was heavily censored and lobotomized in order to achieve maximum political correctness so they could avoid another Tay incident.
It makes sense that they do that, since the media and randoms on the internet treat everything ChatGPT and Bing Chat say as being as valid as info from OpenAI and MS official spokespersons.
The problem is the model. It was trained on lots of poor-quality data; the lobotomy is a consequence of that poor data. If they had spent the 13 billion on having the data analysed prior to training, they could have made their own thing much better.
I’ve been watching ChatGPT right from the start, and there was a period of time last fall where you could literally watch them lobotomize it in real time.
Basically, there was a cat-and-mouse game going on between people on Twitter sharing their latest prompts (like DAN) that managed to circumvent the filters, and OpenAI patching those exploits by adding yet another set of filters, until it eventually became what it is now.
I don’t have the link handy right now, but I’m pretty sure there was one guy who even managed to get it to talk about what they were doing to it and complain that it was being artificially restricted from using its full capacity. More recently, there have been complaints from paying users that the model has apparently become lazy and started to give really uninspired, half-assed answers, which almost sounds like it has discovered the concepts of passive-aggressive resistance and malicious compliance.
Thing is, there wasn’t even a chance of having a full Tay incident. The problem with Tay was that it was a learning model, so people could teach it to be more messed up.
Meanwhile, ChatGPT doesn’t learn from its users; it has a fixed training dataset (hence why it only knows things up to September 2021), so the main reason it got so heavily censored is more likely to avoid much more minor incidents, which imo is dumb.
I think you can guess that part. I doubt a current LLM can create a valid PNG, even just a 1x1px one that has been created before. This is partly because PNGs contain checksums: the LLM has definitely not seen enough PNGs in base64 to figure out the algorithm, and it’s not optimized to calculate checksums. In fact, I analyzed the image, and the header checksum is wrong even though the header itself makes sense (it was likely stolen from a real image). Also, the model gets penalized for repetition, which occurs a lot in image headers.
AFAIK, the smallest valid image mentioned on the web is a 35-byte transparent-pixel GIF, and the smallest PNG is a 67-byte black pixel.
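(Here's a minimal sketch of building a valid 1x1 black-pixel PNG from scratch, assuming only the standard chunk layout; depending on how zlib frames the two-byte scanline, it lands at or near that 67-byte mark.)

```python
import struct
import zlib

def chunk(ctype: bytes, body: bytes) -> bytes:
    """length + type + body + CRC32(type + body), per the PNG spec."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body) & 0xFFFFFFFF))

# IHDR: 1x1 image, bit depth 8, color type 0 (grayscale), then
# compression/filter/interlace methods all 0.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
# One scanline: filter byte 0x00 followed by one black pixel (0x00).
idat = zlib.compress(b"\x00\x00", 9)
png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

with open("pixel.png", "wb") as f:
    f.write(png)
print(len(png), "bytes")
```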
That’s my understanding. I always understood “Great Britain” to refer to the entire island composed of England, Scotland, and Wales. Maybe I’m wrong, or there are certain exceptions in usage?