Games from that time were actually running mostly in your browser, meaning that the host, for example Miniclip, served you the JavaScript and other files of the game, which were then executed locally. So technically you could archive those games as long as you could load them up at least once.
If you logged and saved all the files the page requested on first load, you could potentially make it work. You could manually change the file paths in the HTML if you're only doing a few games; there are only 10 or so paths that would need to be modified. The PHP ones are likely harder to get working, since PHP is a server-side language and you probably don't have easy access to the original server and everything that goes with it.
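To illustrate the path-fixing step, here is a minimal sketch: a saved page that references a CDN gets its URLs rewritten to point at a local assets/ folder with a single sed pass. The domain, file names, and folder layout are all made-up placeholders, not the real Miniclip structure.

```shell
# Fake saved page with hard-coded CDN paths (placeholders for illustration)
cat > play.html <<'EOF'
<script src="https://cdn.example.com/games/somegame/game.js"></script>
<img src="https://cdn.example.com/games/somegame/logo.png">
EOF

# Rewrite every CDN prefix to the local assets/ folder in one pass
sed -i 's|https://cdn\.example\.com/games/somegame/|assets/|g' play.html
```

With only ~10 paths, doing this by hand in a text editor works just as well; the sed pass only pays off once you archive more than a couple of games.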
Anyway, thanks for the link to mynoise.net. It looks like a well-designed, carefully crafted website.
It’s an open-source solution designed to scale to what the web was originally designed for and excels at: documents, specifically hyperlinked documents, or webpages. You can’t reasonably expect an archival service to archive something that is by definition not static, like an interactive web app.
There’s a LocalLLaMA subreddit with a lot of good information, and 4chan’s /g/ board will usually have a good thread with a ton of helpful links in the first post. I don’t think there’s anything on Lemmy yet. You can run some good models on a decent home PC, but training and fine-tuning will likely require renting some cloud GPUs.
I haven’t tried any of them, but I did just listen to a podcast the other week where they talk about LlamaGPT vs. Ollama and other related tools. If you’re interested, it’s episode 540, “Uncensored AI on Linux,” by Linux Unplugged.
You could repurpose an old workstation, bought dirt cheap on eBay if you are lucky, but even then you’ll have to get yourself an HDD, maybe multiple of them if you want to have data redundancy.
For anything new your best bet is a 2 bay ready made NAS, but you’ll have to invest around 300€ for the cheapest one.
It is entirely possible to start with a 2-bay drive rack (not a caddy, we want something without the connections) and then run the SATA out the back of the computer to the drives. It’s a compromise for this low a budget, but it’s not a major sacrifice.
I’ve been working on this on and off for a few months now. The more I learn, the deeper the hole gets. Ports and VPNs and UPNP and TCP and UDP and hosts and containers and firewalls and on and on. It’s a lot.
Many times I can’t get things working properly, if at all, and other times it works perfectly one day and then several days later, after changing absolutely nothing, no longer works.
My current goal is to get a Mobilizon instance and a Jitsi server running, to hopefully get a community started up there that meets up regularly to help each other, and to make onboarding easier.
I tried to ask for help around here and, while a few kind people did offer to help (and disappeared shortly thereafter), I was overwhelmingly lambasted for daring to ask for personal help.
This. Also, yt-dlp and/or youtube-dl used to have an issue where, if the URL started with the video ID instead of the playlist ID, it downloaded just that video rather than the whole playlist. Not sure if that’s still the case, so just be aware.
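Two ways to sidestep that behavior, sketched below: yt-dlp’s `--yes-playlist` flag (a real flag) forces the whole playlist even from a watch URL, or you can extract the `list=` parameter yourself and hand yt-dlp the bare playlist URL. The video and playlist IDs here are placeholders.

```shell
# A watch URL that leads with the video ID but also carries a list= param
url='https://www.youtube.com/watch?v=VIDEOID&list=PLAYLISTID'

# Option 1: force the whole playlist even from a watch URL
#   yt-dlp --yes-playlist "$url"

# Option 2: build the bare playlist URL and download that instead
playlist_url="https://www.youtube.com/playlist?list=${url##*list=}"
echo "$playlist_url"
#   yt-dlp "$playlist_url"
```

The parameter expansion `${url##*list=}` strips everything up to and including `list=`; if the URL carries more parameters after it (e.g. `&index=3`), trim those off too before building the playlist URL.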
I have 3 Intel S3700’s, one for the OS and two 400GB ones for a mirror pool (might do a raidz1 as well). But getting anything in a serious capacity (8-12 TB of usable storage) with datacenter SSDs is really expensive. :(
Just rob a few banks, go to prison, meet a coke dealer, get out of prison and start selling coke, rise up the ranks until you can kill the current leader and become a drug kingpin, and finally realize that you still don’t have enough money for it because they are expensive as shit.
Leave Servarr for last, because it requires many services working together, and even a small mistake in the config will break it. It’s not hard, but it will be easier after you’ve learned how to set up Jellyfin or Audiobookshelf.
I have no experience with your hardware, but after you install Docker and docker-compose, get Portainer and get familiar with Docker Compose. Portainer is a simple GUI that lets you manage all your containers.
So for example: you grab a docker-compose example for Jellyfin, edit PUID, PGID, and the path to your library folder, copy that into Portainer’s Stacks, hit deploy, and BAM! Jellyfin is available on localhost:8096.
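A minimal sketch of what such a compose file looks like, assuming the linuxserver.io Jellyfin image (which uses the PUID/PGID convention); the IDs, timezone, and paths are placeholders you'd edit before deploying:

```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000          # your user id (run `id -u`)
      - PGID=1000          # your group id (run `id -g`)
      - TZ=Europe/Berlin   # pick your timezone
    volumes:
      - /path/to/config:/config     # adjust these paths
      - /path/to/media:/data/media
    ports:
      - "8096:8096"
    restart: unless-stopped
```

Paste that into a new Portainer Stack, deploy, and the web UI comes up on port 8096.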
You might face many issues in the beginning, but don’t give up; it gets easier over time. I still think I’m a noob, but I have no problems with my 40ish containers running on a poor home server 😉. Don’t forget this community is awesome and helpful.
And also get into Proxmox. You can pass through part of your GPU into a “desktop” VM and have other VM(s) running alongside it. That way you can use your computer as normal with a type 1 hypervisor underneath.
Also get a mobo with 2 NICs; the fewer PCIe cards you have, the lower the power draw.
My NVMe idle at 7w and my HDDs idle at about 15w I think. 45w is just for storage.
Backblaze B2 is $6 a month for 1 TB, and the first 10 GB are free. You pay proportionally (it cost me $2–3 total over the last 7–8 months for the 20–150 GB that accumulated over time). Keep in mind that you’ll pay more if you download your backup, but you should treat cloud backup as a last resort anyway. I back up to a second local disk and also to B2 daily with Kopia. Fortunately I haven’t needed to restore; I just download small files from B2 occasionally to test the setup.
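A quick back-of-envelope check of that proportional billing, using the $6-per-TB-month rate quoted above (the 150 GB figure is just an example from the range mentioned):

```shell
# B2 bills on what you actually store: $6 per TB-month, pro-rated.
# 150 GB kept for a full month:
gb=150
awk -v gb="$gb" 'BEGIN { printf "$%.2f/month\n", gb / 1000 * 6 }'
```

So even at the top of that 20–150 GB range, a month of storage is under a dollar, which matches the $2–3 total over several months.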
It’s not just cheaper; I love it because I don’t have to deal with Google.
Just FYI, direct streaming isn’t really direct streaming as you might think of it if you’ve pointed Jellyfin at Samba shares on your NAS instead of storage on the VM running Jellyfin. It will still pull from the NAS into Jellyfin and then HTTP-stream from Jellyfin, which is super annoying.
Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores it and has Jellyfin stream over HTTP anyway. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files live on a NAS separate from the Docker host. That way you’d avoid streaming the data from the NAS to the Jellyfin container on a different computer and then back out to the third computer/phone/whatever that is the client.

This matters when the NAS has a beefy network connection but the virtualization server has much less, or is sharing it among many VMs/containers (e.g. I have 10 gig networking on my NAS and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). Jellyfin has the right settings for this built in, and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).
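For context, a common way the NAS share ends up inside the Jellyfin container in the first place is a CIFS-backed Docker volume, sketched below. Note this is exactly the double-hop setup being complained about: the container reads over SMB and the client still streams from Jellyfin over HTTP. Hostnames, share names, and credentials are placeholders.

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - media:/data/media
    ports:
      - "8096:8096"

volumes:
  media:
    driver: local
    driver_opts:
      type: cifs
      device: //nas.local/media     # your NAS hostname and share
      o: "username=jellyfin,password=changeme,ro"
```

Until Jellyfin honors the shared-network-folder setting, there’s no compose-level trick that makes clients pull from the NAS directly; the traffic always transits the Jellyfin host.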
Hello, and thank you all for your appreciation! For anyone asking, there is a Buy Me a Coffee page for donations. It’s really a pleasure to see my work recognized, especially since I’ve been practically stuck on Android Auto support for months… Going forward, the plan is to fix some bugs already reported to me, add support for the OpenSubsonic API, and clean up the interface (giving users the ability to show or hide elements as they wish). Fewer server calls should lighten and speed up the app.