Yes. If you use Ghostery and are looking for an alternative I highly recommend Privacy Badger. It’s created by the Electronic Frontier Foundation and is free and open source. Great piece of software.
True. I did some rough math when I needed to right-size a UPS for my home server rack and estimated that running a Pi4 for a year would cost me about $8 worth of electricity and that running an x86 desktop would cost me about $40. Not insignificant for sure if you’re not going to use the extra performance that an x86 PC can offer.
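For anyone curious how those figures pencil out: the estimate is just average draw, times hours in a year, times your electricity rate. Assuming roughly 5 W average for the Pi and 25 W for the desktop at about $0.18/kWh (my assumptions, not measurements; plug in your own numbers):

$$5\,\mathrm{W} \times 8760\,\mathrm{h} \approx 44\,\mathrm{kWh} \;\Rightarrow\; 44 \times \$0.18 \approx \$8/\mathrm{yr}$$
$$25\,\mathrm{W} \times 8760\,\mathrm{h} \approx 219\,\mathrm{kWh} \;\Rightarrow\; 219 \times \$0.18 \approx \$39/\mathrm{yr}$$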
This exactly. If you already have Pis they are still great. Back when they were $35 it was a pretty good value proposition, with none of the power or space requirements of a full-size x86 PC. But for $80-$100 it’s really only worth it if you need something that small, or if you plan to actually use the GPIO pins for a project.
If you’re just hosting software, a several-year-old used desktop will outperform it significantly and cost about the same.
Exactly. It’s like hey… If the corpos take a dump in your mouth you can either leave or you can stick around and complain about the taste. And yet the people who left are the whiners?
I’m a sysadmin as well and I consider spinning up a new instance and rebuilding a system from scratch to be an essential part of the backup and recovery process.
Upgrades are fine, but they can be risky, and over a long enough period a system accumulates changes that nobody documented, which makes it hard to know exactly which settings or customizations your applications actually depend on. VM snapshots are great, but they aren’t always portable and they don’t solve that slow accumulation of undocumented changes.
Instead, if you can reinstall the OS, copy the data, apply a config, and get things working again, then you know exactly what configuration is necessary, and when something breaks you can get back to a healthy state much more easily.
Generally these days I use a preseed file for my Linux installs to partition disks, install essential packages, add users, and set SSH keys. Then I use Ansible playbooks to deploy a config and install/start applications. If I ever break something that takes longer than 20 minutes to fix I can just reinstall the whole OS and be back up and running, no problem.
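For a rough sense of what that looks like in practice, here’s a minimal playbook sketch; the hosts, package, and paths are made-up examples rather than my actual setup:

```yaml
# Hypothetical minimal playbook: install a package, template its config,
# and restart the service only when the rendered config actually changes.
- hosts: app_servers
  become: true
  tasks:
    - name: Install the application package
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Deploy config from a Jinja2 template
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

The handler only fires when the template task reports a change, which keeps reruns idempotent.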
I’ve had exactly this same thought. Doing it client-side seems easy enough: it’s just like creating a multireddit, except that when you want to post you have to choose which instance’s community to post in.
The hard part is probably that these communities will have different moderators and different rules which complicates things substantially.
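Purely as a thought experiment, the client-side half could be as simple as a grouping config like this; the format and field names are hypothetical, not something any current client actually supports:

```yaml
# Hypothetical client-side "multi-community": merge several federated
# communities into one reading feed, but prompt for a specific instance
# whenever the user posts.
groups:
  - name: selfhosted
    read_from:
      - selfhosted@lemmy.world
      - selfhosted@lemmy.ml
    post_to: ask   # always ask which community to post in
```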
In my opinion, trying to set up a highly available, fault-tolerant homelab adds a large amount of complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process, so that if anything goes wrong you can restore from a backup or start your containers on another node.
I configure and deploy all my applications with Ansible roles. Ansible can programmatically create config files, pass secrets, build or start containers, and cycle containers automatically after config changes: basically everything you could need.
Sure, it would be neat if services could fail over automatically, but things only ever tend to break when I’m making changes anyway.
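For what it’s worth, the "cycle containers after config changes" part is just Ansible’s normal notify/handler mechanism. A sketch with made-up names, assuming the community.docker collection is installed:

```yaml
# Hypothetical role fragment: re-render the app config and bounce the
# container only when the rendered file actually changed.
- hosts: docker_hosts
  become: true
  tasks:
    - name: Render app config
      ansible.builtin.template:
        src: app.conf.j2
        dest: /opt/app/app.conf
      notify: Restart app container

  handlers:
    - name: Restart app container
      community.docker.docker_container:
        name: app
        image: nginx:stable
        restart: true
        volumes:
          - /opt/app:/etc/app:ro
```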
I watched a video from a guy who used machine learning to play Pokemon, and he did a great analysis of the process. The most interesting part to me was how small changes to the reward system could produce such bizarre and unexpected behavior. He gave out rewards for exploring new areas by taking a screenshot after every input and comparing it against every previous one. Suddenly the AI became very fixated on one specific area of the game and he couldn’t figure out why. It turns out there were both flowers and water animating in that area, so it triggered a lot of rewards without any actual exploring. The AI literally got distracted looking at the beautiful landscape!
Anyway, that example helped me understand the challenges of this sort of software design. Super fascinating stuff.
Seamlessly syncing game saves between my Deck and my primary gaming PC is so nice. Before I travel I just make sure to wake the Deck up long enough to get updates and sync saves.
For non-Steam games I use Syncthing, but that always requires a little extra work.
I jumped into Tumbleweed recently and have really been liking it. The last time I used Linux with a desktop environment I was on GNOME, and KDE was a lot uglier back then. Things have definitely changed.
It’s like looking through a telescope. Everything within the lens is clear and detailed, but anything on the periphery might as well not exist. Very useful state of mind for certain coding tasks.
I use Ansible for all my deployments and just got PXE boot set up with a preseed file to automate the install process and get each host ready to run playbooks.
I’ve been really pleased with this strategy overall. Ansible works really well for programmatically generating config files, which in turn makes moving applications between servers effortless. I control Docker volume mounts with Ansible variables and encrypt secrets with Ansible Vault, so I can do everything in one place.
Troubleshooting issues is a lot easier, and recovering from a backup is faster and requires less effort since I can just pull the Ansible config down from git and redeploy.
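To make the variables-plus-Vault part concrete, here’s a sketch under made-up names: the data path is a plain variable, and the secret lives in a vars file encrypted with ansible-vault:

```yaml
# Hypothetical play: volume mounts driven by a variable, secret pulled
# from a vault-encrypted vars file (managed with `ansible-vault edit`).
- hosts: media
  become: true
  vars:
    app_data_dir: /tank/appdata/app    # moving hosts means changing this one line
  vars_files:
    - group_vars/vault.yml             # contains e.g. app_api_key: "..."
  tasks:
    - name: Start the app container
      community.docker.docker_container:
        name: app
        image: nginx:stable
        env:
          API_KEY: "{{ app_api_key }}"
        volumes:
          - "{{ app_data_dir }}:/config"
```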