It’s a song that’s been played so many times the record is starting to get worn out.
Big manufacturer buys software company.
Big manufacturer does not understand software business, software company, or software company’s customers.
Big manufacturer makes a bunch of cost reductions based on incorrect assumptions.
Big shot at big corp customer calls peon (like me) at budget time to ask why we spend so much money on this “VMWare”.
Peon explains that VMware is very important software which used to be “best in class” but has become “overpriced, second rate, yada yada…”, and suggests we switch to Hyper-V.
Big shot asks (a little suspiciously) if we would save money without any negative impact to operations.
Peon says, “Yes.”
Big shot writes big check to Microsoft.
Other big shot at big manufacturer is stuck trying to figure out where all the customers went; not realizing that big manufacturer pissed all over the peons who actually have to use their [now] shitty software.
Big manufacturer decides the acquisition was a failure, learns nothing from it, and sells the shell of the once popular software company for a fraction of what they paid for it.
I’m not so sure the VMware/Broadcom story is ignorance, as many are suggesting, so much as intentional. They see the big bucks are in the large cloud providers, and knowing it’s not easy to switch away from your current virtualization platform, they can bend them over a barrel for a year or two and see massive profit gains. Those providers may consider transitioning to other products, but VMware will lock them in with new contracts first.
And for the resellers and SMB customers, it’s pennies compared to the cloud providers.
Fine, I can see the SMB space embracing things like Proxmox/KVM. It runs on x86 hardware, so if we see companies like Dell offering it on their server hardware, it’s game over for VMware in the SMB space. Imagine having to choose between renewing a VMware license for 30% more or just building new hosts running Proxmox and transitioning. Especially since all hardware has a limited lifespan, often 3-5 years in SMB, a server replacement is just around the corner… Good time to transition.
SMB has hit the point of being the “next market”. There’s a smaller set of enterprise environments, many more SMBs, and more volatility in the SMB space. So being able to support them, and manage mergers etc., without worrying about licensing is a huge benefit. Licensing in SMB is a hellscape, especially when dealing with mergers/transitions.
I hope to see Jellyfin support this too (Plex is already getting support apparently) and hopefully it will work desktop-to-desktop and not just between streaming devices and phones.
Although it’s probably not massively needed, as Jellyfin can already control remote devices.
If something could cast from one of my devices to another of my devices using the cast button, that’s all I want. I can strap one of those devices to my TV and be golden.
So from what I get reading your question, I would recommend reading more about containers, compose files, and how they work.
To your question: I assume when you are talking about adding to a container you are actually referring to compose files (often called ‘stacks’)? Containers add basically no computational overhead.
I keep my services in separate compose files. Every service that needs a db gets its own. This helps to keep things simple and modular.
I need to upgrade a db for a service? -> I do just that and can leave everything else untouched.
Also, compose typically creates a network automatically in which all the services of that stack communicate. Separating the compose files helps to isolate them a little bit with the default settings.
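To make that concrete, here’s a minimal sketch of what one of those per-service stacks can look like. The app choice, tags, and passwords are just placeholders:

```yaml
# One self-contained stack: an app plus its own db.
# Compose creates a default network per file, so "db" resolves by
# service name inside this stack and is invisible to other stacks.
services:
  app:
    image: wordpress:6          # placeholder app
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: change-me
    depends_on:
      - db

  db:
    image: mysql:8              # this db belongs to this stack only
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me-too
    volumes:
      - ./db-data:/var/lib/mysql
```

Upgrading the db for this one service is then just bumping the mysql tag here and running docker compose up -d; nothing in any other stack is touched.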
Aren’t containers the product of compose files? i.e. the compose files spin up containers. I understand the architecture, I’m just not sure how docker streamlines separate containers running the same process (e.g. mysql).
I’m getting some answers saying that it deduplicates, and others saying that it doesn’t. It looks more likely that it’s the former though.
A compose file is just the configuration of one or many containers. The container image is downloaded from the chosen registry and pretty much does not get touched.
A compose file ‘composes’ multiple containers together. That’s where the name comes from.
When you run multiple databases, they run in parallel, so every database has its own processes. You can even see them on the host system by running something like top or htop. The container images themselves can get deduplicated: images that contain the same layer simply reuse the already downloaded files from that layer. A layer is nothing more than a bundle of files. For example, you can choose an ‘ubuntu layer’ as the base of your container image, and every image you pull that uses that same layer will simply reuse those files at creation time. But that basically does not matter; we are talking about a few tens or hundreds of MB in extreme cases.
But importantly, those files are only shared statically, and changing a file in one container does not affect the others. Every container has its own isolated filesystem.
I understand the architecture, I’m just not sure about how docker streamlines separate containers running the same process (eg, mysql).
Quite simple actually: it gives every container its own environment thanks to namespacing. Every process thinks (more or less) it is running on its own machine.
There are quite simple docker implementations with just a couple hundred lines of code.
Any reason the VPN can’t stay as-is? Unless you don’t want it on the unraid box at all anymore. But going to unraid over VPN and then out to the rest of the network from there is a perfectly valid use case.
Well, I didn’t realize that was an option, to be honest, lol. I am having some issues with that box at the moment though, so having a pi or my router acting as the gateway appealed to me with its longer uptime.
I don’t know much about the snap, but you can use docker compose to stand up a deployment pretty quickly. If you aren’t super confident, you could use Nextcloud AIO.
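For reference, a rough compose sketch for a plain (non-AIO) Nextcloud; tags, passwords, and host paths are placeholders:

```yaml
services:
  nextcloud:
    image: nextcloud:apache
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db            # auto-configures the db connection
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - ./nextcloud:/var/www/html
    depends_on:
      - db

  db:
    image: mariadb:11
    environment:
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me-too
    volumes:
      - ./db:/var/lib/mysql
```

docker compose up -d and the first-run wizard does the rest; AIO bundles extras like office and talk if you’d rather not assemble those yourself.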
If you’re looking for tips, I’d try to set up Prowlarr first if you intend to use it, it’ll save some reconfiguration down the line.
Though I don’t find anything in the *arrs as complex as mounting and permissions, haha.
But my favorite part about tinkering with home servers is just learning a little at a time, expanding naturally. It’s easy to find guides that are the “ultimate, best server configs”, but unless you understand what benefits they’re offering, you can’t really determine what fits best for YOUR needs.
I started with CouchPotato on Windows years ago and now have *arrs running through docker on headless boxes and keep adding on fun services.
The auction servers are not really that different from the others. You get the same support. Every few years I hop onto a new auction server when it’s cheaper than my current one. Never had any problems. When an HDD dies I get a new one as quickly as with the normal dedicated servers.
What you do with it is up to you. I run most of my services on bare metal. I did some virtualisation years ago but didn’t see any benefits. I have one or two services running through Docker. That might go up with time, as it seems to be the easiest way to get something up with the optimal configuration.
One of the main reasons I love docker is that migration is really easy: I just tar up the docker compose directory, move it to another distro, and done; everything is running on the new system.
When it comes to performance, you get bare-metal performance while keeping virtualization-style benefits with containers.
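That tar trick works best when every volume is a relative bind mount, so all state lives next to the compose file; a sketch (image and paths are placeholders):

```yaml
# If all volumes are relative bind mounts, the compose directory IS the
# deployment: tar it on the old box, untar on the new one, then
# `docker compose up -d` and the service is back with all its state.
services:
  app:
    image: some/app:latest      # placeholder image
    volumes:
      - ./config:/config        # state stays beside this file
      - ./data:/data
```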
There’s nothing really wrong with PiHole, but I moved from it to AdGuard, both on proxmox. The UI brought me in; it makes management a bit easier. It also supports DoH right out of the box.
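If anyone wants to try it without a dedicated VM/LXC, a minimal compose sketch for AdGuard Home; the host paths are my choice, the container paths are the image’s documented ones:

```yaml
services:
  adguardhome:
    image: adguard/adguardhome
    ports:
      - "53:53/tcp"             # plain DNS
      - "53:53/udp"
      - "3000:3000/tcp"         # first-run setup wizard
      - "80:80/tcp"             # web UI after setup
      - "443:443/tcp"           # DoH, once you add a certificate
    volumes:
      - ./work:/opt/adguardhome/work
      - ./conf:/opt/adguardhome/conf
    restart: unless-stopped
```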
Over the years, as I’ve learned more and gotten better at things, I’ve occasionally had the need to try new Linux distros or remake a VM to fix a bigger problem that I’m not skilled enough to detangle yet. I could probably get away with backups and restores now, but Plex’s account management has saved my butt several times over the years, so I figured it was worth checking to see if there was something similar out there.
I’m a network guy, so everything in my labs uses SNMP because it works with everything. Things that don’t support SNMP are usually replaced and yeeted off the nearest bridge.
For that I use LibreNMS. Simple, open source, and I find it easy to use, for the most part. I put it on a different system than what I’m monitoring, because if it shares fate with everything else, it’s not going to be very useful or give me any alerts if there’s a full outage of my main homelab cluster.
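In case it’s useful, a rough skeleton of a compose deployment, assuming the librenms/librenms image and its documented DB_* variables; a real setup also wants redis and a poller/dispatcher container (see the image docs), and I’d still put it on separate hardware for the fate-sharing reason above:

```yaml
services:
  librenms:
    image: librenms/librenms
    ports:
      - "8000:8000"
    environment:
      DB_HOST: db
      DB_NAME: librenms
      DB_USER: librenms
      DB_PASSWORD: change-me
    volumes:
      - ./librenms:/data
    depends_on:
      - db

  db:
    image: mariadb:10
    environment:
      MYSQL_DATABASE: librenms
      MYSQL_USER: librenms
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me-too
```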
Of course, access to it from the internet is forbidden, and any SNMP is filtered by my firewall. Nothing really gets through to it, so I’m unconcerned about it becoming a target. For the rest of my systems, security mostly relies on a small set of reverse proxies and firewall rules to keep everything secure.
I use a couple of VPN systems to access the servers remotely, all running on odd ports (if they need port forwards at all). I have multiple to provide redundancy to my remote access, so if one VPN isn’t working due to a crash or something, I have others that should get me some measure of access.
If you’re only using it for Plex and nothing else, it probably won’t make a lot of difference which you use.
My old setup was Ubuntu running Plex as a native install… if you just run a server without a GUI, it’s like 3 lines to install Plex.
I also have a pi as a portable setup running the docker version, which works pretty well, but I don’t think it will handle hardware encoding very well. I could be wrong.
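For what it’s worth, on an x86 box with Intel Quick Sync the docker version can do hardware transcoding if you pass the GPU device through (Plex Pass required); a sketch using the linuxserver image, with IDs and media paths as placeholders:

```yaml
services:
  plex:
    image: lscr.io/linuxserver/plex
    network_mode: host          # simplest for Plex discovery
    environment:
      PUID: "1000"              # your user/group ids
      PGID: "1000"
      TZ: Etc/UTC
      VERSION: docker
    devices:
      - /dev/dri:/dev/dri       # Intel Quick Sync passthrough
    volumes:
      - ./config:/config
      - /path/to/media:/media   # placeholder media path
```

On a pi it’s murkier, since Plex’s hardware transcode support there is limited, so your caution is probably justified.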
Yeah, Ubuntu came up in a few searches, I’ll read more about that. Desktop was 25GB, which was a bit excessive given the age of the PC. Will look at Server, ty.
Debian is another popular choice for servers (Ubuntu is based on Debian, with a few things bolted on top which are, in my opinion, not worth it). The default Debian installation only consumes 1-2GB of disk space (just deselect any desktop environment during the installation process).