I used zabbix at some point, but I never looked at the data so I stopped. Zabbix shows all kinds of stuff.
I have cockpit on my bare-metal that has some stats, and netdata on my firewall. I do not track any of my VMs (except vnstat, which runs on every device).
not only hosting lots of sleazebags, but also having tons of compromised mail machines, so their machines were, according to what I’d read there, the source of much of the world’s spam, and they wouldn’t fix things.
EasyDNS was recommended by one of the SysAdmin reporters on The Register, a few years ago.
He also recommended Linode & Vultr, back then, too.
This stuff in this comment is just my opinion, and my memory of what trustworthy people were reporting a few years ago.
I’m using Headscale for something similar. I have a VPS and a server at home. Both are on the same Headscale network. On the home server I set up a Matrix server. On the VPS I set up Caddy as a reverse proxy for the home server with its Headscale IP. It works nicely.
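As a rough sketch of that VPS side, here's what the Caddy config might look like (the hostname, tailnet IP, and port are made-up examples; Matrix homeservers often listen on 8008, but check your own setup):

```caddyfile
# On the VPS: Caddy terminates TLS publicly and proxies
# traffic over the Headscale tunnel to the home server.
matrix.example.com {
    reverse_proxy 100.64.0.2:8008   # home server's Headscale IP (example)
}
```

Caddy handles the Let's Encrypt certificate for the public hostname automatically, so the home server never needs to be directly exposed.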
I mean I think it really depends on the type of website you’re trying to host. A static blog would use way less bandwidth than a media server for example. Traffic would have the same effect too, where 1 concurrent visitor to a blog would probably be fine but 10,000 would be a problem.
You should try out all the options you listed and the other recommendations and find what works best for you.
I personally use Kubernetes. It can be overwhelming, but if you’re willing to learn some new jargon then try a managed kubernetes cluster, like AKS or DigitalOcean Kubernetes. I would avoid managing a kubernetes cluster yourself.
Kubernetes gets a lot of flak for being overly complicated, but what that overlooks is everything kubernetes does for you.
If you can spin up kubernetes with cert-manager, external-dns, and an ingress controller like istio, then you’ve got a whole automated data center for your docker containers.
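To make that concrete, here's a hedged sketch of a single Ingress that all three of those pieces act on (the hostname, issuer name, and service are hypothetical examples):

```yaml
# One Ingress resource: the ingress controller routes it, cert-manager
# issues the TLS cert, and external-dns creates the DNS record.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # issuer name is an example
spec:
  ingressClassName: istio
  rules:
    - host: blog.example.com   # external-dns publishes this hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80
  tls:
    - hosts: [blog.example.com]
      secretName: blog-tls     # cert-manager stores the cert here
```

The point being: you declare one object, and DNS, TLS, and routing all happen without manual wiring.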
Thanks. Yeah, I’m tempted to try kubernetes because of what you mentioned. I really like that every part that I need (ingress controller, certs, etc.) is considered part of the core service and built in. Right now I just have to run that stuff like its own service and wire everything up by hand. I don’t think I mind the extra overhead of kubernetes either; I love to tinker with that sort of thing anyway!
I think I will try a couple of things though. Maybe find a set of services to deploy with each and compare the experiences.
Well, the kubernetes API mostly has the necessary parts built in, although sometimes you may want to install a custom resource, which complex service installs often come with.
But I think the biggest strength of kubernetes is all the foss projects that are available for it. Specifically external-dns, cert-manager, and istio. These are separate projects and will have to be installed after the cluster is up.
Caution: not all cloud providers support istio. I know that Google’s GKE doesn’t; they make you use their own fork of it.
I would also recommend you avoid helm if possible as it obfuscates what the cluster is doing and might make learning harder. Try to just stick to using kubectl if possible.
I have heard good things about nomad too but I have yet to try it.
My NAS is a low-powered Atom board that runs unraid.
My docker containers run on a Ryzen CPU with proxmox. I don’t have a cluster, just one node.
In proxmox I run a VM that runs all my docker containers.
I use portainer to run all my services as stacks. So the arr stack has all the arrs together in one docker compose file. The docker compose files are stored in gitea (one of the few things I still run on unraid), and every time I make a change to the repo, I press one button in portainer and it pulls down the latest docker compose.
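For anyone unfamiliar with the pattern, a minimal sketch of such an "arr stack" compose file might look like this (image tags, ports, and host paths are examples, not the commenter's actual config):

```yaml
# Example arr stack: each service gets its own config dir,
# and all of them share the same media mount.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - ./sonarr-config:/config
      - /mnt/media:/media      # e.g. an NFS mount to the NAS
    ports:
      - "8989:8989"
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    volumes:
      - ./radarr-config:/config
      - /mnt/media:/media
    ports:
      - "7878:7878"
```

Keeping the whole stack in one file is what makes the "push to git, press one button" workflow work.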
For storage, on proxmox I use zfs with ssds only. The only thing that needs HDDs is the media on my unraid.
When a docker needs to access the media it uses an NFS mount to the unraid server.
Everything else is on my zfs array on proxmox. I have auto zfs snapshots every hour. Borg backup also takes hourly incremental backups of the zfs array and sends it to the unraid server locally and borg base for off-site backup.
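A hedged sketch of what that hourly borg job could look like as a cron entry (repository paths, hostnames, and the source directory are all placeholders):

```shell
# crontab fragment: hourly incremental borg backup to the local NAS repo.
# BorgBase would be a second 'borg create' against a remote ssh:// repo.
0 * * * * borg create --compression zstd \
    ssh://unraid/mnt/backups/borg::'{hostname}-{now}' /tank/appdata
```

Borg archives are deduplicated, so hourly runs only store what changed since the last one.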
The whole setup works very well and is very stable.
The flexibility of using proxmox means that things that work better in a VM (like HAOS) I can install as a VM. Everything else is docker.
Everyone’s saying fstab but if Navidrome is in a docker container, just mount it as a volume on your container. I found this guide that seems to document it fairly well.
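For reference, the volume approach is just a bind mount in the compose file. A minimal sketch (the host music path is an example; Navidrome's official image expects `/data` and `/music` inside the container):

```yaml
services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"
    volumes:
      - ./data:/data             # Navidrome's own database/state
      - /mnt/music:/music:ro     # music library, mounted read-only
```

The `:ro` flag keeps the container from ever modifying the library, which is a nice safety net.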
I do this. Personally I use Cloudflare for my domain and DNS; not that I’m committed to them, it’s just what I use. I then use Protonmail for my email and point the relevant records to them.
Self-hosting email is not at all easy, and I’d recommend paying for hosted email from a service that lets you use a custom domain. Most will let you have multiple inboxes, although this may cost extra.
Then, just buy a domain (NameCheap is fine) and point your MX records at the email provider.
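For illustration, the records involved usually look something like this (the provider hostnames and SPF include are placeholders; your email provider will tell you the exact values to use):

```
; Example DNS records pointing a custom domain at a hosted email provider
example.com.   MX   10 mx1.mailprovider.example.
example.com.   MX   20 mx2.mailprovider.example.
example.com.   TXT  "v=spf1 include:mailprovider.example ~all"
```

Most providers also ask for DKIM and DMARC records, but those values are provider-specific.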
fstab will do it, but the more important question is, what do you want to happen when it doesn’t mount properly? Do you want the system to fail to boot? Do you want navidrome to not run?
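Those two outcomes map onto real fstab options. A hedged sketch, assuming an NFS share from a NAS (hostname and paths are examples):

```
# /etc/fstab sketch: 'nofail' lets the system boot even if the share
# is unavailable; 'x-systemd.automount' defers mounting until first
# access; '_netdev' waits for the network before trying.
nas:/music  /mnt/music  nfs  nofail,x-systemd.automount,_netdev  0  0
```

Without `nofail`, a missing mount can drop the system into emergency mode at boot, so it's worth deciding this deliberately.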