Thanks, that's good to know! I've gotten onto Tailscale and have a test lab set up with a DigitalOcean VPS for the public IP (exit node) and an Ubuntu machine with a tunnel to it. It's working, I just need to translate that to pfSense…
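For anyone following along, the rough shape of that setup in commands looks something like this (a sketch; the VPS's Tailscale IP is a placeholder, the exit node needs IP forwarding enabled, and it also has to be approved in the Tailscale admin console):

```
# On the DigitalOcean VPS (the node with the public IP):
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-exit-node

# On the Ubuntu machine, send traffic out through the VPS:
sudo tailscale up --exit-node=<vps-tailscale-ip> --exit-node-allow-lan-access
```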
no messy system or leftovers. some programs scatter files and directories all over the place, and it gets annoying fast if you host many services. sometimes you'll hit an issue that requires quite a bit of hunting and redoing things.
docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.
much easier to maintain once you get the hang of things.
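To give a concrete (if generic) example of what "painless" means here, assuming a docker compose setup:

```
# update: pull newer images and recreate only what changed
docker compose pull
docker compose up -d

# redeploy from scratch: containers go away, data in named volumes/bind mounts stays
docker compose down
docker compose up -d
```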
Well, Docker tends to be more secure if you configure it right. As far as images go, it really is just a matter of getting your images from official sources. If there isn't an image already available, you can make one.
The big advantage to containers is that they are highly reproducible. You no longer need to worry about issues that arise when running on the host directly.
Also, if you are looking for a container runtime that runs as a local user, you should check out Podman. It works very similarly to Docker and can even run your containers as systemd user services.
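A minimal sketch of what that looks like, using nginx purely as an example image (newer Podman releases point you toward Quadlet files instead, but this still works):

```
# run a rootless container as your own user
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# generate a systemd user unit for it and let systemd manage it
mkdir -p ~/.config/systemd/user
podman generate systemd --new --files --name web
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# keep user services running after you log out
loginctl enable-linger "$USER"
```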
About the root problem: these days new installs try to have you run everything as a limited user. And even if the program runs as root inside the container, to escape from it an attacker would need a double zero-day exploit (one for RCE inside the container, one to escape the container).
The alternative to “you don't really know what's in the image” is usually: “just download this easy, minified, incomprehensible trustmeimtotallynotavirus.sh script and run it as root”. That requires much more trust than a container you can delete, with no traces, in literally seconds.
If the program you want to run requires Python or Node modules, it will make much more of a mess on the system than a container would.
Downgrading to a previous version (or a beta preview) of the app you're running because of bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.
Finally, migrating to a fresh new server is just docker compose down, rsync to the new server, then docker compose up -d. No praying to ten different gods because after three years you've forgotten how you installed the app on bare metal.
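Roughly, assuming the compose file and the bind-mounted data all live under one directory (paths and hostnames here are made up):

```
# old server: stop the stack
cd /srv/myapp && docker compose down

# copy compose file + data across
rsync -avz /srv/myapp/ user@newserver:/srv/myapp/

# new server: bring it back up
cd /srv/myapp && docker compose up -d

# downgrading is the same idea: edit the image tag in docker-compose.yml,
# then run `docker compose up -d` again
```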
Docker is perfect for regular people like us self-hosting at home; the professionals at work use Kubernetes.
I use node exporter for host metrics (Proxmox/VMs/SFFs/RaspPis/Router) and a number of other *exporters:
exportarr
plex-exporter
unifi-exporter
bitcoin node exporter
I use the OpenTelemetry collector to collect some of the above metrics, rather than Prometheus itself, as well as docker logs and other log files before shipping them to Prometheus/Loki.
Oh, and I scrape metrics from my Traefik containers with OTEL as well.
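For anyone curious, a trimmed-down sketch of the metrics side of such a collector config (addresses and endpoints are placeholders, logs/Loki left out; the prometheus receiver and prometheusremotewrite exporter ship with the contrib collector, and Prometheus needs remote write receiving enabled):

```
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ["192.168.1.10:9100"]   # node_exporter on a host (example address)
        - job_name: traefik
          static_configs:
            - targets: ["traefik:8080"]        # Traefik metrics endpoint (example)

exporters:
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```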
What does having OpenTelemetry improve? I have a setup similar to yours but data goes from Prometheus to Grafana and I never thought I would need anything else.
Not a whole lot, to be honest. But I work with OpenTelemetry every day for my day job, so it was a little exercise for me.
Though OTEL does have some advantages, in that it is a vendor-agnostic collection tool, allowing you to use multiple different collection methods and swap out your backend easily if you wish.
Podman pods + systemd units to manage the pods' lifecycle. Ansible to deploy the base OS requirements, the ancillary services (SSH, backups, monitoring…), and the pods/containers/services themselves.
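As a rough illustration of the pod + systemd part (names and images are just examples; recent Podman versions would use Quadlet files instead):

```
# create a pod and run containers inside it
podman pod create --name blog -p 8080:80
podman run -d --pod blog --name blog-db -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
podman run -d --pod blog --name blog-web docker.io/library/nginx:alpine

# generate systemd units for the pod and its containers, then let systemd own the lifecycle
podman generate systemd --new --files --name blog
mkdir -p ~/.config/systemd/user
mv pod-blog.service container-blog-*.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-blog.service
```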