selfhosted

possiblylinux127, in Self-hosted or personal email solutions?

My father still has a gmail account for all of our last names.

oranki, in Why docker

Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

Docker is not the only, or even the best, way IMO to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.

The mess is only a mess if you don’t really understand what you’re doing, same goes for traditional services.

vegetaaaaaaa, (edited ) in How often do you back up?
@vegetaaaaaaa@lemmy.world avatar

7 daily backups, 4 weekly backups, 6 monthly backups (incremental, using rsnapshot). The latest weekly backup is also copied to an offline/offsite drive.
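
That 7/4/6 rotation maps directly onto rsnapshot's retention settings. A minimal config excerpt, assuming a local snapshot root and a /home backup (the paths are placeholders; note that rsnapshot requires literal tab characters between fields):

```
# /etc/rsnapshot.conf (excerpt) -- fields must be separated by real tabs
snapshot_root	/srv/backups/

retain	daily	7
retain	weekly	4
retain	monthly	6

backup	/home/	localhost/
```

Each retain level is then driven from cron (e.g. `rsnapshot daily` once a day, `rsnapshot weekly` once a week). rsnapshot hard-links unchanged files between snapshots, which is what makes the rotation incremental rather than seven full copies.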

forwardvoid, in Hosting websites over 4g

If you’re hosting websites and not applications, perhaps you can use SSGs like Hugo or Gatsby. You could deploy your site to a bucket and put Cloudflare in front. They can also be used on your own server, of course. If you are hosting applications and want to keep them on 4G, you could put a CDN (Cloudflare or …) in front of it. That would cache all static resources and greatly improve response times.
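
For the SSG route, the whole deploy can be two commands. A hedged sketch assuming Hugo and an S3-style bucket sitting behind the CDN (the site directory and bucket name are placeholders):

```shell
SITE_DIR=./my-site                  # placeholder Hugo project directory
BUCKET=s3://example-static-site     # placeholder bucket fronted by the CDN

# Render the site into public/ and sync only changed files to the bucket
hugo --source "$SITE_DIR" --minify
aws s3 sync "$SITE_DIR/public" "$BUCKET" --delete
```

Run from CI or a cron job, this keeps the origin completely static, so the CDN can serve nearly everything from cache.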

lemann, in How do you monitor your servers / VPS:es?

I used to pass all the data through to Home Assistant and show it on some dashboards, but I decided to move over to Zabbix.

Works well but is quite full-featured, maybe more so than necessary for a self-hoster. I made a media type integration for my annunciator system so I hear issues happening with the servers, as well as updates on things, so I don’t really need to check manually. Also a custom SMART template that populates the disk’s physical location/bay (as the built-in one only reports SMART data).

It’s notified me of a few hardware issues that would have gone unnoticed on my previous system, and helped with diagnosing others. A lot of the sensors may seem useless, but trust me, once they flag up you should 100% check on your hardware. Hard drives losing power during high activity because of loose connections, and a CPU fan failure to name two.

It has a really steep learning curve though, so I’m not sure how much I can recommend it over something like Grafana+Prometheus - a combo I haven’t used, but it looks equally comprehensive, as long as you check your dashboard regularly.

Just wish there were more android apps

specseaweed, in Why docker

I know enough to be dangerous. I know enough to follow FAQs, but am dumb enough not to back up like I should.

So I’d be running my server on bare metal and have a couple services going and sooner or later, shit would get borked. Shit that was miles past my competence to fix. Sometimes I’d set up a DB wrong, or break it, or an update would screw it up, and then it would all fall apart and I’m there cursing and wiping and starting all over.

Docker fixes that completely. It’s not perfect, but it has drastically lowered my time working on my server.

My server used to be a hobby that I loved dumping hours into. Now, I just want shit to work.

corroded, in Why docker

My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need to. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I’d have to install my Docker containers inside this host, then forward any ports or resources between the hypervisor, Docker host, and Docker container.

I just don’t get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.
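
The LXC workflow described above comes down to a couple of `pct` commands on the Proxmox host. A hedged sketch — the CT ID, template filename, and storage names are assumptions and will differ per install:

```shell
# Placeholder CT ID and template; list available templates with `pveam list local`
CTID=200
TEMPLATE=local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst

# Create a small Debian container with DHCP networking and an 8 GB root disk
pct create "$CTID" "$TEMPLATE" \
  --hostname myservice \
  --memory 1024 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8

pct start "$CTID"
pct exec "$CTID" -- apt-get update   # then install the service inside the CT
```

Tearing a service down is just `pct stop` and `pct destroy`, which is what makes the one-CT-per-service pattern cheap to experiment with.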

Sethayy,

Though this is more of a Proxmox ease-of-use issue than a Docker one. Personally, I swapped from it to a pure Debian server/host to run a similar manual setup with podman - so everything runs right on the host.

In theory I think you can achieve this with Proxmox by SSH’ing into the host and just treating it like a usual Debian install.

DeltaTangoLima, in Why docker
@DeltaTangoLima@reddrefuge.com avatar

To answer each question:

  • You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
  • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
  • It’s the opposite - you don’t really need to care about docker networks unless you have an explicit need to confine a given container’s traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.
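
As a concrete illustration of that last point, a hedged one-liner (the image and host path are placeholders): mounting a host directory with `:ro` means the container can serve the files but never modify them.

```shell
# Placeholder image and host path; adjust for your own setup
IMAGE=nginx:alpine

# Bind-mount /srv/www read-only into the container and publish port 8080
docker run -d --name webserver \
  -v /srv/www:/usr/share/nginx/html:ro \
  -p 8080:80 \
  "$IMAGE"
```

No user-defined network is declared here at all - the default bridge is fine until two containers actually need to talk to each other by name.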

I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

Why? I like to play.

Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos… hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

lemmyvore, (edited )

Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.

There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it’s a good thing, but the main motivator was not “what if someone breaks out of the container” (which doesn’t necessarily mean they’d get all the privileges of the running user on the host and anyway it would require a kernel exploit, which is a pretty tall order). There are many benefits to making running containers as easy as running any kind of process on a Linux host. And it also enabled some cool new features like the ability to run only partial layers of a container, or nested containers.

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

Yep, all true. I was oversimplifying in my explanation, but you’re right. There’s a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

Hexarei, in Why docker
@Hexarei@programming.dev avatar

Others have addressed the root and trust questions, so I thought I’d mention the “mess” question:

Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.

The mounts/networks/rules and such aren’t “mess”, they are isolation. They’re commoditization. They’re abstraction - ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a “black box” that solves the problem you want solved.

Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it’s handling isn’t.

paws,
@paws@cyberpaws.lol avatar

I ended up using Docker to set up pict-rs and y’all are making me happy I did

b1g_bake, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)
@b1g_bake@sh.itjust.works avatar

I personally graduated from an RPi 3B to an Intel NUC years ago and never looked back. Real RAM slots, internal storage options, and as nice a processor as your budget allows. So my vote is to move to the SFF PC and let your Pi stick around for other projects.

sylverstream,

Thanks for your insights. I thought about a NUC as well, but AFAIK it doesn’t have PCIe slots? So I won’t be able to install e.g. a graphics card or a PCIe Coral?

b1g_bake,
@b1g_bake@sh.itjust.works avatar

I wouldn’t go NUC if you need a PCIe slot. The HP you were talking about would fit the bill though.

I believe they make a Coral that fits where the Wi-Fi chip goes too, as long as you are OK ditching the Wi-Fi/BT functionality for a TPU. For a server doing image processing that’s almost a no-brainer to me.

sylverstream,

Interesting re the wifi chip, as all posts I’ve found said it only works for a wifi card. Do you have a source for that?

Yeah, no wifi is no problem. It’ll be connected via cable

b1g_bake,
@b1g_bake@sh.itjust.works avatar

Oh I’m not sure if it actually works. I thought they just made one to fit that slot

three,

Three versions that I know of have it but you’re right it’s not common. www.intel.com/content/www/us/en/…/intel-nuc.html

avidamoeba, in Why docker
@avidamoeba@lemmy.ca avatar

In short, yes, yes it’s worth it.

Vendetta9076, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)
@Vendetta9076@sh.itjust.works avatar

Migrating is your best move. Hardware go brr

sylverstream,

Okay thanks!

vegetaaaaaaa, in Kubernetes? docker-compose? How should I organize my container services in 2024?
@vegetaaaaaaa@lemmy.world avatar

Podman pods + systemd units to manage pods lifecycle. Ansible to deploy the base OS requirements, the ancillary services (SSH, backups, monitoring…), and the pods/containers/services themselves.
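
A minimal sketch of that pods-plus-systemd pattern (the pod, port, and image names are assumptions; on current Podman releases, Quadlet files are the recommended successor to `podman generate systemd`, which still works but is deprecated):

```shell
POD=mypod   # placeholder pod name

# Create a pod and run a container inside it (rootless, as a regular user)
podman pod create --name "$POD" -p 8080:80
podman run -d --pod "$POD" --name web nginx:alpine

# Generate user-level systemd units that manage the pod's lifecycle
podman generate systemd --new --files --name "$POD"
systemctl --user enable --now "pod-$POD.service"
```

With `--new`, the units recreate the containers from scratch on every start, so the pod survives reboots and image updates without any manual intervention.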

scrchngwsl, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)

Not sure exactly what you’re asking but I have a Coral mini pcie with frigate and it works great. Hardly any cpu and tiny power consumption.

sylverstream,

Okay thanks!
