I would start at the GitHub repo and check whether that issue has already been documented.
If so, follow the instructions there. If not, check your DNS, since in my case 502s often came down to DNS.
If neither reveals anything, you could open an issue in the repo, post your (sanitized) logs, and wait for answers.
The error suggests a problem with the lemmy-lemmy-ui-1 container. Maybe it needs an update, or it pulled a broken one. When did you last update? Did you try restarting the stack?
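For the concrete steps, something along these lines covers both suggestions, assuming the usual docker compose deployment (the container name lemmy-lemmy-ui-1 points at a compose project called lemmy with a lemmy-ui service). The directory, domain and the proxy service name are guesses, so adjust them to your setup:

```sh
cd /path/to/lemmy                            # wherever your docker-compose.yml lives
docker compose ps                            # is lemmy-ui actually up, or stuck restarting?
docker compose logs --tail=100 lemmy-ui      # look for the error behind the 502
docker compose logs --tail=100 proxy         # what the reverse proxy sees (service name is a guess)
dig yourinstance.example.com                 # sanity-check DNS resolution from the host
docker compose pull && docker compose up -d  # pull current images and recreate the stack
```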
I used to pass all the data through to Home Assistant and show it on some dashboards, but I decided to move over to Zabbix.
It works well but is quite full-featured, maybe more so than necessary for a self-hoster. I made a media type integration for my annunciator system (rough sketch below) so I hear about issues happening with the servers, as well as updates on things, so I don't really need to check manually. I also built a custom SMART template that reports each disk's physical location/bay (the built-in one only reports SMART data).
It has notified me of a few hardware issues that would have gone unnoticed on my previous system, and it has helped with diagnosing others. A lot of the sensors may seem useless, but trust me, once one flags up you should 100% check on your hardware. Hard drives losing power during high activity because of loose connections, and a CPU fan failure, to name two.
It has a really steep learning curve though, so I'm not sure how much I can recommend it over something like Grafana + Prometheus. I haven't used that combo myself, but it looks just as comprehensive, as long as you check your dashboards regularly.
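If anyone wants to do something similar with the announcements, a custom "Script" media type is probably the simplest route: Zabbix runs scripts from the directory set by AlertScriptsPath in zabbix_server.conf, passing whatever parameters you configure on the media type (typically {ALERT.SENDTO}, {ALERT.SUBJECT} and {ALERT.MESSAGE}). The announce command below is a made-up placeholder for whatever your annunciator actually accepts, and the integration described above may well be a webhook media type instead, so treat this purely as a sketch:

```sh
#!/bin/sh
# Saved under AlertScriptsPath (e.g. /usr/lib/zabbix/alertscripts/announce.sh)
# and selected as a "Script" media type in the Zabbix frontend.
# Parameters as configured on the media type:
#   $1 = {ALERT.SENDTO}, $2 = {ALERT.SUBJECT}, $3 = {ALERT.MESSAGE}
SUBJECT="$2"
MESSAGE="$3"
# placeholder: pipe the alert into whatever TTS/announcer command you run
/usr/local/bin/announce "${SUBJECT}: ${MESSAGE}"
```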
If you're hosting websites rather than applications, perhaps you can use SSGs like Hugo or Gatsby. You could deploy your site to a bucket and put Cloudflare in front; they can also be served from your own server, of course. If you are hosting applications and want to keep them on 4G, you could put a CDN (Cloudflare or …) in front of them. That would cache all static resources and greatly improve response times.
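A rough sketch of the bucket approach, where the bucket name and rclone remote are placeholders rather than anything specific:

```sh
hugo --minify                               # build the static site into ./public
rclone sync ./public remote:my-site-bucket  # push it to S3/R2/whatever object storage
# then point Cloudflare (or another CDN) at the bucket or at your origin so the
# static assets are cached at the edge instead of coming over 4G every time
```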
7 daily backups, 4 weekly backups, 6 monthly backups (incremental, using rsnapshot). The latest weekly backup is also copied to an offline/offsite drive.
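In rsnapshot terms, that rotation looks roughly like the following; the paths and times are examples rather than my actual config, and note that rsnapshot.conf fields must be separated by tabs:

```sh
# /etc/rsnapshot.conf (excerpt) -- keep 7 daily, 4 weekly, 6 monthly snapshots
#   retain  daily   7
#   retain  weekly  4
#   retain  monthly 6

# cron entries driving it (higher intervals only rotate, so run them first):
#   30 2  1 * *   root  /usr/bin/rsnapshot monthly
#   45 2  * * 1   root  /usr/bin/rsnapshot weekly
#   0  3  * * *   root  /usr/bin/rsnapshot daily

# copy the newest weekly snapshot to the offline/offsite drive once it's mounted
rsync -a --delete /srv/snapshots/weekly.0/ /mnt/offsite/weekly/
```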
I'll say this as someone who stopped using docker and went back to deploying from source in LXC containers: docker is a great tool for the majority of people, and that is exactly what it aims to be, easily reusable in as many different setups as possible.
On the flip side, yes, it may happen that you don't benefit from docker for one reason or another. I don't: in my case docker only adds another layer on top of an already containerized setup, and many of the services I deploy are already built from source in a CI/CD workflow and deployed through ansible.
I do have other issues with docker, but those are usually less about the tool and more about how some projects use docker as a means to replace proper deployment documentation.
I'll answer your question of why with your own frustration: bare metal is difficult. Every engineer uses a different language/framework/dependencies/what have you, and they will usually conflict with each other. Docker solves this by containing those apps in their own space. Their code, projects, and dependencies are already installed and taken care of; you don't need to worry about them.
Take yourself out of the homelab and put yourself in a sysadmin's shoes. Now, instead of knowing how packages may conflict with each other, or whether updating the OS will break applications, you just need to know docker. If you know docker, you can run any docker app.
So, yes, volumes and environment variables are a bit difficult at first. But they are difficult because they are a standard. Every docker container is going to need a couple of mounts, a couple of variables, a port or two open, and if you're going crazy, maybe a GPU. It doesn't matter whether you're running 1 or 50 containers on a system, you aren't going to get conflicts.
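As a generic example of that standard shape (the image, paths, variables and port are placeholders, not a specific app from this thread):

```sh
docker run -d --name someapp \
  -v /srv/someapp/config:/config \
  -v /srv/someapp/data:/data \
  -e TZ=Europe/Berlin \
  -e PUID=1000 \
  -p 8080:80 \
  someimage:latest
# a couple of mounts (-v), a couple of variables (-e), a port or two (-p);
# add --gpus all only if you really are going crazy and have the GPU runtime set up
```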
As for the security concerns, they are indeed legitimate security concerns. Again, imagine you're a sysadmin: you could direct developers that they can't run as root and that images need to be built on OSes with the latest patches. But you're at home, so you're at the mercy of whoever built the image.
Now that being said, since you're at their mercy, their code isn't going to get much safer whether you run it on bare iron or containerized. So, do you want to spend hours per app figuring out how to run it, or spend a few hours now learning docker and then have it all standardized?
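If you do want to claw back some of that trust, docker at least gives you a few standard knobs to constrain an image you didn't build. Not every app tolerates them (some genuinely need root or extra capabilities), so treat this as a starting point rather than a recipe:

```sh
docker run -d --name someapp \
  --user 1000:1000 \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -v /srv/someapp/data:/data \
  someimage:latest
```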
Icinga2 works reasonably well for us. It is easy to write new checks as small shell scripts (or any other binary that can print a message and set an exit status code).
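For anyone unfamiliar, those checks follow the Nagios/Icinga plugin convention: print one line to stdout and exit with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A toy example, not one of our actual checks, with an arbitrary path and thresholds:

```sh
#!/bin/sh
# check_srv_space: warn/crit when /srv fills up
USED=$(df --output=pcent /srv | tail -1 | tr -dc '0-9')
if [ "$USED" -ge 95 ]; then
    echo "CRITICAL - /srv is ${USED}% full"
    exit 2
elif [ "$USED" -ge 85 ]; then
    echo "WARNING - /srv is ${USED}% full"
    exit 1
fi
echo "OK - /srv is ${USED}% full"
exit 0
```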