Also, the ability to snapshot an image, goof around with changes, and restore the snapshot if you don’t like them makes it much easier to experiment than trying to unwind all the changes by hand.
Docker is messy and not ideal, but it was born out of necessity. Getting multiple services to coexist outside of containers can be a nightmare: updating and moving configuration is painful, and removing things can leave stuff behind, which gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.
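For example, most of what a service needs can be pinned down in a single docker-compose.yml, so adding or removing it stays tidy. A minimal sketch (the image and paths are just for illustration):

```
# Everything the service needs, declared in one folder
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine                # stand-in image for illustration
    ports:
      - "8080:80"
    volumes:
      - ./config:/etc/nginx/conf.d     # config lives here, not scattered around the host
      - ./data:/usr/share/nginx/html
EOF
docker compose up -d   # start it
docker compose down    # remove it; nothing is left behind except this folder (and the image)
```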
It depends what I’m backing up and where it’s backing up to.
I do local/LAN backups at a much higher rate because there’s more bandwidth to spare and effectively free storage. Those run as often as every 10 minutes if there are changes to back up.
For less critical things and/or cloud backups I keep a less frequent schedule, since losing a bit more history there matters less and cloud storage costs more.
I use Kopia for backups on all my servers and desktop/laptop.
I’ve been very happy with it; it’s FOSS, and it saved my ass when Windows Update corrupted my BitLocker disk and I lost everything. That was also the last straw that put me on Linux full-time.
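For anyone curious, getting started with Kopia is only a couple of commands. The repository path and source folder below are placeholders, and a cron entry covers the kind of every-10-minutes schedule mentioned upthread (Kopia deduplicates, so frequent runs over mostly-unchanged data stay cheap):

```
kopia repository create filesystem --path /mnt/backup   # one-time repository setup
kopia snapshot create /srv/data                         # take a snapshot now

# crontab -e: snapshot every 10 minutes (assumes the repo password is already configured)
# */10 * * * * kopia snapshot create /srv/data
```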
Because if you use relative bind mounts, you can move a whole docker compose set of containers to a new host: docker compose stop, then rsync it over, then docker compose up -d.
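Concretely, the move looks something like this, assuming the stack lives in ~/stacks/myapp (a made-up path) and the compose file uses relative bind mounts like ./data:

```
docker compose stop                                       # quiesce the containers
rsync -a ~/stacks/myapp/ newhost:~/stacks/myapp/          # compose file + bind-mounted data
ssh newhost 'cd ~/stacks/myapp && docker compose up -d'   # relative paths resolve the same way
```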
DHCP is a really stupid* service for the most part. Unless you are working with multiple subnets or have some very specific settings you need to pass to your clients, it’s probably not worth it to manage it yourself. I don’t want to discourage you though! Assigning static IP addresses by MAC can be extremely useful and is not always an option on routers. If you want static names and dynamic addresses, that is really where you need to manage both DNS and DHCP. It really depends on how and where you want names to be resolved and what you are trying to accomplish. (*stupid as in, it’s a really simple service. You want it simple because when DHCP breaks, you have other serious issues going on.)
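For what it’s worth, that MAC-based assignment is about one line per host in something like dnsmasq (the MACs and addresses below are made up):

```
# Dynamic pool plus a couple of pinned hosts
cat > /etc/dnsmasq.d/dhcp.conf <<'EOF'
dhcp-range=192.168.1.100,192.168.1.200,12h       # pool for everything else
dhcp-host=aa:bb:cc:dd:ee:01,nas,192.168.1.10     # the NAS always gets this address
dhcp-host=aa:bb:cc:dd:ee:02,printer,192.168.1.11
EOF
systemctl restart dnsmasq
```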
Setting up your own DNS is worth its weight in gold. You can put it just about anywhere on your network (before your gateway, after, in China, whatever) and your network won’t even know the difference if set up correctly. You can point BIND at the root servers and bypass your ISP completely if you want. ISP DNS services suck ass, so whether you resolve everything yourself or forward all name queries to your anon DNS server of choice, you get a really decent level of control over your network. It is the service to learn if you want to keep an eye on where your network wants to talk.
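A sketch of what pointing BIND at the roots amounts to; it’s mostly about not configuring a forwarder (the path and subnet are assumptions for a Debian-style install):

```
# Recursive resolver, no ISP forwarders
cat > /etc/bind/named.conf.options <<'EOF'
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { localhost; 192.168.1.0/24; };  // only answer the LAN
    // no "forwarders" block: BIND walks down from the root hints itself
};
EOF
rndc reload
```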
Your Unifi USG must play nice with your own server, by the laws of DNS. There may be some nuances when it comes to internal protocols like WINS, but other than that, it should be just fine.
To answer your actual question: I would set up a simple VM somewhere first. It’s good practice to keep core services isolated on their own dedicated instances. This speeds up recovery and minimizes downtime. Even on your home network, DNS and DHCP are services you do not want going down. It’s always a pain when they do.
If only everyone was on IPv6, then everything could use SLAAC and worrying about IP assignment for client systems would be a thing of the past. IPv6 on a home LAN generally only uses DHCPv6 for configuring the DNS servers - client systems get IPs using SLAAC and learn their gateway using RAs (router advertisements).
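A sketch of that with radvd; the prefix and DNS address are documentation-range placeholders:

```
# Advertise a prefix for SLAAC and hand out DNS in the RA itself
cat > /etc/radvd.conf <<'EOF'
interface eth0 {
    AdvSendAdvert on;             # send router advertisements
    prefix 2001:db8:1::/64 {
        AdvAutonomous on;         # clients build their own addresses (SLAAC)
    };
    RDNSS 2001:db8:1::53 {};      # DNS via RA option, so even DHCPv6 becomes optional
};
EOF
systemctl restart radvd
```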
Damn, I didn’t realize the amount of hate for DHCP. I’ve used an already configured system with a DHCP/DNS server set up and it was super easy to manage. Want to change or add a static IP? Edit the text file, add the MAC, reload.
I didn’t know this wasn’t reflective of the overall experience.
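For reference, that loop really is this short with dnsmasq (MAC and IP made up):

```
# Add a reservation, then restart (a plain SIGHUP won't pick up new dhcp-host lines)
echo 'dhcp-host=aa:bb:cc:dd:ee:03,newbox,192.168.1.12' | sudo tee -a /etc/dnsmasq.d/dhcp.conf
sudo systemctl restart dnsmasq
```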
Meh, I didn’t mean to hate on DHCP. It’s just a service I have learned to keep running all by itself somewhere in a dark corner of my network. DNS and DHCP are just services that I don’t like going down. Ever.
Unifi is specific about expecting the controller address not to change. You have several options. There’s the “override controller address” setting, which you can use to point the devices at a DNS name instead of an IP address. The DNS can then track your controller. It doesn’t exactly solve your issue, though, as the USG doesn’t assign DNS names to dynamic allocations.
Another option is to give the controller a static IP allocation. This way, in case you reboot everything, the USG will come up with the latest good config, then will (eventually) allocate the IP for the controller, and adopt itself.
Finally, the most bulletproof option is to just have a static IP address on the controller. It’s a special case, so it’s reasonable to do so. Just like you can only send NetFlow to a specific address and have to keep your collector in one place, basically.
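For example, a literal static address on an Ubuntu-ish controller host could look like this netplan sketch (interface name and addresses are placeholders):

```
# Address set on the host itself, no DHCP involved
cat > /etc/netplan/01-controller-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.1.5/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
netplan apply
```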
I’d advise against moving dhcp and dns off unifi unless you have a better reason to do so, because then you lose a good chunk of what unifi provides in terms of the network management. USG is surprisingly robust in that regard (unlike UDMs), and can even run a nextdns forwarding resolver locally.
So this is where I’m a little confused. The USG had the option to assign a static IP (which I’ve done), but if you ever need to CHANGE that IP… Chaos. From what I understand the USG needs to propagate that IP to all your devices, but it uses the controller to do that. Then you also run into issues with IP leases having to time out. Same problem occurs if I ever upgrade my server and change out the MAC address. Because now the IP is assigned to the old MAC.
I’m not sure if there’s any way around this. But it basically locks me in to keeping the controller (and thus my server) at a single, fixed IP, without any chance of changing it.
Here’s how it works: Unifi devices need to communicate with the controller over tcp/8080 to maintain their provisioned state. By default, the controller adopts a device with http://controller-ip:8080/inform, which means that if you ever change the controller IP, you’ll have to adopt your devices again.
There are several other ways to adopt the device, most notably using the DHCP option 43 and using DNS. Of those, setting up DNS is generally easier. You’d provision the DNS to point at your controller and then update the inform address on all your devices (including the USG).
Now, there’s still the problem of keeping your controller IP and DNS record in sync. Unifi generally doesn’t create DNS names for its DHCP leases, and the devices can’t use mDNS, so you’ll have to figure out a solution for that. Or you can just cut it short and make sure the controller has a static IP: not a static DHCP lease, but literally a static address. That lets your controller function autonomously from the USG, as long as your devices don’t reach it across VLANs.
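A sketch of the DNS route, assuming something like dnsmasq serves your LAN names (unifi.lan and all addresses here are made up):

```
# Publish a stable name for the controller
echo 'host-record=unifi.lan,192.168.1.5' >> /etc/dnsmasq.d/dns.conf
systemctl restart dnsmasq

# Then, on each Unifi device, repoint the inform URL at the name (device credentials vary)
ssh admin@192.168.1.20 'set-inform http://unifi.lan:8080/inform'
```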
Generally speaking, any device (“server”) hosting a “service” NEEDS to be assigned a static IP. It simplifies routing significantly and avoids random breakage, because DHCP is incredibly stupid at times…
Is there any specific reason you need DHCP to assign an IP to your main hosting server vs setting it all statically?
Moving it to its own system will not fix the routing problem. You can probably still leave it on the USG.
You should be able to set a fixed static IP on your server, and then also statically assign that same IP to your server in your USG DHCP config. As long as they both are “thinking about” the same IP, I think routing should work correctly.
If that breaks, try assigning the static IP only from the USG side or only from the server’s side. I’m 90% sure that even if the USG does not have your server machine in its client list, if it sends broadcast packets to an entered IP looking for the unifi server, and the unifi server is listening on that manually set IP, they should be able to talk.
disclaimer: i am high as shit right now and this may be bullshit
> Is there any specific reason you need DHCP to assign an IP to your main hosting server vs setting it all statically?
I’ve done this. I think the real problem is if I ever change the server MAC or IP, as now the unifi server isn’t picked up by the USG, which means I can’t change the static address.
I would start at the GitHub repo and check whether that issue has been documented.
If yes, follow the instructions. If not, check your DNS, since 502s often came from DNS in my case.
If neither reveals anything, you could open an issue in the repo, post your (sanitized) logs, and wait for answers.
The error suggests a problem with the lemmy-lemmy-ui-1 container. Maybe it needs an update, or it pulled a broken one. When did you last update? Did you try restarting the stack?
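If it helps, a few first-pass checks for a 502 like that, assuming a standard compose deployment where the service is named lemmy-ui:

```
docker compose ps                             # is lemmy-ui actually running?
docker compose logs --tail=100 lemmy-ui       # look for crash loops or version mismatches
docker compose pull && docker compose up -d   # re-pull images and recreate the stack
```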
If you’re only using it for Plex and nothing else, it probably won’t make a lot of difference which you use.
My old setup was Ubuntu running Plex as a native install… if you just run a server without a GUI, it’s like 3 lines to install Plex.
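From memory, the three lines are roughly these (Plex’s official deb repo; note apt-key is deprecated on newer releases, so adjust accordingly):

```
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update && sudo apt install plexmediaserver
```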
I also have a Pi as a portable setup running the Docker version, which works pretty well, though I don’t think it will handle hardware encoding very well. I could be wrong.
Yeah, Ubuntu came up in a few searches; I’ll read more about that. Desktop was 25GB, which was a bit excessive given the age of the PC. Will look at Server, ty.
Debian is another popular choice for servers (Ubuntu is based on Debian, with a few things bolted on top that are, in my opinion, not worth it). The default Debian installation only consumes 1-2GB of disk space; just deselect any desktop environment during the installation process.
I’ve got HA with Frigate + USB Coral w/4 cams, FlightRadar24 receiver/feeder, ESPHome, NodeRed, InfluxDB, Mosquitto, and Zwave-JS on a refurbished Lenovo ThinkCenter M92p Tiny, rigged with an i5 3.6GHz, 8GB RAM and 500GB spindle drive. It’s almost overkill.
Frigate monitors 2 RTSP and 2 MJPEG cams (sometimes up to 3 RTSP and 5 MJPEG, depending on whether I’m away for the weekend) with hardware video conversion. FR24 monitors a USB SDR dongle tracking several hundred aircraft per hour; I live under one of the main approaches to a major US hub.
Processor sits at 10% or less most of the time, and really only spikes when I compile new binaries for the ESP32 widgets I have around the house. It uses virtually none of the available disk. It’s an awesome platform for HA for the price.
Thanks for your reply! So that is a 3rd gen Intel chip if I kagi’d correctly? I was planning to get a 8th gen or later. Not sure though if it’s worth it, I’m not too familiar with the differences between all generations.
I think the i5 is Ivy Bridge, which I believe is 3rd gen. My main use of HA aside from the automation is Frigate, which apparently needs the hardware AVX flags. This chip supports AVX, where my older AMD did not, so that’s why I went with it. It’s an i5-3470T, if that helps.
I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host my own custom new-tab page that I use across all my devices; between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.
Other than that I run fail2ban, and have my VPS configured to send me a text message/notification whenever someone successfully logs in to a shell via SSH, just in case.
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for SSH, and the one account that can be used over SSH has a non-obvious username that would also have to be guessed before an attacker could even try passwords, and fail2ban does a good job of blocking IPs that fail after a few tries.
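One way to wire up that kind of login notification is a pam_exec hook on sshd; the script path and the ntfy.sh topic below are placeholders, not what I actually run:

```
# /etc/pam.d/sshd -- add this line alongside the other "session" entries:
#   session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh

cat > /usr/local/bin/ssh-login-notify.sh <<'EOF'
#!/bin/sh
# pam_exec exports PAM_TYPE/PAM_USER/PAM_RHOST; only fire when a session opens
[ "$PAM_TYPE" = "open_session" ] || exit 0
curl -s -d "SSH login: $PAM_USER from $PAM_RHOST on $(hostname)" \
    https://ntfy.sh/my-alerts-topic > /dev/null
EOF
chmod +x /usr/local/bin/ssh-login-notify.sh
```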
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.