Cloudflare has been controversial for dragging its feet when it came time to stop providing protection to Nazi websites like The Daily Stormer, 8chan, and Kiwi Farms, as well as the Taliban, ISIS, and so on.
For this reason, a lot of fediverse servers do not use Cloudflare.
Restic is another option, but it's a little less user-friendly and is all CLI, if I recall correctly. However, I'm pretty sure you can send backups straight to a server via Restic.
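For what it's worth, any machine you can SSH into works as a target via Restic's sftp backend; a minimal sketch, where the hostname and paths are placeholders:

```sh
# One-time: create an encrypted repository on the remote machine over SFTP
# (no server-side software needed, just SSH access).
restic -r sftp:user@backup.example.com:/srv/restic-repo init

# Back up a directory; repeat runs are incremental thanks to deduplication.
restic -r sftp:user@backup.example.com:/srv/restic-repo backup /home/me/documents

# Verify the snapshot landed.
restic -r sftp:user@backup.example.com:/srv/restic-repo snapshots
```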
I've checked out Timeshift a couple of times, but it's a shame that not even FTP is allowed as a backup destination.
As for Restic, I'll give it a look later.
EDIT: I just read up on Restic, and I think it could be the solution I was looking for. A Docker image is available and everything, which is a big plus for me. Once I have the chance I'll test-drive it and see where it goes. Thanks!
I am very happy with my Omada setup. It's an ecosystem, not a single device. I use an ER605 as the router and an EAP610 as the AP. I also have a switch, though you probably don't need that, and I now have an Omada controller (you can also host that as a Docker container, so a hardware one isn't strictly needed). For Wi-Fi coverage you can simply throw in another AP somewhere and get excellent mesh Wi-Fi. It's more complex than a simple consumer router, but it also has a lot more functionality.
The controller does not need to run 24/7. The controller configures the devices, and the config remains on the devices. Though once your devices are adopted by a controller, you cannot access any settings on the devices themselves, only via the controller.
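If you do host the controller in Docker, it can look roughly like this. A sketch using the popular community image mbentley/omada-controller; the image name, volume paths, and host networking are assumptions to check against the image's docs:

```sh
# Host networking keeps device discovery/adoption simple, since the
# controller and the Omada devices find each other via broadcast.
docker run -d --name omada-controller \
  --network host \
  -v omada-data:/opt/tplink/EAPController/data \
  -v omada-logs:/opt/tplink/EAPController/logs \
  mbentley/omada-controller:latest
```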
Maybe I should add: depending on the network setup, I'd strongly recommend getting a hardware controller. In my case, I have one server hosting all my stuff, and I hosted the controller with Docker on that same server. That ends up being a single point of failure, with no way to look into your routing if your server is down/unreachable. I eventually got a hardware controller (OC200) just to separate my internet and network infrastructure from my hosting and service infrastructure.
The controller also handles roaming, as I understand it. I have a software controller on a VM; they provide a .deb! I have three EAP670s and an EAP655-Wall. Roaming works perfectly on phones and laptops. I have a hidden SSID on each individual AP that I use to pin dumber devices in place, since some devices fight Omada's AP Lock.
I see the value in going 100% Omada, but I couldn't justify the cost of the switches I'd need. Their routers look good for the price too, but my use case is a notch or two above their target market.
About the trust issue: there's no more or less trust involved than running on bare metal. Sure, you could compile everything from source, but you probably won't; and you might trust your distro's package manager, but that has a similar problem, since you're still running binaries someone else built.
I'm in a very similar situation: too many Windows services running in the background. CasaOS is supposed to be a user-friendly way to set up a bunch of Docker containers on Linux for Plex and the *arrs, amongst other things. I can't speak to how easy it is, as it's something I'm going to be exploring in the coming weeks.
I've dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host my own custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.
Other than that, I run fail2ban and have my VPS configured to send me a text message/notification whenever someone successfully logs in to a shell via SSH, just in case.
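One way to wire up that kind of login alert is a pam_exec hook on sshd. A sketch, not necessarily how this commenter does it; the ntfy topic URL is a placeholder:

```sh
#!/bin/sh
# /usr/local/bin/ssh-login-notify.sh -- run by pam_exec on every sshd
# session event; only fire when a session opens, and never block login.
[ "$PAM_TYPE" = "open_session" ] || exit 0
curl -fsS -d "SSH login: $PAM_USER from $PAM_RHOST on $(hostname)" \
  "https://ntfy.sh/my-private-topic" >/dev/null 2>&1 || true
exit 0
```

Hook it in by adding `session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh` to /etc/pam.d/sshd; `optional` means a failed notification can't lock you out.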
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for SSH, and the one account that can be used over SSH has a non-obvious username that would also have to be guessed before an attacker could even try passwords; fail2ban does a good job of blocking IPs that fail after a few tries.
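The relevant configuration for that looks roughly like this (illustrative values, not the commenter's exact config):

```
# /etc/ssh/sshd_config -- no root logins; only one non-obvious account.
PermitRootLogin no
AllowUsers some-nonobvious-name
```

```
# /etc/fail2ban/jail.local -- ban IPs after a few failed SSH attempts.
[sshd]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 1h
```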
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.
My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I'd have to install my Docker containers inside that host, then forward any ports or resources between the hypervisor, the Docker host, and the Docker container.
I just don't get the use case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.
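For reference, that LXC workflow is only a couple of commands on the Proxmox host. A sketch; the VMID, template version, and storage names are assumptions for your own setup:

```sh
# Refresh the template index and grab a Debian container template.
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create and start an unprivileged container, then get a shell in it.
pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname myservice --memory 512 --cores 1 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 201
pct enter 201
```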
Though this is more of a Proxmox ease-of-use issue than a Docker one. Personally, I swapped from Proxmox to a pure Debian server/host to run a similar manual setup with Podman, so everything runs right on the host.
In theory, I think you can achieve the same with Proxmox by SSHing into the host and just treating it like a usual Debian install.
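As a sketch of what that looks like on a plain Debian host (the image here, Uptime Kuma, is just an example):

```sh
# Rootless Podman: no daemon, containers run under your own user.
podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma

# Generate a user systemd unit so it survives reboots (newer Podman
# versions prefer Quadlet files instead).
podman generate systemd --new --files --name uptime-kuma
mkdir -p ~/.config/systemd/user
mv container-uptime-kuma.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-uptime-kuma
loginctl enable-linger "$USER"   # keep user services alive after logout
```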
Netdata (agent only, not the cloud-based features), a bunch of scanners running from cron/systemd timers, and rsyslog for logs (Graylog for larger setups).
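For the timer-driven scanners, a minimal sketch of a unit pair (rkhunter is just one example of a scanner):

```
# /etc/systemd/system/rkhunter-scan.service
[Unit]
Description=Nightly rootkit scan

[Service]
Type=oneshot
ExecStart=/usr/bin/rkhunter --check --sk --rwo
```

```
# /etc/systemd/system/rkhunter-scan.timer
[Unit]
Description=Schedule the nightly rootkit scan

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now rkhunter-scan.timer`; `Persistent=true` runs a missed scan at the next boot.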
Since your question is also related to securing your setup, inspect and harden the configuration of all running services and the OS itself. Here is my common Ansible role for basic stuff. Find (preferably official) hardening guides for your distribution and implement hardening guidelines such as DISA STIG, CIS Benchmarks, ANSSI guides, etc.
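As a flavour of what such a baseline role can contain (illustrative tasks and values, not the commenter's actual role):

```yaml
- name: Disable root login over SSH
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PermitRootLogin'
    line: 'PermitRootLogin no'
    validate: 'sshd -t -f %s'
  notify: restart sshd   # assumes a "restart sshd" handler in the role

- name: Apply basic kernel hardening sysctls
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
  loop:
    - { name: kernel.kptr_restrict, value: "2" }
    - { name: net.ipv4.conf.all.rp_filter, value: "1" }
```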
I concur with most of your points. Docker is a nice thing for some use cases, but if I can easily use a package or set up my own configuration, then I will do that instead of reaching for a Docker container every time. My main issues with Docker:
Containers are not updated with the rest of the host OS
Firewall and mounting complexities that make securing it more difficult (see the sketch below)
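On the firewall point, the classic gotcha is that Docker inserts its own iptables rules for published ports ahead of host firewalls like ufw, so a plain -p mapping can expose a service you thought was blocked:

```sh
# Reachable from the whole network even if ufw says otherwise, because
# Docker's iptables rules are evaluated before ufw's.
docker run -d -p 8080:80 nginx

# Binding the published port to loopback keeps it host-local; front it
# with a reverse proxy if it needs to be reachable.
docker run -d -p 127.0.0.1:8080:80 nginx
```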