I can’t help you, but I had a similar experience with similar technology. I spent a lot of time recovering content. I succeeded, but I have software development experience, and I didn’t want a repeat of that.
In the end I picked Windows file sharing for the home network and Filebrowser for occasional access to data files from outside of it.
WebDAV is a very good alternative. Apache’s web server will be the easier way to serve it, but nginx can be made to work.
Joplin is another good alternative; it syncs over WebDAV.
You can pay a shared hosting provider around $2–$3 a month for hosted WebDAV.
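Once a share is up (Apache’s mod_dav, or nginx built with its DAV module), moving files around is just plain HTTP verbs. Here’s a minimal Python sketch, assuming a hypothetical share at https://example.com/dav with basic auth:

```python
# Minimal WebDAV upload/download using plain HTTP verbs.
# The endpoint URL and credentials below are placeholders for your own share.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://example.com/dav"          # hypothetical WebDAV endpoint
AUTH = HTTPBasicAuth("user", "password")  # replace with real credentials

# Upload: WebDAV creates/replaces a file with an ordinary HTTP PUT.
with open("notes.txt", "rb") as f:
    requests.put(f"{BASE}/notes.txt", data=f, auth=AUTH).raise_for_status()

# Download: a normal GET fetches it back.
resp = requests.get(f"{BASE}/notes.txt", auth=AUTH)
resp.raise_for_status()
print(resp.text)
```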
I went with a Pi running Pi-hole. I got it as a project where the tool is the project, but it’s essential infrastructure now and I don’t want to mess with it in case I break it. I’m an idiot with a poor history with Pi guides so far, so I will break it. It’s running the ad blocking fine, and I assume it’s doing the tracker and malware blocking fine too.
Sadly, that’s where I leave the project for now. I had intended to give it an HDD and some… other… software, but I really don’t want to break it. I tried convincing the better half that I obviously need to N+1, but she wisely did not see reason.
If you want to try setting it up in a high-availability configuration with failover, give me a poke. Until then, go to Teleporter in the settings and download the backup. You can restore from there.
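If you’d rather not rely on remembering to click through the web UI, the same Teleporter export can be scripted and put on a cron job. A rough sketch, assuming Pi-hole v5’s `pihole -a -t` teleporter export and some hypothetical backup paths:

```python
# Scripted Pi-hole Teleporter backup (sketch). Assumes Pi-hole v5's
# `pihole -a -t` export command and that this runs on the Pi itself.
# The backup paths below are placeholders.
import shutil
import subprocess
from pathlib import Path

BACKUP_DIR = Path("/home/pi/pihole-backups")   # hypothetical local folder
OFFBOX_DIR = Path("/mnt/usb/pihole-backups")   # hypothetical second copy

BACKUP_DIR.mkdir(parents=True, exist_ok=True)

# `pihole -a -t` writes the same archive the Settings > Teleporter page
# offers for download, into the current working directory.
subprocess.run(["pihole", "-a", "-t"], cwd=BACKUP_DIR, check=True)

# Copy the newest archive somewhere that isn't the SD card.
latest = max(BACKUP_DIR.glob("*teleporter*.tar.gz"),
             key=lambda p: p.stat().st_mtime)
if OFFBOX_DIR.is_dir():
    shutil.copy(latest, OFFBOX_DIR)
```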
One thing worth saying is this: you can grab a cheap refurbished SSD (the smaller the better), check its SMART data for any red flags, and attach it to the Pi as the OS disk. It will be much more reliable than an SD card, but overkill if Pi-hole is the only thing running on the box. Alternatively, look into Log2Ram; it keeps your SD card alive for longer :D but back up first!
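On the SMART point: you can do that check from the Pi itself with smartmontools before trusting a second-hand drive. A quick sketch (the device path is a placeholder, and it needs root):

```python
# Quick SMART sanity check for a second-hand SSD. Requires smartmontools
# (`sudo apt install smartmontools`) and root. /dev/sda is a placeholder.
import subprocess

DEVICE = "/dev/sda"

# Overall health self-assessment (PASSED / FAILED).
print(subprocess.run(["smartctl", "-H", DEVICE],
                     capture_output=True, text=True).stdout)

# Full attribute table: watch reallocated sectors, wear levelling count,
# and total bytes written before deciding the drive is trustworthy.
print(subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout)
```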
Thanks. I already have Log2Ram running to prolong the life of the SD card. My planned disaster recovery is a spare SD card, already set up and taped to the box, ready to swap and reboot in case of emergency. SD cards are cheap, so chucking <£10 at the setup once in a while is no big thing. A fresh install on the new card lets me improve on what I’ve already done (for example, on the new card I’ll run DietPi instead of Raspbian) and reinforce skills. It’s less time-efficient, but that doesn’t matter when the box is working and it’s a hobby. I can then keep the old SD card taped inside the case as a physical backup. Perhaps more expensive in the long run, but an SD card taped to the inside of the case with simple instructions is an easy sell to the fiancée.
My experience with guides has shaken my confidence quite a bit. Which is fine; I’ll get over myself, and the point is to learn, so hitting snags is a good thing. But until I have a functioning backup I’m not going to be fucking with it. Facebook cannot go down on account of my education.
But if I may, I have one question: a bunch of recommendations have the setup “segregated” (I don’t know the right word) into Docker and Portainer, but I don’t understand the rationale. I wasn’t intending on doing this, instead opting to install Pi-hole, Log2Ram, UFW, and the… other… software directly on the OS for simplicity. Why would one set up Pi-hole et al. in containers instead of directly?
My current setup is Raspbian running Pi-hole as the ad, tracker, and malware blocker and as the DHCP server (the ISP router is a Sky2 box, so no IP or DNS customisation), plus Log2Ram and Uncomplicated Firewall (UFW).
So there are many reasons, and this is something I nowadays almost always do. Keep in mind, though, that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you; others might not matter for your setup.
The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again, “What did I need to do to install this? Run some random install.sh script off a GitHub URL?”
Networking with Docker is a bit hit-and-miss, but the big win is that you can have whatever software running on any port inside the container and expose it on a different port on the host. E.g., two apps both listen on port 8080 natively, so one of them will fail to start because the port is taken. With containers you can keep them running on their preferred internal ports but expose one on 18080 and the other on 19080 instead.
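To make that concrete, here’s a rough sketch of the pinned-tag and port-remapping ideas using the Docker SDK for Python (docker-compose or plain docker run does the same job; the image names, tags and ports are just examples):

```python
# Rough sketch: pinned image tags plus host-port remapping via the Docker
# SDK for Python (`pip install docker`). Image names, tags and ports are
# examples, not real services.
import docker

client = docker.from_env()

# Two apps that both listen on 8080 inside their containers. Pinning an
# explicit tag (never `latest`) keeps redeploys reproducible.
client.containers.run(
    "example/app-a:1.4.2",           # hypothetical image:tag
    name="app-a",
    detach=True,
    ports={"8080/tcp": 18080},       # container 8080 -> host 18080
    restart_policy={"Name": "unless-stopped"},
)

client.containers.run(
    "example/app-b:2.0.1",           # hypothetical image:tag
    name="app-b",
    detach=True,
    ports={"8080/tcp": 19080},       # container 8080 -> host 19080
    restart_policy={"Name": "unless-stopped"},
)
```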
You keep your host simple and free of installed software and packages. That’s less of a problem with apps that ship as native executables, but there are languages out there that require you to install a runtime to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I’m going into too much detail already).
Basically I have a very simple host setup with only a few packages installed. Then I remotely configure and start up my containers, expose ports, etc. I can cleanly define where my configuration lives, back up only that particular folder, and keep the rest of the setup easy to redeploy.
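As a concrete illustration of that last point: if each container’s state lives in one bind-mounted folder, the backup really is just that folder. A minimal sketch, with hypothetical paths:

```python
# Back up only the bind-mounted config/data folders; everything else on
# the host can be redeployed from the container images. Paths are
# hypothetical placeholders.
import tarfile
from datetime import date
from pathlib import Path

CONFIG_ROOT = Path("/opt/containers")   # e.g. one subfolder per container
BACKUP_DIR = Path("/mnt/backup")        # wherever your backups live

archive = BACKUP_DIR / f"container-config_{date.today()}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(CONFIG_ROOT, arcname=CONFIG_ROOT.name)

print(f"wrote {archive}")
```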
I have nothing to add, and an upvote isn’t enough. Truly, thank you for your time, there’s a lot to think about.
I think for this initial iteration I’m going to install directly, in the name of keeping it simple. Next time around I’ll try containerising, just to learn if nothing else. If I outgrow the Pi 4, they’ll be good skills to have.
I’m aware of LWT and LUTE, which use the same concept. Neither comes with predefined languages or texts, so they should work for any language as long as you have some texts you want to read.
I’ve only heard that name once, and it was when Plex blocked them for hosting many Plex servers against Plex’s ToS (selling access to private/pirate libraries).
I already went ahead and bought a Hetzner dedicated box. I just couldn’t find a dedicated box with similar performance from any other provider at the price Hetzner was offering, and I really needed one now.
In your case, instead of getting a dedicated server and putting Proxmox on it, I would check whether it might be cheaper to just get individual virtual servers directly.
Other than that, sure, I have been a customer for many years now, and I have always been a fan of Hetzner’s price-to-quality ratio.
If you’re looking for tips, I’d try to set up Prowlarr first if you intend to use it; it’ll save some reconfiguration down the line.
Though I don’t find anything in the *arrs as complex as mounts and permissions, haha.
But my favorite part about tinkering with home servers is just learning a little at a time, expanding naturally. It’s easy to find guides that are the “ultimate, best server configs”, but unless you understand what benefits they’re offering, you can’t really determine what fits best for YOUR needs.
I started with CouchPotato on Windows years ago; now I have the *arrs running through Docker on headless boxes and keep adding fun services.
NextDNS is awesome if you want the simple solution and don’t have any hardware to install services on. The free version is somewhat limited on queries (300k per month), but I personally never hit that limit when I was on the free tier.
NextDNS has a lot of nice customization and you can easily add custom blocklists. The pro version is €2 a month, I believe. I personally stick with NextDNS because I never have to worry about updating the service and it always just works. I also have it hooked up to my tailnet, so all my devices use it by default.
But of course, Pi-hole, AdGuard and the rest are also awesome. Best to just pick the one that looks good to you. The end goal here is to have something running in the background rather than nothing.
Throwing my +1 behind Hetzner. It’s so much more bang for your buck than a VPS, and I’ve been pleased with the stability and uptime I get out of my auction box.
I use both: Pi-hole running in a Docker container on one of my home servers, which my gateway is configured to hand out as the default DNS for all clients, and uBlock Origin in all my browsers to catch everything else.
Pi-hole is pretty good at catching ads on platforms that aren’t suited to browser-based blockers (IoT devices, streaming boxes, etc.), but it isn’t perfect and is best used in conjunction with another solution.
Of the programs in that list, the only one I’ve heard of before is LibreLingo, and I’m not sure how good or bad it is. (It seems different enough from LinguaCafe that they might complement each other more than compete.)
Pi-hole and similar services just use DNS blocking, which only works if the ads are served from a third-party ad server. Sites with their own ad inventory (YouTube, Facebook, Twitter, etc.) can’t be blocked this way, since they can serve the ads from the same domain as their regular content.
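You can see the mechanism for yourself by querying your Pi-hole directly: blocked domains come back as 0.0.0.0 (or NXDOMAIN, depending on the blocking mode), while first-party domains resolve normally. A small sketch using dnspython; the Pi-hole address and domain names are examples:

```python
# Show DNS sinkholing in action: ask the Pi-hole directly for an ad domain
# and a first-party domain. Requires dnspython (`pip install dnspython`).
# The resolver IP and domain names are examples only.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.1.2"]   # hypothetical Pi-hole address

for domain in ["doubleclick.net", "youtube.com"]:
    try:
        answers = resolver.resolve(domain, "A")
        ips = ", ".join(rr.address for rr in answers)
        # 0.0.0.0 means the Pi-hole sinkholed it; a real IP means it resolved.
        print(f"{domain} -> {ips}")
    except dns.resolver.NXDOMAIN:
        print(f"{domain} -> NXDOMAIN (blocked in NXDOMAIN mode)")
```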
I’m not sure of any downsides yet, but setting your country to Albania via VPN removes all YouTube ads on Apple TV. I was just informed of this yesterday, and as mentioned, there may be reasons not to do it.
If you’re comfortable self-hosting, you can use iSponsorBlockTV to block ads/sponsor segments in YouTube on Apple TV and various smart TVs. I use this + Pi-hole: github.com/dmunozv04/iSponsorBlockTV