I back up to an external hard disk that I keep in a fireproof and water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with borg, all into one repository. The backup is triggered by a udev rule so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it is finished so I can put the drive back in the safe. I can share the script later, if anyone is interested.
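The rough shape of it, as a sketch rather than the full script (the VG/volume names, drive UUID, and ntfy hostname/topic below are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of the flow described above — not the actual script. Assumed names:
# volume group "vg0", service volume "nextcloud", local ntfy topic "backups".
# Matching udev rule (/etc/udev/rules.d/99-backup.rules); udev kills
# long-running RUN commands, so it's safer to kick off a systemd unit:
#   ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="<your-drive-uuid>", \
#     RUN+="/usr/bin/systemctl start --no-block usb-backup.service"
set -euo pipefail

DRIVE_UUID="<your-drive-uuid>"
mount "/dev/disk/by-uuid/$DRIVE_UUID" /mnt/backup

# Snapshot the service's LVM volume so borg sees a consistent point in time
lvcreate --snapshot --size 5G --name nextcloud-snap vg0/nextcloud
mount -o ro /dev/vg0/nextcloud-snap /mnt/snap

# Every service's snapshot goes into the same borg repository on the drive
borg create --stats /mnt/backup/borg-repo::nextcloud-{now} /mnt/snap

umount /mnt/snap
lvremove -y vg0/nextcloud-snap
umount /mnt/backup

# Ping the local ntfy instance so I know the drive can go back in the safe
curl -d "Backup finished" http://ntfy.local/backups
```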
Fireproof safes are typically only rated to keep the interior below the temperature at which paper combusts. In a real fire, the inside of a regular fireproof safe will probably still get hot enough to destroy a drive.
AdGuard Home and Blocky are other popular options. I switched over to AdGuard Home a while back because it supported DNS-over-HTTPS, although I’m not sure if that’s still a relevant reason. I run AGH as a Docker container, but it is easy to run in an LXC or VM. There’s also a tool to sync configs if you need multiple instances. Note: AGH block lists are formatted like uBlock Origin lists, so you will not be able to use Pi-hole-style lists.
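If you go the container route, a minimal sketch (the image name is the official one; the ports and host paths are just the documented defaults, so adjust to taste):

```bash
docker run -d --name adguardhome \
  -v /opt/adguardhome/work:/opt/adguardhome/work \
  -v /opt/adguardhome/conf:/opt/adguardhome/conf \
  -p 53:53/tcp -p 53:53/udp \
  -p 443:443/tcp -p 3000:3000/tcp \
  --restart unless-stopped \
  adguard/adguardhome
# 53 = plain DNS, 443 = DNS-over-HTTPS, 3000 = first-run setup UI
```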
DNS-based ad blockers won’t work when ads are served from the same domain as the content, which is why they don’t work against Twitch or YouTube. So YMMV.
If you’re looking to block interface ads and certain streaming-service ads, there are block lists available, like this one. The catch with smart TVs is that blocking the ads breaks the TV a little, because it sometimes calls back to the same servers for updates and misc info like weather.
No messy system or leftovers. Some programs scatter files and directories all over the place, and it gets annoying fast if you host many services; sometimes an issue will require quite a bit of hunting and redoing things.
Docker makes this painless. You can deploy and redeploy stuff easily and quickly, without a mess. Updates are painless and quick too, with everything neatly self-contained.
Much easier to maintain once you get the hang of things.
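For example, keeping each service’s compose file and bind mounts under one directory makes the whole lifecycle tidy (the path and service name here are just illustrative):

```bash
# one directory per service: compose file + bind-mounted data together,
# nothing scattered elsewhere
cd /srv/adguardhome
docker compose pull      # fetch updated images
docker compose up -d     # recreate containers with the new images
# and removing a service completely is just:
docker compose down
rm -rf /srv/adguardhome
```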
Good call. I do some backups now, but I should formalize that process. Any recommendations on self-hostable packages that can handle append-only functionality?
I use and love Kopia for all my backups: local, LAN, and cloud.
Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.
Kopia uses content-addressable storage for snapshots, which has many benefits:
Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.
Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).
After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.
Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.
There are a ton of other great features, but those are the ones most relevant to what you asked.
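For a taste of what the workflow looks like (the repository location and paths below are made up for the example):

```bash
kopia repository create filesystem --path /mnt/backup/kopia   # or a b2/s3/etc. backend
kopia policy set ~/documents --keep-daily 7 --keep-monthly 12 # retention policy per directory
kopia snapshot create ~/documents                             # incremental, deduped, encrypted
kopia snapshot list ~/documents                               # point-in-time history
```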
I’ve used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies, so I can easily manage (per bucket) how long deleted/changed files should be retained. Works well.
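A minimal sketch of that kind of setup, with invented remote and bucket names:

```bash
# Create the B2 remote, then a crypt remote layered on top of it
rclone config create b2raw b2 account YOUR_KEY_ID key YOUR_APP_KEY
rclone config create b2crypt crypt remote b2raw:my-backup-bucket
rclone config password b2crypt password YOUR_PASSPHRASE   # stored obscured, not plaintext

# Everything is encrypted locally before it leaves the machine
rclone sync /srv/data b2crypt:data --progress
```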
Also, once you get something set up, make sure to do a test restore! A backup solution is only good if you make sure it works :)
As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups; otherwise it’s an exercise in futility.
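Continuing the rclone sketch above, a restore drill can be as simple as pulling everything into a scratch directory and diffing it against the live data:

```bash
rclone copy b2crypt:data /tmp/restore-test --progress
diff -r /srv/data /tmp/restore-test && echo "restore OK"
```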
Just want to clarify: after looking at Porkbun’s DNS offerings, it does not appear they do DDNS either. Is that correct? If so, they are not any better than Squarespace for that service. Porkbun does have an API, though.
It looks like Namecheap has DDNS support (at least I get valid-looking results when I search for that on their website).
I haven’t changed registrars in 10+ years, and I am in the same boat re: Google -> Squarespace. Is DDNS deprecated in favor of APIs across the board? It looks more complicated to set up.
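The API route usually boils down to a small cron script. A hypothetical sketch (the endpoint and payload are invented, not any real registrar’s API — check your registrar’s docs; ipify is a real “what’s my IP” service):

```bash
#!/usr/bin/env bash
# Hypothetical DDNS-via-API sketch — endpoint and payload are placeholders.
set -euo pipefail

API_KEY="<your-api-key>"
ip=$(curl -fsS https://api.ipify.org)   # current public IP

curl -fsS -X POST "https://api.example-registrar.com/v1/dns/update" \
  -H "Content-Type: application/json" \
  -d "{\"apikey\": \"$API_KEY\", \"name\": \"home.example.com\", \"type\": \"A\", \"content\": \"$ip\"}"
```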
Pro tip: If you use Porkbun, don’t leave your domain’s authoritative DNS with Porkbun nameservers.
Over the year or so that I had my stuff configured this way, on at least one occasion (that I know about… I was still setting up my observability stack that year), the nameservers were flapping hard for over a day, causing my records to intermittently vanish from existence.
I tried contacting them every way I could, hell I even descended into the quagmire of Twitter and created an account so I could tweet at them… and got silence.
Pretty disappointing. I ended up moving all my DNS to AWS Route 53 after a few hours of pulling out my hair. They did eventually respond to my email like a day later, after I’d already moved everything over.
But idk, maybe I’m wrong to expect an indie domain registrar to have super high availability on their nameservers… oh well.
I second Obsidian. I was on the verge of jumping to Logseq, but found its way of handling notes to be… different. I also disliked that with Anytype I don’t really have control over my notes. Obsidian clicked with me from the start and felt right, so I went with it, even though it’s not FOSS (which is usually a hard requirement for me).
But jokes aside, the self-hosting part is mostly about syncing notes. You either go with the official offering (not self-hosted, costs money), use a community plug-in (self-hosted), or use a third-party program like Syncthing (also self-hosted).
Syncthing is the way. I had tried setting it up on Nextcloud but could never get it to store things how I wanted, but Syncthing was ridiculously easy and should work for anything that uses a folder.
There is a plugin for Obsidian to work with Syncthing, but it seems to still be in development. Setting it up through the app and selecting the folders also gave me a reason to sync my camera as well, and it was super easy; no port forwarding or anything required.
I also switched from Joplin to Obsidian after about half a year. There’s an open-source plugin that lets you self-host a syncing server.
What I found paradoxical is how easy it is to mod and write plugins for Obsidian compared to Joplin. I would’ve thought that modifying the open-source candidate would’ve been easier, but nope.
It is not a unique feature, but even as a non-FOSS program its notes are not hidden behind proprietary file formats, so you can still switch any time if they go in a direction the user does not like.
Not every app stores the files as plain text files in Markdown format like Obsidian does. Logseq does, I believe, but Joplin stores it all in database files, which require an export should you decide to leave that app in favor of another. With Obsidian you just point the new app at the folders full of .md files and away you go. That was the main selling point for me.
I don’t know where you’re getting that from. Here is my Joplin folder on my NC server, stuffed with md files from my notes. There are some database driven references in them if you do things like add pictures, and obviously the filename is a UID format, but it’s markdown all the way, baby.
Have you looked at the contents of those md files? In addition to creating its own hexadecimal file name, Joplin appends a bunch of metadata to the text. If you were to take that folder of notes to another Markdown editor like Obsidian, it would be a mess to organize. That is why I’m a stickler for file-format agnosticism: no vendor lock-in and, more importantly, no manipulation of the filenames or contents.
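For illustration, a note synced by Joplin looks roughly like this — body up top, metadata appended at the bottom (the IDs and timestamps here are invented, and there are more fields than shown):

```
Shopping list

- milk
- eggs

id: 8a1f0f9e2c3d4b5a6f7e8d9c0b1a2f3e
parent_id: 0c9b8a7d6e5f4a3b2c1d0e9f8a7b6c5d
created_time: 2023-04-01T12:00:00.000Z
updated_time: 2023-04-01T12:05:00.000Z
type_: 1
```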
Screenshot of my phone copy of the Obsidian vault directory as an example:
What do I use this for? Do I install it on my NAS or my gaming PC?
My best guess is that this is self-hosted network storage for games, and other computers run the games from there? Or do they download the games from there? Is it a way to store game saves? Does it have any use for emulators like yuzu?
Sorry for all the questions, I’m only asking because the software looks really interesting but I just can’t figure out its uses.
Seems like the intro clears some things up: gamevau.lt/docs/intro — it looks like you install the server component on your NAS/server etc. and store your game files/binaries/installers there. Then you download the client application and install games from that location onto your gaming PC or whatever.
DokuWiki for simplicity. Everything is a text file that can just be copied to a web server; it doesn’t even require a database. And since all the wiki pages are plain text files, they can still be easily accessed and read even when the server is down. That’s great, and it’s why I use DokuWiki for my server documentation as well.
I was going to say that the big downside would be the lack of any kind of version control, but if you need that you can always use git: commit changes locally and (optionally) push them to a repository somewhere.
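For example, assuming DokuWiki’s default data path, a minimal sketch:

```bash
# version the page files themselves
cd /var/www/dokuwiki/data/pages
git init
git add -A && git commit -m "initial wiki snapshot"

# then from cron: commit only when something actually changed, optionally push
git add -A
git diff --cached --quiet || git commit -m "auto-snapshot $(date -I)"
git push origin main
```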
Learn the fundamentals of IPv4 and IPv6. (Absolutely learn to count bits for IPv4 — see the sketch after this list.)
Learn and understand the use-cases for routers, switches, and firewalls.
Learn about DNS. (Domain Name System)
Learn about DHCP. (Dynamic Host Configuration Protocol)
Learn the important port numbers for important services. (SSH is port 22, for example. The range of port numbers from 1024 to 49151 is the “registered ports”, which are generally always the same.)
Learn about address classes. (A, B, C are the main ones)
Learn about hardware addresses (MAC address) and how to use ARP to find them.
And more! This is just off the top of my head. Until you’ve studied a lot more, please, for your own sake, don’t open your self-hosted services to the wider internet; just keep them local.
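To make the bit-counting point concrete, the sketch promised above, using /26 as the example:

```bash
prefix=26
host_bits=$(( 32 - prefix ))                        # 6 bits left for hosts
mask=$(( (0xFFFFFFFF << host_bits) & 0xFFFFFFFF ))  # top 26 bits set
printf 'netmask: %d.%d.%d.%d\n' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8) & 255 ))  $(( mask & 255 ))       # -> 255.255.255.192
echo "usable hosts: $(( 2 ** host_bits - 2 ))"      # -> 62 (network + broadcast reserved)
```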
And just for fun, a poem:
The inventor of the spanning tree protocol, Radia Perlman, wrote a poem (“Algorhyme”) to describe how it works. When reading the poem it helps to know that, in math terms, a network can be represented as a type of graph called a mesh, and that the goal of the spanning tree protocol is to turn any given network mesh into a tree structure, with no loops, that spans the entire set of network segments.

I think that I shall never see
A graph more lovely than a tree.
A tree whose crucial property
Is loop-free connectivity.
A tree which must be sure to span
So packets can reach every LAN.
First the Root must be selected
By ID it is elected.
Least cost paths from Root are traced
In the tree these paths are placed.
A mesh is made by folks like me
Then bridges find a spanning tree.
I mean, isn’t it important to understand the fundamentals so you can understand VLSM better?
Like math, a lot of this knowledge works better when you know the fundamentals and basics, which help you conceptualize the bigger ideas.
On a personal level, I would have had a much harder time understanding VLSM if I hadn’t had the fundamentals of traditional subnetting and classful networking under my belt.
There’s nothing you learn from classful networking that’s necessary for VLSM. The classes amount to a common convention based on subnet size, and even then nearly nobody actually uses class A or B sized subnets except as summary routes, which, again, is not inherent to classful networking.
Classful networking has been obsolete for thirty years, for good reason; you gain nothing from restricting yourself in that way.
How are you “restricting” yourself by learning that it exists? Nobody is saying “learn about it and use it and never consider anything else.” They asked what fundamentals they should know for networking, and I dumped what I considered the “fundamentals.”
Nothing actually uses classful networking anymore. Any situation where classful network concepts are implemented is necessarily limiting the capabilities of the network. As such it’s completely useless to bother spending time learning it.
You can do this with custom formats. You’d want to create a custom format that scores releases below a certain size threshold (say, 1.5 GB per hour), then set minimum custom format scores on the quality profiles you use (e.g. Bluray-1080p). You can also add custom formats for release groups that prioritise small file sizes; YTS, for example, keeps their releases as small as possible.
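For reference, a size-based custom format exported from Radarr looks roughly like this — `SizeSpecification` is a real condition type, but exact field names vary by version, so treat this as a sketch and build yours in Settings → Custom Formats:

```json
{
  "name": "Small HD release",
  "includeCustomFormatWhenRenaming": false,
  "specifications": [
    {
      "name": "Under ~3GB for a 2h film",
      "implementation": "SizeSpecification",
      "negate": false,
      "required": true,
      "fields": { "min": 0, "max": 3 }
    }
  ]
}
```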
I don’t know about you, but I want companies to take self-hosted and FOSS solutions seriously. The fact that they want to work with him is a major step in the right direction. It would be dumb to discourage companies from supporting FOSS.
Are they supporting FOSS, or looking to buy out the project to make it a closed in-house solution and avoid the bad publicity they created this last week?
I was aware of Forgejo back when I first started hosting Gitea. I didn’t see much of a difference back then, so I just went with the arguably more popular option at the time.
In the few months since, it’s mostly just that I’m too lazy a person to switch.
Forgejo is a fork of Gitea, and as of now I don’t think they have diverged much, so they’re (still) about the same. It was mainly created because of the takeover of the domain and trademark by a for-profit company, not because of different functionality.