If you’re asking this, you should probably use one to be safer if you’re exposing stuff to the web. There are other ways of doing it, including just VPNing into your home network, using a VPS, or Cloudflare Tunnels, but a reverse proxy manager combined with Cloudflare DNS is a good place to start, and it’s probably good enough if you pair it with decent security: long unique passwords, two-factor auth, security keys, etc.
I use it to manage my subdomains: something like notes.mywebsite.com points at my Trilium instance, while photos.mywebsite.com points at my Immich container. It has more uses, but that’s the extent of mine. I also run a Cloudflare DNS updater that keeps my domain in sync with my IP, so I don’t have to update it manually when it changes.
So in my scenario, Cloudflare is just part of my setup.
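If you’d rather not run a prebuilt updater container, the same idea is a few lines of Python against Cloudflare’s v4 API. A minimal sketch, assuming an API token with DNS edit permission; the token, zone ID, record ID, and hostname below are all placeholders you’d fill in from your Cloudflare dashboard:

```python
import requests

# Placeholders: fill these in from your Cloudflare dashboard.
API_TOKEN = "your-cloudflare-api-token"
ZONE_ID = "your-zone-id"
RECORD_ID = "your-dns-record-id"
RECORD_NAME = "mywebsite.com"

def current_ip() -> str:
    # Any "what is my IP" service works; this one returns plain text.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_record(ip: str) -> None:
    # Overwrite the existing A record so the domain follows your home IP.
    url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}"
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"type": "A", "name": RECORD_NAME, "content": ip, "proxied": True},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    update_record(current_ip())  # run from cron every few minutes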
So, you want to run rsnapshot on the Borg repository (the destination the backups are written to)? Both rsnapshot and Borg keep a history, so you’d be keeping a history of when the Borg repository had which history. That will be neither particularly efficient nor “as intended”.
Be aware that Borg does incremental backups on file chunks, while rsnapshot works on whole files. So if a large file changes, rsnapshot will duplicate the storage used.
A Borg repository is more like a database of chunks (similar to git), while rsnapshot recreates the original backup data.
As far as I know, the Borg backup store should only add new blocks as new files and remove them when you prune the last backup that uses a given block. Obviously some of the metadata files are going to change and will be backed up more frequently, but the main data should not.
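To make the chunk-vs-whole-file distinction concrete, here’s a toy sketch. It uses fixed-size chunks and an in-memory set standing in for the repository’s chunk index; real Borg uses content-defined chunking, compression, and encryption, so this is only an illustration of the dedup idea:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4 MiB chunks, for illustration only

stored = set()  # stands in for the repository's chunk index

def chunk_ids(path):
    """Yield one hash per chunk of the file; only unseen hashes need storing."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield hashlib.sha256(chunk).hexdigest()

def backup(path):
    new = [cid for cid in chunk_ids(path) if cid not in stored]
    stored.update(new)
    print(f"{path}: {len(new)} new chunk(s) stored")

# Back up a large file twice, appending a little data in between: the second
# run stores only the changed tail chunk, whereas a whole-file tool like
# rsnapshot would re-store the entire file as a new copy.
```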
As far as motherboards go, you would probably be fine with any consumer desktop brand, but you should look for something with dual NICs. If you want something a bit more robust, ASRock Rack has some really great options. I’ve been using the X470D4U for about four years now without any issues.
For your CPU I recommend the Ryzen 7 5700G. It’s powerful enough for everything you want to do, its 65 W TDP isn’t going to destroy your power bill, it has a decent integrated GPU, and it costs only about $200. Another positive is that it uses DDR4, so you can load up on that pretty cheaply too.
You’ll be fine as long as you enable MFA on your NAS, and ideally configure it so that anything “fun”, like administrative controls or remote access, is only available on the local network.
Synology has sensible security defaults, for the most part. Make sure automatic updates are enabled, even for minor updates, and ensure it’s configured to block repeated failed login attempts.
You’re probably not going to get hackerman poking at your stuff, but you will get bots trying to SSH in and log in to the WordPress admin console, even if you’re not using WordPress.
A good rule of thumb for securing computers is to minimize access/privilege/connectivity.
Lock everything down as far as you can, turn off everything that makes it possible to access it, and enable every tool for keeping people out or dissuading attackers.
Now you can open port 443 on your NAS to the public, and only that port, because you don’t need anything else.
Then configure your router to forward only port 443 to your NAS.
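A quick way to sanity-check that lockdown is to probe your public IP from outside your LAN (a phone hotspot works) and confirm nothing but 443 answers. A minimal sketch; the IP below is a documentation placeholder, and you should only scan hosts you own:

```python
import socket

HOST = "203.0.113.1"  # placeholder: your public IP (only scan hosts you own)
# Common ports bots probe; 5000 is Synology DSM's default web port.
PORTS = [22, 80, 443, 445, 5000, 8080]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        is_open = s.connect_ex((HOST, port)) == 0  # 0 means the TCP connect succeeded
        print(f"port {port}: {'OPEN' if is_open else 'closed/filtered'}")
```

If anything besides 443 shows up as open, go back to your router’s forwarding rules.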
It feels silly to say, but sometimes people think “my firewall is getting in the way, I’ll turn it off”, or “this one user needs read access to one file, so I’ll give read/write/execute privileges on this folder and every subfolder to every user on the system”.
So as long as you’re basically sensible and use the tools available, you should be fine.
You’ll still poop a little the first time you see that 800 bots tried to break in. Just remember that they’re already doing that now; there’s just nothing listening to write down that they tried.
However, the person who suggested putting Cloudflare in front of GitHub Pages and using something like Hugo gave a great example of “opening as few holes as possible” and “using the tools available”.
It’s what I do for my static sites, like my recipes and stuff.
You can configure a GitHub Action that’ll build the site and deploy it whenever a commit happens, which is nice.
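A minimal sketch of such a workflow, assuming a Hugo site whose default branch is main and using the community peaceiris actions to install Hugo and publish to GitHub Pages; the action names and version tags are assumptions, so check the current ones before copying:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]  # assumes your default branch is "main"

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true  # Hugo themes are often pulled in as git submodules

      - uses: peaceiris/actions-hugo@v2  # community action that installs Hugo
        with:
          hugo-version: "latest"

      - run: hugo --minify  # builds the site into ./public

      - uses: peaceiris/actions-gh-pages@v3  # pushes ./public to the gh-pages branch
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```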
I wish I could’ve liked Nextcloud more, but it seemed bloated as all hell and was slow regardless of what machine I tried running it on :(. I might give it another go one day.
If it’s a static site, you can host it anywhere for free on the big cloud providers: AWS has S3, Microsoft has Azure Blob Storage, and GitHub has Pages, all of which can be configured to serve a site well under the paid tiers.