I think you need to learn more about how databases work. They don’t typically reclaim deleted space automatically, for performance reasons. Databases like to write to a single large file they can then index into; rewriting those files is expensive, so it’s left to the DBA (you) to decide when it should be done.
And how are you backing up the database? Just backing up /var/lib/postgres? Or are you doing a pg_dump? If the former, it’s possible your backups won’t be consistent if you haven’t stopped the database first, and they will contain that full history of deleted stuff. pg_dump would give you just the current data, in a form that will restore cleanly into a new database should you ever need it.
You can also consider your backup retention policy. How many backups do you need for how long?
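If you do go the pg_dump route, the two concerns are separate things: reclaiming space is a manual VACUUM FULL you run when you decide the exclusive lock is worth it, and the backup side is just a nightly pg_dump plus a retention rule. A rough sketch, with a placeholder database name and paths (not anything from your setup):

```
#!/bin/sh
# Reclaim dead space manually, when you decide it's worth it
# (VACUUM FULL rewrites the table files and takes an exclusive lock):
psql -U postgres -d mydb -c 'VACUUM FULL;'

# Consistent logical backup in pg_dump's custom format,
# restorable into a fresh cluster with pg_restore:
pg_dump -U postgres -Fc mydb > /backups/mydb-"$(date +%F)".dump

# Simple retention: keep only the newest 60 dumps
ls -1t /backups/mydb-*.dump | tail -n +61 | xargs -r rm --
```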
You are right, I should. They are a bit more complicated than I anticipated, and apparently I’m doing everything wrong, haha. I have backups set up to go two years back, and I check Backblaze occasionally, so it shouldn’t be an issue. I have two months so far, lol. Thanks for the write-up :)
On each Proxmox machine I have a Docker server in swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS.
On the NAS I have a normal Docker installation which runs my databases.
On the swarm I have over 60 Docker containers, including the arr services, Overseerr, and two Deluge instances.
I have no issues with performance or read/write or timeouts.
As one of the other posters said, point all of your arr services at the same mount point, as it makes it far easier for the automated stuff to work.
Put all the arr services into a single stack (or at least on a single network). That way you can point them at each other by container name rather than IP; for example, to tell Overseerr where Sonarr is, you’d just say sonarr:8989. It makes life much easier.
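A bare-bones version of that with plain docker CLI commands might look like the following; the image tags, ports, and mount paths are only illustrative:

```
# One overlay network shared by the whole arr stack
docker network create --driver overlay arr

docker service create --name sonarr --network arr \
  --mount type=bind,src=/mnt/nas/media,dst=/media \
  --publish 8989:8989 linuxserver/sonarr

docker service create --name overseerr --network arr \
  --publish 5055:5055 sctx/overseerr

# Inside Overseerr you can now point at Sonarr as sonarr:8989,
# because services on the same network resolve each other by name.
```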
As for Proxmox, the biggest thing I’ll say from my experience: if you’re just starting out, make sure you set its IP and hostname to what you want right from the start. It’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first.
With arr services, try to limit their network and disk throughput, because if either is maxed out for too long (like when moving big Linux ISO files) it can cause weird timeouts and failures.
I believe I would be fine on the network part. I’m just guessing that writing to an SSD cache drive on my NAS would be fine? I’m currently writing to the SSD and have a move script run twice a day to move things to the HDDs.
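For what it’s worth, a move like that can be as simple as a cron’d rsync. The paths and the bandwidth cap below are just placeholders; the --bwlimit is there so the HDDs (and the NFS link) don’t get saturated during the move, which is the timeout scenario you mentioned:

```
#!/bin/sh
# crontab entry: 0 6,18 * * * /usr/local/bin/move-cache.sh
SRC=/mnt/ssd-cache/downloads/
DST=/mnt/hdd-pool/downloads/

# Move files off the SSD cache, capped at 50 MB/s so the disks aren't maxed out
rsync -a --bwlimit=50M --remove-source-files "$SRC" "$DST"

# rsync leaves the empty source directories behind; tidy them up
find "$SRC" -mindepth 1 -type d -empty -delete
```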
I remember it was already like this in the forums years ago. It actually made me stop using it and run a custom-made web-based reader for some time.
I wouldn’t use it anymore nowadays.
FreshRSS is the way to go. It even has plugins (including one that turns YouTube channels into RSS feeds, very convenient).
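The YouTube part works because YouTube still exposes a plain Atom feed per channel, so you can also subscribe directly even without the plugin; the channel ID below is just a placeholder:

```
# Fetch a channel's feed (the same URL works as a normal FreshRSS subscription)
curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx" | head
```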
I have been looking into a way to copy files from our servers to our S3 backup storage without having the access keys stored on the server (as I think we can assume those will be one of the first things the ransomware toolkits go looking for).
Perhaps a script on a remote machine that initiates an SSH session to the server and does an “s3cmd cp” with the keys entered from stdin? So far, I have not found how to do this.
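The closest I’ve come to sketching it is below. It’s untested, the host, bucket, and paths are made up, and it uses the aws CLI instead of s3cmd because it picks credentials up from environment variables, so the keys only ever exist in the remote shell’s environment and never touch the server’s disk:

```
#!/bin/sh
# Runs on the trusted orchestrator machine, the only place the keys are stored
ACCESS=$(cat /root/secrets/s3_access_key)
SECRET=$(cat /root/secrets/s3_secret_key)

printf '%s\n%s\n' "$ACCESS" "$SECRET" | ssh backup@fileserver '
  IFS= read -r AWS_ACCESS_KEY_ID
  IFS= read -r AWS_SECRET_ACCESS_KEY
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
  aws s3 cp /srv/backup/backup.tar.gz "s3://example-backup-bucket/$(hostname)/"
'
```

Of course a fully compromised server could still read the keys out of the running process’s environment while the job runs, so a write-only bucket policy or object lock is probably the stronger defence against ransomware.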
Losing a cloud backup should be fine, because it’s a backup. Just re-upload your local backup to a new cloud or second physical location; that’s the whole point of having two.
I don’t see a need to run two concurrent cloud backups.
In this case it is not you, as a customer, that got hacked; it was the cloud company itself. The ransomware gang encrypted the disks at the server level, which impacted every customer on every server of the cloud provider.
Yeah, absolutely, but to you as an individual the net effect is the same: your cloud backup is lost. Just re-upload your local backup to a different cloud provider.
Does budget include storage? Tight budget without storage, even tighter with…
If power usage is not a concern, then used x86/x64 gear is probably the way to go. Surplus gear (corporate, university…) is possibly an option for you. That’s a very tight budget though, so I don’t think it really gives you the luxury of choosing specs, unfortunately. That said, I might go for the best bones/least RAM/storage if you think you might upgrade it down the road. 4GB of RAM with an upgrade path to 32 is preferable to 8GB non-upgradable, IMHO. Likewise, a 500GB spinny disk with extra bays and an NVMe slot is nicer than a 500GB SSD with no upgrade path. Again… really tight budget, so this may all be out of the question.
I’m a fan of low power gear, so I’d recommend something like a Raspberry Pi 5 8GB, or another SBC (I just grabbed an Orange Pi 5 Plus and I like it so far — NVME, 16GB RAM, dual NIC). However these will be out of your budget, especially once you add case, power supply, and storage.
However, the packages for nginx-rtmp are pretty much abandoned in Arch Linux.
Maybe you should switch to Debian? I’ve been doing it that way for a long time and playing it back in VLC without issues. What repositories are you using, btw? The official ones at nginx.org/en/linux_packages.html or some 3rd-party garbage?
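On Debian the module ships as its own package, so a bare-bones RTMP ingest is roughly the following. The config and paths are only illustrative, and the rtmp block has to sit at the top level of nginx.conf, not inside http:

```
apt install libnginx-mod-rtmp

cat >> /etc/nginx/nginx.conf <<'EOF'
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;
        }
    }
}
EOF

nginx -t && systemctl reload nginx
# Point the encoder at rtmp://yourhost/live/<streamkey> and open the same URL in VLC
```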
Update: I finally installed RYOT, and wow, is it slow and resource-intensive. It’s using more than 20% of the CPU on my NAS when it isn’t even open! Might switch to Media Tracker…
I have a 2N+C backup strategy. I have two NASes, and I use rclone to back up my data from one NAS to the other, and then back up (with encryption) to Amazon S3. I have a lifecycle policy on that S3 bucket that shoves all files into Glacier Deep Archive at day 0, so I pay the cheapest rate possible.
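In rclone terms the two legs look roughly like this; the remote names and paths are placeholders, and “s3crypt” is assumed to be an rclone crypt remote layered over an S3 remote so the cloud copy is encrypted:

```
rclone sync /volume1/data nas2:backups/data    # NAS -> NAS
rclone sync /volume1/data s3crypt:data         # NAS -> encrypted S3
```

The day-0 move into Glacier Deep Archive is then just a lifecycle rule on the bucket itself; rclone doesn’t need to know about it.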
For example, I’m storing just shy of 400GB of personal photos and videos in one particular bucket, and that’s costing me about $0.77USD per month. Pennies.
Yes, it’ll cost me a lot more to pull it out and, yes, it’ll take a day or two to get it back. But it’s an insurance policy I can rely on, and a (future) price I’m willing to pay, should the dire day ever arrive (both NASes lost, or worse) when I need it.
Why Amazon S3? I’m in Australia, and that means local access is important to me. We’re pretty far from most other places around the world. It means I can target my nearest AWS region with my rclone jobs and there’s less latency. Backblaze is a great alternative, but I’m not in the US or Europe. Admittedly, I haven’t tested this theory, but I’m willing to bet that in-country speeds are still a lot quicker than any CDN that might help get me into B2.
Also, something others haven’t yet mentioned: per Immich’s guidance on their repo (disclaimer right at the top), do NOT rely on Immich as your sole backup. Immich is under very active development, and breaking changes are a real possibility all the time right now.
So I use SyncThing to also back up all my photos and videos to my NAS, and that’s also backed up to the other NAS and S3. That’s why I have nearly 400GB of photos and videos - it’s effectively double my actual library size. But, again, at less than a buck a month to store all that, I don’t really mind double-handling the data, for the peace of mind I get.
Not aware of any such project. I’d assume you’ll need some hardware anyway, as you need it for that level of access (ATX, etc.). Not sure how that would be preferable to this.
I was thinking more about the basics, like USB input and getting the image + sound. For that you could get away with a special USB cable and a capture card. I’m just not aware of any software for it; I don’t think the original PiKVM stuff was ever ported to PC.