selfhosted


atzanteol, in Joplin alternative needed

I think you need to learn more about how databases work. They don’t typically reclaim deleted space automatically, for performance reasons. Databases like to write to a single large file they can then index into. Re-writing those files is expensive, so it’s left to the DBA (you) to decide when it should be done.
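If you do want to reclaim that space, and assuming Joplin is sitting on Postgres here, the DBA step is basically a manual vacuum. A rough sketch (the database name is just a placeholder):

```bash
# Rough sketch, assuming Postgres: rewrite the table files and reclaim the space
# held by deleted rows. Note that a full vacuum takes exclusive locks while it runs.
vacuumdb --full --analyze --dbname=joplin
```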

And how are you backing up the database? Just backing up /var/lib/postgres? Or are you doing a pg_dump? If the former, it’s possible your backups won’t be consistent if you haven’t stopped your database, and they will contain that full history of deleted stuff. pg_dump would give you just the current data, in a way that will apply properly to a new database should you need to restore.
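For what it’s worth, a minimal sketch of the pg_dump route (the user, database name, and paths are placeholders):

```bash
# Dump only the current data, in Postgres' custom format (compressed and
# restorable with pg_restore). Names and paths are placeholders.
pg_dump -U joplin -Fc joplindb > /backups/joplin_$(date +%F).dump

# Restore into a fresh database if you ever need to:
# pg_restore -U joplin -d joplindb_new /backups/joplin_<date>.dump
```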

You can also consider your backup retention policy. How many backups do you need for how long?

jaykay,

You are right, I should. They are a bit more complicated than I anticipated, and apparently I’m doing everything wrong, haha. I have backups set up to go 2 years back, and I check Backblaze occasionally, so it shouldn’t be an issue. I have two months so far, lol. Thanks for the write-up :)

tristan, in Planning on setting up Proxmox and moving most services there. Some questions

My current setup is 3x Lenovo m920q (soon to be 4), all in a Proxmox cluster, along with a QNAP NAS with 20GB RAM and 4x 8TB in RAID 5.

The specs on the m920q are: i5-8500T, 32GB RAM, 256GB SATA SSD, 2TB NVMe SSD, 1GbE NIC.

Pic of my setup

On each Proxmox machine I have a Docker server in swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS.

On the NAS I have a normal Docker installation, which runs my databases.

On the swarm I have over 60 Docker containers, including the arr services, Overseerr, and two Deluge instances.

I have no issues with performance or read/write or timeouts.

As one of the other posters said, point all of your arr services to the same mount point as it makes it far easier for the automated stuff to work.

Put all the arr services into a single stack (or at least on a single network). That way you can point them at each other by container name rather than IP; for example, to tell Overseerr where Sonarr is, you’d just enter sonarr:8989. It makes life much easier.
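Something along these lines is what I mean; a rough compose sketch (image names, ports, and the NFS path are only examples) showing the shared mount point and the service names doubling as hostnames on the stack’s network:

```yaml
# Rough sketch of a single arr stack. On the stack's default network,
# Overseerr can reach Sonarr simply as sonarr:8989.
version: "3.8"
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - /mnt/nas/media:/data   # the same NFS-backed mount point for every arr service
  overseerr:
    image: lscr.io/linuxserver/overseerr:latest
    ports:
      - "5055:5055"
```

Deploy it with docker stack deploy (or plain docker compose up on a single node) and the name-based addressing just works.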

As for Proxmox, the biggest thing I’ll say from my experience: if you’re just starting out, make sure you set its IP and hostname to what you want right from the start… It’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first.

Cooljimy84, in Planning on setting up Proxmox and moving most services there. Some questions

With the arr services, try to limit network and disk throughput on them, as if either is maxed out for too long (like when moving big Linux ISO files) it can cause weird timeouts and failures.

Edgarallenpwn,

I believe I would be fine on the network part; I’m just guessing that writing them to an SSD cache drive on my NAS would be fine? I’m currently writing to the SSD and have a move script run twice a day to move things to the HDDs.

Cooljimy84,

Should be fine. I’m writing to spinning rust, so if I was playing back a movie it could cause a few “dad, the TV is buffering again” problems.

BearOfaTime, in Exposing Myself (with Filebrowser)

Use Tailscale with the Funnel option.

It provides a fully encrypted connection for external devices that don’t have the Tailscale client. Pretty impressive.

Similar to using Cloudflare Tunnels, but easier to set up.
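If it helps, this is roughly the shape of it on the machine running Filebrowser; a rough sketch only, since the exact syntax varies between Tailscale versions, and 8080 is just an assumed port for Filebrowser:

```bash
# Rough sketch: expose a local service to the public internet through Tailscale
# Funnel. Requires HTTPS certificates and Funnel to be enabled for the tailnet.
# 8080 is a placeholder for whatever port Filebrowser is listening on.
tailscale funnel 8080
```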

gitamar, in Any good RSS Feed service for self-hosting?

I’ve heard good things about NewsBlur. They offer a hosted service and an open-source version for self-hosting.

github.com/samuelclay/NewsBlur

bjoern_tantau, in Any good RSS Feed service for self-hosting?

I’ve used tt-rss in the past. Don’t know what state it’s in currently.

If you have Nextcloud they also have an RSS app.

bluetoque,

It is very stable. Just don’t visit the forum for help. The dev regularly roasts people, which leads to a very toxic environment.

bisby,

I switched to FreshRSS, which works just as well and doesn’t have a toxic dev.

Dirk, (edited)

I remember it already being like this in the forums years ago. It actually made me stop using it and run a custom-made web-based reader for some time.

I wouldn’t use it nowadays.

FreshRSS is the way to go. It even has plugins (and a plugin for YouTube channels as RSS feeds, very convenient).

MNByChoice, in what if your cloud=provider gets hacked ?

I wonder if the specifics of the hack would make backing up elsewhere fail. Possibly by spreading the hack to new machines.

In any case, testing backups is important.

kristoff,

I have been thinking the same thing.

I have been looking into a way to copy files from our servers to our S3 backup storage without having the access keys stored on the server (as I think we can assume those will be one of the first things the ransomware toolkits will be looking for).

Perhaps a script on a remote machine that initiates an SSH connection to the server and does an “s3cmd cp” with the keys entered from stdin? So far, I have not found how to do this.

Does anybody know if this is possible?
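Roughly what I’m imagining, as an untested sketch (for a local-to-S3 upload the s3cmd subcommand is put rather than cp; the hostname, paths, and bucket are placeholders, and the keys are still briefly visible in the server’s process list while s3cmd runs):

```bash
#!/usr/bin/env bash
# Untested sketch: run from a trusted machine that holds the S3 keys.
# The keys are read interactively and never written to disk on the server,
# though they do appear in the server's process list while s3cmd is running.
read -rs -p "Access key: " ACCESS_KEY; echo
read -rs -p "Secret key: " SECRET_KEY; echo

ssh backup@server.example.com \
  "s3cmd --access_key='$ACCESS_KEY' --secret_key='$SECRET_KEY' \
     put /var/backups/nightly.tar.gz s3://my-backup-bucket/"
```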

Nouveau_Burnswick, in what if your cloud=provider gets hacked ?

Losing a cloud backup should be fine, because it’s a backup. Just re-upload your local backup to a new cloud/second physical location; that’s the whole point of having two.

I don’t see a need to run two concurrent cloud backups.

kristoff,

In this case, it is not you, as a customer, that gets hacked, but the cloud company itself. The ransomware gang encrypted the disks at the server level, which impacted all the customers on every server of the cloud provider.

Nouveau_Burnswick,

Yeah, absolutely, but to you as an individual the net effect is the same: your cloud backup is lost. Just re-upload your local backup to a different cloud provider.

qjkxbmwvz, in Help me build a home server

Does budget include storage? Tight budget without storage, even tighter with…

If power usage is not a concern, then used x86/x64 gear is probably the way to go. Surplus gear (corporate, university…) is possibly an option for you. That’s a very tight budget though, so I don’t think it really gives you the luxury of choosing specs, unfortunately. That said: I might go for the best bones/least RAM/storage if you think you might upgrade it down the road. 4GB RAM with an upgrade path to 32 is preferable to 8GB non-upgradable, IMHO. Likewise, a 500GB spinny disk with extra bays and an NVMe slot is nicer than a 500GB SSD with no upgrade path. Again… really tight budget, so this may all be out of the question.

I’m a fan of low-power gear, so I’d recommend something like a Raspberry Pi 5 8GB, or another SBC (I just grabbed an Orange Pi 5 Plus and I like it so far: NVMe, 16GB RAM, dual NIC). However, these will be out of your budget, especially once you add a case, power supply, and storage.

Good luck!

TCB13, (edited) in Streaming local Webcam in a Linux machine, and accessing it when on vacations - which protocol to choose?

however the packages for nginx-rtmp are quite abandoned in arch linux.

Maybe you should switch to Debian? I’ve been doing it that way for a long time and playing back in VLC without issues. What repositories are you using, btw? The official ones at nginx.org/en/linux_packages.html or some 3rd-party garbage?
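For reference, the kind of pipeline I mean is roughly this (a sketch only; the device, application name, and URL are examples), pushing the webcam into nginx-rtmp and then opening the rtmp:// URL in VLC:

```bash
# Rough sketch: push a local V4L2 webcam into an nginx-rtmp "live" application.
# Play it back remotely by opening rtmp://your-server/live/cam in VLC.
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -f flv rtmp://localhost/live/cam
```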

savedbythezsh, in Self-hosted media tracker recommendations?

Update: I finally installed RYOT, and wow, is it slow and resource-intensive. It’s using more than 20% of the CPU on my NAS when it isn’t even open! Might switch to Media Tracker…

stephaaaaan, in PSA: The Docker Snap package on Ubuntu sucks.

Did you see this already? :)

hperrin,

That’s a start, but I need access to both /home and /data.

savedbythezsh, in This Week in Self-Hosted (5 January 2024)

Unrelatedly, I actually just installed Homarr today; I was getting a little annoyed at memorizing the ports for services running on my Synology…

DeltaTangoLima, (edited) in worth selfhosting immich or similar? what about backups?

I have a 2N+C backup strategy. I have two NASes, and I use rclone to back up my data from one NAS to the other, and then back up (with encryption) my data to Amazon S3. I have a policy on that bucket in S3 that shoves all files into Glacier Deep Archive at day 0, so I pay the cheapest rate possible.
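The jobs themselves are nothing fancy; roughly this shape (remote names and paths are placeholders, with "s3-crypt:" assumed to be an rclone crypt remote layered over the S3 remote):

```bash
# Rough sketch of the off-site job: encrypted copy to S3 via an rclone crypt
# remote. The bucket's lifecycle rule then moves objects to Glacier Deep Archive.
rclone sync /volume1/photos s3-crypt:photos \
  --transfers 8 --checksum --log-file /var/log/rclone-photos.log
```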

For example, I’m storing just shy of 400GB of personal photos and videos in one particular bucket, and that’s costing me about $0.77USD per month. Pennies.

Yes, it’ll cost me a lot more to pull it out and, yes, it’ll take a day or two to get it back. But it’s an insurance policy I can rely on and a (future) price I’m willing to pay should the dire day (lost both NASes, or worse) ever arrive when I need it.

Why Amazon S3? I’m in Australia, and that means local access is important to me. We’re pretty far from most other places around the world. It means I can target my nearest AWS region with my rclone jobs and there’s less latency. Backblaze is a great alternative, but I’m not in the US or Europe. Admittedly, I haven’t tested this theory, but I’m willing to bet that in-country speeds are still a lot quicker than any CDN that might help get me into B2.

Also, something others haven’t yet mentioned: per Immich’s guidance on their repo (disclaimer right at the top), do NOT rely on Immich as your sole backup. Immich is under very active development, and breaking changes are a real possibility all the time right now.

So, I use Syncthing to also back up all my photos and videos to my NAS, and that’s also backed up to the other NAS and S3. That’s why I have nearly 400GB of photos and videos: it’s effectively double my actual library size. But, again, at less than a buck a month to store all that, I don’t really mind double-handling all that data, for the peace of mind I get.

lemmyvore, in PiKVM Build and Deploy

So this board allows you to remotely control the PC you put it in?

Is there a reverse project that allows a PC to act as a PiKVM for another PC or laptop, so they can be controlled remotely?

Prizephitah,

Yes.

Not aware of any such project. I’d assume you’ll need some hardware anyway, as you need it for the level of access (ATX etc.). Not sure how that would be preferable to this.

lemmyvore,

I was thinking more about the basics, like USB input and getting the image + sound. For that you could get away with a special USB cable and a capture card. I’m just not aware of any software for it; I don’t think the original PiKVM stuff was ever ported to PC.

Prizephitah,

PiKVM is based on Arch for ARM.
