I’ve just finally and fully spun down a proxmox server I’ve been running and updating as my home lab for six years.
Every major update seemed to break something. Upgrades were always a roll of the dice as to whether it would even boot. It’s probably at least partially my fault for using an old R710 and running docker directly on the OS instead of within a container, but it was still by far my least reliable piece of kit.
The last apt update removed sudo, and I can’t be arsed to rebuild, so I’ve moved the critical bits to a fleet of SBCs. Powering that fucker down was a huge relief.
The new Linuxserver.io Docker image has at least solved Nextcloud’s annoying update cycle, so I no longer have to go through that dance every few months. I haven’t ever had it die, but I don’t push it hard and I keep the plugins to a minimum, because I just don’t trust it and it doesn’t run all that well.
For years, I had an unstable unraid server. I was fixing it every couple of days after a lockup. I had decided that unraid sucked. When it was up for a week I celebrated. Every one of my dockers was a suspect. I learned to hate all of them.
Never had a single functional problem with Nextcloud, other than the fact that it’s oppressively slow with the number of files I’ve shoved into it. Mind you, I also don’t use MySQL/MariaDB, which I consider a garbage-tier DB. Despite Postgres not being the “recommended DB” for Nextcloud, it works perfectly for me. Maybe that’s the difference.
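For anyone who wants to try the same on an existing install, the switch to Postgres can be done with occ. A rough sketch, assuming the official Docker image and made-up container, user, and database names (the linuxserver.io image works too, just with its own paths and user):

```
# Hypothetical names throughout; adjust to your own containers and credentials.
# Convert the existing Nextcloud database to Postgres (prompts for the DB password):
docker exec -u www-data -it nextcloud \
  php occ db:convert-type pgsql nextcloud db nextcloud_db

# Afterwards config.php should report pgsql as the database type:
docker exec -u www-data nextcloud php occ config:system:get dbtype
```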
I use flauncher. It’s ugly and doesn’t have a lot of features but I don’t care. I only have two apps installed, so I’m rarely looking at or using the launcher anyway.
I use Tdarr on my gaming machine and let the higher-end GPU do the work. I also use the TRaSH guide to get the audio profile I want in my downloads. Then in Tdarr I strip away the audio and subtitle languages I don’t want, and use the highest-quality audio source to add a simple 2-channel track so the file is compatible with more devices. That way I’m not needlessly transcoding 5.1 Dolby for people who are just watching on TV speakers.
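For reference, what that boils down to in ffmpeg terms is roughly the following. File names and language choices are placeholders, and it assumes a single surround source track; a real Tdarr flow does the same thing per-file with its own plugins:

```
# Placeholder file names. Keep the original video and surround audio untouched,
# keep only English subtitles, and append a 2-channel AAC downmix so weaker
# clients don't force a server-side transcode of the 5.1 track.
ffmpeg -i input.mkv \
  -map 0:v:0 -map 0:a:0 -map 0:a:0 -map 0:s:m:language:eng? \
  -c:v copy -c:s copy \
  -c:a:0 copy \
  -c:a:1 aac -ac:a:1 2 \
  output.mkv
```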
Yep, this is a good option for reducing file size at the expense of compatibility and CPU time. Every time OP downloads a file they’ll then have to re-encode it, which can take significant time depending on the CPU of their NAS box, the file size, etc. It’s also worth noting that re-encodes are lossy, so some amount of quality will be lost (although the difference may be imperceptible).
If disk space is the only variable we’re optimizing for, then you’re 100% correct, but I think it’s worth calling out that this definitely isn’t without tradeoffs.
It might also be worth considering how they’re consuming this media. If the client can’t play back H.265 natively, the file will need to be transcoded again on playback. Many media servers (like Plex) handle this automatically, but it’s definitely worth testing with your setup on a couple of files before doing this to your whole media collection.
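If you do go down this road, test on one file first to see the size/quality/time tradeoff for yourself. A typical software H.265 re-encode with ffmpeg looks something like this; the CRF and preset are just starting points, not recommendations:

```
# Placeholder file names. Lower CRF = better quality and bigger files; slower
# presets = smaller files but much longer encode times on a weak NAS CPU.
ffmpeg -i input.mkv \
  -map 0 -c copy \
  -c:v libx265 -crf 24 -preset medium \
  output.mkv
```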
Cloudflare tunnels are layer 7, so it’s not unlimited access by any means. This also means certain things will break, by the way: for example, if your website uses WebSockets to load information, that isn’t supported.
Next, I’d put the computer that is going to be hosting into an isolated VLAN of its own and access it via the external URL only.
If you’re going to use docker images, make sure to vet that they’re updated often and always spin up the latest.
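On the “always spin up the latest” part, the boring-but-effective version is just a periodic pull-and-recreate. A rough sketch, with a made-up compose project path:

```
# Hypothetical compose project path. Pull newer image tags, recreate only the
# containers whose images changed, then drop the old image layers.
cd /opt/stacks/myapp
docker compose pull
docker compose up -d
docker image prune -f
```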
That document doesn’t say what layer. But it does say it supports WebSockets.
It’s just odd that when I try to set it up using a named tunnel, I don’t get an option to specify a WS service type. However, it does require a service type if you want to connect to it.
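For what it’s worth, when I looked at named tunnels I didn’t find a separate WS type either; as far as I can tell, WebSocket upgrades are just proxied over whatever HTTP service you point the tunnel at. The CLI flow looks roughly like this (tunnel name, hostname, and port are placeholders):

```
# Placeholder tunnel name, hostname, and port. Create the tunnel, route a
# hostname to it, and point it at a local HTTP service; WebSocket upgrade
# requests should ride over that same HTTP ingress (it can also be set in
# the config.yml ingress rules instead of --url).
cloudflared tunnel login
cloudflared tunnel create myapp
cloudflared tunnel route dns myapp app.example.com
cloudflared tunnel run --url http://localhost:3000 myapp
```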
Looking at this page, it would seem that it’s layer 7. I could be wrong, but my front-end app has issues finding my backend service for WebSockets.
Granted, I even tried to connect to my private computer using other protocols and couldn’t get through. Anyway, I’m most likely going to be taking that project offline soon.
No, but I thought I clarified that when I said it’s basically a WireGuard VPN, which operates over TCP/UDP (layer 3). Layer 7 is stuff like HTTPS; CF tunnels are lower level.
The page you linked is missing the layer between CF and the source server, so it doesn’t indicate the layer. You can look up the WireGuard protocol if you want more details.
To play off what others are saying, I think a mini PC and a standalone NAS may be the better route for you. It may seem counterintuitive to break it out into two devices, but doing so will allow room for growth. If you buy a cheaper bare-bones mini PC and put more of your budget towards the NAS and storage, you can expand the mini PC later without messing with your NAS. You could keep the Pi in the mix as a backup if your main PC is down, or offload some services to it to balance performance.
You know, I’m not sure why this didn’t cross my mind as I started doing research. I have seen this recommendation countless times around here and people seem to have great experiences going the mini pc route. Thanks for your insight. Do you have any specific mini pc or NAS in mind that you would recommend?
Most of that will be budget-based and long-term-goal oriented. Do you want a 4-bay NAS with 10TB drives set up in RAID 5, or do you think you’d want a two-bay system with 5TB drives set up as a mirror? Do you want to start cheap and get a secondhand ThinkCentre off eBay, or do you want to buy a brand-new NUC and put in a 2TB M.2 drive and 16GB of RAM in one slot so you can add the other 16GB later? Some NUCs can take up to 64GB of RAM and hold two 2TB drives.
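To put rough numbers on those two examples (drive counts and sizes are just the ones above):

```
# Example-only numbers.
# 4 x 10TB in RAID 5: one drive's worth of parity -> (4-1) * 10TB = 30TB usable,
#   survives a single drive failure.
# 2 x 5TB mirrored:   5TB usable, also survives a single drive failure,
#   but rebuilds are simpler and faster.
echo "RAID 5 usable: $(( (4 - 1) * 10 ))TB; mirror usable: 5TB"
```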
I was originally thinking at least 4 drives (4 if I went the Synology / other off-the-shelf option, or more if I went the DIY route). Not opposed to a secondhand computer, especially if the price and performance are good. It seems like a brand-new NUC can get fairly expensive.
Just want to second this - I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin; it can transcode at least 8 streams simultaneously without breaking a sweat, probably more if you don’t have 4K, and a separate Synology NAS that mainly handles storage. I run Docker containers on both, and the NUC has my media mounted over a network share via a dedicated direct gigabit Ethernet link connecting the two, so I can keep all the filesystem traffic off my switch/LAN.
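In case it helps anyone copying this layout: the share over the direct link is just an ordinary NFS (or SMB) mount. Something like the following, with made-up addresses and export paths for the point-to-point link:

```
# Made-up addresses and paths: 10.0.0.2 is the NAS end of the direct gigabit
# link, the NUC is on the other end. Mount the media export once:
sudo mkdir -p /mnt/media
sudo mount -t nfs 10.0.0.2:/volume1/media /mnt/media

# Or persist it in /etc/fstab so containers see it again after a reboot:
# 10.0.0.2:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
```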
This strategy was about being able to pick the best NAS based on my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. Transcoding on demand is the strategy I chose over pre-transcoding or keeping multiple qualities, to save HDD space and stay flexible for whoever I share with who has a slow connection.
I went with the DS1621xs+, the main driving factors being:
that I already had a 6-drive raidz2 array in TrueNAS and wanted to keep the same configuration
I also wanted ECC, which, while maybe not necessary, matters to me because the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.
If I remember correctly, only the 1621xs+ met those requirements. If I had been willing to go without ECC (which requires going with a Xeon), the DS620slim would have given me 6 bays and integrated graphics with Quick Sync, which would have allowed me to do power-efficient transcoding and thus run Plex/JF right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.
If you know what level of redundancy you want and how many drives you want to run (considering how much the drives will cost, whether you want an extra level of redundancy while a rebuild is happening after one failure, and how much space is sacrificed to parity), then that’s a good way to narrow down off-the-shelf NASes if you go that route. Newegg’s NAS builder comes in handy: just select “All” capacities and then filter by number of drive bays, and you can compare what’s left.
And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports Docker and docker compose out of the box (once the container app is installed), so I just ssh into the box and keep my compose folders somewhere in the btrfs volume. Docker nicely lets anything run without worrying about dependencies being available on the host OS. The only gotcha is kernel stuff, since Docker containers share the host kernel: WireGuard, for example, relies on kernel support, and I could only get it to work using a userspace WireGuard container (using boringtun), and only after the VPN/Tailscale app was installed (presumably because that adds the tun/tap interfaces VPN containers need).
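To give the workaround a concrete shape: the userspace route means giving the container tun access and NET_ADMIN instead of relying on the host’s kernel module. Roughly like this, where the image name and paths are placeholders for whichever boringtun/wireguard-go based image and config location you use:

```
# Placeholder image name and paths. The important parts are /dev/net/tun and
# NET_ADMIN, which is what the Synology VPN/Tailscale package seems to make
# available on the host.
docker run -d --name wireguard \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  -v /volume1/docker/wireguard:/config \
  -p 51820:51820/udp \
  some/userspace-wireguard-image:latest
```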
Only Jellyfin/Plex is on my NUC. On the NAS I run:
I’m still using the self-hosted Docker image; the all-in-one is too bloated for me, and my computing resources are quite limited. Why would I want an antivirus? Or a backup solution different from the one I use to back up the rest of my containers?
Cool initiative for other kinds of users, though.
Just to make sure: are you copying to your ZFS pool directory or to a dataset? Check to make sure your paths are correct.
Push vs pull shouldn’t matter but I’ve always done push.
If your zpool is not accessible anymore after a transfer then there is a low-level problem here as it shouldn’t just disappear.
I would install tmux on your ZFS system and have a window with htop, dmesg, and zpool status running to check your system while you copy files. Something that severe should become self-evident pretty quickly.
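Concretely, something like this in a couple of tmux panes while the copy runs (pool and dataset names are placeholders):

```
# Placeholder pool/dataset names. First confirm you're writing into an actual
# dataset, not just a directory on the pool's root filesystem:
zfs list -r tank
zfs list -o name,mountpoint,used tank/backups

# While the transfer runs, watch for errors and pool state changes:
dmesg -w                 # kernel/driver errors, e.g. a controller dropping disks
zpool status -v tank     # checksum/IO errors, degraded vdevs
htop                     # memory pressure / IO wait
```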
Are you talking about security for your homelab? It essentially comes down to good key hygiene, network security, and keeping everything updated.
Don’t open ports, use a good firewall at the border of the network, use a seedbox for torrenting. Use ACLs alongside VLANs in your network. Understand DNS in terms of how your requests are forwarded and how they are processed.
What does using a good firewall mean exactly? As I understand it, a port is either open or closed, right? So what does a good firewall do that a bad one doesn’t?
Projects like OpenWRT and OPNsense take care to maintain their code and address security issues in firewall/router software that can be exploited. Perhaps “firewall” might not have been the best way to put it, but companies like TP-Link aren’t really the most scrupulous with their software.
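Whichever of those you end up on, the baseline the earlier comment was pointing at is the same: default-deny inbound at the border, and only scoped allows between VLANs. A rough ufw-style illustration on a single host, with a made-up management subnet:

```
# Made-up subnet. Deny everything inbound by default, allow outbound, then
# open only what you actually need, scoped to the management VLAN.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.10.0/24 to any port 22 proto tcp
sudo ufw enable
```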