selfhosted


root, in NAS/Media Server Build Recommendations
@root@lemmy.world avatar

I have a beefed-up Intel NUC running Proxmox (with my self-hosted services in its VMs) and a standalone NAS that I mount on the necessary VMs via fstab.

I really like this approach, as it decouples my storage and compute servers.
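For anyone wondering what such a mount looks like in practice, here is a minimal sketch assuming an NFS export (hostname, export path, and mount point are placeholders; a CIFS mount would look similar):

```
# /etc/fstab entry inside the VM: mount the NAS export at boot
nas.lan:/volume1/media  /mnt/media  nfs  defaults,_netdev,noatime  0 0
```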

Tinnitus,

Based on some of the other comments, it sounds like this might be the way to go. What NAS are you working with?

root,
@root@lemmy.world avatar

I was using a WD PR4100, but I upgraded to a Synology RS1221+ and it’s been fantastic :)

Lem453, (edited ) in This Week in Self-Hosted (29 December 2023)

github.com/photown/private-pdf

Self hosted PDF editor sounds great!

I wish it had the ability to add or remove a password from a document. Other than that it looks perfect.

AnExerciseInFalling,

The PDF multitool I’ve been using is Stirling-pdf, which has support for adding/removing passwords

Funny enough, I also learned about this tool from a previous edition of this newsletter haha

Lem453,

Wow, this looks great!

Any idea if a self-hosted app like this can be set as the default PDF viewer for a browser? Firefox and Chrome both have built-in PDF viewers that open when you click a PDF; having it open in this instead would be amazing.

AnExerciseInFalling,

I have no idea, but that would be pretty cool

zerodawn, in NAS/Media Server Build Recommendations

To play off what others are saying, I think a mini PC and a standalone NAS may be the better route for you. It may seem counterintuitive to break it out into two devices, but doing so will allow room for growth. If you buy a cheaper bare-bones mini PC and put more of your budget towards a NAS and storage, you can later expand the mini PC without messing with your NAS. You could keep the Pi in the mix as a backup if your main PC is down, or offload some services to it to balance performance.

Tinnitus,

You know, I’m not sure why this didn’t cross my mind as I started doing research. I have seen this recommendation countless times around here and people seem to have great experiences going the mini pc route. Thanks for your insight. Do you have any specific mini pc or NAS in mind that you would recommend?

zerodawn,

Most of that will be budget-based and long-term-goal oriented. Do you want a 4-bay NAS with 10 TB drives set up in RAID 5, or would a two-bay system with 5 TB drives in a mirror be enough? Do you want to start cheap and get a secondhand ThinkCentre off eBay, or buy a brand-new NUC and put in a 2 TB M.2 drive and 16 GB of RAM in one slot so you can add another 16 GB later? Some NUCs can take up to 64 GB of RAM and hold two 2 TB drives.

Tinnitus,

I was originally thinking at least 4 drives (4 if I went the Synology/other off-the-shelf option, or more if I went the DIY route). I'm not opposed to a secondhand computer, especially if the price and performance are good. It seems like a brand-new NUC can get fairly expensive.

BakedCatboy,

Just want to second this. I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin; it can transcode at least 8 streams simultaneously without breaking a sweat, probably more if you don't have 4K, alongside a separate Synology NAS that mainly handles storage. I run Docker containers on both, and the NUC has my media mounted over a network share via a dedicated, direct gigabit Ethernet link between the two, so all the filesystem access traffic stays off my switch/LAN.

The idea was to be able to pick the best NAS for my redundancy needs (RAIDZ2 / btrfs with double redundancy for my irreplaceable personal family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. I chose on-the-fly transcoding over pre-transcoding or keeping multiple qualities, to save HDD space and stay flexible for anyone I share with who has a slow connection.

Tinnitus,

What synology model did you go with? Do you host any other services with that type of setup?

BakedCatboy, (edited )

I went with the DS1621xs+, the main driving factors being:

  • that I already had a 6 drive raidz2 array in truenas and wanted to keep the same configuration
  • I also wanted ECC, which, while maybe not necessary, felt worth it: the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.

If I remember correctly, only the 1621xs+ met those requirements, though if I had been willing to go without ECC (which requires a Xeon), the DS620slim would have given me 6 bays and integrated graphics with Quick Sync, allowing power-efficient transcoding and thus running Plex/JF right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.

If you work out what level of redundancy you want and how many drives you want to run (considering how much the drives will cost, whether you want an extra level of redundancy while a rebuild is happening after one failure, and how much space is sacrificed to parity), that's a good way to narrow down off-the-shelf NASes if you go that route. Newegg's NAS builder comes in handy: select "All" capacities, use the NAS filters by number of drive bays, and compare what's left.

And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports Docker and Docker Compose out of the box (once the container app is installed), so I just SSH into the box and keep my compose folders somewhere on the btrfs volume. Docker nicely allows anything to run without worrying about dependencies on the host OS. The only gotcha is kernel stuff, since containers share the host kernel: WireGuard, for example, relies on kernel support, and I could only get it working with a userspace WireGuard container (using boringtun), and only after the VPN/Tailscale app was installed (presumably because that adds the tun/tap interfaces VPN containers need).
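As a rough illustration of that setup (not the actual files from this post), a hypothetical compose folder for the userspace WireGuard container might look like this; the image name is a placeholder for any boringtun-based userspace image:

```yaml
# /volume1/docker/compose/wireguard/docker-compose.yml -- hypothetical sketch
services:
  wireguard:
    image: example/wireguard-userspace:latest  # placeholder: any boringtun-based userspace image
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun              # tun device provided by the host (e.g. after the VPN app is installed)
    volumes:
      - ./config:/etc/wireguard                # peer/interface configs kept on the btrfs volume
    restart: unless-stopped
```

Started the usual way with docker compose up -d from that folder.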

Only jellyfin/Plex is on my NUC. On the nas I run:

  • Adguard
  • Sonarr/radarr/lidarr/prowlarr/transmission/overseerr
  • Castblock
  • Grocy
  • Nextcloud
  • A few nginx instances for websites
  • Uptime-kuma
  • Vaultwarden
  • Traefik and WireGuard, which connect to a VPS acting as a reverse proxy for anything that needs to be reachable from the public internet

juli, in This Week in Self-Hosted (29 December 2023)

Any experience with Endurain?

github.com/joaovitoriasilva/endurain

It states that it's Strava-like. The difference between Strava and Nextcloud, FitoTrack, or OSMDashboard is that Strava is a social running app. It's like Mastodon where people only post about their tracks.

But I haven't seen anything about social features on the website?

spez_,

I love projects which never include a screenshot. Skipped

juli,

Relax. It’s new. You can’t have everything at the beginning

carcus, (edited ) in NAS/Media Server Build Recommendations

You may want to consider a mini PC. That was my upgrade after torturing my Raspberry Pi for many years. I landed here after agonizing over building the perfect NAS media server. Still very low on power consumption, but the compute power per dollar is great these days, and all of this in only a slightly larger form factor than the Pi. I brought over the drives from the Pi setup and was up and running for a very low cost. The workload transferred from the Pi (Plex, NAS, backups, many microservices/containers) leaves my system extremely bored where the Pi would be begging for mercy.

I don't do a lot of transcoding, so I'm no expert here, but looking at the documentation I believe you would want a PassMark score of about 2000 per 1080p transcode, so 8000+ for your 4+ streams, not including overhead for other processes.

Tinnitus,

Thanks for the great info! What mini PC did you end up going with? I’ve heard Beelink and a few others thrown around here and there, and most seem to be impressed with what they can do. Do you mind elaborating some on how you handle your drives with this type of setup? Do you just have some sort of NAS connected directly to the pc?

carcus, (edited )

No worries. I got a Beelink S12, non-Pro model, with 8 GB RAM and a 256 GB SSD. It was on sale for about $150 USD. It fits my use case, but maybe not yours, although you might be surprised. Perhaps those extra Plex share users won't be concurrently transcoding?

The drives are all USB, the portable type that needs no external power. Like you, I don't need much. I have ~12 TB across 3 drives, with a small hub that could provide more ports in a pinch. I believe this model also has a SATA slot for a 2.5" drive, but I haven't used it. All of these drives were previously connected to an RPi 3B+, haha!

The drive shares are done via Samba and also Syncthing. I have no need for a unified share via mergerfs, but I did take a look at this site for some ideas. I'm the type that rolls all their own services rather than using a NAS-based distro. Everything is in an Ansible playbook that pushes out my configs and containers.
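A minimal Samba share definition of the kind described here might look like the following (paths and user are placeholders, not the poster's actual config):

```
# /etc/samba/smb.conf (excerpt): expose one of the USB drives on the LAN
[media]
   path = /mnt/usb1/media
   browseable = yes
   read only = no
   valid users = mediauser
```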

Edit: I should make it clear the NAS role is just for other systems to access the drives. The drives are directly connected via USB. All my services are contained in this single host (media/backup/microservices/etc). My Pis are now clustered as a k3s lab for non-critical exploration.

I’m a bit of a minimalist who designs for my current use with a little room to grow. I don’t find much value in “future proofing” as I’ve never had much success in accomplishing that.

Tinnitus,

I’ll probably start out with just letting my parents access Plex to see how it performs. They would be remotely streaming off an Apple TV, so I’m not entirely sure how much, if any, transcoding will be needed. My other issue is that transcoding is uncharted territory for me, so I should probably work on getting a better understanding of how/when it might come into play in my situation.

Everything else you described sounds like it would fulfill what I’m looking for. I don’t plan on solely hosting “mission critical” aspects of my life on this (at least for now while I continue to learn and possibly break things), but it would help me take the training wheels off my bike.

carcus,

Happy to help. As I have it configured, my local network is set to prefer direct play, so transcoding only happens for connections that cross the boundary of my network. If you don't live with your parents, this would likely apply to them.

Transcoding may also occur when you have subtitled content, and I believe for certain audio formats, though there the transcoding would be limited to the audio track.

Thermal_shocked, in Is this Seagate Exos drive too good to be true?

Hell of a deal. I started using refurb drives (still with a 5-year warranty) because I was going through so many. Sometimes you get them at half off.

Blaze, in This Week in Self-Hosted (29 December 2023)

Nice initiative!

thejevans, in NAS/Media Server Build Recommendations
@thejevans@lemmy.ml avatar

If you live near Washington, DC, I’ve got a good system ready to go that I’m selling.

Chuckleberry_Finn,

I’m near DC and looking for a new system to replace my Synology NAS, what do you have?

thejevans,
@thejevans@lemmy.ml avatar

I put it all under a spoiler tag because it’s a lot. Let me know if you’re interested!

Inventory/specs

UPS: Eaton 5SC 1000 full sine-wave inverter

Rack: 13U enclosed rack w/ casters and magnetic front door

Networking:
  • TP-Link EAP225 WiFi AP
  • Aruba Networks S2500-24P-US switch - 24-port gigabit switch - PoE - 4x SFP+ 10Gbit ports

Servers:

Dell R720xd - Components:
  • 2x Intel Xeon E5-2667 v2 @ 3.3GHz (8-core CPUs)
  • 14x 8GB Samsung ECC 2Rx4 Dual Rank DDR3 10600R 1333MHz RAM (112 GB)
  • Intel 4P X520 NIC (2x SFP+ 10Gbit, 2x 1Gbit)
  • 2x 750W PSU
  • IDSDM 6YFN5 dual SD module
  • 2x SanDisk 16GB UHS-1 Extreme SDHC SD cards
  • PERC H710P Mini Host Bus Adapter
  • PERC H310 Host Bus Adapter
  • Dual 2.5" hotswap drive backplane 0JDG3
  • 2x Crucial MX500 500GB SSD
  • 2x 2.5" Dell hotswap drive caddies/trays
  • Front hotswap HDD backplane
  • 12x HGST Ultrastar KP06 6TB 7200RPM HDDs
  • 12x 3.5" Dell hotswap drive caddies/trays
  • Rack rails (they hold the server in place, but they're missing some bearings; if the server is pulled out on the rails it may not go back in. Replacing these should be less than $50)
  • Locking front panel
  • iDRAC module

Dell R720xd - Notes:
  • Runs ~235W at idle
  • Can handle many VMs and multiple simultaneous 4K Plex transcodes
  • This is basically the best set of parts for the xx20-series Dell servers and is more capable than a lot of the xx30 units

Dell R710 - Components:
  • Rack rails
  • Locking front panel
  • CD drive to 2.5" drive adapter
  • Samsung 860 Evo 250GB SSD
  • Front hotswap HDD backplane
  • 6x 3.5" Dell hotswap drive caddies/trays
  • 2x 870W PSU
  • PERC H216 Host Bus Adapter
  • 2x Intel Xeon L5640 @ 2.26GHz (6-core CPUs)
  • 18x 4GB Samsung ECC 2Rx8 Dual Rank DDR3 10600R 1333MHz RAM (72GB)
  • Dell 0KJYDB 2x SFP+ 10Gbit NIC

Dell R710 - Notes:
  • Set up to be an ideal backup server
  • Just add hard drives and it will be ready to go
  • iDRAC modules are available on eBay if you would like out-of-band management

stown, (edited ) in Help needed setting up NGINX reverse Proxy / HA / Vaultwarden using Duckdns
@stown@sedd.it avatar

Are you absolutely sure that NPM has an IP from the subnet 172.22.0.0/24? Is there any way you can remove the trusted_proxies setting from homeassistant and then check if it will accept the connection from NPM?

stown,
@stown@sedd.it avatar

I did some reading and found that the trusted_proxies setting is required. Can you try setting it to 0.0.0.0/0?
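For reference, the relevant Home Assistant configuration.yaml section typically looks something like this; the subnet is the one discussed in this thread, and 0.0.0.0/0 should only be used temporarily for testing:

```yaml
# configuration.yaml: tell Home Assistant which reverse proxy addresses to trust
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.22.0.0/24   # the Docker network NPM is expected to be on
```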

Lobotomie, (edited )

I have set it, but it won't change anything. You can see the docker inspect output here: pastebin.com/t1T98RCw. I can imagine that this problem occurs before Home Assistant, as even if I ignore the certificate error it will not forward me to Home Assistant but to my router / a warning page from my router saying it has blocked me.

If I test the server reachability inside Nginx Proxy Manager, it asks me whether NPM is configured correctly, so you might be onto something with the NPM configuration…

I have now set up DuckDNS via Docker instead of via my router, but it hasn't helped. My DuckDNS IP is the same (and it's correct; if I just open that IPv4 address it redirects to my nginx landing page).

Okay, I think here is the error. After doing the Test Server Reachability check, the following shows up in the nginx-db logs: 2023-12-29 21:06:25 3 [Warning] Aborted connection 3 to db: ‘npm’ user: ‘npm’ host: ‘172.22.0.8’ (Got an error reading communication packets)

Now I have no clue why this is (I think this is the end for today, as my head is about to explode). docker inspect on the nginx container reveals that this request definitely came from nginx (as it has the .0.8 IP).

stown, (edited ) in NAS/Media Server Build Recommendations
@stown@sedd.it avatar

As far as motherboards go, you would probably be fine with any consumer desktop board, but you should look for something with dual NICs. If you want something a bit more robust, ASRock Rack has some really great options. I've been using the X470D4U for about 4 years now without any issues.

qjkxbmwvz, in NAS/Media Server Build Recommendations

How much do you care about power/energy usage?

Also, how important is having one do-it-all server vs. a few separate servers? Sounds like you’re ok with at least two servers (Pi turns into HA OS, and you get a new one for everything else).

Tinnitus,

I wouldn’t say energy usage/efficiency is super high on my list, but I am also not opposed to being somewhat conscious about that. Basically, a little bit extra on my electric bill won’t kill me.

Separate servers is also something I would be fine with. The Pi has been great, and I figured I could keep utilizing it the way I have been with some other services. It is currently running some form of Ubuntu server (can’t remember off the top of my head), and everything is containerized.

qjkxbmwvz,

Cool! I just got an Orange Pi 5 Plus, 16GB RAM**, but haven't set it up yet so I can't give any recommendations. On paper, though, it looks great: significantly beefier than an RPi 4 (my current server), and it supports M.2 NVMe as well. Might be worth looking into for your use too, but the emphasis here is kinda on computing with a very low power budget, so I'm sure you could get more horsepower with e.g. an x64 NUC or similar.

Here’s a review, and note that this is without extra heatsink so it was probably thermally throttling (as was the RPi?): www.phoronix.com/review/orange-pi-5

**I first ordered the 32GB version but it got seized for counterfeit postage, and then some shenanigans ensued. If buying from Amazon I would suggest only buying units in stock and shipped from Amazon. May only apply to US though…

stown, in NAS/Media Server Build Recommendations
@stown@sedd.it avatar

For your CPU I recommend the Ryzen 5700G. It's powerful enough for everything you want to do, the TDP is only 65 watts so it's not going to destroy your power bill, it has a decent integrated GPU, and it costs only about $200. Another positive is that it uses DDR4, so you can load up on that pretty cheaply too.

linearchaos, in How safe is self-hosting a public website behind Cloudflare?
@linearchaos@lemmy.world avatar

The first worry is vectors around the Synology, its firmware, and its network stack. Those devices are very closely scrutinized; historically, many different vulnerabilities have been found and patched. Something like the log4j vulnerabilities back in the day, where an attacker just has to hit the logging system to hit you, could open a hole in any of the other standard software packages there. And because the platform is so well known, once one vulnerability is found, attackers already know what else exists by default and have plans for ways to attack it.

Vulnerabilities that COULD affect you in this case are few and far between, but few and far between is how these things happen.

The next concern is someone slipping you a mickey in a container image. By and large it's a bunch of good people maintaining the container images, and they're including packages from other good people. But this also means there are a hell of a lot of cooks in the kitchen, in the distribution, and upstream.

To be perfectly honest, with everything on auto-update, Cloudflare's built-in protections for DDoS and attacks, and the nature of what you're trying to host, you're probably safe enough. There's no three-letter government agency or elite hacker group specifically after you. You're far more likely to accidentally trip on a zero-day email image filter / PDF vulnerability and get botnetted than someone is to successfully attack your Argo tunnel.

That said, it's always better to host in someone else's backyard than your own. If I were really, really set on hosting in my house on my network, I'd probably stand up a dedicated box, maybe something as small as a Pi Zero. I'd make sure I had a really decent router/firewall and slip that hosting device into an isolated network that's not allowed to reach anything else on my network.

Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated. No port forwards (you already have tunnels for that), don't use it for DNS, don't use it for DHCP, and don't allow your network users or devices to see ARP traffic from it.

Firewall drops everything between your home network and that box except SSH in, or maybe VNC in depending on your level of comfort.

Gooey0210,

Can I ask you to elaborate on this part?

Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated. No port forwards (you already have tunnels for that), don't use it for DNS, don't use it for DHCP, and don't allow your network users or devices to see ARP traffic from it.

I used to have a separate box, but the only thing it did was port forwarding

Specifically, I don't really understand the topology of this setup, or how to set it up.

chiisana,

Cloudflare Tunnel is a thin client that runs on your machine and connects out to Cloudflare; when a request comes in from outside to Cloudflare, Cloudflare relays it to your machine via the established tunnel. As such, your machine only needs outbound internet access (to Cloudflare servers) and no inbound access (i.e. no port forwarding).
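For anyone unfamiliar, the client side is typically just the cloudflared daemon plus a small config; a rough sketch (tunnel ID, hostname, and local port are placeholders):

```yaml
# ~/.cloudflared/config.yml -- minimal tunnel configuration sketch
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com        # public hostname managed by Cloudflare
    service: http://localhost:8080   # local service the tunnel forwards to
  - service: http_status:404         # catch-all for anything else
```

The tunnel is then started with something like cloudflared tunnel run <name>; only the outbound connection from cloudflared to Cloudflare is needed.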

Gooey0210,

Thank you for your reply, but I was actually asking about the network stuff 😅

I used Cloudflare tunnels for many years; now I'm a bit too tinfoil-hatted to use anything Cloudflare 😅

chiisana,

Ah sorry I went down the wrong rabbit hole.

I'd imagine an isolated VLAN should be a sufficiently good starting point to prevent anyone from stumbling onto it locally, as well as any potential external intruder from stumbling out of it?

linearchaos,
@linearchaos@lemmy.world avatar

You need to have a rather capable router / firewall combo.

You could pick up a Ubiquiti USG, or set something up with an ISP router and a pfSense firewall.

You need to have separate networks in your house, and the ability to set firewall rules between them.

The network that contains the hosting box needs absolutely no access to anything else in your house except its route out to the internet. Don't have it go to your router for DHCP; set it up statically. Don't have it go to your router for DNS; choose an external source.

The firewall rules for that network are: allow outbound internet with return traffic, allow SSH and maybe VNC in from your home network, then deny all.

The idea is that you assume the box is capable of getting infected. So you just make sure that the box can live safely in your network even if it is compromised.
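As a sketch of those rules in iptables terms (interface names dmz0/lan0/wan0 are placeholders; an actual USG or pfSense box expresses the same thing through its own rule UI):

```sh
# Allow return traffic for connections that are already established
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# The isolated hosting network may reach the internet
iptables -A FORWARD -i dmz0 -o wan0 -j ACCEPT
# The home LAN may SSH into the hosting box
iptables -A FORWARD -i lan0 -o dmz0 -p tcp --dport 22 -j ACCEPT
# Everything else between the two networks is dropped
iptables -A FORWARD -i dmz0 -o lan0 -j DROP
iptables -A FORWARD -i lan0 -o dmz0 -j DROP
```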

Gooey0210,

(I just noticed I replied to another comment of yours, but it's still to you 😬)

Now I'm a little bit confused: what does it do then?

If the box doesn’t have access to anything on the network, how would it do anything?

linearchaos,
@linearchaos@lemmy.world avatar

The box you’re hosting on only needs internet access to connect the tunnel. Cloudflare terminates that SSL connection right in a piece of software on your web server.

Gooey0210,

I mean, what does it host if the only thing it has access to is the internet?

TedZanzibar,

Are you my brain? This is exactly the sort of thing I think about when I say I'm paranoid about self-hosting! Alas, as much as I'd like to add an extra box just for that level of isolation, it'd probably take more of a time commitment than I have available to get it properly set up.

The attraction of docker containers, of course, is that they’re largely ready to go with sensible default settings out of the box, and maintenance is taken care of by somebody else.

linearchaos,
@linearchaos@lemmy.world avatar

Oh yeah, I totally get the allure of containers. I use them myself just not in production.

To be fair, Python and Node both suffer from the same kinds of worries, and stuff gets slipped into those repos not infrequently.

roofuskit, in Migrated my self-hosted Nextcloud to AIO and I absolutely love it
@roofuskit@lemmy.world avatar

This is all in one container? That is the exact wrong way to use docker.

vortexsurfer,

No, you give the AIO container access to your Docker daemon and it creates/handles/supervises all the other containers Nextcloud needs.
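That Docker-daemon access is just a read-only bind mount of the Docker socket when the master container is started; an abbreviated sketch is below (check the nextcloud/all-in-one README for the full, current command):

```sh
# Abbreviated sketch of starting the AIO master container; the docker.sock mount is
# what lets it create and supervise the other Nextcloud containers.
docker run -d \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  -p 8080:8080 \
  -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```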

genie,

Love me some docker compose! I switched from a manually built VM over to the AIO setup about a year ago and never looked back. It’s been rock solid for me and my ~10 users so far.

haplo,

I appreciate the simplicity, but giving such broad permissions makes me uneasy, and it's the main reason I'm putting off moving to Nextcloud AIO. Am I the only one who thinks like this?

hempster,

It's OK if you have a dedicated VM just for Nextcloud.

synae,
@synae@lemmy.sdf.org avatar

Damn, why not use k8s at that point

ikidd, (edited )
@ikidd@lemmy.world avatar

It containerizes all the subcomponents under a master container, and even has support for community containers for things like Pi-hole, Caddy, and DLNA. So you have image control over each component, as well as codespace separation.

After 7 or 8 years of various forms of Nextcloud, I have to say this is the easiest one to maintain, upgrade and backup outside of my VM snapshots.

roofuskit,
@roofuskit@lemmy.world avatar

So it’s sub containers?

ikidd,
@ikidd@lemmy.world avatar

Not really, it just makes containers in your Docker, accessible like any others. The master container can be used to control and update them, but you can just exec into them (docker exec -it) like any other containers you find in your docker ps.
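In other words, something like this works on the host (the container name shown is an example of the AIO naming scheme):

```sh
docker ps --format '{{.Names}}'                 # lists the nextcloud-aio-* containers alongside everything else
docker exec -it nextcloud-aio-nextcloud bash    # drop into one of the sub-containers
```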

cybersandwich, in Migrated my self-hosted Nextcloud to AIO and I absolutely love it

I could never get the AIO setup to work well for some reason. It was also a couple versions behind it seemed.

I…uh…know it's not popular on the fed, but I use the Nextcloud snap package and it's been rock solid. It's always up-to-date and it has a backup/export feature too.

manos_de_papel,

People talk a lot of smack about snap, but I installed the Nextcloud snap 5 years ago to check out Nextcloud and see if I liked it. I did, and the snap was so easy that it stuck around for 5 years. I didn't do anything except update the underlying OS. It is really well maintained.

I just migrated off of it to get a little more flexibility, but I have nothing but good things to say about it.

cybersandwich,

Any tips or tricks for your migration? I don’t have any plans in the near future but I never found a super clear path to migrate off.

That's the only downside I have with the snap at the moment.

manos_de_papel,

I couldn't make things easy for myself when I migrated, because I wanted to use Postgres (while the snap uses MySQL/MariaDB) and I wanted S3 storage instead of the filesystem.

In the end I just pulled down all the user files and exported the calendars and contacts manually, then imported them on the new instance.

There are some blog posts on migrating db types, but my install is very minimal and I just didn’t want the headache.

If you don't want to change the database type, you can just dump the DB from the snap, back up the user file directory, then restore it into the new database and rsync up all the files.
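Roughly, that path looks like the sketch below; the commands and paths follow the snap's documented layout, but verify them against your own install before relying on this:

```sh
# On the snap host: dump the database and copy the data directory
sudo nextcloud.mysqldump > nextcloud-backup.sql
sudo rsync -a /var/snap/nextcloud/common/nextcloud/data/ user@newhost:/srv/nextcloud/data/
# On the new host: restore nextcloud-backup.sql into MySQL/MariaDB and point the new
# instance's config.php at the synced data directory
```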

wer2,

I feel like that is what snaps are for: long-running server applications.
