It does have quite a bit of overhead, meaning it’s not the fastest out there, but as long as it’s fast enough to serve the media you need, that shouldn’t matter.
Also, you need to either mount it manually on the command line whenever you need it or be comfortable leaving your SSH private key on your media server unencrypted. Since you're already concerned about encrypting file-share access even on the local network, the latter might not be a good option for you.
The good part is that as long as you can SSH from your media server to your NAS, this should just work with no additional setup needed.
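For reference, the manual mount is basically a one-liner with sshfs. A minimal sketch, with made-up host, user, and paths:

```
# Mount the NAS media share over SSH (hostname and paths are examples).
sudo mkdir -p /mnt/nas-media
sshfs user@nas:/srv/media /mnt/nas-media \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# Unmount when you're done.
fusermount -u /mnt/nas-media
```

The reconnect/keepalive options just make the mount survive brief network hiccups; drop them if you prefer the defaults.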
Don’t worry about the UDP ports; they’re only needed on the LAN and only in certain conditions. Basically, Jellyfin uses them to “announce” things to the LAN.
On 7359 it announces to clients where to connect; this can help when first starting a client, letting it find the server automatically instead of you having to enter an IP address or jellyfin.mydomain.com.
On 1900 it advertises itself as a DLNA server. This is only relevant if you have other DLNA-capable devices. DLNA is a neat protocol that allows devices to act as a server, controller, or renderer and to cooperate to cast streams. For example, you can use your phone as a DLNA controller to get media from Jellyfin acting as a DLNA server and cast it to a TV acting as a DLNA renderer. If your TV has DLNA capability, you may want to look at the BubbleUPnP phone app, which can act as a controller, and that’s when enabling 1900 becomes interesting.
Or you can comment out the “ports:” section in your config and say “network_mode: host” instead and all 4 ports will be mapped automatically and work as intended (it’s what I do).
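If it helps to see the two approaches spelled out, here's the rough docker-CLI equivalent (the compose version is the same idea); the image name and volume paths are just examples, with 8096/8920 being Jellyfin's usual HTTP/HTTPS ports alongside the two UDP ones above:

```
# Host networking: Jellyfin binds all four ports on the host directly.
docker run -d --name jellyfin --network host \
    -v /srv/jellyfin/config:/config \
    -v /srv/media:/media:ro \
    jellyfin/jellyfin

# Or keep bridge networking and publish the ports explicitly.
docker run -d --name jellyfin \
    -p 8096:8096 -p 8920:8920 \
    -p 7359:7359/udp -p 1900:1900/udp \
    -v /srv/jellyfin/config:/config \
    -v /srv/media:/media:ro \
    jellyfin/jellyfin
```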
Good to know. I thought there was some issue with those ports and the reverse proxy, because the DLNA function doesn’t seem to be working, but from some googling this seems to be more of a general Docker problem when you’re not using host mode for networking.
I’ll assume you mean what I mean when I say I want to be safe with my self hosting – that is, “safe” but also easily accessible enough that my friends/family don’t balk the first time they try to log in or reset their password. There are all kinds of strategies you can use to protect your data, but I’ll cover the few that I find to be reasonable.
Port Forwarding – as someone mentioned already, port forwarding raw internet traffic to a server is probably a bad idea based on the information given. Especially since it isn’t strictly necessary.
Consumer-Grade Tunnel Services – I’m sure there are others, but Cloudflare Tunnels can be a safer option for exposing a service to the public internet.
Personal VPN (my pick) – if your number of users is small, it may be easiest to set up a private VPN. This has the added benefit of making things like PiHole available to all of your devices wherever you go. Popular options include Tailscale (easiest, but relies on trusting Tailscale) or Wireguard/OpenVPN (bare bones with excellent documentation). I think there are similar options to tailscale through NordVPN (and probably others), where it “magically” handles connecting your devices but then you face a ~5 device limit.
With Wireguard or OpenVPN you may ask: “How do I do that without opening a port? You just said that was a bad idea!” Well, the best way I’ve come up with is to use a VPS (providers include DigitalOcean and Linode, to name a few), where a public IP address is typically included. You still have a public port open, but it’s on the VPS rather than your home network, which is an acceptable risk (in my mind, for my threat model) given it’s a machine you don’t own or particularly care about. You can wipe that VPS any time you want; the only cost is time.
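To make that concrete, the VPS end of such a setup can be as small as the sketch below. Every address, port, and key placeholder is made up, and firewall/IP-forwarding details are left out:

```
# On the VPS (as root): install WireGuard and generate a key pair.
apt install wireguard
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub

# Minimal wg0.conf; one [Peer] per device you want on the VPN.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# your home server
PublicKey = <home server public key>
AllowedIPs = 10.8.0.2/32

[Peer]
# a phone or laptop
PublicKey = <client public key>
AllowedIPs = 10.8.0.3/32
EOF

systemctl enable --now wg-quick@wg0
```

The home server and your other devices then add the VPS as their [Peer] with Endpoint = <vps-ip>:51820 and PersistentKeepalive = 25, so everything dials out to the VPS and nothing needs to be forwarded on your home router.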
It’s all a trade-off. You can go to much further lengths than I’ve described here to be “safer” but this is the threshold that I’ve found to be easy and Good Enough for Me™.
If I were starting over I would start with Tailscale and work up from there. There are many many good options and only you can decide which one is best for your situation!
Port Forwarding – as someone mentioned already, port forwarding raw internet traffic to a server is probably a bad idea based on the information given. Especially since it isn’t strictly necessary.
I don’t mean to take issue with you specifically, but I see this stated in this community a lot.
For newbies I can agree with the sentiment “generally”, but this community seems to have gotten into some weird cargo-cult style thinking about this. “Port forwarding” is not a bad idea, end of discussion. It’s a bad idea to expose a service if you haven’t taken any security precautions, or on a system that is not being maintained. But exposing a WireGuard service on a system which you keep up to date is not inherently a bad thing. Bonus points if the VPN is all it does and it has restricted local accounts.
In fact, of all the services home-gamers talk about running in their homelabs, WireGuard is one of the safest to expose to the internet. It has no “well-known port”, so it’s difficult to scan for. It uses UDP, which is also difficult to scan for. It has great community support, so there will be security patches. It’s very difficult to configure in an insecure way (I can’t even think of how one could). And it requires public/private key auth rather than allowing user-generated passwords. It doesn’t even let you pick insecure encryption algorithms like other VPNs do. It’s a great choice for a home VPN.
You make a great point. I really shouldn’t contribute to the boogeyman-ification of port forwarding.
I certainly agree there is nothing inherently wrong or dangerous with port forwarding in and of itself. It’s like saying a hammer is bad. Not true in the slightest! A newbie swinging it around like there’s no tomorrow might smack their fingers a few times, but that’s no fault of the hammer :)
Port forwarding is a tool, and is great/necessary for many jobs. For my use case I love that Wireguard offers a great alternative that: completes my goal, forces the use of keys, and makes it easy to do so.
Glad you didn’t take my comment as being “aggressive” since it certainly wasn’t meant to be. :-)
Wireguard is a game-changer to me. Any other VPN I’ve tried to set up forces the user to make too many decisions that require a fair amount of knowledge. Just by making good decisions on your behalf and simplifying the configuration, they’ve done a great job of helping to secure the internet. An often overlooked piece of security is that “making it easier to do something the right way is good for security.”
There’s a vocal handful of people here who dislike Cloudflare over privacy concerns that aren’t really relevant in this case: you can absolutely use the registrar without using their CDN features. Also, reality check: with Cloudflare’s market reach, there’s basically zero chance that anything they do online isn’t already passing through Cloudflare (“MITM’ed”) anyway. Having said that, Cloudflare runs their registrar as a loss leader, passing their wholesale price on to end users, so you’ll get the cheapest price available for the domain extensions they support. You can then just set your DNS records without their orange cloud, and traffic on your domain won’t flow through their CDN.
So they profit from high-profile commercial users to subsidize the free tier (proxy, tunnels) and cheap DNS. What’s wrong with that? It’s not like we absolutely need those (the proxy is nice, but you can use a VPS, and tunnels are also offered by ngrok).
That’s rad, and you did an amazing job keeping them whole. Recently I have been wrapping them in cloth, then the kids form clay around them for various fridge and office magnets.
That’s a good idea. Yeah, the trick I discovered in getting them off the mounting bracket without the chrome plating peeling is to grab each end of the bracket with vice grips and/or pliers (after you unscrew it from the drive) and just bend it down and away from the magnet. They usually come off in one piece that way, too.
I’ve done some of that. Recently I’ve been using an old putty knife: I put it right against the crack and just hammer it, which unsticks it enough that I can pull it off. Newer drives definitely have weaker magnets than some of my much older ones.
Cool, I’ll try this next time. So far the least damaging way I’ve tried is putting the thing in hot water. The magnet and the base expand by different amounts and it is relatively easy to pry the magnet off. But the thing cools down quickly so it takes a few tries.
I was doing some blacksmithing in high school, mostly knives.
Around 800°C, steel is no longer magnetic, and that’s also a good temperature to start forging it. So I needed a strong magnet to know when the steel was hot enough, and I used what I had available: a hard drive magnet.
It felt quite “mad-maxy” to disassemble a broken hard drive to use it as a tool for forging knives.
You’re going to get a lot of bad or basic advice with no reasoning (use a firewall) in here… And as you surmised this is a very big topic and you haven’t provided a lot of context about what you intend to do. I don’t have any specific links, but I do have some advice for you:
First - keep in mind that security is a process, not a thing. 90% of your security will come from being diligent about applying patches, keeping software up to date, and paying attention to security news. If you’re not willing to apply regular patches, then don’t expose anything to the internet. There are automated systems that simply scan the internet for known vulnerabilities. Self-hosting is NOT “set it and forget it”. Figuring out ways to automate this helps make it easy to do and thus more likely to get done. Check out things like Ansible for that.
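As one concrete sketch of what that automation can look like on Debian-family hosts (package names and the Ansible inventory here are assumptions; adapt to your distro):

```
# Let each host install security updates on its own...
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# ...and/or push a full upgrade to every host in an Ansible inventory ad hoc.
ansible all -b -m apt -a "update_cache=yes upgrade=dist"
```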
Second is good authentication hygiene. Choose good passwords - better yet, long passphrases. Or enable MFA and other additional protections. And BE SURE TO CHANGE ANY DEFAULT PASSWORDS for software you set up. Often there is some default ‘admin’ user.
Beyond that, your approach is “security in depth” - you take a layered approach to security, understanding what your exposure is and what will happen should one of your services or systems be hacked.
Examples of security in depth:
Proper firewalling will ensure that you don’t accidentally expose services you don’t intend to expose (adds a layer of protection). Sometimes there are services running that you didn’t expect.
Use things like “fail2ban” that will add IP addresses to temporary blocklists if they start trying user/password combinations that don’t work (there’s a small sketch below). This could catch a bot before it finds that “admin/password” user on your Nextcloud server that you haven’t changed yet…
Minimize your attack surface area. If it doesn’t need to be exposed to the internet, then don’t expose it. VPNs can help with the “I want to connect to my home server while I’m away” problem and are easy to set up (Tailscale and WireGuard being two popular options). If your service needs to be “public” to the internet, understand that this is a bigger step and that everything here should be taken more seriously.
Minimize your exposure. Think through the question of “if a malicious person got this password, what would happen and how would I handle it?” Would they have access to files from other services running on the same server (having separation between services can help with this)? Would they have access to unencrypted files with sensitive data? It’s all theoretical, until it isn’t…
If you do expose services to the internet monitor your logs to see if there is anything “unusual” happening. Be prepared to see lots of bots attempting to hack services. It may be scary at first, but relatively harmless if you’ve followed the above recommendations. “Failed logins” by the thousands are fine. fail2ban can help cut that down a bit though.
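To make the fail2ban item above concrete, a bare-bones setup on a Debian-family host looks something like this; the limits are examples, and services other than SSH generally need their own jail and filter:

```
sudo apt install fail2ban

# Local overrides live in jail.local: ban for an hour after 5 failures
# within 10 minutes, starting with the bundled sshd jail.
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # show current failures and bans
```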
Overall I’d say start small and start “internal” (nothing exposed to the internet). Get through a few update/upgrade cycles to see how things go. And ask questions! Especially about any specific services and how to deploy them securely. Some are more risky than others.
Going off of what you said, I am going to take what I currently have, scale it back, and attempt to get more separation between services.
Containerization and virtualization can help with the separation of services - especially in an environment where you can’t throw hardware at the problem. Containers like Docker/Podman and LXD/LXC aren’t “perfect” (isolation-wise), but they do provide a layer of isolation between what runs in the container and the host (as well as other services). A compromised service would still need to find a way out of the container (adding a layer of protection). But they all still share the same physical resources and kernel, so any vulnerability in the kernel potentially affects all of them (keep your systems up to date). A full VM like VirtualBox or VMware provides greater separation at the cost of using more resources.
Docker’s isolation is generally “good enough” for the most part, though. Your aggressors are more likely to be botnets scanning for low-hanging fruit (poorly configured services, known exploits, default admin passwords, etc.) than targeted attacks by state-funded hackers anyway.
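If you want to tighten an individual container beyond the defaults, docker run has a handful of knobs for that. A hedged example; the image name, port, and volume are placeholders, and not every app tolerates all of these restrictions:

```
# --read-only + --tmpfs: immutable root filesystem, scratch space in RAM
# --cap-drop ALL: drop all Linux capabilities
# --security-opt no-new-privileges: block privilege escalation via setuid
# --memory / --pids-limit: cap resource usage
# -p 127.0.0.1:8080:8080: only listen on localhost, for a local reverse proxy
docker run -d --name someapp \
    --read-only --tmpfs /tmp \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --memory 512m --pids-limit 200 \
    -p 127.0.0.1:8080:8080 \
    -v /srv/someapp/data:/data \
    someapp/image:latest
```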
NFS over WireGuard is probably going to be the best when it comes to encrypted file shares without the need to set up Kerberos. Just set up the WireGuard tunnel and export over those IPs.
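A minimal sketch of what that looks like, assuming the NAS sits at 10.10.0.1 and the media server at 10.10.0.2 inside the WireGuard tunnel (paths and export options are examples):

```
# On the NAS: export the share only to the media server's tunnel address.
echo '/srv/media 10.10.0.2(ro,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the media server: mount over the tunnel.
sudo mkdir -p /mnt/media
sudo mount -t nfs 10.10.0.1:/srv/media /mnt/media
```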
I’ve set up WireGuard, because it’s only me and an employee using the services. With that, externally I don’t even seem to have a port open. And WireGuard comes up so fast that I’m just always connected to it as soon as I’m online, using a domain and an IP update script.
Something like WireGuard, Tailscale (which uses WireGuard but provides easier administration), a reverse proxy, or a VPN is the best approach.
Since OP doesn’t need anyone else to have access, I’d use Tailscale (or WireGuard if you want a little more effort). Tailscale has a full self-host option with Headscale, though I have no problem with letting them provide discovery.
With Tailscale, by enabling the Funnel feature you don’t even need the client on devices to access your services. This works something like a reverse proxy: a web-exposed endpoint hosted by Tailscale routes traffic (encrypted) into your Tailscale network.
Yeah, but then I have a web-exposed service, and I want to keep as low a profile as possible with what I’m exposing. So I guess as long as there aren’t many users to manage, WireGuard (or a Tailscale configuration) could work out for OP.
Speaking as more of an artist than a techie for the most part: if you have your medium, or at least part of it, the more interesting thing about art is what you have to say with it.
As an example, if you want to draw a distinction and comparison between the age of discovery and the age of technology, you could use the hard drives as a canvas on which to paint a portrait of something like Robert Scott / Lawrence Oates, or Jacques Cousteau, or Armstrong and Aldrin etc.
On that last one - if you could tie the capacity of the drive to the size of the code used in the moon landing, that might also be interesting.
Anyway, all that to say - art is a mix of medium and message
I haven’t tried it, but I’ve been thinking about it… Since Nextcloud supports S3 storage, it would seem its photo apps, such as Memories, should work that way?
Yep, that’s pretty much it. I have it working with iDrive this way. Install Nextcloud and the Memories app. Add S3 as external storage. Point Memories to external storage. Done.
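For anyone wanting the CLI version, this is roughly what that looks like with occ (run as the web server user, e.g. via sudo -u www-data). The mount point, bucket, endpoint, and credentials below are placeholders:

```
# Install the Memories app and make sure external storage support is on.
php occ app:install memories
php occ app:enable files_external

# Add an S3 bucket as external storage mounted at /photos.
php occ files_external:create /photos amazons3 amazons3::accesskey \
    -c bucket=my-photos -c hostname=s3.example.com -c use_ssl=true \
    -c key=ACCESS_KEY -c secret=SECRET_KEY
```

After that, you point Memories at that folder in its settings and let it index.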
So first, I’m not really looking to change operating systems. I’ve got my system set up the way I like it, where it closely matches the production systems I run for my company.
Second, why do you say the answer is Proxmox? What benefit does that have over other solutions that can be more easily integrated into my existing operating system?
[Sorry for the not very well-written reply. You really need to try different options, and in my opinion Proxmox is pretty much the only choice because of how many cool things you can do with it.]
Proxmox is just really good, and if you want to spin up VMs easily you will need to reshape your setup anyway.
With Proxmox you can do just about everything with VMs, containers, etc., not just manage containers or just show the status of the VMs.
Also, Proxmox is not really a separate operating system; it’s a set of services on top of Debian (in many cases you start installing Proxmox by installing Debian).
Yo dawg, I put most of my services in a Docker container inside their own LXC container. It used to bug me that this seems like a less than optimal use of resources, but I love the management - all the VMs and containers in one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.
I see other people using, and I’m interested in, an even more generic system (maybe Cockpit or something), but I’ve been really happy with this. If OP’s dream is managing all the containers and VMs together, I’d back having a look at Proxmox.
I use Docker LXCs. Really just a Debian LXC with Docker and then Portainer as a UI. I have separate LXCs for common services. Arrs on one LXC, Nextcloud, Immich and SearXNG on another, Invidious on a third. I just separate them so I don’t need to kill all services if I need to restart or take down the LXC for whatever reason.
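For what it's worth, once Docker is installed inside the Debian LXC, Portainer itself is just another container, roughly per its own install docs (the published port and volume name can be changed):

```
docker volume create portainer_data
docker run -d --name portainer --restart=always \
    -p 9443:9443 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:latest
```

Then the UI is at https://<lxc-ip>:9443.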
Thanks. I did check it out and it looks like it’s got some really cool benefits, like being able to cluster across two machines and take one down if it needs servicing, with zero down time.
I’m thinking about buying some rack mount servers and bringing everything I’m currently doing in the cloud for my business to on-premises servers. The one thing I was wary about was how I was going to handle hardware maintenance, and this looks like it would solve that issue nicely.
Proxmox does VMs and containers (LXC). You can run any docker / podman manager you want in a container.
Benefits of having Proxmox as the base are ZFS / snapshotting and easy setup of multiple boot drives, which is really nice when one drive inevitably fails 😏
Definitely go with K3s instead of K8s if you want to go the Kubernetes route. K8s is a massive pain in the ass to set up. Unless you want to learn about it for work, I would avoid it for homelab usage.
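For a sense of scale, a single-node K3s install really is about two commands (per its install script); clustering and config tweaks come later:

```
# Install K3s as a systemd service; the kubeconfig ends up in
# /etc/rancher/k3s/k3s.yaml.
curl -sfL https://get.k3s.io | sh -

# Sanity check with the bundled kubectl.
sudo k3s kubectl get nodes
```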
I currently run Docker Swarm nodes on top of LXCs in Proxmox. Pretty happy with the setup, except that I can’t get IPv6 to work in Docker overlay networks and the overlay network performance leaves something to be desired.
I previously used Rancher to run Kubernetes but I didn’t like the complexity it adds for pretty much no benefit. I’m currently looking into switching to K3s to finally get my IPv6 stack working. I’m so used to docker-compose files that it’s hard to get used to the way Kubernetes does things though.