That’s why people keep asking for your nginx config: when you just say "nginx", people assume you’re running plain nginx and configuring it through text files.
So you’re going to run into some difficulties, because a lot of what you’re dealing with is, I think, specific to CasaOS, which makes it harder to know what’s actually happening.
The way you’ve phrased the question makes it seem like you’re following a more conventional path.
It sounds like maybe you’ve configured your public traffic to route to the Nginx Proxy Manager admin interface instead of to nginx itself.
Instead of having your router forward 80/443 to port 81 (the admin UI), forward them to 80/443 on the host, which is where nginx itself should be listening.
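For reference, a typical Nginx Proxy Manager compose file maps the ports like this (a sketch only; a CasaOS install may wire this up differently behind the scenes):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP traffic that nginx proxies
      - "443:443"  # HTTPS traffic that nginx proxies
      - "81:81"    # admin web UI - not what your router should forward to
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

The router's 80/443 forwards should land on the first two mappings; port 81 only needs to be reachable from your LAN.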
Systems that promise to manage everything for you are great for getting started fast, but they have the unfortunate side effect that you never really learn what they’re doing or what’s actually running under the hood. That can make asking for help a lot harder.
I have personally been using OVH for $1.05/month. The offer is only available to new customers, and only in certain regions. I’ve been using it for my small personal projects, to host my frontend projects, and as a VPN server. It’s now been more than a month and I haven’t had any issues so far.
Another platform I’d suggest is Oracle Cloud. They have a VPS offer that is free FOREVER. If you register for the first time and their system gives you an error, it’s some sort of "fair usage policy" thing where their system thinks you might be trying to abuse the free offerings (best not to waste time if it doesn’t work the first time). If you do get registered, though, you might need to do a bit of research when you’re starting out. The platform has lots of options and tools, and it can get overwhelming if this is your first time. Nonetheless, I believe it’s manageable; it just takes a little bit of time to get used to their interface.
+1 for the Oracle solution. I use one for my public IP, and port forward over WireGuard to my home. They claim something like 480Mbps, but it’s nowhere near that, at least for external traffic. But in any event I’ve been using it for a few months with no real complaints.
And yes, I fully appreciate the irony of trying to self-host services to get away from big corporations, but relying on Oracle to do so.
I just signed up for Oracle’s free cloud service after watching a video that said it was always free, but the wording on Oracle’s site made it seem like it’s a trial. Are there two free options?
I assume you’re referring to the two free instances you get with the x86 VMs. And that is correct. As far as I remember, they offer two x86 VMs with 1 vCPU and at least 40GB of block storage each, or you can create 5 ARM VMs with lots of cores and memory between them, each having 40GB of storage (I don’t remember the exact breakdown). If you want to know what you’re getting, please read the doc I’ve attached about the Always Free resources. They offer 200GB of storage in total, and the bandwidth is 20TB/month with 1Gbps speeds for each VM (I might be wrong though, it’s been a while since I used their service). Also, in case you’re wondering why I’m not using a free service like this myself: I used one of their servers to pirate copyrighted material. They’ll ban you without warning, so read their terms of service and try not to be a fool like I was. docs.oracle.com/…/freetier_topic-Always_Free_Reso…
Basically when I was registering it had some wording like “start your free trial now” but once I got into the dashboard there was a message that cleared things up. So I have a free trial of what I assume is a higher tier of their service, which upon running out will revert me back to the Always Free tier.
Roku, PlayStation, Xbox, a streaming device from your ISP (like the one from Comcast), Fire Stick, and I’m sure there are many more. They all do what you’re looking for.
Xbox sucks as a streaming box, especially with Plex. If you try to choose something from the watchlist, it can’t send a URL to the related streaming app.
I don’t fully understand what you’re saying, but let’s break this down.
Since you say you get an NGINX page, what does your NGINX config look like? What exactly does the NGINX “login page” say? Is it an error or is it a directory listing or something else?
Yup, ended up going with Oracle. Free is good for me, and I’m totally new to this so it doesn’t really make a difference to me if there’s a minor interface difference between two providers.
As in, I have Nginx running on my server and use it as a reverse proxy to access a variety of apps and services, but I can’t get it playing nicely with Nextcloud AIO.
To play off what others are saying, I think a mini PC and a standalone NAS may be the better route for you. It may seem counterintuitive to break it out into two devices, but doing so will allow room for growth. If you buy a cheaper bare-bones mini PC and put more of your budget towards a NAS and storage, you can expand the mini PC later without messing with your NAS. You could keep the Pi in the mix as a backup if your main PC is down, or offload some services to it to balance performance.
You know, I’m not sure why this didn’t cross my mind as I started doing research. I have seen this recommendation countless times around here and people seem to have great experiences going the mini pc route. Thanks for your insight. Do you have any specific mini pc or NAS in mind that you would recommend?
Most of that will be budget-based and long-term-goal oriented. Do you want a 4-bay NAS with 10TB drives set up in RAID 5, or do you think you’d want a two-bay system with 5TB drives set up as a mirror? Do you want to start cheap and get a second-hand ThinkCentre off eBay, or do you want to buy a brand new NUC, put in a 2TB M.2 drive and 16GB of RAM in one slot, and add the other 16GB later? Some NUCs can take up to 64GB of RAM and hold two 2TB drives.
I was originally thinking at least 4 drives (4 if I went the Synology/other off-the-shelf option, or more if I went the DIY route). I’m not opposed to a secondhand computer, especially if the price and performance are good. It seems like a brand new NUC can get fairly expensive.
Just want to second this - I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin, which can transcode at least 8 streams simultaneously without breaking a sweat (probably more if you don’t have 4K), and a separate Synology NAS that mainly handles storage. I run Docker containers on both, and the NUC has my media mounted as a network share over a dedicated direct gigabit Ethernet link between the two, so all the filesystem access traffic stays off my switch/LAN.
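For reference, the mount on the NUC side can be as simple as an NFS line in /etc/fstab - a sketch, where the 10.10.10.x addresses, export path, and mount point are placeholders for whatever the direct link and the Synology share actually use:

```
# /etc/fstab on the NUC (hypothetical addresses and paths)
# 10.10.10.2 = the NAS on the dedicated direct link, /volume1/media = the Synology NFS export
10.10.10.2:/volume1/media  /mnt/media  nfs  ro,hard,vers=4.1,_netdev  0  0
```

Giving both ends static addresses on that dedicated link keeps the media traffic entirely off the main LAN.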
This strategy let me pick the best NAS based on my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable personal family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. Transcoding on the fly is the strategy I chose over pre-transcoding or keeping multiple qualities, in order to save HDD space and stay flexible for whoever I share with who has a slow connection.
I went with the DS1621xs+, the main driving factors being:
that I already had a 6-drive raidz2 array in TrueNAS and wanted to keep the same configuration
I also wanted ECC, which, while maybe not necessary, matters because the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.
If I remember correctly, only the DS1621xs+ met those requirements. If I’d been willing to go without ECC (which requires going with a Xeon), the DS620slim would have given me 6 bays plus integrated graphics with Quick Sync, which would have allowed power-efficient transcoding and running Plex/JF right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.
A good way to narrow down off-the-shelf NASes, if you go that way, is to decide what level of redundancy you want and how many drives you want to run, factoring in how much the drives will cost, whether you want an extra level of redundancy while a rebuild is happening after one failure, and how much space gets sacrificed to parity. Newegg’s NAS builder comes in handy: select "All" capacities, then use the NAS filters by number of drive bays, and compare what’s left.
And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports Docker and Docker Compose out of the box (once the container app is installed), so I just SSH into the box and keep my compose folders somewhere on the btrfs volume. Docker nicely allows anything to be run without worrying about dependencies being available on the host OS. The only gotcha is kernel stuff, since Docker containers share the host kernel - for example WireGuard, which relies on kernel support, I could only get to work using a userspace WireGuard container (using boringtun), and only after the VPN/Tailscale app was installed (presumably because that adds the tun/tap interfaces the VPN containers need).
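As a small illustration of that workflow (the path and the demo service are made up, just to show the pattern - any container is run the same way):

```yaml
# /volume1/docker/whoami/docker-compose.yml (hypothetical path)
services:
  whoami:
    image: traefik/whoami:latest   # tiny demo web service that echoes request info
    ports:
      - "8080:80"
    restart: unless-stopped
```

Then from an SSH session on the NAS: cd /volume1/docker/whoami && sudo docker compose up -d.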
Only jellyfin/Plex is on my NUC. On the nas I run:
CloudFlare is a good place for beginners to start. Setting up a reverse proxy can be daunting the first time. Certainly better than no reverse proxy.
That being said, having your own reverse proxy is nice. Better security since the certificates are controlled by your server. Also complex stuff becomes possible.
My Traefik uses Let’s Encrypt wildcard domains to provide HTTPS for internal, LAN-only applications (Vaultwarden) while providing external access for other things like Seafile.
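For anyone wanting to replicate the wildcard part: the trick is the DNS-01 challenge, so the hostnames never need to be reachable from the internet. A rough Traefik sketch, assuming Cloudflare as the DNS provider and example.com as a placeholder domain (resolver name, email, and provider will differ per setup):

```yaml
# traefik.yml (static config): request certs via the DNS-01 challenge
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare   # reads CF_DNS_API_TOKEN from the environment
```

On each router you then set tls.certresolver=letsencrypt and, for the LAN-only apps, request `*.local.example.com` via the tls.domains options so one wildcard cert covers them all.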
I also use Traefik with Authentik for single sign-on. Traefik lets me secure apps like Sonarr with SSO from my Authentik setup, so I log in once in my browser and can access many of my apps without any further passwords.
Authentik also supports OAuth, so I use that for Seafile, FreshRSS and Immich, and it provides Jellyfin login via LDAP. (This last paragraph could be set up with Cloudflare as well.)
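The SSO piece is typically Traefik’s forwardAuth middleware pointed at Authentik’s embedded outpost. A sketch - the authentik-server hostname and port 9000 are assumptions about how the containers are named and exposed on a shared Docker network:

```yaml
# Labels on a protected app (e.g. Sonarr): Authentik must approve the request
# before Traefik ever passes it to the app
labels:
  - "traefik.http.routers.sonarr.rule=Host(`sonarr.local.example.com`)"
  - "traefik.http.routers.sonarr.middlewares=authentik@docker"
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
```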
This is the way. My setup is very similar except I only use authentik for Nextcloud. I don't expose my "arr" services to the Internet so I don't feel it necessary to put them behind authentik, although I could if I wanted.
Using Duo’s 10 free personal licenses is also great, since Duo can plug into Authentik to provide MFA.
The primary reason to put Authentik in front of the arrs is so I don’t have to keep entering a different password for each one when logging in. I disable authentication in each app itself and remove the exposed Docker port as well, so the only way to access them is via Traefik + Authentik. It’s local access only, so nothing is directly exposed to the internet.
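In compose terms that mostly means dropping the ports: mapping entirely, so nothing is published on the host and Traefik reaches the app over a shared Docker network. A sketch (image tag and network name are assumptions):

```yaml
# Sonarr reachable only through Traefik on an internal Docker network
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    networks:
      - proxy   # the same network Traefik is attached to
    # no "ports:" section - 8989 is never published on the host
    labels:
      - "traefik.http.services.sonarr.loadbalancer.server.port=8989"
    restart: unless-stopped

networks:
  proxy:
    external: true
```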
10 free accounts on Duo is very nice, but I hate being locked into things that aren’t self-hosted. An open source or self-hosted alternative to Duo would be great.
I use Caddy and it does everything for me, but my limited understanding is that the DNS entry the certs are requested for must point to the IP address Caddy is listening on. So if I have a DNS entry like internal.domain.com which resolves to 10.0.0.123 and Caddy is listening on that address, I can get an HTTP connection but not an HTTPS one, because Let’s Encrypt can’t verify that 10.0.0.123 is actually under my control.
*.local.domain.com -> has its own cert, but the * can be anything; the same cert can be reused for anything in place of the star as many times as you want, and the hosts therefore don’t need to be internet-accessible to verify. That way vaultwarden.local.domain.com remains local only.
There is an alternate verification method (the DNS-01 challenge) that uses an API key for your DNS provider, if it’s a supported one. That method doesn’t need any IP to be assigned - it doesn’t care whether there are A/AAAA records or where they point, because it verifies the domain directly.
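In Caddy that looks roughly like the snippet below, assuming a build that includes a DNS provider plugin (e.g. github.com/caddy-dns/cloudflare added via xcaddy); the domain, upstream address, and token variable are placeholders:

```
# Caddyfile sketch: DNS-01 challenge, so the name can happily resolve to a private IP
internal.domain.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 10.0.0.123:8080
}
```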
deSEC.io is a good example of a reputable, free DNS provider that also lets you manage API keys. The catch is that they require you to enable DNSSEC (their mission is similar to Let’s Encrypt’s, but for DNS).
I see that you want to use the cert for intranet apps btw.
What I did was get two LE wildcard certs, one for *.my.dom and one for *.local.my.dom. Both of them can be obtained and renewed with the API approach without any further care about what they actually point at.
Also, by using wildcards, you don’t give away any of your subdomains. LE certificates are logged publicly (certificate transparency), so if you get a cert for a specific subdomain, everybody can find out about it. local.my.dom will be known, but since that’s only used on my LAN it doesn’t matter.
Then what I do for externally exposed apps is point my.dom to an IP (A record) and either make a wildcard CNAME for everything *.my.dom to my.dom, or explicit subdomain CNAMEs as needed, also to my.dom.
This way you only have one record to update for the IP and everything else will pick it up. I prefer the second approach, and I use a cryptic subdomain name (i.e. don’t use jellyfin.my.dom) so I cut down on brute-force guessing.
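In zone-file terms the records end up looking something like this (203.0.113.10 and the cryptic name are placeholders):

```
; sketch of the records described above
my.dom.          A      203.0.113.10   ; the one record to update when the IP changes
xk7q2.my.dom.    CNAME  my.dom.        ; cryptic subdomain for an exposed app
```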
The IP points at my router, which forwards 443 (or a different port if you prefer) to a reverse proxy that uses the *.my.dom LE cert. If whatever tries to access the port doesn’t provide the correct full domain name, it gets an error from the proxy.
For the internal stuff I use dnsmasq, which has a feature that overrides DNS resolution for anything ending in .local.my.dom to the LAN IP of the reverse proxy. The proxy uses the *.local.my.dom LE cert for these, but otherwise works the same.
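The dnsmasq override is a one-liner; 192.168.1.50 is a placeholder for the reverse proxy’s LAN IP:

```
# dnsmasq.conf: answer anything under local.my.dom with the proxy's LAN address
address=/local.my.dom/192.168.1.50
```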
You got a remux, which is the full untouched Blu-ray stream rather than a re-encode, so it’s huge. You can turn those off in Radarr to avoid those surprises.
If you want to fine-tune your file sizes (and quality) further, you can set up custom formats and quality profiles. The Trash Guides explain it well; the “HD Blu-ray + Web” profile on that page is a solid starting point. It’ll usually grab 6-12GB movies, but you can tweak it if you want them smaller.
Doesn’t Trash Guides prefer larger files though? Iirc if you just do everything as they recommend you’ll always be grabbing the highest quality stuff available, which is the opposite of what this person wants.
The guide doesn’t set an upper bound on the UHD quality profiles, but that doesn’t mean you have to set up yours exactly the same.
I have mine set with reasonable limits and have never run into a problem with file size; you just have to make sure you’re setting the values to something that’s a) realistic and b) something you can live with.
One thing to note: if you set your threshold cutoffs properly you don’t have to worry about downloading files that are always at the upper end of the limit. Once the service downloads a file that meets the threshold it stops downloading for that episode/movie. If it grabs a file that’s below the threshold, it will keep trying to upgrade the file until the threshold is met.
The first worry is attack vectors against the Synology itself: its firmware and network stack. Those devices are very closely scrutinized, and historically there have been many different vulnerabilities found and patched. Something like the log4j vulnerabilities back in the day, where something only has to hit the logging system to hit you, might open a hole in any of the other standard software packages on there. And because the platform is so well known, once one vulnerability is found, attackers already know what else exists by default and have plans for ways to attack it.
Vulnerabilities that COULD affect you in this case are few and far between, but few and far between is how these things happen.
The next concern is going to be someone slipping you a mickey in a container image. By and large it’s a bunch of good people maintaining the container images, and they’re including packages from other good people. But this also means there are a hell of a lot of cooks in the kitchen, in the distribution, and upstream.
To be perfectly honest, with everything on auto-update, Cloudflare’s built-in protections against DDoS and other attacks, and the nature of what you’re trying to host, you’re probably safe enough. There’s no three-letter government agency or elite hacker group specifically after you. You’re far more likely to accidentally trip over a zero-day email image filter / PDF vulnerability and get botnetted than you are to have someone successfully attack your Argo tunnel.
That said, it’s always better to host in someone else’s backyard than your own. If I were really, really stuck on hosting in my house on my network, I’d probably stand up a dedicated box, maybe something as small as a Pi Zero. I’d make sure I had a really decent router / firewall and slip that hosting device into an isolated network that’s not allowed to reach out to anything else on my network.
Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated. No port forwards (you already have tunnels for that), don’t use it for DNS, don’t use it for DHCP, and don’t allow your network users or devices to see ARP traffic from it.
Firewall drops everything between your home network and that box except SSH in, or maybe VNC in depending on your level of comfort.
I used to have a separate box, but the only thing it did was port forwarding.
Specifically, I don’t really understand the topology of this setup or how to set it up.
Cloudflare Tunnel is a thin client that runs on your machine and connects out to Cloudflare; when a request comes in from outside to Cloudflare, it relays it via the established tunnel to the machine. As such, your machine only needs outbound internet access (to Cloudflare servers) and no inbound access (i.e. no port forwarding).
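Concretely, the cloudflared side is a small config file plus a daemon. The tunnel name, UUID, and hostnames below are placeholders:

```yaml
# ~/.cloudflared/config.yml (sketch)
tunnel: my-tunnel                   # created beforehand with `cloudflared tunnel create my-tunnel`
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com       # public hostname routed through Cloudflare
    service: http://localhost:8080  # local service the tunnel forwards to
  - service: http_status:404        # catch-all for anything else
```

Running `cloudflared tunnel run my-tunnel` then keeps an outbound-only connection open; nothing on the box listens for inbound traffic.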
I’d imagine an isolated VLAN should be a sufficiently good starting point to prevent anyone from stumbling onto it locally, as well as any potential external intruder from stumbling out of it?
You need to have a rather capable router / firewall combo.
You could pick up a Ubiquiti USG, or set up something with an ISP router and a pfSense firewall.
You need to have separate networks in your house, and the ability to set firewall rules between them.
The network that contains the hosting box needs to have absolutely no access to anything else in your house except its route out to the internet. Don’t have it go to your router for DHCP; set it up statically. Don’t have it go to your router for DNS; choose an external source.
The firewall rules for that network are: allow outbound internet with return traffic, allow SSH and maybe VNC in from your home network, then deny all.
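As a concrete sketch of those rules in nftables (interface names and the vlan20/lan0 split are assumptions; pfSense or a USG expresses the same thing in its own UI):

```
# Isolate the hosting VLAN (vlan20) from the home LAN (lan0)
table inet isolation {
  chain forward {
    type filter hook forward priority 0; policy drop;

    ct state established,related accept                            # return traffic for allowed flows
    iifname "vlan20" oifname "wan0" accept                         # hosting box out to the internet
    iifname "lan0" oifname "vlan20" tcp dport { 22, 5900 } accept  # SSH (and VNC) in from the home LAN
    # everything else between networks falls through to the drop policy
  }
}
```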
The idea is that you assume the box is capable of getting infected. So you just make sure that the box can live safely in your network even if it is compromised.
The box you’re hosting on only needs internet access to connect the tunnel. Cloudflare terminates that SSL connection right in a piece of software on your web server.
Are you my brain? This is exactly the sort of thing I think about when I say I’m paranoid about self-hosting! Alas, as much as I’d like to add an extra box just for that level of isolation, it’d probably take more of a time commitment than I have available to get it properly set up.
The attraction of docker containers, of course, is that they’re largely ready to go with sensible default settings out of the box, and maintenance is taken care of by somebody else.
I’m pretty happy with Digital Ocean if I need a temporary VPS because I can pay by the minute. Anything that I want to stay alive for more than a month or two, I do on a single 6-core VPS rented long-term from Netcup, a low-cost German provider, deploying with Docker and Traefik.