If you’re exposing via Cloudflare Tunnels instead of pointing at your public IP, then everything other people have said covers it. If you are using your public IP, then it’s worth blocking non-Cloudflare IPs from accessing the site directly.
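As a sketch of what that blocking can look like in nginx (the two ranges below are examples only; always pull the current list from https://www.cloudflare.com/ips/):

```
# /etc/nginx/conf.d/cloudflare-only.conf (sketch)
# Allow only Cloudflare's published ranges, deny everything else.
# Keep this list in sync with https://www.cloudflare.com/ips/
allow 173.245.48.0/20;
allow 103.21.244.0/22;
deny all;
```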
Historically, reverse proxies were invented to manage a large number of slow connections to application servers which were relatively resource intensive. If your application requires N bytes of memory per transaction then the time between the request coming in and the response going out could pin those bytes in memory, as the web server can't move ahead to the next request until the client confirms it got the whole page.
A reverse proxy can spool in requests from slow clients and, once they are complete, hand them off to the app servers on the backend. The response is generated and sent back to the reverse proxy, which slowly spools the response data out to the client while the app server moves on to the next request.
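In nginx, for example, this spooling is the default buffering behaviour. A minimal sketch, with a made-up upstream name:

```
# nginx sketch: both directives default to "on"; shown explicitly for clarity
location / {
    proxy_pass http://app_backend;   # app_backend is a placeholder upstream
    proxy_request_buffering on;      # read the whole request before proxying it
    proxy_buffering on;              # buffer the response, freeing the app server
}
```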
CloudFlare is a good place for beginners to start. Setting up a reverse proxy can be daunting the first time. Certainly better than no reverse proxy.
That being said, having your own reverse proxy is nice. Better security, since the certificates are controlled by your own server, and more complex setups become possible.
My Traefik uses Let’s Encrypt wildcard certificates to provide HTTPS for internal LAN-only applications (Vaultwarden) while providing external access for other things like Seafile.
I also use Traefik with authentik for single sign-on. Traefik allows me to secure apps like Sonarr with single sign-on from my authentik setup, so I log in once in my browser and can access many of my apps without any further passwords.
authentik also supports OAuth, so I use that for Seafile, FreshRSS and Immich. authentik allows Jellyfin login with LDAP. (This last paragraph could be set up with Cloudflare as well.)
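For anyone wondering what that looks like in practice, here is a rough sketch of the Traefik side as container labels. The resolver and middleware names ("letsencrypt", "authentik@file") and the domain are placeholders, not my exact config:

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.sonarr.rule=Host(`sonarr.local.example.com`)"
      - "traefik.http.routers.sonarr.entrypoints=websecure"
      # wildcard cert for *.local.example.com, fetched via a DNS-01 resolver
      - "traefik.http.routers.sonarr.tls.certresolver=letsencrypt"
      - "traefik.http.routers.sonarr.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.sonarr.tls.domains[0].sans=*.local.example.com"
      # authentik's forward-auth middleware provides the single sign-on
      - "traefik.http.routers.sonarr.middlewares=authentik@file"
```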
This is the way. My setup is very similar except I only use authentik for Nextcloud. I don't expose my "arr" services to the Internet so I don't feel it necessary to put them behind authentik, although I could if I wanted.
Duo’s free tier of 10 personal licenses is also great, as it can plug into authentik to provide MFA across the whole setup.
The primary reason to put authentik in front of the arrs is so I don’t have to keep putting in a different password for each when logging in. I disable authentication in each app itself and then disable the exposed Docker port as well, so the only way to access them is via Traefik + authentik. Each has local access only, so nothing is directly exposed to the internet.
10 free accounts on Duo is very nice, but I hate being locked into things that aren’t self-hosted. An open source or self-hosted alternative to Duo would be great.
I use Caddy and it does everything for me, but my limited understanding is that the DNS entry for which the certs are requested must point to the IP address Caddy is listening on. So if I have a DNS entry like internal.domain.com which resolves to 10.0.0.123, and Caddy is listening on that address, I can get an HTTP connection but not an HTTPS connection, because Let’s Encrypt can’t verify that 10.0.0.123 is actually under my control.
*.local.domain.com gets its own cert, but the * can be anything: the same cert can be used for anything in place of the star, as many times as you want, so the hosts don’t need to be internet-accessible to verify. That way vaultwarden.local.domain.com remains local-only.
There is an alternate verification method (the DNS-01 challenge) that uses an API key to your DNS provider, if yours is a supported one. That method doesn’t need any IP to be assigned: it doesn’t care whether there are A/AAAA records or where they point, because it can verify control of the domain directly.
deSEC.io is an example of a reputable, free DNS provider that additionally lets you manage API keys. The catch is that they require you to enable DNSSEC (their mission is similar to Let’s Encrypt’s, but for DNS).
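With Caddy specifically, that looks something like the sketch below. It assumes a Caddy build that includes your provider’s DNS plugin (the Cloudflare one is shown; deSEC has a similar module), with the API token in an environment variable:

```
# Caddyfile sketch; requires a Caddy build with the caddy-dns/cloudflare plugin
*.local.domain.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy vaultwarden:80   # assumed container name and port
}
```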
I see that you want to use the cert for intranet apps btw.
What I did was get two LE wildcard certs, one for *.my.dom and one for *.local.my.dom. Both of them can be obtained and renewed with the API approach without any further care to what they actually point at.
Also, by using wildcards, you don’t give away any of your subdomains. LE certificate issuance is public (Certificate Transparency logs), so if you get a cert for a specific subdomain, everybody will know about it. local.my.dom itself will be known, but since that’s only used on my LAN it doesn’t matter.
Then, for externally exposed apps, I point my.dom at an IP (A record) and either make a wildcard CNAME pointing everything under *.my.dom to my.dom, or explicit subdomain CNAMEs as needed, also to my.dom.
This way you only have one record to update for the IP and everything else will pick it up. I prefer the second approach, and I use cryptic subdomain names (i.e. don’t use jellyfin.my.dom) to cut down on brute-force guessing.
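In zone-file terms, something like this (the IP and the cryptic name are made up):

```
my.dom.         A      203.0.113.10   ; router's public IP (example)
*.my.dom.       CNAME  my.dom.        ; option 1: wildcard for everything
x7kq2.my.dom.   CNAME  my.dom.        ; option 2: explicit, cryptic names
```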
The IP points at my router, which forwards 443 (or a different port if you prefer) to a reverse proxy that uses the *.my.dom LE cert. If whatever tries to access the port doesn’t provide the correct full domain name, it gets an error from the proxy.
For the internal stuff I use dnsmasq, which has a feature that overrides DNS resolution for anything ending in .local.my.dom to the LAN IP of the reverse proxy. That proxy uses the *.local.my.dom LE cert for these ones but otherwise works the same.
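That dnsmasq feature is a one-liner (the proxy’s LAN IP here is an assumption):

```
# dnsmasq.conf: resolve local.my.dom and everything under it to the proxy
address=/local.my.dom/192.168.1.10
```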
Last time I checked renaming an empty text file to “Assassin’s Creed 2.zip” was legal in my jurisdiction but I now must fear a C&D Letter from Ubisoft it seems lmao.
Dear god, I fucking hate smart-asses like you. I bet OP could use gay porn in those thumbnails too to satisfy you as well, but maybe images of popular games fit better, right?
It’s not “false advertising”, idiot.
Because OP does not sell those goddamn games and only wanted to show off his UI capabilities. In fact it looks like he doesn’t even fucking sell anything, so what the hell should he advertise for?
Looks like OP has been passionately working on this project for his own purposes for almost two years, shared it out of goodwill, and probably doesn’t give a singular fuck about whether you use it or not. Just take it or leave it.
What entitlement? I am just saying it’s weird that you would add games with DRM to a mockup for a project that’s about DRM-free games. Gay porn, or any porn for that matter, would also not fit in the DRM-free game category.
Entitled, because you’re blatantly calling posts about a free product “false advertisement” while OP is not advertising games.
It’s a MOCKUP; the guy won’t spend time researching the DRM protection of every fucking game on Steam to provide you accurate thumbnails. It’s software for games and it showcases pictures of GAMES. He doesn’t promote the games, he promotes his software.
You should definitely google the definition of the word “Mockup” before you continue being a retard in this forum. Have a nice day.
Maybe not false advertising; I guess it’s true that only commercial products could be called that. Misleading then, or bad marketing (probably also only applicable to commercial things, so not really that either). I don’t know what to call it.
Just a weird choice, that’s all. Same as if you added a Metallica album when showcasing some hobby project meant to host royalty-free music :)
A reverse proxy takes all your web-based services, e.g.
plex on port 32400
octoprint on port 8000
transmission on port 8888
and allows you to map these to domain names, so instead of typing server.example.com:32400 you can type plex.example.com. I have simplified this quite a bit though: you need DNS configured as well, and depending on your requirements you may want to purchase a domain name if you intend on accessing content from outside your home without a self-hosted VPN.
Cloudflare is a DDoS mitigation service, a caching web proxy, and a DNS nameserver. Most users here would probably be using it for dynamic DNS. You can use it in combination with a reverse proxy as a means to mask your home IP address from people connecting to your self-hosted web-based services remotely, but on its own it cannot be used as a reverse proxy (at least not easily; I would not recommend attempting it). Do note that Cloudflare can see all the data you transmit through their systems, something to bear in mind if you are privacy conscious.
In my opinion though, it would be much better for you to use a self-hosted VPN to access your self-hosted services (it can be used in combination with the reverse proxy), unless there is a specific need to expose the services out to the internet.
Edit: fix minor typo, add extra info about cloudflare
So a reverse proxy is a way to manage subdomains? I read somewhere that it allows multiple different services to be hosted on the same port, and I think that’s probably a lie.
Depends what you mean by same port. A reverse proxy would allow you to expose everything on 443, and then the proxy routes to particular app ports and hosts.
Each service runs/listens on its own port, including the proxy (typically 80/443). When you connect to the proxy using its port, it will look at the domain name you used and proxy your connection to the port for the service that name is set up for.
So when you go to expose these to the network/internet, you only have to expose the port the proxy listens to and the clients only ever use that port regardless of how many services/domains you host.
Edit: whoops, got a little bit sidetracked and didn’t talk about cloudflare at all. I’ll leave it up nonetheless as it contains info.
The reverse proxy only listens on port 80 and 443, so yes, all your services will be accessible through just one/two ports.
The reverse proxy will parse the HTTP request headers and ask the appropriate upstream service (e.g. Jellyfin) on localhost:12345 what it should send as a reply. Yes, this means the request needs a Host header so that the reverse proxy can differentiate the services. You don’t need to buy a domain for that; you can use a local DNS server (e.g. Pi-hole) to make your made-up domain map to a local IP address, but you do need to address the reverse proxy as sub.domain.com. 192.168.0.123:80 won’t work, because then the proxy has no idea which service you want to reach.
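You can see this with curl, using the IP and name from the example above:

```
# bare IP: the proxy can't pick a backend, so you get its default/error page
curl http://192.168.0.123/
# same IP, but with the Host header supplied by hand: routed to the service
curl -H "Host: sub.domain.com" http://192.168.0.123/
```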
I found it really easy to set up with Docker Compose and Caddy as a reverse proxy. Docker services on the same network automatically resolve each other’s names, so the configuration file for Caddy (the reverse proxy) is literally just sub.mydomain.com { reverse_proxy jellyfin:12345 }. This will expose the Jellyfin container, which is listening on port 12345, as sub.mydomain.com (Caddy serves it on 80/443 and handles HTTPS automatically).
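A minimal sketch of the Compose side, with image names as assumptions rather than my exact files:

```yaml
# docker-compose.yml sketch: both services share the default network,
# so Caddy can reach the other container simply by the name "jellyfin"
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
  jellyfin:
    image: jellyfin/jellyfin
    # no published ports: only reachable through the proxy
```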
That’s halfway correct - I’ll try and break it down a bit further into the various parts.
Your subdomains are managed using DNS; if you want to create or change a subdomain, that happens here. For each of your services, you’ll create a type of DNS entry called an “A record”, containing your service’s full domain name and the IP address of your reverse proxy (in this example, it is 10.0.0.1).
The DNS records would look like the following:
plex.example.com, 10.0.0.1
octoprint.example.com, 10.0.0.1
transmission.example.com, 10.0.0.1
With these records created, typing any of these domains in a browser on your network will connect to your reverse proxy on port 80 (assuming we are not using HTTPS here). Your reverse proxy now needs to be set up to know how to respond to these requests coming in to the same port.
In the reverse proxy config, we tell it where the services are running and what port they’re running on:
plex.example.com is at server.example.com:32400
octoprint.example.com is at server.example.com:8000
transmission.example.com is at server.example.com:8888
Now when you type the domain names in the browser, your browser looks in DNS for the “A record” we created, and using the IP in that record it will then connect to the reverse proxy 10.0.0.1 at port 80. The reverse proxy looks at the domain name, and then connects you on to that service.
What we’ve done here is taken all 3 of those web-based services and put them onto the same port, 80, using the reverse proxy. As long as the reverse proxy sees a domain name it recognises from its config, it will know what service you want.
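As a concrete sketch of that config (using Caddy since it keeps things short; the http:// prefix keeps it on plain port 80, matching this example):

```
http://plex.example.com {
    reverse_proxy server.example.com:32400
}
http://octoprint.example.com {
    reverse_proxy server.example.com:8000
}
http://transmission.example.com {
    reverse_proxy server.example.com:8888
}
```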
One thing to note though: reverse proxies like this only work with web-based (HTTP) services.
Another user already gave you the answer, but one thing to bear in mind is that Cloudflare only “speaks” HTTP(S), and nothing else. So if for example you want to run Minecraft, Cloudflare’s free plan will not allow you to route it through port 80/443, as they don’t know how to “speak” the Minecraft protocol.
So far, I have WireGuard set up, and activate it when I need access.
This year I have considered Cloudflare Tunnels, enabling them only so I can get SSL certificates issued (instead of signing my own like I did last year). But I am not sure if it is worth it, or if I should just keep signing certs myself.
(The cert is mainly to avoid SSL warnings on iOS and in browsers; so far I am the only one using what I host.)
Might also be nice not to have to configure each device to use a different DNS server (my own), but I am not sure the benefit is worth having that DNS record “out there” and Cloudflare “in here”.
The DNS-01 challenge [1] allows for issuing SSL certificates without a publicly routable IP address. It needs API support from your DNS provider to automate it, but e.g. lego [2] supports many services.
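For example, with lego it can look like this sketch (the provider and its environment variable are placeholders; check lego’s provider list for yours):

```
# DNS-01 via a provider's API: no open ports, no public IP required
CLOUDFLARE_DNS_API_TOKEN=... \
lego --email me@example.com --dns cloudflare --domains '*.home.example.com' run
```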
I personally leave my WireGuard VPN always on, but as it’s only routing the local subnet with my services, it doesn’t even appear in my battery statistics.
Other than WAF stuff, if you have multiple servers behind a small NAT, a reverse proxy can serve them all from a single exposed public address. You can also do rewrite rules on the proxy instead of on each server.
Just throwing out an option in case you aren’t aware: gohardrives, on eBay and on their site, sells used HDDs, 10TB for $80. The catch is they’ve been used in data centers for 5 years. The company will guarantee the drives for an additional 5 years, and it could save you a lot of money depending on how much you want to risk it. I went with 3, one being a parity drive in case another goes bad.
I currently have 6x10TB of these drives running in a gluster array. I’ve had to return 2 so far, with a 3rd waiting to send in for warranty also (click of death for all three). That’s a higher failure rate than I’d like, but the process has been painless outside of the inconvenience of sending it in. All my media is replaceable, but I have redundancy and haven’t lost data (yet).
Depending on your supporting hardware costs and power costs, you may find larger drive sizes to be a better investment in the long term. Namely, if you plan on seeing the drives through to their 5-year warranty, 18TB drives are pretty good value.
For my hardware and power costs, I charted cumulative $/TB (y axis) over years of service (x axis). [chart not reproduced here]
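The underlying math is simple enough to sketch; all numbers below are assumptions for illustration, not the original chart’s figures:

```
cumulative $/TB(t) = (drive price + watts × 8.76 kWh/yr-per-watt × $/kWh × t) / TB

e.g. an $80 used 10TB drive drawing 7W at $0.12/kWh:
t = 5 yr  →  (80 + 7 × 8.76 × 0.12 × 5) / 10  ≈  $11.7/TB
```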
The first two died within 30 days; the third one took about 4 months, I think. Not a huge sample size, but it kind of matches the typical hard drive failure bathtub curve.
I just double checked, and mine were actually from a similar seller on Amazon - they all seem to be from the same supplier though - the warranty card and packaging are identical. So ymmv?
Warranty was easy, I emailed the email address included in the warranty slip, gave details on order number + drive serial number, and they sent me a mailing slip within 1 business day. Print that out, put the drive back in the box it shipped with (I always save these), tape it up and drop it off for shipping. In my case, it was a refund of the purchase pretty much as soon as it was delivered to the seller.
If you’re asking this, you should probably use one to be safer if you’re exposing stuff to the web. There are other ways of doing it, including just VPNing into your home network, using a VPS, or Cloudflare Tunnels, but using a reverse proxy manager in combination with Cloudflare DNS is a good place to start, and it is probably good enough if you use good security with it: long unique passwords, two-factor auth, security keys, etc.
I use it to manage my subdomains: something like notes.mywebsite.com points at my Trilium instance, while photos.mywebsite.com points at my Immich container. It has more uses, but that’s my extent. I just have an instance of a Cloudflare DNS updater keeping my domain in sync with my IP so I don’t have to do that manually when it changes.
So in my scenario Cloudflare is just part of my setup.
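For reference, a sketch of what such an updater can look like (oznu/cloudflare-ddns is one popular image; the values are placeholders, not my actual setup):

```yaml
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns
    environment:
      - API_KEY=<cloudflare api token>   # scoped to the zone
      - ZONE=mywebsite.com
      - SUBDOMAIN=photos                 # updates photos.mywebsite.com
```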
Wouldn’t this tunnel everything? I just want 10.10.10.0/24 and 10.0.0.0/24 (the VPN and LAN IP ranges) to get tunneled. I also don’t know how this would mitigate the issue.
Thanks for the pointer, it seems it’s a DNS issue after all (IT’S ALWAYS DNS). Routing all traffic through the tunnel forces the clients to use the DNS server of the LAN. Without that, my internal websites (which use a public domain namespace) are sometimes resolved by a public DNS server, so the browser doesn’t request test.home.network at 10.0.0.100 but test.home.network at 1.19.72.59.
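You don’t need to tunnel everything for that, though: a split-tunnel client config that pins DNS to the LAN resolver covers it. A sketch (keys, endpoint and the DNS IP are placeholders):

```
[Interface]
PrivateKey = <client private key>
Address = 10.10.10.2/24
DNS = 10.0.0.53            # LAN DNS server, so home.network names resolve locally

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.10.10.0/24, 10.0.0.0/24   # only these ranges go through the tunnel
```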
So, you want to run rsnapshot on the Borg repository (the destination being backed up to)? Both rsnapshot and Borg keep a history, so you would be keeping a history of when the Borg repository had which history. This will be neither particularly efficient nor “as intended”.
- Be aware that Borg does incremental backups on file chunks, while rsnapshot works on whole files. So if a large file changes, rsnapshot will duplicate the storage used.
- A Borg repository is more like a database of chunks (similar to git), while rsnapshot recreates the original backup data.
As far as I know, the Borg backup store should only add new blocks as new files and remove them when you purge the last backup that uses a given block. Obviously some of the metadata files are going to change and will be backed up more frequently, but the main data should not.
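That chunk-level dedup is easy to observe; a quick sketch (paths are placeholders):

```
# initialize a repo, then back up twice: the second run re-uses almost
# all chunks from the first, so very little new data lands in the repo
borg init --encryption=repokey /backup/borg-repo
borg create /backup/borg-repo::{now} /home/user
borg create /backup/borg-repo::{now} /home/user
```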