If you’re giving those companies personal info (name, phone, address, credit card), they can track you regardless of which email you use with each of them.
And if you’re not giving them personal info, I don’t see how that works. Yeah, so I register on both random site A and random site B with aliases @tfyuhegddssgvd.com, so what? How are they going to find out about each other? What would they even tell each other if they did? And why risk a GDPR violation for such silly reasons?
Can you explain what you mean by “visually lossless”? Is this a purely subjective classification, or is there a specific definition or benchmark you used?
Visually lossless means I couldn't see any difference in the image even when pixel peeping with imgsli. Good enough means I couldn't tell a difference in video, but could occasionally spot a compression artifact in imgsli.
Most “VPN” browser extensions (if not all of them) aren’t actually establishing a VPN connection; they just change the proxy settings in the browser. As browser extensions, they don’t have enough permissions/power to establish a real VPN connection.
So if you want to use a browser extension you have to run a proxy server, or, as others said, just use cloudflared, since running a public proxy server attracts bots from all over the world.
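For the cloudflared route, a Quick Tunnel is enough to expose a local service without opening any ports or running a public proxy; the local port here is just an assumption for the example:

```shell
# Expose a service listening on localhost:8080 through Cloudflare's edge.
# A Quick Tunnel needs no account; cloudflared prints a random
# trycloudflare.com URL you can hand out.
cloudflared tunnel --url http://localhost:8080
```

Since nothing on your box is publicly reachable, there's no open port for bots to scan.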
Historically, reverse proxies were invented to manage a large number of slow connections to application servers which were relatively resource intensive. If your application requires N bytes of memory per transaction then the time between the request coming in and the response going out could pin those bytes in memory, as the web server can't move ahead to the next request until the client confirms it got the whole page.
A reverse proxy can spool in requests from slow clients and, once they are complete, hand them off to the app servers on the backend. The response is generated and sent back to the reverse proxy, which can slowly spool the response data out to the client while the app server moves on to the next request.
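As a sketch of that buffering behavior, here's roughly what it looks like in an nginx reverse-proxy config (upstream name and ports are assumptions; both directives shown are actually on by default in nginx):

```nginx
http {
    upstream app {
        server 127.0.0.1:8000;   # the resource-hungry app server
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app;
            # Read the entire client request before opening a connection
            # to the app server, so a slow uploader can't pin app memory.
            proxy_request_buffering on;
            # Absorb the entire response from the app server, then drip it
            # out to the slow client while the app moves on.
            proxy_buffering on;
        }
    }
}
```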
You may want to consider a mini PC. That was my upgrade after torturing my raspberry pi for many years. I landed here after agonizing over building the perfect NAS media server. Still very low on power consumption, but the compute power per dollar is great these days. All this in only a slightly larger form factor over the pi. I brought over the drives from the pi setup and was up and running for a very low cost. The workload transferred from the pi (plex, NAS, backups, many microservices/containers) leaves my system extremely bored where the pi would be begging for mercy.
I don’t do a lot of transcoding, so I’m no expert here, but looking at the documentation I believe you would want a PassMark score of roughly 2000 per 1080p transcode, so 8000+ for your 4+ streams, not including overhead for other processes.
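That sizing rule is simple enough to spell out; the 2000-point overhead figure below is my own assumption for "other processes", not a number from the Plex docs:

```python
# Rule of thumb from the Plex docs: ~2000 PassMark per concurrent 1080p transcode.
PASSMARK_PER_1080P_TRANSCODE = 2000

def required_passmark(streams: int, overhead: int = 2000) -> int:
    """Estimate the CPU PassMark score needed for `streams` concurrent
    1080p transcodes, plus a fudge factor for everything else on the box."""
    return streams * PASSMARK_PER_1080P_TRANSCODE + overhead

print(required_passmark(4))  # 8000 for 4 streams + assumed 2000 overhead = 10000
```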
Thanks for the great info! What mini PC did you end up going with? I’ve heard Beelink and a few others thrown around here and there, and most seem to be impressed with what they can do. Do you mind elaborating some on how you handle your drives with this type of setup? Do you just have some sort of NAS connected directly to the pc?
No worries. I got a beelink S12, non-pro model with 8G RAM and 256G SSD. It was on sale for about $150 USD. Fit my use case, but maybe not yours, although you might be surprised. Perhaps those extra plex share users won’t be concurrently transcoding?
The drives are all USB, the portable type that requires no power source. Like you, I don’t need much. I have ~12T across 3, with a small hub that could provide more ports in a pinch. This model I believe also provides a SATA slot for a 2.5” drive, but I haven’t used it. All of these drives were previously connected to a rpi 3B+, haha!
The drive shares are done via samba and also syncthing. I have no need for a unified share via mergerfs, but I did take a look at this site for some ideas. I’m the type that rolls all their own services rather than using a NAS-based distro. Everything is in an ansible playbook that pushes out my configs and containers.
Edit: I should make it clear the NAS is for other systems to access the drives. Drives are directly connected via USB. All my services are contained in this single host (media/backup/microservices/etc). My Pi’s are now clustered for a k3s lab for non critical exploration.
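For the samba side of a setup like this, a minimal share definition is only a few lines of smb.conf (share name, mount path, and user below are made up for illustration):

```ini
[media]
   path = /mnt/usb1/media
   read only = yes
   guest ok = no
   valid users = alice
```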
I’m a bit of a minimalist who designs for my current use with a little room to grow. I don’t find much value in “future proofing” as I’ve never had much success in accomplishing that.
I’ll probably start out with just letting my parents access Plex to see how it performs. They would be remotely streaming off an Apple TV, so I’m not entirely sure how much, if any, transcoding will be needed. My other issue is that transcoding is uncharted territory for me, so I should probably work on getting a better understanding of how/when it might come into play in my situation.
Everything else you described sounds like it would fulfill what I’m looking for. I don’t plan on solely hosting “mission critical” aspects of my life on this (at least for now while I continue to learn and possibly break things), but it would help me take the training wheels off my bike.
Happy to help. As I have it configured, my local network is set to prefer direct play, so any transcoding gets done from connections that traverse the boundary of my network. If you don’t live with your parents this would likely apply.
Transcoding may also occur when you have subtitled content and I believe for certain audio formats, but the transcoding would be limited to the audio track.
Also, how important is having one do-it-all server vs. a few separate servers? Sounds like you’re ok with at least two servers (Pi turns into HA OS, and you get a new one for everything else).
I wouldn’t say energy usage/efficiency is super high on my list, but I am also not opposed to being somewhat conscious about that. Basically, a little bit extra on my electric bill won’t kill me.
Separate servers is also something I would be fine with. The Pi has been great, and I figured I could keep utilizing it the way I have been with some other services. It is currently running some form of Ubuntu server (can’t remember off the top of my head), and everything is containerized.
Cool! I just got an Orange Pi 5 Plus, 16GB RAM**, but haven’t set it up yet so can’t give any recommendations. On paper though it looks great: significantly beefier than a RPi 4 (my current server), and it supports M.2 NVMe as well. Might be worth looking into for your use too, but the emphasis here is kinda on computing with a very low power budget, so I’m sure you could get more horsepower with e.g. an x64 NUC or similar.
Here’s a review, and note that this is without extra heatsink so it was probably thermally throttling (as was the RPi?): www.phoronix.com/review/orange-pi-5
**I first ordered the 32GB version but it got seized for counterfeit postage, and then some shenanigans ensued. If buying from Amazon I would suggest only buying units in stock and shipped from Amazon. May only apply to US though…
If it’s a static site, you can host that anywhere for free on the big cloud providers: AWS has S3, Microsoft has Azure Blob Storage, and GitHub has Pages, all of which can be configured to serve a site well within the free tiers.
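For the S3 option, the whole deploy can be a few CLI calls. Bucket name and local directory are placeholders; note also that newly created buckets block public ACLs by default, so you'd have to relax the public-access-block settings (or use a bucket policy) before the last step works:

```shell
# Create the bucket, enable static website hosting, and upload the site.
aws s3 mb s3://example-static-site
aws s3 website s3://example-static-site \
    --index-document index.html --error-document error.html
aws s3 sync ./public s3://example-static-site --acl public-read
```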
One of the first services on my server was Nextcloud in a Docker container from lsio. I never had problems, so there was no need to try AIO, but so many people recommend it that it will be my next setup if this one fails me.
I decided to go with this one because it’s now the official distribution channel and supported by the devs. But the lsio one looks pretty solid as well.
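If it helps, a compose file for the lsio image is short; the host paths, IDs, and timezone below are assumptions you'd adjust:

```yaml
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /srv/nextcloud/config:/config
      - /srv/nextcloud/data:/data
    ports:
      - 443:443
    restart: unless-stopped
```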
So next I’d be checking logs for SATA errors, PCIe errors, and ZFS kernel module errors: anything that could shed light on what’s happening. If the system is locking up, could it be some other part of the server with a hardware error? Bad RAM, out of memory, a bad or full boot disk, etc.
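Concretely, these are the kinds of commands I'd start with (the pool name and device path are placeholders for whatever your system uses):

```shell
journalctl -k | grep -iE 'ata[0-9]|pcie|aer'  # kernel log: SATA link resets, PCIe AER errors
dmesg | grep -iE 'zfs|zio'                    # ZFS kernel module complaints
zpool status -v tank                          # per-device read/write/checksum error counters
smartctl -a /dev/sda                          # SMART health for a suspect disk
free -h                                       # out of memory?
df -h /                                       # full boot disk?
```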