I use Tdarr on my gaming machine and let the higher-end GPU do the work. I also use the TRaSH guide for getting the audio profile I want in my downloads. Then in Tdarr, I strip away the audio and subtitle languages I don’t want and use the highest-quality audio source to add a simple 2-channel track, which makes the file compatible with more devices. That way I’m not needlessly transcoding 5.1 Dolby for people who are just watching on TV speakers.
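If you’re curious what that flow looks like outside of Tdarr, here’s a rough sketch with plain ffmpeg, assuming stream 0:a:0 is the main surround track you want to keep (check yours with ffprobe first):

```
# keep video + main audio, map the main audio a second time,
# and turn that second copy into a 2-channel AAC downmix;
# the trailing ? keeps English subs only if any exist
ffmpeg -i in.mkv \
  -map 0:v:0 -map 0:a:0 -map 0:a:0 -map "0:s:m:language:eng?" \
  -c copy -c:a:1 aac -ac:a:1 2 \
  out.mkv
```

Tdarr is basically automating that decision per file across the whole library.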
Yep, this is a good option for reducing file size at the expense of compatibility and CPU time. Every time OP downloads a file they’ll then have to re-encode it, which can take significant time depending on the CPU of their NAS box, the file size, etc. It’s also worth noting that re-encodes are lossy, so some amount of quality will be lost (although the difference may be imperceptible).
If disk space is the only variable we’re optimizing for, then you’re 100% correct, but I think it’s worth calling out that this definitely isn’t without tradeoffs.
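For a sense of what that re-encode step actually is, here’s a typical CPU-bound ffmpeg/libx265 invocation; the CRF and preset values are just illustrative starting points, not recommendations:

```
# re-encode video to h265, stream-copy all other tracks unchanged
ffmpeg -i in.mkv -map 0 -c copy -c:v libx265 -crf 22 -preset medium out.mkv
```

Lower CRF means higher quality (and bigger files); slower presets squeeze out more compression for the same quality.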
It might also be worth considering how they’re consuming this media. If the client can’t play h265 natively, the server will need to transcode it again on the fly for playback. Many media servers (like Plex) handle this automatically, but it’s definitely worth testing with your setup on a couple of files before doing this to your whole media collection.
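One quick way to spot-check a file (assuming ffprobe is installed; movie.mkv is a placeholder):

```
# print the codec of the first video stream, e.g. "h264" or "hevc"
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 movie.mkv
```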
Cloudflare tunnels are layer 7, so it’s not unlimited access by any means. This also means that certain things will break, btw; for example, if your website uses websockets to load information, that isn’t supported.

Next, I’d put the computer that’s going to be hosting into an isolated VLAN of its own and access it via the external URL only.

If you’re going to use Docker images, make sure to vet that they’re updated often, and always spin up the latest.
That document doesn’t say which layer, but it does say it supports websockets.

Just odd that when I try to set it up using a named tunnel, I don’t get an option to specify a WS service type. However, it does require a service type if you want to connect to it.

Looking at this page, it would seem that it’s layer 7. I could be wrong, but my front-end app has issues finding my backend service over websockets.

Granted, I even tried to connect to my private computer using other protocols and couldn’t get through. Anyway, I’m most likely going to be taking that project offline soon.
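For reference, this is the shape of named-tunnel config I mean (UUID, hostname, and port are placeholders); as far as I can tell, websockets are just supposed to ride over the plain http service type rather than having a type of their own:

```yaml
# config.yml for a named cloudflared tunnel
tunnel: <tunnel-uuid>
credentials-file: /root/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:3000   # websocket upgrades go over http
  - service: http_status:404         # required catch-all as the last rule
```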
No, but I thought I clarified that when I said it’s basically a WireGuard VPN, which operates over UDP (layer 3). Layer 7 is stuff like HTTPS; CF tunnels are lower level.

The page you linked is missing the layer between CF and the source server, so it doesn’t indicate the layer. You can look up the WireGuard protocol if you want more details.
Be sure to avoid “remux” quality. I didn’t know what this meant at first - it’s a 1:1 copy of the source with no re-encoding, so even “low-resolution” video files can be truly massive. A 1080p movie should be between 2GB and 10GB or so; I’ve found that remuxes are typically 15GB-50GB, or even larger.
So, what you said - it’s a 1:1 copy of the source. With no compression. Which is what I said, as far as I can tell?
What I don’t understand is why the article says it allows for smaller file sizes, when I’ve found without fail that remuxes are the largest variety by far. It made sense to me that a file produced without compression would be larger than the same file, compressed.
It can save space by excluding streams that you don’t need. For instance, I don’t need French, Italian, Japanese, and German 5.1 audio streams that each run 700 kbps or higher, nor do I need an English 1.5 Mbps master audio stream, a 700 kbps English stream, a 500 kbps descriptive audio track for the blind, and 5 different special-edition commentary tracks for a film I’ll watch once or twice. All those tracks really add up, and torrent sites are often country- or language-specific, so remuxes might have original-language and/or native-language audio only.
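That stripping is the “remux” part, and it’s cheap because nothing gets re-encoded. A sketch, assuming the tracks are language-tagged correctly:

```
# copy video and English audio as-is; everything else is dropped
ffmpeg -i full_remux.mkv -map 0:v -map "0:a:m:language:eng" -c copy slimmed.mkv
```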
Ah, it looks like we have a small misunderstanding. I thought you were talking about uncompressed video, which is enormous; that only really exists on the wire, in HDMI cables for example. A 1080p60 uncompressed stream is about 2.98 Gbit/s (1920 × 1080 pixels × 24 bits × 60 fps), which works out to roughly 1.34 TB per hour.
A remux is “uncompressed” in the sense that it isn’t recompressed, or in this case transcoded. A remux is still compressed, just to a lesser degree than a transcode. This means the files are indeed larger, but the quality is also better than transcodes.
To clarify the article’s confusing statement: they claim that remuxes can reduce size by throwing away some audio streams, while keeping the original video. This is true, but the video itself hasn’t gotten any smaller: you are simply throwing away other information.
Most “VPN” browser extensions (if not all of them) aren’t actually doing a VPN connection; they just change the proxy setting in the browser. As browser extensions, they don’t have enough permissions to establish a real VPN connection.

So if you want to use a browser extension, you have to run a proxy server, or, as others said, just use cloudflared, since an exposed proxy server attracts bots from all over the world.
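If you do go the proxy route, one low-effort option is a SOCKS tunnel over ssh to a box you already control, rather than exposing a standalone proxy daemon (user@your-server is a placeholder):

```
# open a local SOCKS5 proxy on port 1080, tunneled through ssh
ssh -N -D 1080 user@your-server
```

Then point the extension (or the browser’s own proxy setting) at localhost:1080.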
You can do this with custom formats. You’d want to create a custom format that gives a score if the file is below a certain size threshold (say 1.5GB per hour), then add minimum custom format scores to the quality profiles you use (e.g. Bluray-1080p). You can also add custom formats for release groups that prioritise file size; YTS, for example, keeps their releases as small as possible.
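For reference, a custom format in the TRaSH-style import JSON looks roughly like this; both field layouts are from memory, so compare against an export from your own Radarr before trusting them:

```json
{
  "name": "Compact releases",
  "includeCustomFormatWhenRenaming": false,
  "specifications": [
    {
      "name": "Small release group",
      "implementation": "ReleaseGroupSpecification",
      "negate": false,
      "required": false,
      "fields": { "value": "^(YTS|YIFY)$" }
    },
    {
      "name": "Under size limit",
      "implementation": "SizeSpecification",
      "negate": false,
      "required": false,
      "fields": { "min": 0, "max": 10 }
    }
  ]
}
```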
In the profiles? Not sure if that’s the correct term, but there are settings for all the quality levels (720p, 1080p, etc.) with a slider that can set the minimum and maximum size for each quality.
Thank you! I ended up connecting them directly to the main board and had the same result with rsync: eventually the zpool becomes inaccessible until reboot (of course, there may be other ways to recover it without rebooting).
I had the same problem as OP. My solution was to port forward to my server but then block connections from all IP addresses except my work’s, which I added to an allowlist.
It’s working well so far, but I think the Cloudflare tunnel is the better option.
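On a Linux host that allowlist is just a couple of firewall rules. A sketch with ufw, where 203.0.113.45 stands in for the work IP and 443 for the forwarded port:

```
# drop all inbound traffic by default, then allow only the work IP
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.45 to any port 443 proto tcp
sudo ufw enable
```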
Figured it out. I had created 3 VMs but was trying to create shared storage between only 2 xcp-ng instances. I assumed this could be done and that the third instance would act only as a witness, without contributing to the storage.

After going through many replies on the forum thread, I understood that you need a minimum of 3 hosts participating in the storage. I modified my setup to match the requirements (easy, since they’re just VMs) and all the instructions worked correctly.

The error about the missing linstor Python module was because I hadn’t installed the necessary packages on the 3rd host in the pool; I’d assumed the XOSTOR instructions didn’t need to be run on it since it wouldn’t be participating in the shared storage. This is just my understanding, though, and I could be wrong.

Having written the above, I think I can still have only 2 hosts participating in storage and just need to install the necessary packages on the third host as well. Will try and see how it goes.
Like others have said, for a thousand dollars you can get a ton of stuff. For comparison, my latest build cost me around $200 and has about 6TB of raw storage. It runs Proxmox and is paired with a mini PC I bought when I first started. I have btrfs RAID for the system and then a separate controller for a TrueNAS VM. It even has a Blu-ray drive that I picked up second-hand and an RX 590 that had to be cut down to fit in the case.
$1000 can buy you a mini data center with used hardware. I honestly don’t know what to recommend, but whatever you do, make sure it’s flexible down the road so you aren’t locked into stuff from the past. I would go for a beefier CPU with good cooling and plenty of PCIe. Just a note: Intel CPUs work better for video encoding, thanks to Quick Sync.