I just did a quick bing chat search (“does DRI_PRIME work on systems without a cpu with integrated graphics?”) and it says it will work. I can’t check for you because my CPUs all have graphics.
I CAN tell you that some motherboards will support it (my ASUS does) and some don’t (my MSI).
BTW, I’m talking about Linux. If you’re using Windows, there’s a whole series of hoops you have to jump through. LTT did a video a while back.
While it might work in the OS, setting the OS up may be a pain (the installer may or may not work like that), and I strongly suspect that the BIOS can’t handle it.
I suspect that an easier route would be to use a cheap, maybe older, low-end graphics card for the video output and then use DRI_PRIME with that.
It’s probably a pain to set up in Windows. In Linux, it just works, there’s nothing to set up. I’m using it right now.
OP really should have mentioned their OS.
Edit: Actually, never mind both my posts. I know DRI_PRIME works by using my APU for regular desktop activity and routing the discrete GPU’s output through it whenever a game is being played. But I don’t know if it’s possible to make it use the dGPU all the time.
Even if it did, it would only work inside the OS, so if you had to boot into the BIOS for anything, you wouldn’t have a display. So for all intents and purposes, it wouldn’t really work.
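For anyone curious, this is what per-application selection looks like in practice (a sketch; I haven’t verified any of it on a machine without an iGPU, and `glxinfo` comes from the mesa-utils package):

```shell
# DRI_PRIME is just an environment variable read per process, so you
# opt individual programs onto the discrete GPU:
#
#   DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # check which GPU renders
#   DRI_PRIME=1 %command%                          # as a Steam launch option
#
# Nothing else on the desktop is affected; the variable only applies
# to that one process (and its children):
DRI_PRIME=1 printenv DRI_PRIME
```

That per-process scope is exactly why forcing the dGPU for *everything* (including the BIOS) isn’t something DRI_PRIME can do.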
If you are doing high-bandwidth GPU work, then the PCIe lanes of consumer CPUs are going to be the bottleneck, as they generally only provide 16 lanes.
Then there are the Threadrippers, Xeons and all the server/professional-class CPUs that will do 40+ lanes of PCIe.
A lane of PCIe 3.0 is about 1 GB/s (gigabyte, not gigabit).
So, if you know your workload and bandwidth requirements, then you can work from that.
If you don’t need the full 16 lanes per GPU, then a motherboard that supports bifurcation will allow you to run 4 GPUs with 4 lanes each from a CPU that has 16 lanes of PCIe. That’s 4 GB/s (32 Gb/s) per GPU.
If it’s just for transcoding, and you are running into the limitations of consumer GPUs (which I think are limited to 3 simultaneous streams), you could get a pro/server GPU like the Nvidia Quadros, which have a fixed amount of encoding resources but no limit on the number of streams they can process. So one might be able to do 300 fps of 1080p; if your content is 1080p at 30 fps, that’s 10 streams. From that, you can work out bandwidth requirements and see if you need more than 4 lanes per GPU.
I’m not sure what’s required for AI. I feel like it’s similar to crypto mining: massive compute, but relatively small amounts of data.
Ultimately, if you think your workload can consume more than 4 lanes per GPU, then you have to think about where that data is coming from. If it’s coming from disk, then you are going to need RAID 0 NVMe storage, which will take up additional PCIe lanes.
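To make the lane arithmetic above concrete (using the rounded ~1 GB/s per PCIe 3.0 lane figure; the real number is closer to 0.985 GB/s):

```shell
# Rough PCIe 3.0 budget for 4 GPUs bifurcated to x4 each
gb_per_lane=1       # ~1 GB/s per PCIe 3.0 lane (rounded)
lanes_per_gpu=4
gpus=4

echo "$((gb_per_lane * lanes_per_gpu)) GB/s per GPU"
echo "$((gb_per_lane * lanes_per_gpu * gpus)) GB/s aggregate across $gpus GPUs"
```

If your per-GPU data rate stays under that 4 GB/s figure, bifurcation costs you nothing; if not, you’re back to needing more lanes per GPU.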
5? Holy heck, that’s amazing. I remember helping people who had built streaming rigs to use during the pandemic, and wondering why their production was stuttering and having issues with a bunch of remote callers. Some of that work ended up being CPU-bound.
Although, it looks like that patch is for Linux? Not much use if you’re running vMix or some other Windows-only software.
In OP’s case, however, that’s not a problem.
The spirit of self-hosting is trying things and then asking specific questions when you get stuck (and “stuck” includes having no luck with a search engine).
Please let me know what you find for Jellyfin with the arrs and a VPN. I have found that the VPN always interferes with Jellyfin and other stuff, and I haven’t been able to figure out gluetun.
Stuff like this is why I moved my Docker from Unraid to a VM where I can use Docker Compose. Compose is really the only way to get a clean setup with complex stuff like this. That being said, I recommend beginners use Unraid. You don’t need a full VPN for torrents; a SOCKS5 proxy will be fine and doesn’t require any special Docker settings.
My setup uses the Traefik reverse proxy: internal HTTPS (Let’s Encrypt wildcard) and external HTTPS, depending on what I want.
It uses Authentik for single sign-on, which in this case provides LDAP for Jellyfin and also provides web authentication for the arr services.
The gluetun container can be configured with any VPN, and all services can only access the internet via the VPN.
My NAS is Unraid; my Docker host is a VM on Proxmox. Media files are stored on HDDs on Unraid and everything else is on the Docker SSD. Volumes are connected where they need to be via NFS shares.
There are CPU and RAM limits so one container can’t bring everything down.
The containers themselves all communicate via their own docker network and only the reverse proxy (traefik) allows access to the UI.
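A minimal sketch of the gluetun pattern described above (the images are real, but the provider and service names are placeholders - only gluetun gets a network, and the torrent client rides inside its network namespace):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder; set yours
      # provider-specific credentials go here

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # all traffic exits via the VPN
    depends_on:
      - gluetun
```

With `network_mode: "service:gluetun"`, the client has no route to the internet except through the tunnel, so a VPN drop can’t leak traffic.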
For Headscale, I don’t have any direct experience, but unRAID has a decent WireGuard plugin that should get you up and running in a pinch.
And for your self-hosted services (especially Bitwarden), ensure you’re not exposing them on the net; a VPN is the only option I’d recommend. Even so, I prefer to use Bitwarden’s hosting with a family plan, for peace of mind and resiliency. It’s also much easier for my family.
UnRAID is a great place to start - it allows you to scale cheaply as you need and is easier to fix mistakes. Good luck, and happy homelabbing!
I agree Reddit is toxic. I’d argue reddit actually stopped being Reddit around 2016. But it’s posts like this that clog it all up and are partially why it is the way it is today.
I gotta agree with this. The toxicity in any reddit thread increases dramatically when the poster pre-emptively complains about all the toxicity they expect to receive. Whereas when you just ask straight without going into a whole speech about comment quality, you get much better replies. Particularly because it's hijacking your own thread; changing it from whatever question you wanted to ask into an analysis of the comments.
To your point, I clicked on this post hoping to see what OP was going to use and why because I would like to build my own NAS some day. But like you said, this post is a waste of everyone’s time.
No. The video card is only wired to send video out through its ports (which don’t exist), and the ports on the motherboard are wired to the nonexistent iGPU on the CPU.
In Windows you’re not sending the signal directly through another port; you’re sending the dGPU’s signal through the iGPU to get to the port.
On a laptop with Nvidia Optimus or AMD’s equivalent, you can see the increased iGPU usage even though the dGPU is doing the heavy lifting. It’s about 30% usage on my 11th-gen i9’s iGPU routing the 3080’s video out to my 4K display.
IMHO, Duplicacy is better than all of them at all those things - multi-machine, cross-platform, zstd compression, encryption, incrementals, de-duplication.
The paid GUI version is extremely cautious on the auto-updates (it’s basically a wrapper for the CLI) - perhaps a bit too cautious. The free CLI version is also very cautious about making sure your backup storage doesn’t break.
For example, they recently added zstd compression, yet existing storages stay on lz4 unless you force it - and even then, the two compression methods can coexist in the same backup destination. It’s extremely robust in that regard (to the point that if you started forcing zstd compression, or created a new zstd backup destination, you can use the newest CLI to copy the data back to the older lz4 method and revert - just as an example). And of course you can compile it yourself years from now.
The licence is pretty clear - the CLI version is entirely free for personal use (commercial use requires a licence, and the GUI is optional). If you don’t like the licence, that’s fine, but it’s hardly ‘disingenuous’ when it is free for personal use, and has been for many years.
So next I’d be checking logs for SATA errors, PCIe errors and ZFS kernel-module errors - anything that could shed light on what’s happening. If the system is locking up, could it be some other part of the server with a hardware error - bad RAM, out of memory, a bad or full boot disk, etc.?
What cert did you put on the proxy answering the inbound? Usually that error means either the browser doesn’t like the cert, or it’s connecting to 80, and modern browsers really fight you on that sometimes. Also, cache. Clear your cache if you’re bouncing between internal URL/IP and the public.
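One way to see exactly what the browser sees, without the browser (hostnames here are placeholders):

```shell
# Against a live proxy you'd normally inspect the presented cert with:
#
#   openssl s_client -connect proxy.example.com:443 \
#     -servername ha.example.com </dev/null 2>/dev/null \
#     | openssl x509 -noout -subject -dates
#
# The same inspection on a throwaway self-signed cert, just to show
# what to look for - the subject CN must match the name you browse to,
# and the dates must be current:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=ha.example.com" -keyout /tmp/key.pem -out /tmp/cert.pem \
  2>/dev/null
openssl x509 -in /tmp/cert.pem -noout -subject -dates
```

If the subject (or SAN list) doesn’t match the hostname you’re visiting, that’s your browser error right there.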
I assume you just want to expose it to the internet to learn the art of reverse proxying. Otherwise there are better ways.
Mainly I want to expose it so I can access my stuff remotely. What would you recommend otherwise? Traefik looks a lot more difficult to me from the get-go, but I haven’t tried it out yet (because I don’t know where to start). The issue is that I have a basic understanding of Docker/Ubuntu stuff now (or at least I know how to manipulate things the way I want), but basically everything with web and HTTPS is a big black hole for me that I can’t seem to grasp yet.
Yeah, it’s a lot. It’s a very large field, and you’re playing in two or three areas here.
Look at a couple of overlay options. ZeroTier is the one I remember off the top of my head; there are others - Google for alternatives. These use a coordination server. Some are a hosted service, but there are some that you host yourself. These are supposed to be pretty easy. Watch a couple of videos on these and I bet you’ll be fine.
WireGuard offers a more traditional VPN. You can tunnel your device back to your network. Some routers offer a VPN option - there’s OPNsense, DD-WRT, etc. Again, lots of videos.
Since you said you mostly want remote access, I strongly suggest not opening services to the public and using a VPN instead.
You can still learn reverse proxy too, but just do it internally, even though it wouldn’t technically be needed. This will be much safer and learner friendly.
I have ridiculous amounts of services running, but I use gateway router VPN to access most of them.
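For reference, the client side of that kind of WireGuard remote access is only a few lines (all keys, names, and subnets here are placeholders):

```ini
# /etc/wireguard/wg0.conf on the remote device
[Interface]
Address = 10.8.0.2/32
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24    # route only the home LAN through the tunnel
PersistentKeepalive = 25       # keeps the tunnel alive behind NAT
```

Setting `AllowedIPs` to just the home subnet means only traffic for your services goes through the tunnel; everything else uses the device’s normal connection.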
Using a VPN or similar is not really an option, as I have family members accessing it and I don’t want them to always connect through a VPN just to open my garage or access my shopping list. Security-wise I just use 2FA, so I don’t think that’s the issue.
If I close port 8123 and clear my cache, Firefox will warn me; if I click forward anyway, it forwards to a page from my router for some reason, saying that DNS rebind protection has blocked my attempt and that there is some issue with the host header.
Instead of forwarding ha.yourdomain.com to 192.168.178.214 (which I assume is the LAN IP address of your machine), you should forward it to the hostname homeassistant (the hostname of the Home Assistant instance inside your Docker Compose network).
You’re using network_mode: “host”, which makes the container use the host’s networking directly. When you use host mode, the port mappings are ignored because the container doesn’t have its own IP address - it’s sharing the host’s IP. Remove or change the network mode to see if that fixes it.
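Concretely, the difference looks something like this (a sketch - the image tag and port are the common defaults, adjust to your actual compose file):

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    # network_mode: "host"     # <- remove this line
    ports:
      - "8123:8123"            # honored once host mode is gone
```

One caveat: Home Assistant’s device discovery often relies on host mode. If you decide to keep it, point the reverse proxy at the host’s IP on port 8123 instead of a container hostname.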