As a general rule: one system, one service. That system can be bare metal, a VM, or a container. Keeping things isolated makes maintenance much easier. Sometimes it makes sense to break the rule; just do so for the right reasons and not out of laziness.
Your file server should be its own hardware. Don't make that system do anything else. Keeping it simple means it will be reliable.
Proxmox is great for managing VMs. You could start with one server and add more to a cluster as needed.
It's easy enough to set up WireGuard for roaming systems that you should. Make a VM for your VPN endpoint and off you go.
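Something like this is all it takes; a rough sketch, where the addresses, port, and keys are placeholders you'd swap for your own:

# generate the server keypair on the VPN VM
wg genkey | tee server.key | wg pubkey > server.pub
# minimal /etc/wireguard/wg0.conf
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# one block per roaming device
PublicKey = <that device's public key>
AllowedIPs = 10.8.0.2/32
EOF
# bring it up now and at every boot
wg-quick up wg0
systemctl enable wg-quick@wg0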
I'm a big fan of automation. Look into Ansible and Terraform. At the very least, consider Ansible for updating all your systems easily; that way you're more likely to do it often.
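For the update case, a minimal sketch (the inventory hostnames are made up, and it assumes Debian/Ubuntu targets):

cat > inventory.ini <<'EOF'
[homelab]
nas.lan
proxmox.lan
pihole.lan
EOF
# ad-hoc apt upgrade across every box, escalating with sudo (-b)
ansible homelab -i inventory.ini -b -m apt -a "update_cache=yes upgrade=dist"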
One service per system is very bad practice. You should run a bunch of services with Docker Compose. If you have enough resources to warrant 3 VMs, you could set up a swarm.
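A minimal sketch of what that looks like (the two services and host ports here are just examples):

cat > compose.yaml <<'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
EOF
docker compose up -d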
I have an x86 Proxmox setup. I stuck a Kill A Watt on it. Keep your Pi setup if it does what you want, and realize that there's someone out there who is jealous of your power bill.
My x86 Proxmox box consumes about 0.3 kWh a day at around 15% average load. I've only had the Kill A Watt on it for a day, so I don't know how accurate that is, but it shouldn't be too far off.
My current file server, an old gaming rig, consumes 100 W at idle.
I'm considering a TrueNAS box running either 2.5" SSDs or NVMe sticks (my storage target is under 8 TB, and that's including 3 years of projected growth).
Holy crap! I have an N100 SFF that consumes 5-6 W at idle (with WiFi on) and an old i5 (6th gen, I think) that consumes 30 W at idle. Your rig is definitely not meant to act as a server (unless you want to mine bitcoins or run BOINC…)
$1/day? At 100 W average power usage, that's 2.4 kWh per day, suggesting that where you live the price is about 41.7 cents per kWh, roughly double California's.
Is electricity that expensive where you live?
Edit: it's been a while since I lived in the Bay Area; I hadn't realized the electricity price now ranges from 38-62 cents per kWh, depending on rate plan and time of day.
Depends on what your server is running. Multiple GPUs, HDDs, and other fun items add up to well over 100 W. I justify it by using the heat to keep my 3D printer filament dry.
If you have multiple GPUs in your home server you’re probably doing it wrong. But even then, at idle, with no displays connected, the draw will be surprisingly low.
Most systems with a SATA SSD or NVMe drive, 2-4 DIMMs, and maybe a hard drive or two should idle closer to 50-60 W.
Newer CPUs tend to use a good chunk more power under low loads than some older ones. Going from 1st-gen Ryzen to 2nd-gen got me about 20 watts higher total system power draw with my use case, and 3rd-gen is even worse.
Intel is MUCH worse at it than AMD, but every generation AMD keeps cranking up those boost clocks and power draw, and it really can make a difference at low to mid-range loads.
My Ryzen 3000 based system uses about 90 watts at “idle” with all my stuff running and the hard drives on.
It's probably more about aggressive default BIOS settings. Tweak your C-states, BIOS overclocking, PCIe power management, and Windows power management features. Idle power has gone down on most chips.
A Ryzen 3000 should truly idle closer to 20-30 W.
That is after tweaking BIOS settings. Originally I was at around 100 watts, now I'm closer to 80.
Keep in mind that’s with a bunch of hard drives, and it’s not a 100% idle, more of a 90% idle which is where modern “race to idle” CPUs struggle the most.
I'll freely admit to skimming a bit, but yes, Proxmox can run TrueNAS inside of it. Proxmox is powerful but might be a little frustrating to learn at first. For example, by default Proxmox expects to use the boot drive for itself, and it's not immediately clear how to change that so the disk can be used for other things.
The Noctua NH-D15 is overkill for that CPU btw, unless you're doing an overclock, which I wouldn't recommend for server use. What are your plans for the 1060? If using Proxmox, you'll want to get one of the "G" series AMD CPUs so that Proxmox binds to the APU; then you should be able to do GPU passthrough on the 1060.
I'd planned on using the GPU for things like video transcoding (which I know it's probably way overkill for). Perhaps something like Stable Diffusion to play around with down the line? I'm not entirely sure. I do know that, since the CPU isn't a G series, the GPU will need to be plugged in at least if/when I need to put a monitor on it. Laziness suggests I'll likely just end up leaving it in there, lol. As far as the NH-D15, yeah, that's outrageously overkill, I know, and I may very well slap the stock cooler on it and sell the NH-D15.
I have a Proxmox box with an R5 4600G; even under extreme loads the stock cooler is fine. Honestly, once Proxmox is set up you don't need a GPU. The video output of Proxmox is just a terminal (Debian), so as long as things are running normally you can do everything through the web interface, even without the GPU. I do highly recommend a second GPU (either a G series CPU or a cheap GPU) if you want to try Proxmox GPU passthrough. I've done it and can say it is extremely difficult to get working reliably with just a single GPU.
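For reference, the host-side prep for passthrough is roughly this; a sketch for an AMD board (use intel_iommu=on on Intel, and edit /etc/default/grub by hand if your default line differs):

# enable IOMMU at boot
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"/' /etc/default/grub
update-grub
# load the VFIO modules at boot so the GPU can be handed to a VM
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF
reboot
# afterwards, confirm IOMMU came up
dmesg | grep -i iommu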
Yeah, I'd definitely considered the fact that I can probably just take the GPU out as soon as Proxmox is set up. The only thing I'd leave it in for is transcoding, which may or may not be something I even need or want to bother with.
Have you considered keeping them on YouTube but unlisted, so that they don't show up on your profile or in YouTube searches?
Otherwise, you could create a Google Photos album, but either quality suffers, or the videos will take a lot of space.
All the other options I could suggest either call for a recurring payment (i.e. a VPS with PeerTube or similar), and trust me, that gets tedious after a while; or lose a lot of quality (i.e. WhatsApp or Telegram channels/groups); or quickly become impractical (i.e. Mega, Dropbox…)
There are plenty of choices, and if you're 100% sure you're fine with recurring payments and having to constantly maintain a system and keep it updated and secure, then go ahead and get a VPS. But if you'd rather have it be convenient, look into additional YouTube settings or common alternatives like Vimeo.
Yup, this is the answer. If they need to be able to open the video with just the link, there's functionally no difference between self-hosting and YouTube unlisted. Just a lot less effort.
Another option is to make the YouTube video private. Then you have the option to only share it with specific people. If it's unlisted, anyone with the link can view it.
Hosting on a VPS will get expensive. 4K video takes up a lot of space. If you want adjustable quality, you will need to store multiple copies of the video at various resolutions and bitrates; a cheap VPS won't have a GPU to do real-time transcoding.
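If you pre-transcode instead, it's a couple of ffmpeg runs per video; a sketch, with the heights and CRF values as arbitrary picks:

# scale=-2:720 keeps the width even, as x264 requires
ffmpeg -i input.mp4 -vf scale=-2:1080 -c:v libx264 -crf 22 -c:a aac -b:a 128k out_1080p.mp4
ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -crf 23 -c:a aac -b:a 128k out_720p.mp4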
That wouldn't surprise me. I'm sure they don't want people using YouTube as their own private video archive; storage isn't free, after all. If they didn't want people to set videos to private, though, they would have removed the option. Just don't expect the videos to stay there forever.
The more replies like this I get, the more I'm inclined to set up a second computer with just TrueNAS and let it do nothing but handle storage. I assume that would then be usable by the server running Proxmox with all its containers and whatnot.
If you want to learn ZFS a bit better, though, you can just stick with Proxmox. It supports ZFS; you just don't get the nice UI that TrueNAS provides, meaning you've got to configure everything manually, through config files and the terminal.
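"Manually" isn't that bad, though; a sketch of a mirrored pool (the disk IDs are placeholders, and ashift=12 assumes 4K-sector drives):

zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs create -o compression=lz4 tank/media
zpool status tank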
You can run Virtual Machines and containers in TrueNAS Scale directly. The “Apps” in TrueNAS run in K3s (a lightweight Kubernetes) and you can run plain Docker containers as well if you need to.
TrueCharts provides additional apps and services on top of the official TrueNAS supported selection.
I used Proxmox a lot before TrueNAS, at work and in my homelab. It's great, but the lack of Docker/containerd support made me switch eventually. It is possible to run Docker on the same host as Proxmox, but in the end everything I had was running in Docker, which made most of what Proxmox offers redundant.
TrueNAS has been a better fit for me at least. The web interface is nice and container based services are easier to maintain through it. I only miss the ability to use BTRFS instead of ZFS. I’ve had some annoying issues with TrueCharts breaking applications on upgrades, but I can live with the occasional troubleshooting session.
What happened is that people realized what I've been saying forever: that the RPi and others are a money grab because of all the required accessories, while a mini PC will get you way more power, stable hardware, a case, a power supply, and everything in between for the same price (if you go second hand). Here are examples of such posts: lemmy.world/comment/5357961, lemmy.world/comment/4696545
For example, for 100€ you can find an HP Mini with an 8th-gen i5 + 16 GB of RAM + 256 GB NVMe that obviously has a case and a LOT of I/O, has PCIe (M.2), comes with a power adapter, and outperforms an RPi 5 in all possible ways. Note that the 8 GB RPi 5 will cost you 80€ + case + power adapter + cable + bullshit adapter + SD card + whatever other money grab; the Pi just isn't a good option.
Either way, Pis have their use cases; however, in my opinion it was an overhyped product that sits in the middle of a market:
They tried to make the Arduino easy by adding an operating system and high-level programming languages such as Python. It never made much sense; why would you want GPIOs directly on a "computer"? Not reasonable at all. Nowadays we're seeing a rise of ESP32 devices that have 30-40 GPIOs and WiFi for $2 each. Cheap, easy to develop for and deploy, and eating away at the Pi's market.
Another typical use case for a Pi is a low-power server, but while that's great in theory, it lacks the CPU performance required for the container-based absurdities people want to run, and the I/O sucks. USB was never a good way to connect storage, let alone the USB/network shared bus we had in the past. The new PCIe is questionable (look at the NanoPi M4v2 from 2018) and requires… more adapters.
Price-wise it doesn't make much sense either, because a second-hand x86 machine will be 10x faster at the same price point, and way more stable, with more expansion.
Now it’s all gone x86 and Proxmox
Proxmox isn't a new thing; in fact, it is a pile of crap and questionable open-source that people still run because they haven't discovered LXC/LXD yet. Read more here: lemmy.world/comment/6507871. FYI, you can run LXD on your Pis and get both containers and virtual machines with it, the same way Proxmox people do on x86.
The irony of this comment is that people will shit on me about replacing Proxmox with LXD in the same way they used to when I said that Pis were a money grab and x86 MiniPCs were way better.
I would agree up to a point. If you get a 10th-gen CPU it is power efficient, and there are a lot of gamers and whatnot selling those. Also, there are a lot of mini PCs that come with mobile "T" CPUs that are very decent at idle.
But idle would still run at much more than 15 W. There are some very good Google Sheets compilations of the most efficient x86 CPUs, but once you start factoring in HDDs and SSDs it's only natural to land higher (20-30 W at least). That's at least double an RPi.
I don't (especially DDR3-era stuff), because old server hardware is way more expensive, won't offer any particular advantage, and, compared to new stuff, will use a LOT of power.
Instead, use regular desktop/laptop machines, as they'll probably be more than enough for a homelab. You can get a good 9th/10th-gen Intel CPU and motherboard that is perfect for running servers (very high performance) but that people don't want because it isn't good for playing the latest games. Modern hardware = less power consumption, cheaper, more performance.
If you go really low end, say an i5-6500, it will probably cost around 80€ second hand with RAM. You can use www.cpubenchmark.net/compare/ to compare the server hardware you can get against modern hardware, if you're interested.
Most DDR3-era server hardware comes with RAID controllers/cards and other things that nobody uses anymore; people have moved on to software RAID, be it BTRFS or ZFS, and you will want to do the same. Servers make a lot of noise (impractical for a home), and a CPU from that era will draw around 150-200 W; you can get a recent i5 with more performance that runs around 50 W.
Another thing to consider if you're trying to build a NAS: get a basic motherboard with 4 SATA ports and then add a PCIe card with 5 more SATA ports; it will be much cheaper than any server hardware. Use BTRFS as your filesystem, with its RAID if needed. Now you may be thinking something like "I want a faster CPU in order to have fast SMB". Just don't; your gigabit network will saturate before an i5-6500 or any mechanical drive does, and when that happens you'll be at something like 10-20% CPU usage. Don't waste your money.
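Creating that filesystem is about four commands; a sketch, assuming the data disks land at /dev/sdb through /dev/sde:

# raid1 for both data and metadata across the four disks
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkdir -p /mnt/nas
# mounting any member device mounts the whole multi-device filesystem
mount /dev/sdb /mnt/nas
btrfs filesystem usage /mnt/nas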
Thank you, really appreciate the advice. I was just struggling to install Proxmox on a new machine, and you made me take a step back. The kernel is messed up; do I really want this? Why am I jumping through hoops for this when Debian installs with zero issues? I'll be trying the container software you mentioned instead.
I've done the same thing the person you replied to is suggesting for around 10 years now. It works very well for a home user because parts etc. are readily available. Most hypervisors will run on x86/amd64 hardware without issue. Check out something other than Proxmox; LXC is one suggestion. If you're going to stick with Debian, look into Samba with BIND to ensure ease of sharing and cross-platform integration.
Another reason to not get an old server is power, noise and thermals. They’re designed to live in an air conditioned room. Anyone who works in server rooms for any length of time will tell you to wear ear protection.
people will shit on me about replacing Proxmox with LXD
From reading your comments I understand why. It's in your delivery. You're abrasive, and you don't explain why. You're also telling people not to use something they know, to use something they don't know, without explaining how that would be beneficial. As far as I can see, you've only explained how LXD, when set up correctly, can do what Proxmox does.
You’re essentially telling people to use something that is at best a side grade for reasons, and being salty about it.
I wrote dozens of posts replying to every single question people had about LXD/Incus. I gave out screenshots, explained how it works and what it does, described useful features, and pointed out multiple issues with Proxmox. I can show you what roads you can take and why, but you must do the work yourself.
The same applies to the mini PC vs Raspberry Pi discussion, where my price, performance, and feature breakdowns proved countless times that for a large number of use cases a mini PC is better. Unsurprisingly, this is the first of those breakdowns that got upvotes, and do you know why? Because a known YouTuber in this space recently came out with a video saying the exact same things I've been saying, and now it has become "acceptable" to criticize the Raspberry Pi money grab.
to use something they don't know, and not explaining how that would be beneficial … you've only explained how LXD, when set up correctly, can do what Proxmox does
Even if that were true, what's the issue then? Isn't it obvious that a truly open-source solution, available in Debian's repos from a fresh install, is better than a half-proprietary solution that asks you to buy a license at every turn? Use your common sense.
Besides, my comments aren't a marketing campaign. There's no "LXD will make you rich today and solve all your family drama" as soon as you complete our three-step formula:
apt install lxd
lxd init
lxc launch images:debian/12 debian-container
The advantages of using LXD/Incus are in the details, not in flashy, shiny features. It's about running a clean Debian system with a kernel that isn't twisted and mangled into conflicting with everything and failing to run things like OpenVPN properly. It's about the license, the tools, not depending on a company, not having to wait 3x as long before your cluster is online. It's about having a decent API for once, and so much more.
Most people say they don't want to be put in the same situation they were in with the CentOS/RedHat licensing change, but then they proceed to replace CentOS with Ubuntu and still use Proxmox. All questionable open-source that is as likely to fuck you over as RedHat did.
So eventually there will be a video from some youtuber stating that LXD/Incus is much better than Proxmox and people will flock to it without questioning anything. :)
I need everything to be fully but securely accessible from outside the network
I wouldn't be able to sleep at night. Who is going to need access from outside the network? Would setting up a VPN be good enough for you?
The more stuff visible on the internet, the more you have to play IT admin to keep it safe. Personally, I don't have time for that. The safest and easiest system to maintain is one where possible connections are minimized.
I sometimes travel for work, for example, and need to be able to access things remotely to take care of stuff while I'm away and the girlfriend is home, or when she's with me and someone else is watching the place (I have a dog that needs pet-sitting). I definitely have the time to tinker with it. Patience may be another thing, though, lol.
Tailscale would allow you access to everything inside your network without having it publicly accessible. I highly recommend that since you are new to security.
It's not clear to me how Tailscale does this without being a VPN of some kind. Is it just masking your IP and otherwise forwarding packets to your open ports? Maybe also auto-blocking suspicious behavior if someone is clearly scanning or probing for vulnerabilities?
It is a VPN of some kind: a mesh VPN built on WireGuard. I haven't looked into it too much, but as far as I know its main advantage is simplifying the setup process, which in turn reduces the chances of a misconfigured VPN.
Just want to clarify: after looking at Porkbun's DNS offerings, it does not appear they do DDNS either. Is that correct? So they are no better than SquareSpace for that service. Porkbun does have an API, though.
It looks like Namecheap has DDNS support (at least I get valid-looking results when I search for that on their website).
I haven't changed registrars in 10+ years. I am in the same boat re: Google -> SquareSpace. Is DDNS deprecated in favor of APIs across the board? It looks more complicated to set up.
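Not everywhere, but the API route is less work than it looks: a cron job that pushes your current IP to the registrar. A rough sketch only; the endpoint path and JSON field names here are illustrative (check your registrar's API docs before trusting them), and example.com / "home" are placeholders:

# look up our current public IP, then push it to the A record
IP=$(curl -s https://ifconfig.me)
curl -s -X POST "https://api.porkbun.com/api/json/v3/dns/editByNameType/example.com/A/home" \
  -H "Content-Type: application/json" \
  -d "{\"apikey\": \"YOUR_KEY\", \"secretapikey\": \"YOUR_SECRET\", \"content\": \"$IP\"}"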
Pro tip: If you use Porkbun, don’t leave your domain’s authoritative DNS with Porkbun nameservers.
Over the year or so I had my stuff configured this way, on at least one occasion (that I know about… I was still setting up my observability stack during this year), the servers were flapping hard for over a day, causing my records to magically vanish from existence intermittently.
I tried contacting them every way I could, hell I even descended into the quagmire of Twitter and created an account so I could tweet at them… and got silence.
Pretty disappointing. I ended up moving all my DNS to AWS Route 53 after a few hours of pulling out my hair. They did eventually respond to my email like a day later, after I’d already moved everything over.
But IDK, maybe I'm wrong to expect an indie domain registrar to have super-high availability on their nameservers… oh well.
There is no need to fire up a dedicated machine for this. Use your router/AP running OpenWrt and connect an HDD via USB. The machine needs at least 128 MB of RAM (256 MB would be better). Install the transmission package, set it up, add a gig of swap space on the HDD, and you are good to go. The AP runs 24/7 anyway, so there will be very little extra power consumption. VPNs often don't allow port forwarding (Mullvad stopped supporting it recently, if I remember correctly). You can just be a passive node and not open ports; that should work well enough. Consider seeding parts of Sci-Hub; it's a project worth supporting, IMHO.
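On the OpenWrt side that's roughly this (a sketch; package names vary a little between releases, and /mnt/hdd is wherever the disk is mounted):

opkg update
opkg install transmission-daemon block-mount kmod-usb-storage kmod-fs-ext4
# a 1 GB swap file on the HDD to pad out the small RAM
dd if=/dev/zero of=/mnt/hdd/swapfile bs=1M count=1024
mkswap /mnt/hdd/swapfile
swapon /mnt/hdd/swapfile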
You can just download one of the parts below with fewer than 12 seeds and set it to seed without a ratio limit:
I am by no means an expert, but my current solution is a spare Raspberry Pi running a Docker container with qBittorrent+VPN that sits plugged into my router. I like to think of it as my first step towards getting my shit together and building a full *arr stack.
PS: if you're new to this and muddling through, I am happy to send you my notes and the Docker Compose file. The only thing I had to do outside of that was mount a network folder so that it downloads straight onto my server and not locally on the Pi.
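For a taste, a common pattern routes qBittorrent through a gluetun VPN container; this sketch is that generic pattern, not necessarily my exact file (the provider, key, and paths are placeholders):

cat > compose.yaml <<'EOF'
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - WIREGUARD_PRIVATE_KEY=YOUR_KEY
    ports:
      - "8081:8081"   # qBittorrent web UI, published via the VPN container
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8081
    volumes:
      - /mnt/nas/downloads:/downloads
EOF
docker compose up -d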
Definitely does the job… I have a Plex server that a lot of family and quite a few friends use. It used to be that every time someone had a request, I would walk over to my desktop, find a torrent, wait for it to finish, and copy it over the LAN to my NAS running Plex, and there might be days before I even remembered to fulfill the request. Now I get a message, immediately pull up the qBittorrent web UI from my phone, paste whatever they asked for into the built-in search, click add, and reply "will be in Plex in 10-15 minutes".
Now I want a fully automated *arr stack with one of those tools that lets people make their own requests and have it auto-pirate… So instead of them sending me request messages, I'll open my Plex to watch TV, see something I've never heard of under "recently added", guess who requested it, and text them "hey, was that you? Thanks for the new movie/TV show, I love it".
Got so carried away I didn't answer your actual question. Yes, good speeds, but then again the sucker is hooked up to gigabit fiber. But also, my speed is usually not the bottleneck anyway, I think.
Haha, I actually like that you got carried away. You have a nice system :) I definitely want something similar. With gigabit fiber, yeah, you will hit whatever cap is on the Pi board, and that's still plenty.
Not sure why you need a new router for Pi-hole. If your machines all point to the Pi-hole for DNS, it works. The router has almost nothing to do with what provides DNS, other than maybe having its DHCP config hand out the Pi-hole as the DNS server.
Even then, you can set up the Pi-hole to be both DHCP and DNS (which helps with local name resolution anyway), and then just turn off DHCP on your router.
As I understand it, Tailscale and Nginx fulfill the same requirements. I lean toward TS myself; I like how administration works, and how it's a virtual network instead of an inbound VPN. Devices just see each other on this network, regardless of the physical network they're connected to. That makes it easy to use the same local-network tools you normally use. For example, you can use just one sync tool, rather than one inside the LAN and another that can span the internet. You can map shares right across the virtual network as if it were a LAN. TS also lets you access devices that can't run TS, such as printers, routers, and access points, by enabling its Subnet Router feature.
Tailscale also has a couple of features (Funnel and Share) which let you, respectively, provide internet access to specific resources for anyone, or let foreign Tailscale networks access specific resources.
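The Subnet Router bit is basically two flags; a sketch assuming your LAN is 192.168.1.0/24 (you also have to approve the route in the admin console, and on Linux enable IP forwarding):

# on the box that will relay traffic into the LAN
tailscale up --advertise-routes=192.168.1.0/24
# on each roaming client, accept the advertised routes
tailscale up --accept-routes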
I see Proxmox and TrueNAS as essentially the same kind of thing: they're both hypervisors (virtualization hosts), with TrueNAS adding NAS capability. So I can't think of a use case for running one on the other (TrueNAS has some docs around virtualizing it; I assume the use case is a test lab. I wouldn't think running TrueNAS, or any NAS, virtualized is an optimal choice, but hey, what do I know?).
While I haven't explored both deeply, I lean toward TrueNAS, but that's because I need a NAS solution and a hypervisor, and I've seen similar solutions spec'd many times for businesses; I've seen it work well. Plus, TrueNAS as a company seems to know what they're doing; they have a strong commercial arm with an array of hardware options. This tells me they are very invested in making TrueNAS work well, and that they do a lot of testing to ensure it works, at least on their hardware. Having multiple hardware products requires both an extensive test group and a support organization.
Proxmox seems equivalent, except they do just the software part, as far as I’ve seen.
Two similar products for different, but similar/overlapping use-cases.
Best advice I have is to make a list of Functional Requirements, abstract/high-level needs, such as “need external access to network for management”. Don’t think about specific solutions, just make the list of requirements. Then map those Functional requirements to System requirements. This is often a one-to-many mapping, as it often takes multiple System requirements to address a single functional requirement.
For example, that “external access” requirement could map out to a VPN system requirement, but also to an access control requirement like SSO, and then also to user management definitions.
You don’t have to be that detailed, but it’s good to at least have the Functional-to-System mapping so you always know why you did something.
You make a very good argument for Tailscale, and I think I’ll definitely be looking deeper into that.
I like your suggestion to map out functional requirements, and then go from there. I think I’ll go ahead and start working on a decent map for that.
As far as the new router for Pi-hole: my super-great, wonderful, most awesome ISP (I hope the sarcasm is evident, haha; the provider is AT&T) dictates that I use their specific modem/router (not optional), and they also do not allow me to change DHCP settings on that mandated hardware. So my best option, as far as I've seen, is to put the ISP's box in pass-through mode with a better router behind it that I can actually set up to use Pi-hole.
Thank you for your thoughts and suggestions! I’m going to take a deeper look at Tailscale and get started properly mapping high-level needs/wants out, with options for each.
Ya don't need AT&T's modem. Some copypasta I've put together:
If it's fiber, you don't need the modem, though you'll still need it on hand once every few months (see below).
Things you’ll need:
your own router
cheap 4-port switch (gigabit preferred)
Setup: Connect the GPON (the little fiber converter box they installed on the wall near the modem) WAN to any port on the 4-port switch, then run another cable from the switch to the GPON port of the modem (usually a red or green port). Make sure the modem fully syncs. Once this happens, you can move the cable from the modem over to your own router's WAN port. Done! Allow the router a few moments to sync as well.
Now, every once in a while they'll send a line-refresh signal that will break this, and so will a power outage. In that case, just plug their modem back in, move the cable back to the modem's GPON port, wait for sync, then move the cable back to your router.
Bonus: Hook up all this to a battery backup and you’ll have Internet even during power outages, at least for a while.
Huh, this is interesting, I’ll have to take another look into this. Thanks for the lead!
And I do have a UPS, and it is, indeed, pretty glorious that my internet, security cameras, and server all stay online for a good bit of time after an outage, and don’t even flinch when the power is only out briefly. Convenience and peace of mind. Well worth a UPS.
Since their modem is handing out DHCP addresses, is there any reason why you couldn’t just connect that cable to your router’s internet port, and configure it for DHCP on that interface? Then the provider would always see their modem, and you’d still have functional routing that you control.
Since consumer routers have a dedicated interface for this, you don't have to make routing tables to tell it which way the internet is; it already knows it's all out that interface.
Just make sure your router uses a different private address range for your network than the one handed out by the modem.
So your router should get DHCP and DNS settings from the modem, and will know it's the first hop to the internet.
I do this to create test networks at home (my cable modem has multiple ethernet ports), using cheap consumer wifi routers. By using the internet port to connect, I can do some minimal isolation just by using different address ranges, not configuring DNS on those boxes, and disabling DNS on my router.
Their modem is my router; it’s both. That’s why I need a new one, to do exactly as you’re describing (is my understanding, although another post here suggests otherwise).
Yea, they all suck that way. I still use my own router for wifi. It’s just routing, and your own router will know which way to the internet, unless there’s something I don’t understand about your internet connection. See my other comment below.
Yea, requirements mapping like this is standard stuff in the business world, usually handled by people like Technical Business/Systems Analysts. Typically they start with Business/Functional Requirements, hammered out in conversations with the organization that needs those functions. Those are mapped into System Requirements. This is the stage where you can start looking at solutions, vendor systems, etc, for systems that meet those requirements.
System Requirements get mapped into Technical Requirements; these are very specific: CPU, memory, networking, access control, monitor size, every nitpicky detail you can imagine, including every firewall rule, IP address, and interface config. The System and Technical docs tend to run 100+ and several hundred lines in Excel, respectively, as the Technical Requirements turn into your change-management submissions. They're the actual changes required to make a system functional.
The only reason SBCs were ever relevant is the excellent pricing, which has now been matched by used x86 computers. That, and when the SBC has an open-source design/implementation (open schematics, RISC-V).
I’ll probably reconsider once renewal comes around, but that’s ~4 years away. Until then, as long as things continue functioning: meh. Doesn’t really make a difference.
Return it for a refund or replacement. If you're even slightly concerned about WD giving you trouble, but know eBay/the seller won't, just go that path while it's still available.
Yeah, I'm guessing this is the easiest option to just get my money back. Appreciate it, and I'll update the post with what I go with. I already have another drive that I tested and works, so I'm not desperate for now.