Go with two NICs. I have a bigger setup that's all running on one LAN, and it's starting to run into problems. A two-network setup from the outset probably would have saved me a lot of grief.
Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊
You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.
So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).
My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:
switch trunk port
enp2s0f0 (physical)
vmbr1 (Linux bridge)
vmbr1.60 (Proxmox server interface)
vmbr1.100 (Proxmox VLAN interface)
virtual guest nic (w/ vlan tag and IP address)
vtnet1 (OPNsense “physical” nic, but actually virtual)
vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)
All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.
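On the Proxmox side, the relevant bits of /etc/network/interfaces end up looking roughly like this (a sketch - the addresses are placeholders, not my real ones, and you may or may not want the bridge VLAN-aware):

```
auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes   # lets guest NIC tags (e.g. tag=100) work on this bridge
    bridge-vids 60 100

# Proxmox server's own address on the infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 10.0.60.10/24
    gateway 10.0.60.1

# Proxmox address on the guest VLAN (optional, see below)
auto vmbr1.100
iface vmbr1.100 inet static
    address 10.0.100.10/24
```

The guests then just get their VLAN tag set on the virtual NIC in the VM/CT config.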
Like I said, it’s a headfuck when you first set it up. Interface-ception.
The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.
I haven’t done it - but I believe Proxmox allows for creating a “backplane” network which the servers can use to talk directly to each other. This would be used for Ceph and server migrations, so that the large amount of network traffic doesn’t interfere with the traffic from your VMs and the rest of your network.
You’d just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.
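I haven't set this up myself, but from the docs it should look roughly like this - a non-routed subnet on the second NIC, plus telling Proxmox to use it for migrations (all values are examples):

```
# /etc/network/interfaces on each node - second NIC, non-routed subnet
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.1/24    # .2, .3, ... on the other nodes
    mtu 9000                 # jumbo frames, optional, if the switch supports them

# /etc/pve/datacenter.cfg - use that subnet for VM migration traffic
migration: secure,network=10.10.10.0/24
```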
In Proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you’d create a bridge or whatever and assign it to nothing. If you assign it to a NIC then, since it wants to use SR-IOV, it will only go as fast as the NIC can go.
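A host-only bridge like that is just a bridge with no ports attached, something like this (address is an example):

```
# /etc/network/interfaces - bridge with no physical ports; guest-to-guest
# traffic stays on the host and isn't limited by any NIC
auto vmbr2
iface vmbr2 inet static
    address 10.20.20.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```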
This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.
With *arr services, try to limit network throughput and disk throughput on them, as if either is maxed out for too long (like when moving big Linux ISO files) it can cause weird timeouts and failures.
I believe I would be fine on the network part; I’m just guessing that writing them to an SSD cache drive on my NAS would be fine? I’m currently writing to the SSD and have a move script run twice a day to the HDDs.
Running arr services on a proxmox cluster to download to a device on the same network. I don’t think there would be any problems but wanted to see what changes need to be done.
I’m essentially doing this with my setup. I have a box running Proxmox and a separate networked NAS device. There aren’t really any changes, per se, other than pointing the *arr installs at the correct mounts. One thing to make note of: I would make sure that your download, processing, and final locations are all within the same mount point, so that you can take advantage of atomic moves.
I second this. It took me a really long time to figure out how to properly mount network storage on Proxmox VMs/LXCs, so just be prepared and determine the configuration ahead of time. Unprivileged LXCs have different root user mappings, and you can’t mount an SMB share directly inside a container (someone correct me if I’m wrong here), so if you go that route you will need to fuss a bit with user maps.
I personally have a VM running with Docker for the *arr suite and separate LXCs for my Samba share and streaming services. It’s easy to coordinate mount points with the compose.yml files, but it’s still tricky getting the network storage mounted for read/write within the Docker containers and LXCs.
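What worked for me, roughly (the CT ID, paths and uid offset here are just examples): mount the share on the Proxmox host, then bind-mount it into the unprivileged container.

```
# on the Proxmox host: mount the SMB share there, not inside the unprivileged CT
# (uid/gid 101000 = container uid/gid 1000 with the default 100000 unprivileged offset)
mount -t cifs //nas/media /mnt/nas-media -o credentials=/root/.smbcred,uid=101000,gid=101000

# bind-mount it into container 101 at /mnt/media
pct set 101 -mp0 /mnt/nas-media,mp=/mnt/media
```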
On each Proxmox machine, I have a Docker server in swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS.
On the NAS I have a normal Docker installation which runs my databases.
On the swarm I have over 60 docker containers, including the arr services, overseerr and two deluge instances
I have no issues with performance or read/write or timeouts.
As one of the other posters said, point all of your arr services to the same mount point as it makes it far easier for the automated stuff to work.
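The NFS mounts themselves are nothing fancy - just an fstab entry on each VM pointing at the NAS (hostname and paths are examples):

```
# /etc/fstab on each swarm VM
nas.lan:/volume1/media  /mnt/media  nfs  defaults,_netdev,vers=4  0  0
```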
Put all the *arr services into a single stack (or at least on a single network) so that you can just point them to the container name rather than an IP. For example, to tell Overseerr where Sonarr is, you’d just say sonarr:8989. It will make life much easier.
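Roughly like this in the compose file (images and names are examples) - because the services share a network, Overseerr can reach Sonarr by its service name:

```
# docker-compose.yml sketch - service names become DNS names on the shared network
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    networks: [arr]
  overseerr:
    image: lscr.io/linuxserver/overseerr
    networks: [arr]
    # inside Overseerr, point Sonarr at hostname "sonarr", port 8989

networks:
  arr: {}
```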
As for Proxmox, the biggest thing I’ll say from my experience, if you’re just starting out: make sure you set its IP and hostname to what you want right from the start… It’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first.
Thank you for including OAuth options for sign-on. It makes a big difference being able to use the same account for all the things like FreshRSS, Seafile, Immich, etc.
The general principle is called single sign-on (SSO).
The idea is that instead of each app keeping track of users itself, there is another app (sometimes called an identity provider) that does this. Then when you try to log into an app, it takes you to the login of your identity provider instead. When the identity provider says you are the correct user, it sends a token to the app telling it to let you access your account.
The huge benefit is that if you are already logged into the identity provider in a browser, for example, the other apps will log in automatically without you having to put in your password again.
Also, for me the biggest benefit is not having to manage passwords for a large number of apps, so family members who use my server have one account which gives them access to Jellyfin, Seafile, Immich, FreshRSS, etc. If they change that password, it changes for everything. You can enforce minimum password requirements. You can also add 2FA to any app immediately.
There are good guides to setting it up with Traefik so that you get Let’s Encrypt certificates and can use Traefik for proxy authentication on web-based apps like Sonarr. There are many different authentication methods an app can choose to use, and Authentik essentially supports everything.
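The pattern those guides use (treat this as a sketch - the hostname, cert resolver name and Authentik service name are placeholders) is a forwardAuth middleware pointing at Authentik's embedded outpost, e.g. as Traefik labels on the app's container:

```
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.sonarr.rule=Host(`sonarr.example.com`)"
  - "traefik.http.routers.sonarr.tls.certresolver=letsencrypt"
  - "traefik.http.routers.sonarr.middlewares=authentik"
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
```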
SSO should really be the standard for self-hosted apps, because this way they don’t have to worry about ensuring they have the latest security for user management etc. The app just lets a dedicated identity provider worry about user management security, so the app devs can focus on just the app.
If you have to add a whole other app to match what Authentik can do, is Authelia really lighter weight?
I’m joking, because Authentik does take a decent chunk of RAM, but having all the protocols together is nice. You can actually make LDAP authentication use 2FA if you want.
I’m going to try it out and see how it compares to Authelia. My home server has 64GB RAM and I have VPSes with 16GB and 48GB RAM so RAM isn’t much of an issue :D
The above YouTube video shows that you can get Authentik to send a 2FA push authentication that requires you to hit a button on the phone in order to complete the authentication flow.
I recently switched from Joplin to Obsidian for different reasons. I’d prefer something FOSS, but so far I’ve been happy with the transition. Since it works with plain markdown files, it would fit your use case
Can you not just back up the pg txn logs (with periodic full backups, purged in accordance with your needs)? That’s a much safer way to approach DBs anyway.
(exclude the online db files from your file system replication)
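i.e. something along these lines (paths are examples):

```
# postgresql.conf - ship WAL (txn logs) somewhere safe as they're produced
archive_mode = on
archive_command = 'cp %p /backup/wal/%f'

# periodic full (base) backup to replay the WAL against
pg_basebackup -D /backup/base/$(date +%F) -Ft -z
```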
I second Obsidian. I was on the verge of jumping to Logseq, but found its way of handling notes to be… different. I also disliked that with Anytype I don’t really have control over my notes. Obsidian clicked with me from the start and felt right. So I went with it, even though it’s not FOSS (which is usually a hard requirement for me).
But jokes aside, the self-hosting part is mostly about syncing notes. You either go with the official offering (not self-hosted, costs money), use a community plug-in (self-hosted), or use a third program like Syncthing (self-hosted).
Syncthing is the way. I had tried setting it up on Nextcloud but could never get it to store things how I wanted, but Syncthing was ridiculously easy and should work for anything that uses a folder.
There is a plugin for Obsidian to work with Syncthing, but it seems to still be in development. Setting it up through the app and selecting the folders also gave me a reason to sync my camera as well, and it was super easy - no port forwarding or anything required.
I also switched from Joplin to Obsidian after about half a year. There’s an open-source plugin that lets you self-host a syncing server.
What I found paradoxical is how easy it is to mod and write plugins for Obsidian compared to Joplin. I would’ve thought that modifying the open-source candidate would’ve been easier, but nope.
It is not a unique feature. But even as a non-FOSS program, its notes are not hidden behind proprietary file formats, so at any time you can still switch if they go in a direction the user does not like.
Not every app stores the files as plain text files in markdown format like Obsidian does. Logseq does, I believe, but Joplin stores it all in database files which require an export should you decide to leave that app in favor of another. With Obsidian you just point the new app at the folders full of .md files and away you go. That was the main selling point for me.
I don’t know where you’re getting that from. Here is my Joplin folder on my NC server, stuffed with md files from my notes. There are some database driven references in them if you do things like add pictures, and obviously the filename is a UID format, but it’s markdown all the way, baby.
Have you looked at the contents of those md files? In addition to creating its own hexadecimal file name, it appends the text with a bunch of metadata info. If you were to then take that folder of notes to any other markdown editor like Obsidian, it would be a mess to organize. That is why I’m a stickler for file format agnosticism. There is no vendor lock in and more importantly, no manipulation of the text filenames or contents.
Screenshot of my phone copy of the Obsidian vault directory as an example:
I have two Proxmox hosts and two NASes. All are connected at 1Gbps.
The Proxmox hosts maintain the real network mounts - nfs in my case - for the NAS shares. Inside each CT that requires them, these are mapped to mount points with identical paths in each, eg. /storage/nas1 and /storage/nas2.
All my *arr (and downloader) CTs are configured to use the exact same paths.
It’s seamless. nzbget or deluge download to the same parent folders that my *arr CTs work with, which means atomic renames/moves are pretty much instant. The only real network traffic is from the download CTs to the NASes.
Edit: my downloader CTs download directly to the NAS paths - no intermediate disk at all.
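The mapping itself is just a bind mount per CT - something like this (CT ID and host-side paths are examples):

```
# on each Proxmox host: the NFS mount lives on the host, bind-mounted into the CT
pct set 110 -mp0 /mnt/pve/nas1,mp=/storage/nas1
pct set 110 -mp1 /mnt/pve/nas2,mp=/storage/nas2
```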
From what I read, disk wear-out on consumer drives is a concern when using ZFS for boot drives with Proxmox. I don’t know if the issues are exaggerated, but to be safe I ended up picking up some used enterprise SSDs off eBay for that reason.
Did you know that you can use Joplin on a standard WebDAV server? Basically it just takes up the space of the data itself. I have it on a Caddy server and it works like a charm syncing between Windows and Android clients.
I think you need to learn more about how databases work. They don’t typically reclaim deleted space automatically for performance reasons. Databases like to write to a single large file they can then index into. Re-writing those files is expensive so left to the DBA (you) to determine when it should be done.
And how are you backing up the database? Just backing up /var/lib/postgres? Or are you doing a pg_dump? If the former, it’s possible your backups won’t be consistent if you haven’t stopped your database, and they will contain that full history of deleted stuff. pg_dump would give you just the current data in a way that will apply properly to a new database should you need to restore.
You can also consider your backup retention policy. How many backups do you need for how long?
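A minimal sketch of what that could look like (database name, user, paths and retention are examples):

```
# nightly dump in custom format, plus simple retention
pg_dump -Fc -U myuser mydb > /backup/mydb-$(date +%F).dump
find /backup -name 'mydb-*.dump' -mtime +30 -delete

# restore into a fresh database if ever needed
pg_restore -U myuser -d mydb_new /backup/mydb-2024-01-01.dump
```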
You are right, I should. They are a bit more complicated than I anticipated, and apparently I’m doing everything wrong, haha. I have backups set up to go 2 years back, but I check Backblaze occasionally, so it shouldn’t be an issue. I have two months so far lol. Thanks for the write-up :)