selfhosted


phanto, in Planning on setting up Proxmox and moving most services there. Some questions

Do two NICs. I have a bigger setup, and it’s all running on one LAN, and it is starting to run into problems. Changing to a two network setup from the outset probably would have saved me a lot of grief.

Edgarallenpwn,
@Edgarallenpwn@midwest.social avatar

So dual NIC on each device and set up another lan on my router? Sorry it seems like a dumb question but just want to make sure.

fuckwit_mcbumcrumble,

Why would you need two nics unless you’re planning on having a proxmox Vm being your router?

FiduciaryOne,

I think two NICs is required to do VLANing properly? Not 100% sure.

DeltaTangoLima, (edited )
@DeltaTangoLima@reddrefuge.com avatar

Nope - Proxmox lets you create VLAN trunks, just like a physical switch.

Edit: here’s one of my Proxmox server network configs.

FiduciaryOne,

Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it’ll help.

FiduciaryOne,

Thanks for the kind offer! I won’t get to this for a while, but I may take you up on it if I get stuck.

monkinto,

Is there a reason to do this over just giving the nic for the vm/container a vlan tag?

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.

So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (Physical infrastructure VLAN).

My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.

The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

  • switch trunk port
    • enp2s0f0 (physical)
      • vmbr1 (Linux bridge)
        • vmbr1.60 (Proxmox server interface)
        • vmbr1.100 (Proxmox VLAN interface)
          • virtual guest nic (w/ vlan tag and IP address)
        • vtnet1 (OPNsense “physical” nic, but actually virtual)
          • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.

Like I said, it’s a headfuck when you first set it up. Interface-ception.

The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.
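For anyone wanting to replicate this, here’s a sketch of what the bridge/VLAN part of /etc/network/interfaces could look like - interface and VLAN numbers match the tree above, but the addresses are made-up examples:

```
# /etc/network/interfaces (sketch - addresses are examples)
auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60,100

# Proxmox server's own address on the infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.168.60.10/24
    gateway 192.168.60.1

# Host address on the guest VLAN (optional, as noted above)
auto vmbr1.100
iface vmbr1.100 inet static
    address 192.168.100.10/24
```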

Live2day,

No, you can do more than 1 VLAN per port. It’s called a trunk

atzanteol,

I haven’t done it - but I believe Proxmox allows for creating a “backplane” network which the servers can use to talk directly to each other. This would be used for ceph and server migrations so that the large amount of network traffic doesn’t interfere with other traffic being used by the VMs and the rest of your network.

You’d just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.
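A sketch of what that could look like, with made-up interface names and a 10.10.10.0/24 backplane (the migration line in /etc/pve/datacenter.cfg is what steers migration traffic onto it):

```
# /etc/network/interfaces on node 1 (example names/addresses)
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.1/24    # .2, .3, ... on the other nodes

# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24
```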

fuckwit_mcbumcrumble,

In proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible you’d create a bridge or whatever and assign it to nothing. If you assign it to a NIC then, since it wants to use SR-IOV, it would only go as fast as the NIC can go.

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.

possiblylinux127,

Can you explain what benefit that would bring?

Cooljimy84, in Planning on setting up Proxmox and moving most services there. Some questions
@Cooljimy84@lemmy.world avatar

With arr services, try to limit network and disk throughput on them, as if either is maxed out for too long (like when moving big linux iso files) it can cause weird timeouts and failures

Edgarallenpwn,
@Edgarallenpwn@midwest.social avatar

I believe I would be fine on the network part, I am just guessing writing them to an SSD cache drive on my NAS would be fine? I’m currently writing to the SSD and have a move script run twice a day to the HDDs

Cooljimy84,
@Cooljimy84@lemmy.world avatar

Should be fine, I’m writing to spinning rust, so if I was playing back a movie it could cause a few “dad the tv is buffering again” problems

eerongal, in Planning on setting up Proxmox and moving most services there. Some questions
@eerongal@ttrpg.network avatar

Running arr services on a proxmox cluster to download to a device on the same network. I don’t think there would be any problems but wanted to see what changes need to be done.

I’m essentially doing this with my setup. I have a box running proxmox and a separate networked nas device. There aren’t really any changes, per se, other than pointing the *arr installs at the correct mounts. One thing to make note of: I would make sure that your download, processing, and final locations are all within the same mount point, so that you can take advantage of atomic moves.
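To illustrate why the same mount point matters: the “atomic move” is just rename(2), which is instant only when source and destination live on the same filesystem. A quick sketch - the paths are illustrative:

```python
import os
import tempfile

# One temp dir stands in for a single mount point, e.g. /storage/nas1
base = tempfile.mkdtemp()
downloads = os.path.join(base, "downloads")
library = os.path.join(base, "tv")
os.makedirs(downloads)
os.makedirs(library)

src = os.path.join(downloads, "episode.mkv")
with open(src, "w") as f:
    f.write("video data")

# Same filesystem: os.rename is a metadata-only operation - no data copy.
# Across two different mounts it would raise OSError (EXDEV), and tools
# fall back to a slow copy-then-delete instead.
dst = os.path.join(library, "episode.mkv")
os.rename(src, dst)

print(os.path.exists(dst) and not os.path.exists(src))  # True
```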

archomrade,

I second this. It took me a really long time to figure out how to properly mount network storage on Proxmox VMs/LXCs, so just be prepared and determine the configuration ahead of time. Unprivileged LXCs have different root user mappings, and you can’t mount an SMB share directly into a container (someone correct me if I’m wrong here), so if you go that route you will need to fuss a bit with user maps.

I personally have a VM running docker for the arr suite and separate LXCs for my samba share and streaming services. It’s easy to coordinate mount points with the compose.yml files, but still tricky getting the network storage mounted for read/write within the docker containers and LXCs.
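As a concrete sketch of the host-side approach for an unprivileged LXC (container ID, share name, and paths here are hypothetical): mount the share on the Proxmox host, bind-mount it into the container, and shift ownership into the mapped UID range:

```
# On the Proxmox host (not inside the container):
mount -t nfs nas.lan:/export/media /mnt/nas-media
pct set 101 -mp0 /mnt/nas-media,mp=/storage/media

# Unprivileged LXCs map container UIDs up by 100000 by default,
# so container uid 1000 shows up on the host as 101000:
chown -R 101000:101000 /mnt/nas-media
```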

tristan, in Planning on setting up Proxmox and moving most services there. Some questions

My current setup is 3x Lenovo m920q (soon to be 4) all in a proxmox cluster, along with a qnap nas with 20gb ram and 4x 8tb in raid 5.

The specs on the m920q are: i5-8500T, 32GB RAM, 256GB SATA SSD, 2TB NVMe SSD, 1GbE NIC

Pic of my setup

On each proxmox machine, I have a docker server in swarm mode, and each of those VMs has the same NFS mounts pointing to the nas

On the Nas I have a normal docker installation which runs my databases

On the swarm I have over 60 docker containers, including the arr services, overseerr and two deluge instances

I have no issues with performance or read/write or timeouts.

As one of the other posters said, point all of your arr services to the same mount point as it makes it far easier for the automated stuff to work.

Put all the arr services into a single stack (or at least on a single network), that way you can just point them to the container name rather than IP, for example, in overseerr to tell it where sonarr is, you’d just say sonarr:8989 and it will make life much easier
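A minimal compose.yml sketch of that idea (image names and paths are examples - adjust to your setup):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - /storage/nas1/media:/storage/nas1/media  # same path inside and out
  overseerr:
    image: lscr.io/linuxserver/overseerr
    # Both services share the stack's default network, so Overseerr can
    # reach Sonarr at http://sonarr:8989 - no IP addresses needed.
```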

As for proxmox, the biggest thing I’ll say from my experience: if you’re just starting out, make sure you set its IP and hostname to what you want right from the start… It’s a pain in the ass to change them later. So if you’re planning to use vlans or something, set them up first

Lem453, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages

Thank you for including OAuth options for sign on. Makes a big difference being able to use the same account for all the things like freshRSS, seafile, immich etc.

Kir,

I’m intrigued. How does it work? Do you have a link or an article to point me to?

Lem453, (edited )

The general principle is called single sign on (sso).

The idea is that instead of each app keeping track of users itself, there is another app (sometimes called an identity provider) that does this. Then when you try to log into an app, it takes you to the login of your identity provider instead. When the IdP says you are the correct user, it sends a token to the app saying to let you access your account.

The huge benefit is that if you are already logged into the IdP in a browser, for example, the other apps will log in automatically without you having to put in your password again.

Also, for me the biggest benefit is not having to manage passwords for a large number of apps: family that uses my server has 1 account which gives them access to jellyfin, seafile, immich, freshrss etc. If they change that password, it changes for everything. You can enforce minimum password requirements. You can also add 2FA to any app immediately.
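Under the hood (for OIDC, one of the protocols an IdP like Authentik speaks), the “takes you to the IdP login” step is just a redirect URL the app constructs. A hedged sketch - the host, client ID, and redirect URI are made up:

```python
from urllib.parse import urlencode

# Hypothetical values - substitute your own IdP host and client details.
AUTHORIZE = "https://auth.example.com/application/o/authorize/"
params = {
    "client_id": "freshrss",
    "response_type": "code",  # authorization code flow
    "redirect_uri": "https://rss.example.com/oauth/callback",
    "scope": "openid profile email",
    "state": "random-csrf-token",
}

# The app redirects your browser here; if you already have an IdP session,
# you bounce straight back with a one-time code the app swaps for tokens.
login_url = AUTHORIZE + "?" + urlencode(params)
print(login_url)
```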

I use Authentik as my identity provider: https://goauthentik.io/

There are good guides to setting it up with traefik so that you get Let’s Encrypt certificates and can use traefik for proxy authentication on web-based apps like sonarr. There are many different authentication methods an app can choose to use, and Authentik supports essentially everything.

youtu.be/CPURnYaW3Zk

SSO should really be the standard for self hosted apps because this way they don’t have to worry about ensuring they have the latest security for user management etc. The app just allows a dedicated identity provider to worry about user management security so the app devs can focus on just the app.

Kir,

Thank you for the detailed answer! It seems really interesting and I will definitely give it a try on my server!

dan,
@dan@upvote.au avatar

Authentik is pretty good. Authelia is good too, and lighter weight.

You can combine Authelia with LLDAP to get a web UI for user management and LDAP for apps that don’t support OpenID Connect (like Home Assistant).

Lem453,

If you have to add a whole other app to match what authentik can do, is authelia really lighter weight?

I’m only half joking, because authentik does take a decent chunk of RAM, but having all the protocols together is nice. You can actually make LDAP authentication 2FA if you want.

dan,
@dan@upvote.au avatar

Interesting… How does Authentik do 2FA for LDAP?

I’m going to try it out and see how it compares to Authelia. My home server has 64GB RAM and I have VPSes with 16GB and 48GB RAM so RAM isn’t much of an issue :D

Lem453,

Because authentik uses flows, you can insert the 2FA part into any login flow (proxy, oauth, ldap etc)

youtu.be/whSBD8YbVlc

dan, (edited )
@dan@upvote.au avatar

LDAP sends username and password over the network though… It doesn’t use regular web-based authentication. How would it add 2FA to that?

Lem453,

The above YouTube video shows that you can get authentik to send a 2fa push authentication that requires the phone to hit a button in order to complete the authentication flow.

dan,
@dan@upvote.au avatar

Ohhhh, interesting. Sorry, I didn’t watch the video yet. Thank you!!

subtext,

Although in the subscription version, SSO is not available unless you purchase the “Contact Us” version. sso.tax would like a word.

Lem453,

Free for self hosted which is probably what matters to most here

subtext,

Definitely a fair point, always good to see that in a project

bluespin, in Joplin alternative needed

I recently switched from Joplin to Obsidian for different reasons. I’d prefer something FOSS, but so far I’ve been happy with the transition. Since it works with plain markdown files, it would fit your use case

jaykay, (edited )
@jaykay@lemmy.zip avatar

I’ve switched from Obsidian to Joplin actually, cos syncing was a chore and Joplin is more straightforward imo

indigomirage, (edited ) in Joplin alternative needed

Can you not just back up the pg txn logs (with periodic full backups, purged in accordance with your needs)? That’s a much safer way to approach DBs anyway.

(exclude the online db files from your file system replication)
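For PostgreSQL specifically, that means WAL archiving plus periodic base backups - a sketch, with an example archive path:

```
# postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

# Periodic full backup to replay the archived WAL against:
#   pg_basebackup -D /backup/base/$(date +%F) -Ft -z
```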

fenndev, in Joplin alternative needed
@fenndev@leminal.space avatar

Have you looked into either Obsidian or Logseq?

Obsidian is not open source, but uses Markdown for notes just like Logseq. Very popular overall.

krash,

I second obsidian. I was on the verge of jumping onto logseq, but found its way of handling notes to be… different. I also felt a dislike of anytype, where I don’t really have control over my notes. Obsidian clicked with me from the start and felt right. So I went with it, even though it’s not FOSS (which is usually a hard requirement for me).

helenslunch,
@helenslunch@feddit.nl avatar

How do you self-host Obsidian?

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

That’s the neat thing about it… you don’t.

But jokes aside, the selfhosting part is mostly about syncing notes. You either go with the official offering (not self-hosted, costs money), use a community plug-in (self-hosted), or use a third program like Syncthing (self-hosted).

bbuez,

Syncthing is the way. I had tried setting it up on Nextcloud but never could get it to store things how I wanted, but Syncthing was ridiculously easy and should work for anything that uses a folder

helenslunch,
@helenslunch@feddit.nl avatar

I’ll have to look into that. It doesn’t work like Joplin where I can just connect it to the same remote backup within the app, across devices?

bbuez,

There is a plugin for Obsidian to work with Syncthing, but it seems to still be in development. Implementing it through the app and selecting the folders also gave me a reason to sync my camera as well, and it was super easy - no port forwarding or anything required

Opisek,

I also switched from Joplin to Obsidian after about half a year. There’s an open-source plugin that lets you self-host a syncing server.

What I found paradoxical is how easy it is to mod and write plugins for Obsidian compared to Joplin. I would’ve thought that modifying the open-source candidate would’ve been easier, but nope.

jaykay,
@jaykay@lemmy.zip avatar

Yeah, I’ve been on Obsidian before, but self-hosted syncing on iOS is a bit finicky.

I’ve heard good things about Logseq, but it’s certainly a waaay different approach to notes. I’ll have to read more about it. Thanks :)

ikidd,
@ikidd@lemmy.world avatar

Literally every note app uses markdown. I’m not sure why people point at that for Obsidian like it’s a unique feature.

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

It is not a unique feature. But as a non-FOSS program, its notes are not hidden behind a proprietary format, so any time you want you can still switch if they go in a direction the user does not like.

Father_Redbeard,
@Father_Redbeard@lemmy.ml avatar

Not every one stores the files as plain text files in markdown format like Obsidian. Logseq does, I believe, but Joplin stores it all in database files which require an export should you decide to leave that app in favor of another. With Obsidian you just point the new app at the folders full of .md files and away you go. That was the main selling point for me.

ikidd, (edited )
@ikidd@lemmy.world avatar

I don’t know where you’re getting that from. Here is my Joplin folder on my NC server, stuffed with md files from my notes. There are some database driven references in them if you do things like add pictures, and obviously the filename is a UID format, but it’s markdown all the way, baby.

Father_Redbeard,
@Father_Redbeard@lemmy.ml avatar

Have you looked at the contents of those md files? In addition to creating its own hexadecimal file name, it appends the text with a bunch of metadata. If you were to take that folder of notes to any other markdown editor like Obsidian, it would be a mess to organize. That’s why I’m a stickler for file-format agnosticism: there is no vendor lock-in and, more importantly, no manipulation of the filenames or contents.

Screenshot of my phone copy of the Obsidian vault directory as an example:

Obsidian md

DeltaTangoLima, (edited ) in Planning on setting up Proxmox and moving most services there. Some questions
@DeltaTangoLima@reddrefuge.com avatar

I have two Proxmox hosts and two NASes. All are connected at 1Gbps.

The Proxmox hosts maintain the real network mounts - nfs in my case - for the NAS shares. Inside each CT that requires them, these are mapped to mount points with identical paths in each, eg. /storage/nas1 and /storage/nas2.

All my *arr (and downloader) CTs are configured to use the exact same paths.

It’s seamless. nzbget or deluge download to the same parent folders that my *arr CTs work with, which means atomic renames/moves are pretty much instant. The only real network traffic is from the download CTs to the NASes.

Edit: my downloader CTs download directly to the NAS paths - no intermediate disk at all.
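For anyone copying this layout, the host-side mounts are just a couple of fstab lines per Proxmox host - hostnames and exports here are examples:

```
# /etc/fstab on each Proxmox host
nas1.lan:/volume1/media   /storage/nas1  nfs  defaults,_netdev  0  0
nas2.lan:/volume1/media   /storage/nas2  nfs  defaults,_netdev  0  0
```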

RobotToaster, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages

How does making collections public work if you’re self hosting?

daniel31x13,
atzanteol, in Planning on setting up Proxmox and moving most services there. Some questions

Use ZFS when prompted - it opens up some features and is a bitch to change later. I don’t understand why it’s not the default.

possiblylinux127, (edited )

I personally use both Btrfs and ZFS. For the main install I went with btrfs raid 1 as it is simpler and doesn’t have as much overhead.

I was a little worried about stability, but I’ve had no issues and was able to swap a dead SSD without trouble. It’s been going for almost 2 years now.

AlphaAutist,

From what I read disk wear out on consumer drives is a concern when using ZFS for boot drives with proxmox. I don’t know if the issues are exaggerated, but to be safe I ended up picking up some used enterprise SSDs off eBay for that reason.

atzanteol,

This seems to be a “widely believed fact” but I haven’t seen any real data to back it up.

Septimaeus, (edited ) in what if your cloud=provider gets hacked ?

Dammit, I came here hoping to see at least one “I have a very special set of skills.” Oh well.

Yeah I’d cut bait, rebuild from latest tapes. But also…

Septimaeus, (edited )

I’d put the corrupted backups in an eye-catching container, like a Lisa Frank backpack or Barbie lunchbox, to put on the wall in my office.

knF, in Joplin alternative needed

Did you know that you can use Joplin with a standard WebDAV server? Basically it just takes up the space of the data itself. I have it on a Caddy server and it works like a charm syncing between the Windows and Android clients

observantTrapezium,
@observantTrapezium@lemmy.ca avatar

Came here to say just that. The WebDAV synchronization target is great.

jaykay,
@jaykay@lemmy.zip avatar

Yeah, I’ve yet to play around with WebDAV or learn what it actually is haha. Will look into it, thanks :)

atzanteol, in Joplin alternative needed

I think you need to learn more about how databases work. They don’t typically reclaim deleted space automatically for performance reasons. Databases like to write to a single large file they can then index into. Re-writing those files is expensive so left to the DBA (you) to determine when it should be done.

And how are you backing up the database? Just backing up /var/lib/postgres? Or are you doing a pg_dump? If the former, it’s possible your backups won’t be coherent if you haven’t stopped your database, and they will contain that full history of deleted stuff. pg_dump would give you just the current data in a way that will apply properly to a new database should you need to restore.
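A sketch of the pg_dump approach - database name, user, and paths are examples:

```
# /etc/cron.d/db-backup - nightly logical backup at 03:00
0 3 * * * postgres pg_dump -Fc mydb > /backups/mydb-$(date +\%F).dump

# Restore into a fresh database:
#   createdb mydb && pg_restore -d mydb /backups/mydb-2024-01-01.dump
```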

You can also consider your backup retention policy. How many backups do you need for how long?

jaykay,
@jaykay@lemmy.zip avatar

You are right, I should. They are a bit more complicated than I anticipated, and apparently I’m doing everything wrong, haha. I have backups set up to go 2 years back, but I’m checking backblaze occasionally to check, so it shouldn’t be an issue. I have two months so far lol Thanks for the write-up :)

aniki, in Joplin alternative needed

Why are you not using the built-in backup system?

jaykay,
@jaykay@lemmy.zip avatar

If you mean the ‘export’ function, it’s not really the same as I’d have to do it manually every time
