It’s 2024, avoid Proxmox and save yourself a LOT of headaches down the line.
You most likely don’t need Proxmox and its pseudo-open-source bullshit. My suggestion is to simply go with Debian 12 + LXD/LXC, which runs VMs and containers very well. Proxmox ships with an old kernel that is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.
What I would suggest you use instead is LXD/Incus.
LXD/Incus provides a management and automation layer that really makes things work smoothly - essentially what Proxmox does but properly done. With Incus you can create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).
Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
I draw your attention to LXC containers (not Docker), because for most people full virtualization isn’t even required. In a small homelab, if you can have containers that behave like full operating systems (minus the kernel), including persistence, VMs might not be required. Either way, LXD/Incus allows for both, and you can easily mix and match and use what you require for each use case.
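To give a concrete flavour of that mix-and-match workflow, here is a minimal sketch; the instance names are made up and the `incus` CLI is shown (on LXD, substitute `lxc` with the same syntax):

```bash
incus launch images:debian/12 files                 # system container
incus launch images:debian/12 homeassistant --vm    # full VM, same command
incus list                                          # containers and VMs in one list
incus exec files -- bash                            # get a shell inside the container
```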
For example, I virtualize the official Home Assistant image with LXD because we all know how hard it is to get that thing running, while my NAS / Samba shares are just an LXD Debian 12 container with Samba4, Nginx and FileBrowser. Same goes for my torrent client, which has its own container. Another service I’ve exposed to the internet runs as a full VM for isolation.
Like Proxmox, LXD/Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt, it is about augmenting them so they become easier to manage at scale and overall more efficient. I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no BS licenses and way less overhead.
I do regular snapshots of my containers live and sometimes restore them, no issues there. De-duplication and incremental backups are (mostly) provided by the storage backend: if you use BTRFS or ZFS for your storage pool, every container will be a volume that you can snapshot, roll back or export at any time. LXD also provides tools for those operations: documentation.ubuntu.com/lxd/…/instances_backup/
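For reference, the basic workflow looks roughly like this (LXD’s `lxc` client shown since the link above points to the LXD docs; the container name is just an example):

```bash
lxc snapshot mycontainer before-upgrade   # instant snapshot (copy-on-write on ZFS/BTRFS pools)
lxc info mycontainer                      # shows existing snapshots
lxc restore mycontainer before-upgrade    # roll the container back to the snapshot
lxc export mycontainer backup.tar.gz      # full backup tarball you can import on another host
```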
create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).
provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
Your comment is wrong in a few ways and suggests using LXC, which is way slower than Docker or Podman and lacks the easy setup.
Proxmox is good because it makes it easy to create VMs and set up least-privilege access. It also has as new a kernel as stable Debian, so no, it’s not terribly out of date.
If you want to suggest that someone install Debian + Docker Compose, that would make more sense. That said, it isn’t a good setup for more advanced use cases and it doesn’t allow for a lot of flexibility.
This was a discussion about management solutions such as Proxmox and LXD, NOT about containerization technologies like Docker or LXC. Also, Proxmox uses the Proxmox VE kernel, which is derived from Ubuntu’s.
Your comment makes no sense whatsoever. I’m not even sure you know the difference between LXD and LXC…
I’ve been on Proxmox for 6 or so months with very few issues and have found it to work well in my instance; I do appreciate seeing another alternative and learning about it too! I very specifically like Proxmox as it gives me an actual IP on my router’s subnet for my machines, such as Home Assistant. So instead of 192.168.122.1 it rolls a nice 192.168.1.X/24 IP that fits my range, which makes it easier for me to direct my outside traffic to it. Does this also do that? Based on your screenshots, maybe not, IDK.
it gives me an actual IP on my router’s subnet for my machines
Yes, you can configure LXD/Incus’ networking to use a bridge and it will simply delegate the task to your router instead of providing IPs itself. One of my nodes actually runs the two setups at the same time: I’ve got a bunch of containers on an internal range and my Home Assistant VM getting an IP from my router.
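One way to set that up with Incus (LXD is analogous), assuming you already have a host bridge such as br0 attached to your LAN; the profile and instance names are made up:

```bash
incus profile create lan
incus profile device add lan eth0 nic nictype=bridged parent=br0 name=eth0
# instances launched with this profile get their lease from the router's DHCP server
incus launch images:debian/12 homeassistant --vm --profile default --profile lan
```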
Thanks for the link! I’ve been running Proxmox for years now without any of the issues the previous commenter mentioned. Not that they don’t exist, just that I haven’t hit them. I really like Proxmox but love hearing about alternatives. One day I might get bored and want to set things up new with a different stack, and anything that’s more free/open is better in my book.
It’s good for me because I’m piss poor at programming. In my defense, I’m not a programmer or even programmer adjacent. I do see how it wouldn’t be useful to a pro. It also has occasionally given me garbage advice that an expert would spot right away, while I had to figure out on my own that it was ‘hallucinating’ again. There’s nothing better for learning than troubleshooting, though!
I can absolutely see it being useful for a pro. It’s already a better version of IDE templates. If you have to write boilerplate code this can already do that. It’s a huge time saver for the things you’d have to go look up to remember how to do and piece together yourself.
Example: today I wanted a quick way to serve my current working directory over HTTP so I could do some quick web work. I asked ChatGPT to write me a bash function I could stick in my profile to do this, and I told it to pick a random unused port. That would have taken me much longer had I gone to look up how to do all of that. The only hint I gave it was to use the Python built-in module for serving HTTP.
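A sketch of that kind of function (not the exact one ChatGPT produced; it assumes `shuf`, `ss` and `python3` are available on the machine):

```bash
servehere() {
  local port
  # pick a random high port and retry if something is already listening on it
  while :; do
    port=$(shuf -i 20000-65000 -n 1)
    ss -ltn | grep -q ":$port " || break
  done
  echo "Serving $PWD at http://localhost:$port"
  # Python's built-in module does the actual serving
  python3 -m http.server "$port"
}
```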
There’s a project called Tabby that you can host as a server on a machine that has a GPU, and it has a VSCode extension that connects to the server.
The default model is called StarCoder, and it’s the small version, 1B parameters. The downside is that it’s not super smart (but still an improvement over the built-in tools), but since it’s such a small model, I’m getting sub-second processing times.
I have had Nextcloud running for nearly 5 years and it has never failed once. The only downtime is when the backup fails and somehow maintenance mode is still enabled (technically not a crash).
For those interested: running in Docker with MariaDB in a stack, checking for updates with Watchtower every day and pulling from stable, backups with borg(matic).
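For anyone wanting to replicate something similar, here is a rough sketch as plain `docker run` commands (the commenter runs it as a compose stack; the names, passwords and volumes below are placeholders):

```bash
docker network create nextcloud-net

docker run -d --name nextcloud-db --network nextcloud-net \
  -e MARIADB_DATABASE=nextcloud -e MARIADB_USER=nextcloud \
  -e MARIADB_PASSWORD=changeme -e MARIADB_ROOT_PASSWORD=changeme \
  -v nextcloud-db:/var/lib/mysql mariadb:lts

docker run -d --name nextcloud --network nextcloud-net -p 8080:80 \
  -e MYSQL_HOST=nextcloud-db -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=changeme \
  -v nextcloud-data:/var/www/html nextcloud:stable

# Watchtower checks for newer images once a day and restarts updated containers
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```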
I consider the 'good enough' level to be: if I didn't pixel peep, I couldn't tell the difference. The visually lossless levels were the first crf levels where I couldn't tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which say that the quality loss levels are all the same at a pixel level.
I know that av1, x264 and x265 all have different ways of compressing video. Obviously, the whole point of this was to get a better idea of what that actually looked like. Everything in the visually lossless section is completely indistinguishable to my eyes, and everything in the good enough section has very minor bits of compression only noticed when I'm looking for them in a still image. This does not require the same codec to compare and contrast with.
Frankly, for anything other than real-time encoding, I don't actually consider encoding time to be a huge deal. None of my encodes were slower than 3fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyways.
Frankly, for anything other than real-time encoding, I don’t actually consider encoding time to be a huge deal. None of my encodes were slower than 3fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyways.
Encoding speed heavily depends on your preset. Veryslow will give you better compression than medium or fast, but at a heavy expense of encoding speed; you’re not gonna re-encode a movie overnight on a slow preset. GPU encoding will also give you worse results than a CPU encode, so that’s something one would have to take into consideration. It’s not a big deal when you’re streaming, but for video files I’d much prefer using the CPU.
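To illustrate the preset trade-off, here are two hypothetical x265 encodes at the same CRF (filenames are placeholders; audio is copied through untouched):

```bash
# same CRF, different presets: veryslow yields a smaller file but can be several times slower
ffmpeg -i movie.mkv -c:v libx265 -preset medium   -crf 22 -c:a copy movie-medium.mkv
ffmpeg -i movie.mkv -c:v libx265 -preset veryslow -crf 22 -c:a copy movie-veryslow.mkv
```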
I consider the ‘good enough’ level to be: if I didn’t pixel peep, I couldn’t tell the difference. The visually lossless levels were the first crf levels where I couldn’t tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which say that the quality loss levels are all the same at a pixel level.
I was mostly talking about how you organised your table, using CRF values as the rows. It implies that one should compare the results in each row, but that wouldn’t be a comparison that makes much sense. E.g. looking at row “24” one might think that av1 is less effective than h264/5 due to the greater file size, but the video quality is vastly different. A more “informative” way to present the data might have been to order the rows by their VMAF score.
Hopefully I don’t come across as too cross or argumentative, I just want to give some feedback on how to present the data in a clearer way for people who aren’t familiar with how encoding works.
GPU encoding uses (relatively) simpler fixed-function encoders that do it much faster than the CPU, which uses its general-purpose transistors to run an encoding algorithm. The end result is that GPU encoding is speedy at the cost of visual quality per bitrate: the file size is bigger for the same visual quality as a CPU encode. Importantly for storing your videos, CPU encoding, while much slower, will get your file size smaller at the same visual quality threshold you desire, so you can save more videos per drive!
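As a rough illustration (filenames are placeholders, and the quality knobs aren’t directly comparable between encoders), a CPU encode versus an NVENC hardware encode might look like:

```bash
# CPU (libx265): slow, but the smallest file for a given visual quality
ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 22 -c:a copy cpu.mkv
# GPU (NVENC fixed-function encoder): much faster, but expect a larger file
# for comparable visual quality
ffmpeg -i input.mkv -c:v hevc_nvenc -preset p6 -cq 22 -c:a copy gpu.mkv
```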
They’re not doing like Proton and closing off basic stuff like IMAP and SMTP as a way to force you onto the official apps
I especially love the feature where you can bounce emails based on domains, keywords or TLDs. My spam folder is finally empty. IMHO bouncing spam back is much better, as the spammers get a response that the address is invalid and hopefully stop wasting their limited computing resources on that address.
Zoho is not open source, but Proton is “fake” open source that is mostly used for marketing: they opened only the UI, which communicates via a proprietary protocol with a proprietary server - useless. They also reject or ignore any pull request on GitHub.
I started with the basic mail plan (10 euros yearly for 10 GB) but then, because I switched from “secondary email that forwards to Gmail” to “primary email that imports from Gmail”, I had to move to the more expensive plan.
I mean, that’s going to be a risk you take with any hosted service. I currently self-host my contacts and calendar, but I have no interest in hosting my own email again.
I don’t self host my email either. I got my registrar, DNS and email separate from each other so if any of them goes bad I can switch with minimum fuss.
But that makes it all the more important to be able to download all your mail from your provider.
Proton currently has two proprietary things you can use to download your mail: a “bridge” PC app that pretends to speak IMAP, and a download tool. The bridge will be discontinued after they launch their proprietary PC mail app, so that leaves just the proprietary download tool, which only does the .eml format.
That’s a very broad question that depends a lot on your usage. My needs may be different from yours.
I’m currently using Migadu because:
Unlimited domains, mailboxes, accounts and aliases for a flat fee. I’m managing accounts for myself as well as family and extended family members and it comes out much cheaper this way than services that ask $5-10/account.
Very nice management interface with all the bells and whistles but with reasonable defaults and easy to use.
The company is based in Switzerland and the mail is hosted in the EU (France).
Standard email service with everything you’d expect (the regular protocols, spam protection, webmail, full compatibility with clients etc.)
They’re not doing like Proton and closing off basic stuff like IMAP and SMTP as a way to force you onto the official apps
The reason Proton cannot do IMAP/SMTP is that they cannot read your emails, which both protocols require. That’s a feature, not a bug.
PM works with any app as long as the app implements their custom protocol, for which there are at least two FOSS implementations to use as a reference.
Proton is “fake” open source that is mostly used for marketing: they opened only the UI, which communicates via a proprietary protocol with a proprietary server - useless
While I’d also prefer their back-end to be OSS, it’s not nearly as critical as the clients.
As a user, it doesn’t make a difference. I’m paying for an opaque service either way.
All the interesting stuff (E2EE, zero-access storage) happens in the clients anyways. The BE is fairly uninteresting; it’s a mail server + zero-access encryption + Proton account handling. If you really wanted to build a mail service similar to Proton, you could build that yourself and probably would have to anyways.
I think the opposite, actually. The backend is the really interesting part, and the only way we can be sure that “they cannot read the emails” (they arrive in clear text, are saved with reversible encryption, and they have a key for it - if you use their services to commit crimes they will cooperate with law enforcement agencies like everyone else).
imap/smtp can be toggled with a warning, if that’s really their concern. As of now I have the feeling it’s instead blocked to keep users inside (no IMAP = no easy migration to somewhere else) or to limit usage (no SMTP = no sending mass email).
The backend is the really interesting part, and the only way we can be sure that “they cannot read the emails”
While I’d still prefer it, OSS can’t really help with that because what’s really required here is remote attestation.
That is an unsolved problem to my knowledge; there is no way to know which software they’re actually running. Even if they published the source code, they could trivially apply a patch in their deployment that stores all incoming email somewhere and you’d be none the wiser.
Even if they published source code and could somehow prove to you that they’re running a version derived from it, you would still not be safe from surveillance, as one could simply MITM all connections. See e.g. notes.valdikss.org.ru/jabber.ru-mitm/.
That’s likely one of the reasons they do everything they can to make PGP accessible to every user.
imap/smtp can be toggled with a warning, if that’s really their concern
It’s plainly and simply not how their service works. They’d have to build most of their service a second time, but unencrypted.
It’s like asking Signal to build in support for IRC; there’s no way it makes sense for them to do that, no malicious intent needed.
no IMAP = no easy migration to somewhere else
You have IMAP access via the bridge. That’s what it’s for.
Zoho and PM have two entirely different reasons for existence. If you don’t want E2EE (assuming the other sender is on PM) then by all means, use Zoho. And IMAP isn’t E2EE-compatible in the slightest; what they’re charging for is the decryption bridge that makes it work with an IMAP client. They had to come up with that, it’s not just a switch you flip on PM’s end that makes IMAP work.
I wouldn’t call it a clone, Tailscale didn’t invent mesh VPNs. I believe Nebula is fully self-hosted, while Tailscale makes initial connections through their servers. That means Nebula is more secure and private if you’re paranoid, but also harder to set up. They’re also based on different VPN protocols.
Borg (specifically borgmatic) has been working very well for me. I run it on my main server, and on my NAS I have a Borg server Docker container as the repository location.
I also have another repository location on my friend’s NAS. Super easy to set up multiple targets for the same data.
I will probably also set up a BorgBase account for yet another backup.
What I liked a lot here was how easy it is to set up automatic backups, retention policies and multiple backup locations.
Open source was a requirement so you can never get locked out of your data. Self-hosted. Finally, the ability to mount the backup as a volume / drive, so if I want a specific file, I mount that snapshot and just copy that one file over.
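For a feel of the day-to-day commands (assuming borgmatic is already configured with its repositories; the paths below are made up):

```bash
borgmatic create --stats                   # back up to every configured repository
borgmatic list                             # list archives across the repositories
# mount the latest archive as a filesystem to grab a single file
sudo borgmatic mount --archive latest --mount-point /mnt/restore
cp /mnt/restore/srv/data/somefile.txt ~/
sudo borgmatic umount --mount-point /mnt/restore
```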
Not sure what kind of tinker board you’re working with, but the power of Pis has increased enormously through the generations. There are tasks that would run slowly on a dedicated Pi 2 but run easily in parallel with half a dozen other things on a Pi 4.
The older ones can still be useful, just for less intensive tasks.
Out of interest, from someone with an RPi 4 and Immich: did you deactivate the machine learning? I did, since I was worried it would be too much for the Pi; just curious to hear whether it’s doable after all.
I use Downpour for audiobooks. It is similar to Audible in that audiobooks can be purchased individually, or there is a subscription that provides credits to purchase audiobooks. The audiobooks are DRM-free and can be downloaded. I have not found a way to automate the download and transfer to my Audiobookshelf server, but I don’t mind doing it manually considering I average around two or three audiobooks a month.
It works well! I have one AdGuardHome instance running on my home server and one running on a Raspberry Pi, both using Docker. Having two prevents the internet from breaking in case I have to shut down one of them for some reason.
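For reference, a minimal sketch of one such instance as a single `docker run` (host paths and published ports are illustrative; port 3000 is only needed for the initial setup wizard):

```bash
docker run -d --name adguardhome --restart unless-stopped \
  -v /srv/adguard/work:/opt/adguardhome/work \
  -v /srv/adguard/conf:/opt/adguardhome/conf \
  -p 53:53/tcp -p 53:53/udp -p 3000:3000/tcp \
  adguard/adguardhome
# run the same container on a second machine and hand out both IPs as DNS servers
```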
As an AdGuard Home user for more than a few years, I switched back to Pi-hole because it wasn’t really any better. It was also easier to pair Pi-hole with Unbound.