I use Veeam Backup & Replication Community Edition. If you’re running VMs, you have to be on VMware or Hyper-V. You can also use agents on the individual VMs/servers. It also requires a pretty hefty Windows host, at least if you want your backups to complete fairly quickly.
Those are understandable downsides for some people. But Veeam is in a class by itself: it has no serious competitors, and in terms of ease of use and reliability, it’s top tier.
I’m lazy. I don’t want to spend a bunch of time configuring finicky backups only to find out I needed one and it failed. I honestly wish there were a comparable open source backup system. I have yet to find anything that works as well.
I was reading about it, and I actually really like this solution’s principle. It reminds me a lot of Puppet, which I’ve seen used before (for other kinds of tasks) to orchestrate several computers. It’s a big shame it only runs on Windows, though, since at this point I have a server running Docker on Ubuntu Server and wasn’t really looking forward to changing that. But thanks for the suggestion; it’s definitely very interesting.
Thanks! I’ll try it out. I don’t see anything on their site about JavaScript source mapping, so I assume they don’t do it. With Sentry, you upload the source map to the server as part of your JS build process, and their backend automatically maps minified stack traces to unminified ones using the uploaded source map. Maybe I’d be fine losing that in exchange for something lighter weight.
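For context, that build step is just a couple of CLI calls. A rough sketch with sentry-cli (the release name and output folder here are made-up placeholders; auth, org, and project come from your sentry-cli config):

```
# "my-release" and ./dist are placeholders
sentry-cli releases new my-release
sentry-cli releases files my-release upload-sourcemaps ./dist
sentry-cli releases finalize my-release
```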
This blog post mentions that it should be possible at least!
I’m currently using their free tier for a hobby project and have been happy with it. I’ve considered moving over to self-hosting the solution, but have been holding off due to resource constraints. I might make the leap soon, though! It would be nice to make use of the uptime pings, which would currently fill my event quota way too quickly on the free tier.
Hello, I’m the lead dev of GlitchTip. Fun to see it mentioned here. Source maps are supported. I wish I had time to make the feature easier to use and write better docs. Contributions are welcome. It’s very much a hobby project for the little time I have after work and family. Right now all of my attention is on an event ingest rewrite to work with fewer resources.
Nice to see you on here! I understand the lack of time - I’ve got some projects I’ve had on hold for years because of time constraints. I’m definitely going to try GlitchTip.
If I get some free time, I’ll see if I can write some docs about using source maps for JS apps. Sounds like it works in the same way as Sentry’s does.
It was a great idea for GlitchTip to reuse the Sentry SDKs and CLI, because their SDKs are solid. They’ve got the best .NET SDK out of all of the error logging systems I evaluated two years ago which is why I was using Sentry. Unfortunately, Sentry has become significantly heavier over those two years.
Do two NICs. I have a bigger setup, and it’s all running on one LAN, and it is starting to run into problems. Changing to a two network setup from the outset probably would have saved me a lot of grief.
Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊
You still need to do that, but the Linux bridge interface needs VLANs defined as well, since the physical switch port that trunks the traffic will tag the respective VLANs to/from the Proxmox server and virtual guests.
So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).
My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
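For illustration, the relevant part of my /etc/network/interfaces looks roughly like this (the addresses here are made up):

```
auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

# Proxmox server's own address, on the physical infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1

# Proxmox guest VLAN interface
auto vmbr1.100
iface vmbr1.100 inet static
    address 198.51.100.10/24
```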
The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:
switch trunk port
enp2s0f0 (physical)
vmbr1 (Linux bridge)
vmbr1.60 (Proxmox server interface)
vmbr1.100 (Proxmox VLAN interface)
virtual guest nic (w/ vlan tag and IP address)
vtnet1 (OPNsense “physical” nic, but actually virtual)
vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)
All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.
Like I said, it’s a headfuck when you first set it up. Interface-ception.
The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.
I haven’t done it - but I believe Proxmox allows for creating a “backplane” network which the servers can use to talk directly to each other. This would be used for Ceph and server migrations, so that the large amount of network traffic doesn’t interfere with the traffic used by the VMs and the rest of your network.
You’d just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.
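From what I’ve read, you’d then tell Proxmox to use that subnet for migrations in /etc/pve/datacenter.cfg, and give each node a static address on the second NIC (the subnet and interface name here are made up):

```
# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24
```

```
# /etc/network/interfaces on each node: second NIC, static IP, no gateway
auto eno2
iface eno2 inet static
    address 10.10.10.1/24
```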
In Proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you’d create a bridge and assign it to nothing. If you assign it to a NIC, it would only go as fast as the NIC can go.
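For example, a bridge with no ports at all in /etc/network/interfaces; guests attached to it talk to each other as fast as the host can push packets (the address is a placeholder):

```
auto vmbr2
iface vmbr2 inet static
    address 10.20.20.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```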
This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.
If you’re up for it, it’s generally better not to back up everything. Only back up the data that you need. Like a database. Or photos, music, movies, etc. for personal data. For everything else, it’s best to automate the install and maintenance of your server.
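If you run things in Docker, for example, a compose file gets you most of the way: the reinstall becomes one command, and only the data volumes need backing up. A minimal sketch (the image, paths, and ports are placeholders):

```yaml
# docker-compose.yml: declare everything; after a reinstall, `docker compose up -d` brings it all back
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    volumes:
      - /srv/data/nextcloud:/var/www/html   # the only part that needs backing up
```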
Nowadays I sort of do this with Seafile. Select folders to sync, open the app every so often to resync stuff, carry on with your day. The only thing I wanted to figure out was whether there’s a better way to avoid the massive hassle of reinstalling everything in case something happens (and in case I forget to select a folder to sync, too).
But I think your suggestion is very valid as well. At least for Mint, I could find a way to make a more automated installer or similar that sets up the stuff I usually use. Yet another rabbit hole to go into…
Restic is another option, but it’s a little less user-friendly and is all CLI, if I recall correctly. However, I’m pretty sure you can send backups straight to a server via restic.
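Something like this, if I’m reading the docs right; restic speaks SFTP (and its own REST server, S3, etc.) for remote repositories. The host and paths are placeholders:

```
# one-time repository setup on the remote machine
restic -r sftp:user@backuphost:/srv/restic-repo init

# back up a folder, then list snapshots
restic -r sftp:user@backuphost:/srv/restic-repo backup ~/documents
restic -r sftp:user@backuphost:/srv/restic-repo snapshots
```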
I’ve checked out Timeshift a couple of times, but it’s a shame that not even FTP is allowed as a backup destination.
As for restic, I’ll check it out later.
EDIT: just read about restic, and I think this could be the solution I was looking for. A Docker image is available and all, so for me that is a big plus. Once I have the chance I’ll test drive it and see where it goes. Thanks!
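If I’m reading the docs right, the containerized run would look something like this (the repo path, password, and data path are placeholders; the image is restic/restic on Docker Hub):

```
docker run --rm \
  -e RESTIC_PASSWORD=changeme \
  -v /srv/restic-repo:/repo \
  -v /srv/data:/data:ro \
  restic/restic -r /repo backup /data
```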
You could repurpose an old workstation, bought dirt cheap on eBay if you are lucky, but even then you’ll have to get yourself an HDD, maybe multiple of them if you want to have data redundancy.
For anything new your best bet is a 2 bay ready made NAS, but you’ll have to invest around 300€ for the cheapest one.
It is entirely possible to start with a 2-bay drive rack (not a caddy, we want something without the connections) and then run the SATA out the back of the computer to the drives. It’s a compromise for this low a budget, but it’s not a major sacrifice.
Start small. Find good used hardware first before thinking what services to run. I would start with an old desktop.
Self-hosting is a journey, not a destination. No matter what you buy you'll probably need to buy new hard drives. Used hard drives are a bit of a gamble.
Where do people buy used systems in Denmark? Show us a few things you're interested in and people can give you recommendations.
Also, instead of PhotoPrism, I would suggest Immich. I was a huge supporter of PhotoPrism for years (even donated money), but their development is too slow. Immich is way faster and has an Android app. Anyway, give it a look.
I think 8 GB of RAM is sufficient for all those services. I run them all with Yunohost and I rarely get over 4 GB RAM used.
dba.dk is a pretty popular site for buying used stuff in Denmark, but for electronics I usually go on eBay and sort by EU only (IIRC they removed that option so now the results are tainted with lots of UK gear that’ll be hit with import taxes).
Both deals sound amazing to me, but get 8GB or prepare for a RAM upgrade. 4GB could be enough for what you listed there, but you might find more services to run in the near future 🤪
I think those tiny PCs are perfect if you don’t need more SATA ports. It’s hard to beat them at that low a price.
I don’t think you’ll be able to build anything with €100, but you might be able to buy an old PC or laptop locally and use it as is. I’ve never run Nextcloud myself, but from what I’ve read it’ll be the most taxing service on your list. Everything else seems pretty minimal, though I don’t know anything about Photoprism.
Yeah, for that price you won’t find anything new. For illustration, when I bought a new Athlon 3000G, which was the very lowest CPU in their AM4 lineup, it was 55€ without anything else.
Older thinkpads in this price range will not perform well as servers. They will be pretty limited in specs. Better to go with a used SFF or other form-factor business model desktop.
All kinds of stuff. I use it when I need a way to structure my data:
I use it to keep track of software / libs that are of interest, what they are an alternative to. See example here: ibb.co/ncsdt0W
I’ve also tried to recreate the functionality of a personal relationship manager (à la MonicaHQ, or per this post: medium.com/…/my-homegrown-personal-crm-87dffbcf54…) but found it to be an overengineered solution.
I’ve also used it to store and interact with data from my Python apps, to avoid dealing with the database directly in Python (see the sketch below).
You can also use it as a Kanban board
Also, I’ve been trying to use it as an Excel replacement - which is an overengineered solution, but you get impeccable data quality.
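As a sketch of the Python part mentioned above: this assumes NocoDB’s v2 REST data API with an API token; the table ID and field names here are made up:

```python
import requests

NOCODB_URL = "http://localhost:8080"    # placeholder host
TABLE_ID = "tbl_xxxxxxxx"               # hypothetical table ID
HEADERS = {"xc-token": "MY_API_TOKEN"}  # token generated in NocoDB

# Insert a row through the API instead of touching the database directly
requests.post(
    f"{NOCODB_URL}/api/v2/tables/{TABLE_ID}/records",
    headers=HEADERS,
    json={"Name": "restic", "AlternativeTo": "Duplicati"},
).raise_for_status()

# Read rows back
rows = requests.get(
    f"{NOCODB_URL}/api/v2/tables/{TABLE_ID}/records",
    headers=HEADERS,
).json()["list"]
print(rows)
```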
NocoDB is a bit wonky, but it is quite easy to work with (front- and backend), and since everything is in the database format you choose, you’re in control of how you want your data.
I don’t remember exactly, it was around 2 years ago. It was easy to set up, but I found the feature set to be lacking some essentials, and I ran into a couple of big bugs. Couldn’t really replace my airtable setup yet. Happy to give it another try though!
It’s insane how many things they push as Snaps when they are entirely incompatible with the Snap model.
I think everyone first learns what Snaps are by googling “why doesn’t ____ work on Ubuntu?” For me, it was Filebot. Spent an hour or two trying to figure out how the hell to get it to actually, you know, access my files. (This was a few years ago, so maybe things are better now. Not sure. I don’t live that Snap life anymore, and I’m not going back.)
Thank you for including OAuth options for sign-on. It makes a big difference being able to use the same account for all the things like FreshRSS, Seafile, Immich, etc.
The general principle is called single sign-on (SSO).
The idea is that instead of each app keeping track of users itself, there is another app (called an identity provider) that does this. Then when you try to log into an app, it takes you to the login of your identity provider instead. When the IdP confirms you are the correct user, it sends a token to the app saying to let you access your account.
The huge benefit is that if you are already logged into the IdP in a browser, for example, the other apps will log you in automatically without you having to put in your password again.
Also, for me the biggest benefit is not having to manage passwords for a large number of apps: family members who use my server have one account which gives them access to Jellyfin, Seafile, Immich, FreshRSS, etc. If they change that password, it changes for everything. You can enforce minimum password requirements. You can also add 2FA to any app immediately.
There are good guides to setting it up with Traefik so that you get Let’s Encrypt certificates and can use Traefik for proxy authentication on web-based apps like Sonarr. There are many different authentication methods an app can choose to use, and Authentik essentially supports all of them.
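The proxy-authentication part boils down to a Traefik forwardAuth middleware pointed at Authentik’s outpost. A sketch using Docker labels (the container name and router are assumptions; the /outpost.goauthentik.io/auth/traefik path is Authentik’s embedded outpost endpoint):

```yaml
labels:
  # send every request through Authentik before it reaches the app
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
  - "traefik.http.middlewares.authentik.forwardauth.authResponseHeaders=X-authentik-username,X-authentik-groups,X-authentik-email"
  # attach the middleware to the app's router, e.g. Sonarr
  - "traefik.http.routers.sonarr.middlewares=authentik@docker"
```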
SSO should really be the standard for self-hosted apps, because then they don’t have to worry about keeping up with the latest security practices for user management. The app just lets a dedicated identity provider worry about user management security, so the app devs can focus on the app itself.
If you have to add a whole other app to match what Authentik can do, is Authelia really lighter weight?
I’m joking, because Authentik does take a decent chunk of RAM, but having all the protocols together is nice. You can actually make LDAP authentication use 2FA if you want.
I’m going to try it out and see how it compares to Authelia. My home server has 64GB RAM and I have VPSes with 16GB and 48GB RAM so RAM isn’t much of an issue :D
The YouTube video above shows that you can get Authentik to send a 2FA push notification that requires you to hit a button on your phone in order to complete the authentication flow.
1.) No one runs Docker as root in prod. Everything is run rootless.
2.) That’s just patently not true. docker inspect is your friend. Also, you can build your own containers trusting no one: FROM scratch (hub.docker.com/_/scratch/). See the sketch after this list.
3.) I think “mess” here is subjective. Docker’s folders make way more sense than Snap mounts.
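To illustrate point 2: with a statically linked binary, the whole image can be your own code on an empty base, and docker inspect shows exactly what it will run. A hypothetical example (the binary name is a placeholder):

```dockerfile
# Nothing to trust but your own build: empty base image, one binary
FROM scratch
COPY ./myapp /myapp
ENTRYPOINT ["/myapp"]
```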