The fact that it’s a “single board” computer, specifically, is mildly irrelevant, imo; just follow standard backup practices. The only way the type of computer really comes into play is whether it has adequate resources to run whatever backup solution you choose. For my use case, Borg works great, but pick whatever solution fits your requirements. The “simplest” and lightest solution is probably rsync, but that may leave a lot to be desired.
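For illustration, a bare-bones rsync backup can be a one-liner; the destination path here is just a placeholder:

```sh
# Mirror /home to an external drive, preserving permissions, ACLs, and xattrs.
# --delete keeps the mirror exact; the exclude skips per-user cache dirs.
rsync -aHAX --delete --exclude='.cache/' /home/ /mnt/backup/home/
```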
I use Veeam Backup & Replication Community Edition. If you’re running VMs, you have to be on VMware or Hyper-V. You can also use agents on the individual VMs/servers. It also requires a pretty hefty Windows host, at least if you want your backups to complete fairly quickly.
Those are understandably downsides for some people. But Veeam is in a class by itself: it has no serious competitors, and in terms of ease of use and reliability it’s top tier.
I’m lazy. I don’t want to spend a bunch of time configuring finicky backups only to find out I needed one and it failed. I honestly wish there were a comparable open source backup system. I have yet to find anything that works as well.
I was reading about it and I actually like this solution’s principle a lot. It reminds me a lot of Puppet, which I have seen before (for other kinds of tasks) used to orchestrate several computers. Big shame it runs on Windows though, since I have a server with Docker on Ubuntu Server at this point and was not really looking forward to changing that. But thanks for the suggestion, it’s for sure very interesting.
I think Borg Backup would fit your needs. You would still need to reinstall things like a boot sector and recreate partitions, but on the other hand, file-based backups have the advantage that you can restore individual files when needed, and that it is easier to only back up what changed. Just make sure to exclude any temporary files you don’t want to keep from the backup (e.g. cache dirs, log files that get rewritten often and aren’t relevant long-term, …).
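As a rough sketch (the repo path and exclude list are assumptions; adapt them to your system):

```sh
# One-time: create an encrypted repo on the backup drive.
borg init --encryption=repokey /mnt/backup/borg-repo

# Regularly: archive the interesting paths, skipping volatile files.
borg create --stats --compression zstd \
    --exclude '/home/*/.cache' \
    --exclude '/var/cache' \
    /mnt/backup/borg-repo::'{hostname}-{now}' \
    /etc /home /root /var

# Thin out old archives so the repo doesn't grow forever.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```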
Thanks! I’ll try it out. I don’t see anything on their site about JavaScript source mapping, so I assume they don’t do it. With Sentry, you upload the source map to the server as part of your JS build process, and their backend automatically maps minified stack traces to unminified ones using the uploaded source map. Maybe I’d be fine losing that in exchange for something lighter weight.
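For reference, the Sentry workflow I mean is roughly this (the release name and paths are placeholders; sentry-cli is Sentry’s own tool):

```sh
# Assumes SENTRY_AUTH_TOKEN, SENTRY_ORG, and SENTRY_PROJECT are set in the env.
VERSION="myapp@1.2.3"                    # hypothetical release name
sentry-cli releases new "$VERSION"       # register the release
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist  # maps emitted by the JS build
sentry-cli releases finalize "$VERSION"
```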
This blog post mentions that it should be possible at least!
I’m currently using their free tier for a hobby project and have been happy with it. I have considered moving over to self-hosting the solution, but have been holding off on it due to resource constraints; I might make the leap soon, though! It would be nice to make use of the uptime pings, which currently would fill the event quota way too quickly on the free tier.
Hello, I’m the lead dev of GlitchTip. Fun to see it mentioned here. Source maps are supported. I wish I had time to make the feature easier to use and write better docs. Contributions are welcome. It’s very much a hobby project for the little time I have after work and family. Right now all of my attention is on an event ingest rewrite to work with fewer resources.
Nice to see you on here! I understand the lack of time - I’ve got some projects I’ve had on hold for years because of time constraints. I’m definitely going to try GlitchTip.
If I get some free time, I’ll see if I can write some docs about using source maps for JS apps. Sounds like it works in the same way as Sentry’s does.
It was a great idea for GlitchTip to reuse the Sentry SDKs and CLI, because those SDKs are solid. They’ve got the best .NET SDK out of all of the error logging systems I evaluated two years ago, which is why I was using Sentry. Unfortunately, Sentry has become significantly heavier over those two years.
I was really surprised when I learned that they have any locations at all besides Germany. I remember when they were just starting out and I spoke to Mr Hetzner himself about a support issue. Good times.
Podman pods + systemd units to manage the pods’ lifecycle. Ansible to deploy the base OS requirements, the ancillary services (SSH, backups, monitoring…), and the pods/containers/services themselves.
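For anyone curious, the systemd side can be bootstrapped by podman itself; a sketch, assuming a pod already named “web”:

```sh
# Generate unit files for the pod and its containers (--new recreates them on each start).
podman generate systemd --new --files --name web

# Install as user units and hand lifecycle management over to systemd.
mkdir -p ~/.config/systemd/user
mv pod-web.service container-*.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now pod-web.service
```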
I know enough to be dangerous. I know enough to follow FAQs, but I’m dumb enough not to back up like I should.
So I’d be running my server on bare metal with a couple of services going, and sooner or later, shit would get borked. Shit that was miles past my competence to fix. Sometimes I’d set up a DB wrong, or break it, or an update would screw it up, and then it would all fall apart and I’d be there cursing and wiping and starting all over.
Docker fixes that completely. It’s not perfect, but it has drastically reduced the time I spend working on my server.
My server used to be a hobby that I loved dumping hours into. Now, I just want shit to work.
It has been years since I played with it, but OpenStack is a suite of tools for building a data center like AWS or Azure. You can get the VM bit up and running pretty quickly with basic packages on an Ubuntu system if you want to play with it, but again, it has been years.
What is your goal? Playing with KVM may be a better path if you want to understand virtualization.
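A quick way to dip a toe in, assuming libvirt and virt-install are installed (the ISO path is a placeholder):

```sh
# Create and boot a small test VM under KVM/libvirt.
virt-install --name testvm --memory 2048 --vcpus 2 \
    --disk size=10 --cdrom /path/to/ubuntu.iso --os-variant ubuntu22.04
```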
If you want to upskill for a job, I’d see if there is a certificate to work on. Even if you don’t want the cert, the curriculum might be a good starting point.
I am happy with my simple docker-compose setup - one root folder with one subfolder per project containing the compose file and any configuration mounted into the container. Traefik automatically exposes all services I want under a well-known URL using a single line in each compose file. Watchtower updates the containers.
This has been running stable for over two years with probably 2-3 reboots in between. If my current NUC ever breaks I’ll set it up again using Podman instead of Docker, but aside from that I couldn’t be happier!
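For anyone curious, a single project folder in a setup like this might hold a compose file along these lines (the service name, image, and domain are placeholders):

```yaml
services:
  myapp:
    image: ghcr.io/example/myapp:latest
    restart: unless-stopped
    labels:
      # The one line that makes Traefik expose the service under a known URL.
      - "traefik.http.routers.myapp.rule=Host(`myapp.example.com`)"
```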
How is this meaningfully different than using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?
In the end, Dockerfiles are just instructions for running software to set up other software. Just like every other shell script or config file in existence since the mid-seventies.
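To make that concrete, a minimal Dockerfile is literally a short list of setup steps (the image and package names here are placeholders):

```dockerfile
FROM debian:stable-slim
# Each instruction runs a setup step and is baked into an image layer.
RUN apt-get update && apt-get install -y --no-install-recommends some-package
COPY app/ /opt/app/
CMD ["/opt/app/run"]
```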
Your first sentence proves that it’s different. The developer needs to know it’s going to be a Deb package. What about RPM? What about if it’s going to run on a Mac? Windows? That means they’ll have to change how they develop to think about all of these different platforms. Oh, you run Windows? Well, Windows doesn’t have OpenSSL, so we need to do this vs. that.
I’d recommend reading up on Docker and containerization. It is not a script for setting up software. If that’s what your thought is, then you really don’t understand containerization, and I recommend doing some learning on it. Like it or not, it’s here, and if you’re doing any dev/ops work professionally you will be left behind for not understanding it.
Apparently I was unclear: I was referring to the security implications of using different manifestations of other people’s code. Those are rather similar.
> I’d recommend reading up on Docker and containerization. It is not a script for setting up software.
I was referring specifically to Dockerfiles. Those are almost to the letter scripts for setting up software.
> If that’s what your thought is, then you really don’t understand containerization, and I recommend doing some learning on it.
I find your attitude not just uncharitable, but also rude.
And I find misinformation about topics like this to be rude as well. It’s perfectly fine if you don’t understand something, but what I don’t like is you going out of your way to dissuade people from using a product when I don’t think you understand its core concepts. If you have valid criticisms, like the security of Docker, then that’s a different conversation about securing containers, but it’s hard to take them as valid criticisms when they’re based on a fundamental misunderstanding of the product.
I don’t think anyone I have ever talked to professionally, or anything I’ve read about Docker, would ever describe a Dockerfile as a “script for setting up software”. It is much more nuanced than that.
So yes, I’m a bit rude about it. I do this professionally, and I’m very tired of people who don’t understand containerization explaining to me how containerization sucks.
I don’t think you understood the context of the comment you replied to. As a reply to “Here are all these drawbacks to Docker vs hosting on bare metal,” it makes perfect sense to point out that the risks are there regardless.
Unless I misread your comment and you’re suggesting that you think devs not having to deal with OS-specific code is a disadvantage of Docker. Or maybe you meant your second paragraph to be directed at OP?