About the trust issue: it requires no more or less trust than running on bare metal. Sure, you could compile everything from source, but you probably won’t; and you might trust your distro’s package manager, but that still has a similar problem.
Thanks! I’ll try it out. I don’t see anything on their site about JavaScript source mapping, so I assume they don’t do it. With Sentry, you upload the source map to the server as part of your JS build process, and their backend automatically maps minified stack traces to unminified ones using the uploaded source map. Maybe I’d be fine losing that in exchange for something lighter weight.
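For reference, a rough sketch of what that build step looks like with sentry-cli; the instance URL, org/project names, release string, and paths below are placeholders, and whether the same commands work unchanged against a GlitchTip instance is an assumption here (it reuses the Sentry CLI, so it may).

```sh
# Sketch of the Sentry-style source map upload during a JS build (placeholder values).
export SENTRY_URL="https://errors.example.com"   # your Sentry or GlitchTip instance (assumption)
export SENTRY_AUTH_TOKEN="..."                   # API token with project:write access
export SENTRY_ORG="my-org"
export SENTRY_PROJECT="my-frontend"

VERSION="my-app@1.2.3"                           # must match the release set in the SDK init
sentry-cli releases new "$VERSION"
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist --url-prefix "~/static/js"
sentry-cli releases finalize "$VERSION"
```

The key detail is that the SDK in the browser has to report the same release value, otherwise the backend can’t match minified stack traces to the uploaded maps.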
This blog post mentions that it should be possible at least!
I’m currently using their free tier for a hobby project and have been happy with it. I’ve considered moving over to self-hosting the solution, but have been holding off due to resource constraints; I might make the leap soon, though! It would be nice to make use of the uptime pings, which currently would fill the event quota way too quickly on the free tier.
Hello, I’m the lead dev of GlitchTip. Fun to see it mentioned here. Source maps are supported. I wish I had time to make the feature easier to use and write better docs. Contributions are welcome. It’s very much a hobby project for the little time I have after work and family. Right now all of my attention is on an event ingest rewrite to work with fewer resources.
Nice to see you on here! I understand the lack of time - I’ve got some projects I’ve had on hold for years because of time constraints. I’m definitely going to try Glitchtip.
If I get some free time, I’ll see if I can write some docs about using source maps for JS apps. Sounds like it works in the same way as Sentry’s does.
It was a great idea for GlitchTip to reuse the Sentry SDKs and CLI, because their SDKs are solid. They’ve got the best .NET SDK of all the error logging systems I evaluated two years ago, which is why I was using Sentry. Unfortunately, Sentry has become significantly heavier over those two years.
I am very happy with my Omada setup. It’s an ecosystem, not a single device. I use an ER605 as router and an EAP610 as AP. I also have a switch, which you probably don’t need, and I now have an Omada controller (you can also host that as a Docker container, so dedicated hardware isn’t strictly needed). For Wi-Fi you can simply throw another AP somewhere and have excellent mesh Wi-Fi. It’s more complex than a simple consumer router, but also has a lot more functionality.
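For anyone curious about the Docker route, a minimal sketch is below; the image name, tag, and volume paths come from the community-maintained mbentley/omada-controller project and are assumptions here, so check that image’s README before copying.

```sh
# Sketch: software Omada controller in Docker using a community image (assumption).
# Host networking keeps device discovery/adoption simple; verify required ports
# and volume paths against the image documentation for your controller version.
docker run -d \
  --name omada-controller \
  --network host \
  -e TZ=Europe/Berlin \
  -v omada-data:/opt/tplink/EAPController/data \
  -v omada-logs:/opt/tplink/EAPController/logs \
  --restart unless-stopped \
  mbentley/omada-controller:latest
```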
The controller does not need to run 24/7. The controller configures the devices and the config remains on the devices. Though once your devices are adopted by a controller, you cannot access any settings on the devices themselves, only via the controller.
Maybe I should add: depending on the network setup, I’d strongly recommend getting a hardware controller. In my case, I have one server hosting all my stuff, and I also hosted the controller with Docker on that server. That ends up being a single point of failure, with no way to look into your routing if the server is down or unreachable. I eventually got a hardware controller (OC200) just to separate my internet and network infrastructure from my hosting and service infrastructure.
The controller also handles roaming, as I understand it. I have a software controller on a VM. They provide a .deb! I have 3 EAP670s and an EAP-655-Wall. Roaming works perfectly on phones and laptops. I have a hidden SSID on each individual AP that I use to lock dumber stuff. Some devices fight the AP Lock on Omada.
I see the value in going 100% omada, but I couldn’t justify the cost of the switches I’d need. Their routers look good for the price too, but my use case is a notch or two above their target market.
Restic is another option, but it’s a little less user friendly and is all CLI, if I recall correctly. However, I’m pretty sure you can send backups straight to a server via Restic.
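For reference, a minimal sketch of sending restic backups straight to a remote server over SFTP; the host, user, paths, and retention numbers are placeholders.

```sh
# Sketch: restic repository on a remote server, reached over SFTP (placeholder values).
export RESTIC_REPOSITORY="sftp:backup@backup.example.com:/srv/restic-repo"
export RESTIC_PASSWORD="use-a-password-file-in-practice"

restic init                                             # one-time repository setup
restic backup /etc /home /var/lib/docker/volumes        # snapshot the paths you care about
restic forget --keep-daily 7 --keep-weekly 4 --prune    # basic retention policy
```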
I checked out Timeshift a couple of times, but it’s a shame that not even FTP is allowed as a backup destination.
As for restic, I’ll give it a look later.
EDIT: just read about restic, and I think this could be the solution I was looking for. A Docker image is available and all, which for me is a big plus. Once I have the chance I will test-drive it and see where it goes. Thanks!
I think Borg Backup would fit your needs. You would still need to reinstall things like the boot sector and recreate partitions, but on the other hand, file-based backups have the advantage that you can restore individual files when needed, and that it is easier to only back up what changed. Just make sure to exclude any temporary files you don’t want to keep from the backup (e.g. cache dirs, log files that get rewritten often and aren’t relevant long-term, …).
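A rough Borg sketch along those lines, with a placeholder remote repository and example excludes:

```sh
# Sketch: Borg repo on a remote host over SSH (placeholder paths and excludes).
export BORG_REPO="ssh://backup@backup.example.com/./borg-repo"

borg init --encryption=repokey            # one-time repository setup
borg create --stats --compression zstd \
  --exclude '/home/*/.cache' \
  --exclude '/var/log/*.log.*' \
  ::'{hostname}-{now}' /etc /home /srv    # archive name uses Borg's built-in placeholders
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```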
If you’re up for it, it’s generally better not to back up everything. Only back up the data that you need: a database, say, or photos, music, movies, etc. for personal data. For everything else, it’s best to automate the install and maintenance of your server.
Nowadays I sort of do this with Seafile. Select folders to sync, open the app every so often to resync stuff, carry on with your day. The only thing I wanted to figure out is whether there is a better way to avoid the massive hassle of reinstalling everything in case something happens (and in case I forget to select a folder to sync, too).
But I think your suggestion is very valid as well. At least for Mint, having a way to make a more automated installer or something similar to set up the stuff I usually use. Yet another rabbit hole to go down…
I use Veeam Backup & Replication Community Edition. If you’re running VMs you have to be on VMware or Hyper-V, though you can also use agents on the individual VMs/servers. It also requires a pretty hefty Windows host, at least if you want your backups to complete fairly quickly.
Those are understandable downsides for some people. But Veeam is in a class by itself: it has no serious competitors, and in terms of ease of use and reliability it’s top tier.
I’m lazy. I don’t want to spend a bunch of time configuring finicky backups only to find out I needed one and it failed. I honestly wish there were a comparable open source backup system. I have yet to find anything that works as well.
I was reading about it, and I actually like this solution’s principle a lot. It reminds me of Puppet, which I have seen used before (for other kinds of tasks) to orchestrate several computers. Big shame it requires Windows, though, since I have a server with Docker on Ubuntu Server at this point and wasn’t really looking forward to changing that. But thanks for the suggestion, it’s definitely very interesting.
One of the benefits of things like Docker is creating a very lightweight configuration and keeping it separate from your data.
I’ve set things up so I only need to rsync my data and configs. Everything else can be rebuilt. I would classify this as “disaster recovery”.
Some people reeeeally want that old-school, bare-metal restore, which I have to admit I stopped attempting years ago. I don’t need the “high availability” of entire-system imaging for my personal shit.
Do you have tips for backing up multiple locations? I also have non-Docker configs to back up in /etc and /home. How do you do it: just multiple rsync commands in a shell script that cron executes periodically, or is there a way to back up multiple folders with one command?
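For what it’s worth, rsync accepts several source directories in a single invocation, so one cron-driven command can cover /etc, home configs, and data dirs. A minimal sketch with placeholder destination and paths:

```sh
#!/bin/sh
# Sketch: one rsync call with multiple sources (placeholder host and paths).
# -a preserves permissions/times; -R (--relative) recreates the full source
# paths under the destination so /etc and /home/... stay separated.
rsync -aR \
  /etc \
  /home/me/.config \
  /srv/docker/appdata \
  backup@backup.example.com:/backups/$(hostname)/
```

Run it from cron or a systemd timer; if you need snapshots or retention rather than a plain mirror, that’s where tools like restic or Borg come in.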
The second isn’t a bad idea if it’s in combination with the first. Then you have an image you can restore with most of your config and you can just restore the rest from the normal backups.