Well, the issue here is that your backup may be physically in a different location (you can ask to have your S3 backup storage hosted in a different datacenter than the VMs), but if the servers on which the services (VMs or S3) run are managed by the same technical entity, then a ransomware attack on that company can affect both services.
So, get S3 storage for your backups from a completely different company?
I just wonder to what degree this will impact the bandwidth usage of your VM if, say, you do a complete backup of your VM every day to a host that will be considered “off-premises”.
If you back up your VM data to the same provider you run your VM on, you don’t have an “off-site” backup, which is one criterion of the 3-2-1 backup rule.
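As a sketch of what that can look like in practice, here’s a nightly restic run against a bucket at a second, unrelated provider (the endpoint, bucket, credentials, and paths are all placeholders):

```sh
# Back up VM data to S3-compatible storage at a *different* provider,
# so a single compromised vendor can't take out both copies.
export AWS_ACCESS_KEY_ID="<key-id>"
export AWS_SECRET_ACCESS_KEY="<secret>"
restic -r s3:s3.other-provider.example/offsite-backups backup /srv/vm-data
```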
I personally graduated from an RPi 3B to an Intel NUC years ago and never looked back. Real RAM slots, internal storage options, and as nice a processor as your budget allows. So my vote is to move to the SFF PC and let your Pi stick around for other projects.
Thanks for your insights. Thought about a NUC as well, but AFAIK it doesn’t have PCIe slots? So I won’t be able to install e.g. a graphics card or a PCIe Coral?
I wouldn’t go NUC if you need a PCIe slot. The HP you were talking about would fit the bill though.
I believe they make a Coral that fits where the wifi chip goes too. As long as you are ok ditching the wifi/bt functionality for a TPU. For a server doing image processing that’s almost a no-brainer to me.
I have my Immich library backed up to Backblaze B2 via Duplicacy. That job runs nightly. I also have a secondary sync to Nextcloud running on another server. That said, I need another off-prem backup and will likely run a monthly job to my parents’ house, either by manually copying to an external disk and taking it over, or by setting up a Pi or other low-power server and a VPN to do it remotely.
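The nightly job can be as simple as a cron entry; a minimal sketch, assuming the repository was already initialized with `duplicacy init` and the B2 credentials are saved (paths are placeholders):

```sh
# crontab entry: run the Duplicacy backup every night at 02:00
0 2 * * * cd /srv/immich && /usr/local/bin/duplicacy backup -stats >> /var/log/duplicacy.log 2>&1
```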
I use node exporter for host metrics (Proxmox/VMs/SFFs/RaspPis/Router) and a number of other *exporters (a minimal scrape-config sketch follows the list):
exportarr
plex-exporter
unifi-exporter
bitcoin node exporter
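For anyone new to this, the Prometheus side of scraping node exporter is just a short config block; a minimal sketch with placeholder hostnames:

```yaml
# prometheus.yml (fragment) -- hostnames are placeholders;
# node exporter listens on :9100 by default
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "proxmox.lan:9100"
          - "raspi.lan:9100"
          - "router.lan:9100"
```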
I use the OpenTelemetry collector to collect some of the above metrics, rather than Prometheus itself, as well as Docker logs and other log files, before shipping them to Prometheus/Loki.
Oh, and I scrape metrics from my Traefik containers using OTEL as well.
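A minimal collector sketch along those lines, assuming the contrib distribution of the collector (endpoints, hostnames, and paths are placeholders, not my exact setup):

```yaml
receivers:
  prometheus:            # scrape Prometheus-format metrics (e.g. Traefik)
    config:
      scrape_configs:
        - job_name: "traefik"
          static_configs:
            - targets: ["traefik.lan:8080"]
  filelog:               # tail docker's json log files
    include:
      - /var/lib/docker/containers/*/*.log

exporters:
  prometheusremotewrite: # Prometheus needs its remote-write receiver enabled
    endpoint: "http://prometheus.lan:9090/api/v1/write"
  loki:
    endpoint: "http://loki.lan:3100/loki/api/v1/push"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog]
      exporters: [loki]
```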
What does having OpenTelemetry improve? I have a setup similar to yours but data goes from Prometheus to Grafana and I never thought I would need anything else.
Not a whole lot, to be honest. But I work with OpenTelemetry every day for my day job, so it was a little exercise for me.
Though OTEL does have some advantages, in that it’s a vendor-agnostic collection tool, allowing you to use multiple different collection methods and to switch out your backend easily if you wish.
It’s open source, it’s easy to set up, its agents are available for nearly anything including OpenWrt, and it can serve the simplest use case of “is it down” as well as much more complicated ones that stem from its ability to collect data over time.
Personally I’m monitoring:
Is it up?
Is the storage array healthy?
Are the services I care about running?
I used to run it ephemerally, wiping data on restart. Recently I started persisting its data so I can see it over the longer run.
This one will be a bit trickier because of federation. Maybe it is even impossible. But for git hosting, website hosting, email, your cloud, various chat software, or torrents it should just work.
How is this meaningfully different than using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?
In the end, Dockerfiles are just instructions for running software to set up other software. Just like every other shell script or config file in existence since the mid-seventies.
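For concreteness, a minimal illustrative Dockerfile (image and application names here are placeholders, not anyone’s real setup):

```dockerfile
# Build instructions, executed top to bottom to produce an image
FROM debian:bookworm-slim
# install whatever the app needs inside the image
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates
# copy the application into the image
COPY ./app /opt/app
# the process that runs when a container starts from this image
CMD ["/opt/app/run"]
```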
Your first sentence proves that it’s different. The developer needs to know it’s going to be a Deb package. What about RPM? What if it’s going to run on Mac? Windows? That means they’d have to change how they develop to think about all of these different platforms. Oh, you run Windows? Well, Windows doesn’t have OpenSSL, so we need to do this vs. that.
I’d recommend reading up on Docker and containerization. It is not a script for setting up software. If that’s what your thought is, then you really don’t understand containerization, and I recommend taking some learnings on it. Like it or not, it’s here, and if you’re doing any dev/ops work professionally you will be left behind for not understanding it.
Apparently I was unclear: I was referring to the security implications of using different manifestations of other people’s code. Those are rather similar.
I’d recommend reading up on Docker and containerization. It is not a script for setting up software.
I was referring specifically to Dockerfiles. Those are, almost to the letter, scripts for setting up software.
if that’s what your thought is, then you really don’t understand containerization, and I recommend taking some learnings on it.
I find your attitude not just uncharitable, but also rude.
And I find misinformation about topics like this to be rude as well. It’s perfectly fine if you don’t understand something, but what I don’t like is you going out of your way to dissuade people from using a product when I don’t think you understand its core concepts. If you have valid criticisms, like the security of Docker, then that’s a different conversation about securing containers, but it’s hard to take them as valid criticisms when they’re based on a fundamental misunderstanding of the product.
I don’t think anyone I have ever talked to professionally, or anything I have ever read about Docker, would describe a Dockerfile as “scripts for setting up software”. It is much more nuanced than that.
So yes, I’m a bit rude about it. I do this professionally, and I’m very tired of people who don’t understand containerization explaining to me how containerization sucks.
I don’t think you understood the context of the comment you replied to. As a reply to “Here are all these drawbacks to Docker vs hosting on bare metal,” it makes perfect sense to point out that the risks are there regardless.
Unless I misread your comment and you’re suggesting that you think devs not having to deal with OS-specific code is a disadvantage of Docker. Or maybe you meant your second paragraph to be directed at OP?
Use Cloudflare or PorkBun.com for cheap, no-bullshit domains. As for the email host, self-hosting is not recommended. It’s a long battle to avoid being blocked by every other provider.
I recommend purelymail.com - no cost to add (even multiple!) custom domains, unlimited users, and you only pay for mail usage and storage. Go for advanced pricing until it starts costing you more than $10/yr. (Which it shouldn’t if it’s just you. Seriously, this thing is cheap!) I just passed my one-year anniversary with PurelyMail and have spent $6 so far. This is my most expensive month, 85¢. And that’s only because I host a (small) public Lemmy instance and we had a few hundred spam signups, each of which sends an email.
This will give you a total yearly price WAY under what Google or Microsoft will give you. Google is like, $7.20/user/month.
And if for some reason that service goes down one day, as long as you still have a mail client with your email stored in it you should be able to just switch providers and import your emails from your client. Make some backups.
I was very tempted to go for this one, but couldn’t find info on whether this was a one-man operation or if there are any disaster recovery plans. Sounds cruel, but if that one single guy my email depends on gets hit by a bus…
For anybody interested in more choices for volume-based providers like PurelyMail (with tiers based on storage and emails sent/received but who otherwise allow unlimited domains/mailboxes/aliases) there’s also MXRoute (US) and Migadu (Swiss/EU).
These providers don’t usually make sense for a single mailbox (although some of them have a low entry tier for this purpose) but can be extremely cost-efficient if you need 2 or more mailboxes/domains.
It has the benefit that the container can’t start before the mount point is up, without any additional scripts or kludges, so no race conditions or surprise behaviour. Using fstab can’t provide that guarantee. The other option is autofs, but it’s messier to configure and may not ship out of the box on modern distros.
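A minimal sketch of the idea with a systemd-managed container (unit names and paths are placeholders):

```ini
# /etc/systemd/system/mycontainer.service -- illustrative only
[Unit]
Description=Container that must not start before /mnt/media is mounted
# systemd orders this unit after the corresponding .mount unit
# and won't start it if the mount fails
RequiresMountsFor=/mnt/media
Requires=docker.service
After=docker.service

[Service]
# attach to a pre-created container; name is a placeholder
ExecStart=/usr/bin/docker start -a mycontainer
ExecStop=/usr/bin/docker stop mycontainer

[Install]
WantedBy=multi-user.target
```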
Very interested in this as Gmail is one of my last Google cords to cut. But it doesn’t solve the issue of trying to host it from a non-commercial Internet connection. Last I remember most ISPs won’t let you open the ports required to run an email service on a home connection. Anyone have modern experience with that?
Then deal with your email going to spam most of the time, because your domain/IP is so new and not “warmed up” that email systems think it’s all spam.
Yeah, it seems like the latter option is the obvious answer. It’s an awful lot of work you still have to pay for. I’d rather just pay someone to offer me secure email and not harvest my information.
In my experience, this is nothing more than an urban legend at this point. There are great standards, like DMARC, DKIM, SPF, and proper reverse DNS, that are much more reliable and are actually used by major mail servers. Pick a free service that scans the publicly visible parts of your email server, and one that accepts an email you send to it and generates a report. Make sure all checks are green. After an initial day or two of getting it right, I’ve never had trouble with any provider accepting mail, and the ongoing maintenance is very low.
Mileage may vary with an unknown domain, large email volumes, or suspicious contents, though.
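For reference, the DNS side of those standards is only a handful of records; a sketch with placeholder domain, selector, and key:

```
; zone-file fragment -- example.com, the selector, and the key are placeholders
example.com.                  IN TXT "v=spf1 mx -all"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"
; plus reverse DNS: the PTR record for the sending IP should match the server's hostname
```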
There are literally RBLs in use by many major mail providers that just contain all dynamic IPs. There are others that block entire subnets used by VPSes at certain hosters. From neither of those can you remove your IP yourself (unlike the ones that list individual IPs because of that IP’s reputation).
It’s not a shame because of how many people we know, or how many people in total, want to self-host email. It’s a shame because email is so difficult to set up and hard to secure. I just wish it were simpler and more secure by default, so that more people could roll their own and break free from ad-ridden, privacy-invading email services. 👍
Gmail to MXroute, when Google threatened to pull the grandfathered free Gmail custom-domain thing. Got their lifetime plan; easy enough to configure so outgoing mail doesn’t get marked as spam. However, the major downside is that it still uses SpamAssassin as the spam filter.
I moved from Gmail to ProtonMail, then to Mailbox.org. You can set up a mail server on your home server, but you would need a VPS to forward the traffic to and from your home server without you needing to open any ports. This guide can help you with TLS passthrough.
But setting up your own mail server is a big hassle. Just pay a trusted provider, and keep your inbox, and preferably all emails, encrypted with GPG.
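The VPS-side forwarding can be a plain TCP passthrough, so TLS still terminates at home; a minimal nginx stream sketch, assuming the home server is reachable at 10.0.0.2 over a VPN or tunnel (all addresses are placeholders):

```nginx
# /etc/nginx/nginx.conf on the VPS -- illustrative addresses only
stream {
    server {
        listen 25;               # inbound SMTP
        proxy_pass 10.0.0.2:25;  # home server over the tunnel
    }
    server {
        listen 465;              # submission over TLS
        proxy_pass 10.0.0.2:465;
    }
}
```

Outbound mail also needs to leave via the VPS’s IP, so the reverse DNS and SPF records line up.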
I was paying $7/month for their mail, VPN, and drive services. One of my major reasons to switch was their lack of Linux support; they claim that it is hard to find Linux developers. The second reason was that their drive’s download and upload speeds were terrible from where I am sitting. Their VPN service is great, I always got great speeds, but their Linux apps have always been terrible. Their mail service is also great, but I would like more control over it, like Mailbox.org offers: on Mailbox, I can encrypt my inbox using a different key while also having the SMTP submission feature, which I really need to integrate emails with my websites and services. Mailbox can also encrypt their cloud drive with your own key, while also providing WebDAV support (how cool is that). Their mail app on Android is open source but is not available on F-Droid, and the APK they provide on their website neither has notification functionality nor auto-updates. Another reason was that I was limited to 3 custom domains unless I bought their business plan; Mailbox has no such limit.
One final reason was that I did not want to keep all my eggs in one basket. So for mail I am using Mailbox; for storage, a personal Nextcloud and a Hetzner managed Nextcloud; for VPN, I started using Mullvad, but their speeds are terrible and connections are unreliable. For passwords, I am using self-hosted Vaultwarden.
There are a few more reasons that I don’t remember now. Proton is great, and I still trust them. But these small things really go a long way.
That’s pretty much exactly my story, except I went with fastmail.com, and Mullvad for VPN. (You really need to test with some script to find your best exit nodes. I forget which one I used ages ago, but it found me a couple of nodes about 1,000 km away, in a different country, that I can routinely push nearly a gigabit through. Maybe it was this script? github.com/bastiandoetsch/mullvad-best-server) I went with pCloud for a bit, but Tailscale, and now Netbird, make it kind of irrelevant, since it’s so easy to get all my devices communicating back to my house file server. I want to like Hetzner so badly, but every time I try it the latency to North America just kills me, and the North American offering was really far away and undeveloped last time I tried it.
For me the issue with Mullvad is like this: I connect to a server, I get good speeds, but after an hour or two I get stuck at 2-3 Mbps. The issue gets resolved when I reconnect, even to the same server. Also, I like using OpenVPN over TCP, but Mullvad’s speeds with it are terrible for all exit nodes.
It may also be the case that my ISP is deliberately degrading the IPv4 routes because I am connecting to a VPN for privacy.