Openstack is like self-hosting your own cloud provider. My 2 cents is that it’s probably way overkill for personal use. You’d probably be interested in it if you had a lot of physical servers you wanted to present as a single pooled resource for utilization.
How does one install it?
From what I heard from a former coworker - with great difficulty.
What is the difference between a hypervisor/openstack/a container service (podman,docker)?
A hypervisor runs virtual machines. A container service runs containers, which are like virtual machines that share the host’s kernel (there’s more to it than that, but that’s the simplest explanation). OpenStack is a large ecosystem of software that runs the aforementioned components and coordinates them across a horizontally scaling number of physical servers. Here’s a chart showing all the potential components: …wikimedia.org/…/Openstack-map-v20221001.jpg
If you’re asking what the difference between a container service and a hypervisor is, then I’d really recommend against pursuing this until you get more experience.
It’s for getting acquainted with the whole software stack. Also, I have enough free time for it :) I’m also very well aware of what the difference between a container service and a hypervisor is; I’m just a little overwhelmed by what OpenStack can do.
Deploying OpenStack seems like a very fun and frustrating experience. If you succeed, you should consider graduating from self-hosting and entering the hosting business. Then, maybe post your offering on LowEndTalk. Not many providers there use OpenStack, so you might be able to lead the pack.
There is a lot of complexity and overhead involved in either system. But the benefits of containerizing and using Kubernetes allow you to standardize a lot of other things with your applications. With Kubernetes, you can standardize your central logging, network monitoring, and much more. And from the developers’ perspective, they usually don’t even want to deal with VMs. You can run something like Docker Desktop or Rancher Desktop on the developer’s system, and that allows them to dev against a real, compliant k8s distro. Kubernetes is also explicitly declarative, something that OpenStack was having trouble being.
So there are two swim lanes, as I see it: places that need to use VMs because they are using commercial software, which may or may not explicitly support OpenStack, and companies trying to support developers, in which case the developers probably want a system that affords a faster path to production while meeting compliance requirements. OpenStack offered a path towards that latter case, but Kubernetes came in and created an even better path.
PS: I didn’t really answer your “capable” question though. Technically, you can run a Kubernetes cluster on top of OpenStack, so by definition Kubernetes offers a subset of the capabilities of OpenStack. But it encapsulates the best subset for deploying and managing modern applications. Go look at some demos of ArgoCD, for example. Go look at Cilium and Tetragon for network and workload monitoring. Look at what Grafana and Loki are doing for logging/monitoring/instrumentation.
Because OpenStack lets you deploy nearly anything (and believe me, I was slinging OVAs for anything back in the day), you will never get to that level of standardization of workloads that allows you to do those kinds of things. By limiting what the platform can do, you can build really robust tooling around the things you need to do.
If you are more interested in running apps than having a NAS, I recommend trying CasaOS. TrueNAS is great, but I found CasaOS significantly more straightforward, especially when it comes to smb shares (it’s like two clicks).
Also TrueNAS uses ZFS which is good for what it is, but means you basically need a machine running TrueNAS to read/write the drives in case something goes wrong.
that is kinda what I’m trying to do. truenas is nice and all but it’s also pretty advanced and not beginner friendly when it comes to a lot of things. I’ve heard a lot about casaos from a youtuber that I like to watch, but I never realized that it was more of a nas os than just a platform to run applications. I’ll give it a shot! thanks for the recommendation.
Compared to TrueNAS, CasaOS is more of a “platform for running apps”, but unless you’re storing dozens of terabytes of important data in RAID or something, it’s still probably the easier/lower-maintenance option.
Be careful, OP: after the first year you have to pay the ‘renew’ price, which is generally higher than the ‘register’ price. A lot of cheap domain offers use that trick, expecting users to become attached to their domains.
Because I am a school student (16) from India. Here you have to account to your parents for every penny, and if I tell them I just want a domain for self-hosting my personal stuff, I won’t have anything else I can say to convince them.
Can you make the domain somehow personalized to you, so you can say it’s for an online resume to further your education and employability? If you happen to host other personal stuff, that won’t cost you anything extra; just make sure you have a fancy-looking CV at the root.
If you have a stable IP, there are also free top-level domains (.TK / .ML / .GA / .CF / .GQ) over at www.freenom.com. Their frontend is down sometimes, but once you have a domain and point it to an IP, you should be dandy.
Check whatismyipaddress.com to see your IP address once you’re connected to either network; they’re almost certainly different IPs. In that case, Dynamic DNS is probably best.
But if you’re using your neighbor’s wifi, I doubt there’s a way for you to host stuff unless you have access to their router, can open ports 80 (HTTP) and 443 (HTTPS), and forward them to your server. It’s best to use hardware you control (including the router).
Not sure which ports are required for your usage, but maybe cloudflared would work? It works on the free tier as well; you can install cloudflared on your Linux/Windows server (no BSD support AFAIK).
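If you go that route, the basic flow with the cloudflared CLI looks roughly like this; the tunnel name, hostname, and local port below are placeholders I made up, not anything from your setup:

```
# Authenticate against your Cloudflare account and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create homelab

# Publish a hostname on your domain that routes through the tunnel
cloudflared tunnel route dns homelab app.example.com

# Run the tunnel, forwarding traffic to a local service
cloudflared tunnel run --url http://localhost:8080 homelab
```

In practice you’d run it as a service with a config file and ingress rules, but this is enough to test whether it fits your use case.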
Freenom’s domains are pretty unstable: they lost management of .ga domains last year, and they often take back free domains once those domains start getting heavy use.
Though if you have an unstable network, I wouldn’t suggest self-hosting fediverse stuff.
Had a really good experience with this option. Namecheap seems quite reasonable. Also, self-hosting on someone else’s domain can cause a lot of issues as you try to create enough paths for everything. I have found subdomain routing to work much better, as a lot of applications get sad when their host URL is something like blarg.com/gitea or something.
I don’t know, maybe ?
But I strongly recommend having your own domain name.
As long as you do nothing illegal, when you own a domain name you have legal recourse to keep it. That’s not the case for an email service like Gmail, which can ban you for no reason tomorrow and leave you with no recourse to get your email address back.
It’s a few euros per year, and you can share the cost with your family: take a domain name with your last name, and your whole family can have firstname@lastname.yourcountrytld.
I just looked for my last name; it’s around $10 per year.
I’ll repeat this again: it means you will own this domain name, you have legal recourse, and big companies won’t be able to take your mail address from you.
Otherwise, use DuckDNS if you really don’t want to pay anything.
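For DuckDNS specifically, keeping the record pointed at a changing home IP is usually just a cron job hitting their update URL. A minimal sketch, assuming a subdomain called mysub and a placeholder token:

```
#!/bin/sh
# Update a DuckDNS subdomain with this machine's current public IP.
# Leaving ip= empty tells DuckDNS to use the address the request comes from.
curl -fsS "https://www.duckdns.org/update?domains=mysub&token=YOUR-TOKEN&ip=" -o /tmp/duckdns.log

# Example crontab entry to run it every 5 minutes:
# */5 * * * * /usr/local/bin/duckdns-update.sh
```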
You just made a mistake in saying which domain name you plan to take; someone may buy it before you in order to extort money from you.
It probably won’t happen but…
That reminds me of my Linux server teacher in university. We were to buy a domain name from Namecheap or Gandi during class with some free credits, and the teacher was recommending lastname[dot]com if that was available.
I happened to say aloud “yep, mylastname[dot]com is available” and he quickly shushed me as if I had named Voldemort aloud in Hogwarts, telling me that saying it out loud is a really bad move… lol
I don’t want to be rude but if you can’t afford a domain you probably shouldn’t be hosting a fediverse server.
Honestly do you even need to expose services to the internet? Internet exposure is dangerous and is not necessary for 95% of things. You can use a mesh VPN like netbird or Tailscale if you need remote access.
This isn’t me trying to offend you; I just think it would be wise to reduce the scope of your projects.
I would stay away from Kubernetes/k3s/k8s. Unless you want to learn it for work purposes, it’s so overkill you can spend a month before you get things running. I know from experience. My current setup gives you options and has been reliable for me.
NAS Box: TrueNAS Scale - You can have Unraid fill this role.
Services Hosting: Proxmox - I can spin up any VMs I need, and there’s lots of info online for things like hardware passthrough to VMs.
Containers: Debian VM - Debian makes a great server environment as it’s stable and well supported. I just make this VM a Docker Swarm host and manage things with Portainer as a web interface.
I keep data on the NAS and have containers access it over the network, usually an NFS share.
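As a rough illustration of that pattern (the NAS address, export path, and nginx placeholder image below are made up for the example, not taken from my actual setup):

```
# Make the Debian VM a single-node swarm
docker swarm init

# Create a named volume backed by an NFS export on the NAS
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/tank/media \
  media

# Any swarm service can then mount that volume
docker service create --name web --mount source=media,target=/data nginx
```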
How do you manage your services on that, Docker Compose files? I’m really trying to get away from the workflow of clicking around in some UI to configure everything, only for it to glitch out and disappear, leaving me to try to remember what to click to get it back. That was my main problem with Portainer and what caused me to move away from it (I have separate issues with docker-compose, but that’s another thing).
I personally stepped away from compose. You mentioned that you want a more declarative setup. Give Ansible a try. It is primarily for config management, but you can easily deploy containerized apps and correlate configs, hosts etc.
I usually write roles for some more specialized setups like my HTTP reverse proxy, the arrs etc. Then I keep everything in my inventory and var files. I’m really happy and I really can tear things down and rebuild quickly. One thing to point out is that the compose module for Ansible is basically unusable. I use the docker container module instead. Works well so far and it keeps my containers running without restarting them unnecessarily.
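For anyone curious what that looks like in practice, here’s a minimal sketch using the community.docker.docker_container module; the host group, image, port, and file names are placeholders, not my actual setup:

```
# Write a tiny playbook and run it against an inventory of Docker hosts
cat > deploy-whoami.yml <<'EOF'
- hosts: docker_hosts
  become: true
  tasks:
    - name: Run a test container
      community.docker.docker_container:
        name: whoami
        image: traefik/whoami:latest
        state: started
        restart_policy: unless-stopped
        published_ports:
          - "8081:80"
EOF

ansible-playbook -i inventory.ini deploy-whoami.yml
```

Re-running the play leaves the container alone unless its definition changes, which lines up with the “no unnecessary restarts” behavior described above.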
Unifi is specific about expecting the controller address not to change. You have several options. There’s the “override controller address” setting, which you can use to point the devices at a DNS name instead of an IP address; the DNS record can then track your controller. It doesn’t exactly solve your issue, though, as the USG doesn’t assign DNS names to dynamic allocations.
Another option is to give the controller a static IP allocation. This way, in case you reboot everything, USG will come up with the latest good config, then will (eventually) allocate the IP for controller, and adopt itself.
Finally, the most bulletproof option is to just have a static IP address on the controller. It’s a special case, so it’s reasonable to do so. Just like you can only send NetFlow to a specific address and have to keep your collector in one place, basically.
I’d advise against moving dhcp and dns off unifi unless you have a better reason to do so, because then you lose a good chunk of what unifi provides in terms of the network management. USG is surprisingly robust in that regard (unlike UDMs), and can even run a nextdns forwarding resolver locally.
So this is where I’m a little confused. The USG has the option to assign a static IP (which I’ve done), but if you ever need to CHANGE that IP… chaos. From what I understand, the USG needs to propagate that IP to all your devices, but it uses the controller to do that. Then you also run into issues with IP leases having to time out. The same problem occurs if I ever upgrade my server and change out the MAC address, because now the IP is assigned to the old MAC.
I’m not sure if there’s any way around this. But it basically locks me in to keeping the controller (and thus my server) at a single, fixed IP, without any chance of changing it.
Here’s how it works: Unifi devices need to communicate with the controller over tcp/8080 to maintain their provisioned state. By default, the controller adopts a device with http://controller-ip:8080/inform, which means that if you ever change the controller IP, you’ll have to adopt your devices again.
There are several other ways to adopt the device, most notably using the DHCP option 43 and using DNS. Of those, setting up DNS is generally easier. You’d provision the DNS to point at your controller and then update the inform address on all your devices (including the USG).
Now, there’s still the problem of keeping your controller IP and DNS address in sync. Unifi generally doesn’t do DNS names for its DHCP leases, and the devices can’t use mDNS, so you’ll have to figure out a solution for that. Or you can just cut it short and make sure the controller has a static IP: not a static DHCP lease, but literally a static address. That allows your controller to function autonomously from the USG, as long as your devices don’t have to reach it across VLANs.
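If you go the DNS route, the mechanics are roughly this (the hostname, addresses, and SSH user below are placeholders; a hedged sketch, not official Ubiquiti docs):

```
# 1. Create a DNS record (or local resolver entry) pointing at the controller,
#    e.g. unifi.home.lan -> 192.168.1.5
# 2. SSH into each Unifi device and update its inform URL:
ssh admin@192.168.1.20 'set-inform http://unifi.home.lan:8080/inform'

# Freshly reset devices also try http://unifi:8080/inform by default, so a bare
# "unifi" record in your local search domain covers new adoptions too.
```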
I run Plex on a Raspberry Pi 3. It can handle two simultaneous 1080p streams on my local wifi, but it can’t handle 2K or 4K video at all, and can’t serve video outside of the local network.
“use your favorite Unix then install Plex” or “Here are 56 perfect versions of Unix to install for your Plex server”
What part of this do you think is hard?
Each step can be scary at first, but it’s not hard if you break it into pieces.
Booting Ubuntu or some linux OS is a fun first step if you actually have a spare computer handy
The articles are saying “Here is a list of almost all known versions of Linux, these are good for you to use” when you ask what the best option is… hardly narrowing anything down. Likewise, saying “Use your favorite… then install the product you want to use” is also useless information if you are asking the question I was… I have no favorite, obviously, since I know nothing about it… and OBVIOUSLY I am going to install the product I just asked about installing…
The pages I was looking at answered the question “How do I install Linux?” by saying “First, Install Linux”
Not to say there aren’t better sources, but all the first ones I found were along that theme.
Ah yeah, I know what you mean. That can be overwhelming. There are a loooooot of choices, and the differences might be things I’ve never even heard of before.
I think a lot of these articles are written with the expectation that you will try several different versions after you learn to flash/boot. I think I ended up with 4 different forks I could boot from.
When I started, I went with Ubuntu first just because it seemed pretty stable and had support from a large company, but once I learned how to boot Ubuntu, it was easy to do the same steps for the other versions to try them out.
I actually went back and had a look at a few of the top results, and I have a feeling a lot were AI-written sandtraps. Several were very similar: “Install your favorite Linux then <copy and paste from the Plex website on how to install Plex>”
Makes it hard for a newbie who doesn’t know what they don’t know, so they can’t ask the right question.
The Mint install works fine now. I made a lot of mistakes and it took a while to get my head around the folder structure and permissions, but once I’m more comfortable I’ll try something a little more headless next time. Though from playing around, I reckon I’d be happy with Mint as a daily machine (if only my job wasn’t coding Windows apps :/).
The simplest solution would be to install Debian. The thing to note is that the Debian installer is designed to be multipurpose, so it will default to installing a GUI.
Assuming you can boot off a live USB with the Debian installer, you can follow the steps until you get to the tasksel software selection; from there, uncheck GNOME and check system utilities and SSH server. Also, Debian defaults to separate root and user accounts; I would recommend disabling root (see steps below).
On a different machine, SSH into the server (I’m using debian.local, but you should replace that with your hostname or IP).
Now you have a system to set things up. I would start by enabling automatic updates and installing Docker Compose. (Docker Compose allows you to deploy software very quickly in containers via a YAML spec.)
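A hedged sketch of those steps on a stock Debian install; the package choices (Debian’s own docker.io rather than Docker’s upstream repo) and the root-disabling approach are assumptions on my part, so adjust to taste:

```
# Automatic security updates
sudo apt update
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Docker + Compose from Debian's own repos (Docker's upstream apt repo is the other option)
sudo apt install -y docker.io docker-compose
sudo usermod -aG docker "$USER"   # log out and back in for the group change to apply

# One common way to do the "disable root" step (run as root, substituting your username):
#   apt install sudo && usermod -aG sudo yourusername && passwd -l root
```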
Thanks. I decided to see what happened with a Mint install (before I saw your reply) as a toe-in-the-water thing, to learn more about the OS and see what it was like. I only KiTTY into a Linux server for work and do some basic tasks on it occasionally, so I was interested.
An… interesting experience… trivial install, easy enough to understand the UI, but I entirely failed to get a Plex server working. Nothing on the network can see it (local works fine), which doesn’t make much difference, because Plex has nothing to serve since it can’t see the folder with movies on it due to, I believe, ownership issues (the files are on a portable USB drive).
Still fiddling, but most help documents descend into arcane command-line arguments very quickly and are generally “wrong” in that they suggest editing files that don’t exist in folders that aren’t there.
Still… a learning experience :) (Easy enough to kill it and try Debian if I can’t work out chown!)
Hah! Apparently in the long list of UFW commands I was running, the first one didn’t run or I missed it. I can see the server now at least; it just needs to see the files!
Entertaining but the wife is getting impatient :/
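For anyone else hitting the same two walls (firewall plus file ownership), these are the usual suspects; the mount path below is a placeholder, and the plex user/group assumes Plex was installed from the official package:

```
# Plex's web/API port through UFW (32400/tcp is its default)
sudo ufw allow 32400/tcp
sudo ufw status verbose

# If the USB drive is ext4, give the plex user ownership of the media folder;
# NTFS/FAT drives don't support chown and need uid/gid mount options instead.
sudo chown -R plex:plex /media/usbdrive/Movies
```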
I use pihole for managing DNS and DHCP. It’s run via Docker, and the compose file and dnsmasq configs are version-controlled, so if the Pi dies I can just bring it up on another Pi.
The Pi with pihole has a static IP to avoid some of the issues you described.
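A minimal sketch of that kind of version-controlled setup, assuming the stock pihole/pihole image; the timezone, password, and env var names are placeholders worth checking against the image’s current README:

```
mkdir -p ~/pihole && cd ~/pihole
cat > docker-compose.yml <<'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host          # simplest way to serve DNS + DHCP on the LAN
    environment:
      TZ: "Europe/Berlin"
      WEBPASSWORD: "changeme"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
EOF
docker compose up -d
```

With the two bind-mounted directories checked into git (minus secrets), rebuilding on a fresh Pi is basically clone and up.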
That’s what I do. I do have a small VM that is linked to it in a keepalived cluster with a synchronized configuration that can take over in case the rpi croaks or in case of a reboot, so that my network doesn’t completely die when the rpi is temporarily offline. A lot of services depend on proper DNS resolution being available.
I’ve been meaning to stand up another pihole on another Pi for DNS redundancy. I have to research how best to keep the piholes in sync. So far I’ve found orbital-sync and gravity-sync.
For me gravity-sync was too heavy and cumbersome. It consistently failed at copying over the gravity sqlite3 db file because of my slow rpi2 and SD card; apparently a known issue.
I wrote my own script to keep the most important things for me in sync: the DHCP leases, DHCP reservations, local DNS records and CNAMEs. It’s basically just rsync-ing a couple of files. As for the blocklists: I just manually keep them the same on both piholes, but that’s not a big deal because it’s mostly static information. My major concern was the pihole bringing DHCP and DNS resolution down on my network if it should fail.
Now with keepalived and my sync script that I run hourly, I can just reboot or temporarily shutdown pihole1 and then pihole2 automatically takes over DNS duties until pihole1 is back. DHCP failover still has to be done manually, but it’s just a matter of ticking the box to enable the server on pihole2, and all the leases and reservations will be carried over.
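Not my actual script, but a rough sketch of the same idea, using the stock Pi-hole v5 file locations (check the paths on your version before trusting it):

```
#!/bin/sh
SECONDARY=pihole2.home.lan   # placeholder hostname for the second pihole

# Local DNS records and CNAMEs
rsync -a /etc/pihole/custom.list                    "$SECONDARY":/etc/pihole/
rsync -a /etc/dnsmasq.d/05-pihole-custom-cname.conf "$SECONDARY":/etc/dnsmasq.d/

# DHCP reservations and current leases
rsync -a /etc/dnsmasq.d/04-pihole-static-dhcp.conf  "$SECONDARY":/etc/dnsmasq.d/
rsync -a /etc/pihole/dhcp.leases                    "$SECONDARY":/etc/pihole/

# Pick up the new records on the secondary
ssh "$SECONDARY" 'pihole restartdns reload'
```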
If you ever switch to AdGuard Home, adguardhome-sync is pretty good. IMO AdGuard Home is better, since it has all of PiHole’s features plus DNS-over-HTTPS support out of the box, so your ISP can’t spy on your DNS queries (plain DNS queries can easily be intercepted and modified by your ISP even if you use a third-party DNS server, since they’re unencrypted and unauthenticated).
It might not be applicable to you, but in many cases single-board computers are used where there are minimal file changes on a day-to-day basis, for example when used for displaying stuff. For such cases, it is useful to know that after installing all the required software, the SD card can be switched to read-only mode. This prolongs its life dramatically. Temporary files can still be generated in RAM, and if needed you can push them to external storage/FTP through a cron job or something. I have built a digital display with weather/photos/news where, beyond the initial install, everything is pulled from the internet. I’m working towards implementing what I’ve suggested above.
That would not be ideal for me, as I want to keep most of the system’s logs, and I don’t have a syslog server; even if I had one, I wouldn’t be able to get everything I need… But it is quite a good idea for other use cases, and I might do that with future projects that don’t need a read-write filesystem!
I love that idea, and I’d love to implement it. But I honestly can never figure out how people handle services that let the user change settings (for example, setting their location to get local weather) while still maintaining a read-only system.
You keep the user-changeable files on a separate filesystem. Whether that’s just a separate partition, or an external disk. Keep the system itself read only, and write-heavy directories like logs and caches in RAM.
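Concretely, on Debian or Raspberry Pi OS that usually comes down to a couple of fstab entries plus marking the root filesystem read-only (Raspberry Pi OS also has an overlay filesystem option in raspi-config that does much of this for you); the sizes and paths below are just illustrative:

```
# Keep write-heavy paths in RAM
cat <<'EOF' | sudo tee -a /etc/fstab
tmpfs  /var/log  tmpfs  defaults,noatime,size=50m   0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=100m  0  0
EOF

# Then add "ro" to the root filesystem's mount options in /etc/fstab and reboot;
# user-changeable data lives on a separate, writable partition or external disk.
```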
I am still figuring it out, since it’s a hobby and I’m unable to devote much time to it. But I think it will be something like the Ubuntu live disks that let you try Ubuntu by running it from a DVD. You could run anything, like a web server, save files, settings, etc.; only none of it would persist after a reboot, since everything was saved in RAM. Only here it’ll be a write-locked SD card instead of a DVD.
I’m also sure there must be a name for it and a step-by-step tutorial somewhere. If only Google wasn’t so bad these days…
one of the benefits of things like docker is creating a very lightweight configuration, and keeping it separate from your data.
ive setup things so i only need to rsync my data and configs. everything else can be rebuilt. i would classify this as 'disaster recovery'.
some people reeeeally want that old school, bare-metal restore. which i have to admit, i stopped attempting years ago. i dont need the 'high availability' of entire system imaging for my personal shit.
Do you have tips for saving multiple locations? I also have non-Docker configs to back up in /etc and /home. How do you do it? Just multiple rsync commands in a shell script that cron executes periodically, or is there a way to back up multiple folders with one command?
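Multiple rsync calls in one cron-driven script is the usual answer; a sketch of what that might look like (the destination host and paths are placeholders):

```
#!/bin/sh
# rsync -aR preserves the full source path under the destination,
# so /etc and /home land in separate directories on the backup target.
DEST=backup@nas.home.lan:/backups/$(hostname)

rsync -aR --delete /etc/ "$DEST"
rsync -aR --delete /home/ "$DEST"
rsync -aR --delete /opt/docker/configs/ "$DEST"

# Example crontab entry for a nightly run at 03:00:
# 0 3 * * * /usr/local/bin/backup.sh
```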
Privacy/security: Cloudflare terminates HTTPS, which means they decrypt your data on their side (e.g. browser to cloudflare section) then re-encrypt for the second part (cloudflare to server). They can therefore read your traffic, including passwords. Depending on your threat model, this might be a concern or it might not. A counterpoint is that Cloudflare helps protect your service from bad actors, so it could be seen to increase security.
Cloudflare is centralised. The sidebar of this community states “A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.”, and Cloudflare is for sure a service you don’t control, and arguably you’re locked into it if you can’t access your stuff without it. Some people think Coudflare goes against the ethos of self-hosting.
With that said, you’ll find several large lemmy instances (and many small ones) use cloudflare. While you’ll easily find people against its use, you’ll find many more people in the self-hosted community using it because it’s (typically) free and it works. If you want to use it, and you’re ok with the above, then go ahead.
In addition to the above, most of the perceived advantages of CF are non-existent on the free tier that most people use. Their “DDoS protection” just means they’ll drop your tunnel like a hot potato, and their “attack mitigation” on the free tier is a low-effort web application firewall (WAF) that you can replace with a much better and fully customizable self-hosted version.
They explicitly use free DDoS protection as a way to get you in the door, and upsell you on other things. Have you seen them “drop your tunnel like a hot potato”?
Now obviously if their network is at capacity they would prioritise paying customers, but I’ve never heard of there being an issue with DDoS protection for free users. But I have heard stories of sites enabling Cloudflare while being DDoSed and it resolving the problem.
Any stories you’ve heard about websites enabling CF to survive DDoS were not on the free tier, guaranteed.
Please re-read the description for the free tier. Here’s what “DDoS protection” means on free tier:
Customers are not charged for attack traffic ever, period. There’s no penalty for spikes due to attack traffic, requiring no chargeback by the customer.
Will they use some of their capacity to minimize the DDoS effects on their infrastructure? Sure, I mean they have to whether they like it or not, since the DNS points at their servers. But will they keep the website going for Joe Freeloader? Don’t count on that. The terms are carefully worded to avoid promising anything of the sort.
They also say “Cloudflare DDoS protection secures websites and applications while ensuring the performance of legitimate traffic is not compromised.”, with a tick to indicate this is included in the Free tier.
You are honestly the first person I’ve heard complain about Cloudflare failing to protect against DDoS attacks. However, I have no doubt that not having Cloudflare, I would fare no better. So still seems worthwhile to me.
The first point only applies when you use the tunnel function, right?
Because I noticed that if I use the tunnel function (hiding my private IP), the site gets a Cloudflare certificate, but if I just use it as DNS (without the tunnel), the page has my certificate.
If you use DNS with proxying enabled, it still applies; you should get a Cloudflare certificate then. But yes, if you use Cloudflare as DNS only, then it should be direct. I believe you get none of the protection or benefits doing this; you’re just using them as a name server.
The Cloudflare benefits of bot detection, image caching, and other features all rely on the proxy setting.
Also if proxying is enabled, your server IP is hidden which helps stop people knowing how to attack your server (e.g. they won’t have an IP address to attempt to SSH into it). You don’t get this protection in DNS only mode either.
Basically if you’re using DNS only, it’s no different to using the name server from your domain registrar as far as I can tell.
There’s a third point, which is: things behind Cloudflare are publicly accessible, so if you don’t put a service in front for authentication and the service you’re exposing has no authentication, a weak password, or a security issue, you’re exposing your server directly to the internet and bad actors can easily find it.
Which is why some services that I don’t want to protect with complicated passwords are only exposed via Tailscale, so only people inside the VPN can access them.
I have a Cloudflare tunnel set up for one service in my homelab and have it connecting to my reverse proxy, so the data between Cloudflare and my backend is encrypted separately. I get no malformed requests and no issues from Cloudflare, and I even get the remote public IP data in the headers.
Everyone mentions this as an issue, and I’m sure most people are doing the default of pointing cloudflared at a plain-HTTP local service, but it’s not the ONLY option.
I’m not quite sure I get what you’re getting at. If you’re using Cloudflare (for more than just a nameserver), then the client’s browser is connecting to Cloudflare via a Cloudflare SSL certificate. Any password (or other data) submitted will be readable by Cloudflare because the encryption is only between the browser and Cloudflare. They then connect to your reverse proxy, which might have SSL or it might be unencrypted. That’s a second jump done by re-encrypting the data.
How does the reverse proxy help, when the browser is connecting to Cloudflare not to the reverse proxy?
Consider checking out XCP-ng. I’ve been testing it for a few days and I’m really enjoying it. Seems less complicated and more flexible than Proxmox but admittedly I’m still learning and haven’t even tried multiple servers yet. I would suggest watching some YouTube videos first. Good luck!
I want to like XCP-ng. Unfortunately, my primary use case is VMs or containers working with attached USB devices, and on Xen it seems like an absolute nightmare to pass through USB or PCI devices other than GPUs (as vGPUs).
Even on Proxmox it has been frustratingly manual.
I’m planning to try out k8s generic device plugins. I don’t really need VMs if containers will cooperate with the host’s USB. I’m sure that will be a bit of a nightmare on its own and I will be right back to Proxmox.
I hope someone will tell me I am wrong and USB can be easy with Xen. I do prefer XCP-ng over Proxmox in many other ways.
Here’s their documentation. The tip suggests it may have been harder in the past but it doesn’t seem too bad now. Hopefully this is configurable in Xen Orchestra in the future.
Do NOT self-host email! In the long run, you’ll forget a security patch, someone will breach your server and blast out spam, and you’ll end up on every blacklist imaginable with your domain and server.
Buy a domain, DON’T use GoDaddy, they are bastards. I’d suggest OVH for European domains or Cloudflare for international ones.
After you have your domain, register with “Microsoft 365” or “Google Workspace” (I’d avoid Google, they don’t have a stable offering) or any other email provider that allows custom domains.
Follow their instructions on how to connect your domain to their service (a few MX and TXT records usually suffice) and you’re done.
After that, you can spin up a VPS, try out new stuff, and connect it to your domain as well (A and CNAME records).
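Once the records are in, a few dig queries are a quick way to confirm they’ve propagated (example.com and the VPS hostname are placeholders):

```
dig +short MX example.com        # should list your mail provider's servers
dig +short TXT example.com       # should include the SPF/verification records they gave you
dig +short A vps.example.com     # the A record pointing at your VPS
```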
I’d throw in mailbox.org as a more privacy-focused alternative to Google and Microsoft. Been using them for years without issues. Only their 2FA solution sucks.
If you get your domain from OVH, you get one single mailbox (be it with a lot of aliases, like a different email-address for every service/website you use) for free.
I did as well, but then I went Microsoft and never looked back. Google’s platform still feels like a shitty startup with missing stuff everywhere, compared to Azure (or AWS).
The only thing I’m missing is Google Photos, but there are self-hosted alternatives out there that I’ll try soon.
All good advice. I’d recommend Protonmail for mail hosting - I’ve had a very good experience with them, and the only downside is you have to use their client.
I’ll second not self hosting email unless you’re in it for the experience.
I’d also strongly caution against hosting email for friends and family unless you want to own that relationship for the rest of your life.
If you do it anyway, you’re going to end up locked into whatever solution you decide for a long time, because now you have users who rely on that solution.
If you still go forward, don’t use Google (or msft). Use a dedicated email service. Having your personal domain tied to those services just further complicates the lock in.
(I did this over a decade ago, with Google, when it was just free vanity domain hosting. I’ve been trying for years to get my users migrated to Gmail accounts.)
If I had it all to do over again, I’d probably set up accounts as vanity forwards to a “real” account for people who wanted them. That’s easy to maintain and move around, and you’re not dealing with migrating people’s OAuth to everything when you want to move or stop paying for it.
I have a bunch of users (friends and family) on a bunch of different domains. It’s honestly not so bad but yeah, you need a decent dedicated service.
Migrations aren’t simple but aren’t that complicated either (just did one last year).
I mainly need to copy their email over, but it’s also a good moment to check that they’re using decent passwords and to have them freshen them.
I also need to update their webmail and IMAP/SMTP URLs in their bookmark/email apps but I’ve been playing with DNS CNAMEs for this purpose and it’s mostly working ok (aliasing one of my domains to the provider’s so I only have to update the DNS which I do anyway for a mail migration).
My mistake was using Google back when it was just the ability to have a personal domain as your Google account. But they kept expanding and morphing that into what is now Google Workspace. Migrating people off of that requires them to abandon their Google accounts and start over. If it were just email, it would be a much simpler prospect to change backends.
Certainly. But, what I’m trying to say is it’s not just email. My users are using my domain as their Google account. All Google services, oAuth, etc…, not just email. To do it right I need to get them to migrate their google services to a gmail.com account.
I currently selfhost mailcow on a small VPS but I would like to move the receiving part to my homelab and only use a small VPS or service like SES for sending.
I set this up a couple years ago but I seem to remember AWS walking me through the initial setup.
First you’ll need to configure your domain(s) in SES. It requires you to set some DNS records to verify ownership. You’ll also need to configure your SPF record(s) to allow email to be sent through SES. They provide you with all of this information.
Next, you’ll need to configure SES credentials or it won’t accept mail from your servers. From a security standpoint, if you have multiple SMTP servers I would give each a unique set of credentials but you can get away with one for simplicity.
I’ve got Postfix configured on each of my VPS servers, plus an internal relay, to relay all mail through SES. To the best of my knowledge it’s worked fine; I haven’t had issues with mail getting dropped or flagged as spam.
There is a cost, but with my email volumes (which are admittedly low) it costs me 2-3 cents a month.
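For reference, relaying Postfix through SES boils down to a relayhost plus SASL credentials; a sketch along the lines of AWS’s documented setup (the region, credentials, and paths below are placeholders):

```
# Point Postfix at the SES SMTP endpoint for your region
sudo postconf -e 'relayhost = [email-smtp.us-east-1.amazonaws.com]:587'
sudo postconf -e 'smtp_sasl_auth_enable = yes'
sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
sudo postconf -e 'smtp_sasl_security_options = noanonymous'
sudo postconf -e 'smtp_tls_security_level = encrypt'

# Store the SES SMTP credentials (generated in the SES console)
echo '[email-smtp.us-east-1.amazonaws.com]:587 SMTP_USERNAME:SMTP_PASSWORD' | sudo tee /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd*
sudo systemctl restart postfix
```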
I’d avoid Google, they don’t have a stable offering
What do you mean by not stable?
I’ve been (stuck) on Google Workspace for many, many years - I was grandfathered in from the old G-Suite plans. The biggest issue for me is that all my Play Store purchases for my Android are tied to my Workspace identity, and there’s no way to unhook that if I move.
I want to move. I have serious trust issues with Google. But I can’t stop paying for Workspaces, as it means I’d lose all my Android purchases. It’s Hotel fucking California.
But I’ve always found the email to be stable, reliable, and the spam filtering is top notch (after they acquired and rolled Postini into the service).
I mean, they kill services willy nilly. Sure Gmail will probably survive, but the rest drove me away (Reader, Music, …).
Regarding your Android purchases: at the time of my move, I went through my list of apps I had bought and tallied up the ones that I still used. It was less than $50 of repurchases.
Don’t let those old purchases hold you back. Cut this old baggage loose.
At the time of my move, I went through my list of apps I had bought and tallied up the ones that I still used. It was less than $50 of repurchases.
Yeah, I know this what I should do too. As someone else said in this comment thread, gotta tear that bandaid off at some point. Just shits me that I should have to. But the freedom after doing it… <chef’s kiss>
“But I shouldn’t have to” is a trap, everywhere it occurs. It cripples one’s ability to act on an emotional level, and manifests as all kinds of resistances and avoidances that ultimately prevent you from seeing the problem clearly - and if you somehow do see the problem clearly, you still don’t want to do anything about it.
The world owes you nothing. You exist. If you want love and fairness and a reasonable world, love and be fair and be reasonable, and choose to work together with those who are. Where you work, what you spend your time on, where you spend your money, and who you spend your time with are your places of impact. Don’t let others steal that - particularly over ‘but I shouldn’t have to defend myself.’
I tore that bandaid off a while ago. Same thing with trust issues and Google.
Since then I set up a family account and use a regular Gmail account for app store purchases so I can change provider at any time. Can share most of my app purchases with family. I don’t actually check the gmail email. Just use it for Android services.
Yeah, that’s the other thing that shits me. Paying for my wife and I on Workspaces, and we don’t have family sharing rights. We’re literally paying to be treated like second-class citizens!