I do this. Personally I use Cloudflare for my domain and DNS; not that I’m committed to them, it’s just what I use. I then use Protonmail for my email and point the relevant records to them.
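The “relevant records” are just a handful of DNS entries at whoever hosts your zone. Roughly like this (values from memory, so double-check Proton’s own domain setup guide, which also gives you the exact DKIM CNAMEs):

```
example.com.         MX    10 mail.protonmail.ch.
example.com.         MX    20 mailsec.protonmail.ch.
example.com.         TXT   "v=spf1 include:_spf.protonmail.ch ~all"
_dmarc.example.com.  TXT   "v=DMARC1; p=quarantine"
```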
I don’t give my personal email address to literally anyone. Everyone gets an alias.
Once someone gets your personal email address and leaks it, there is no way to stop spam. You cannot delete your personal address because it is your account identity.
Firefox Relay, AnonAddy, SimpleLogin, all great services.
I have a business email address where I’m unfortunately just stuck digging through spam.
If you want to start, try to find something used on DBA, like an old laptop. If you are a student, maybe someone in your class upgrades their laptop and you can get it cheap. (Ideally a laptop where you can remove the battery, plus you need to change a setting so it doesn’t go into standby when you close the lid.)
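The lid setting is just a systemd-logind option on most distros. A minimal sketch, assuming a systemd-based system:

```
# /etc/systemd/logind.conf
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore

# then apply it:
#   sudo systemctl restart systemd-logind
```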
You can add an external hard disk for Nextcloud data.
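Something like this for the disk (UUID, filesystem and mount point are just examples); you’d then point Nextcloud’s data directory at the mount when installing:

```
# /etc/fstab - external disk for Nextcloud data (sketch)
UUID=xxxx-xxxx  /mnt/ncdata  ext4  defaults,nofail  0  2
```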
My first home server was a Raspberry Pi. It’s not great for Nextcloud: you need to disable all preview images, and the UI might still be slow. On top of that, using a microSD card for the OS means it might randomly break (happened to me, with a SanDisk).
My second server was my old laptop; I used a laptop with an i3 from around 2013 as a server for a long, long time.
The best thing I can recommend is not to rush and grab the first thing you see; try to look for a good deal. Start small, and you can always upgrade your server in the future.
Bookstack is really nice and user friendly. It’s probably one of my favorites.
DokuWiki is simple and stores pages as plain text files.
I haven’t used wiki.js much but I’ve heard good things about it too.
Another option, if you don’t need to share the wiki with anyone, would be a note tool like Trilium. It has built-in support for stuff like Mermaid or Excalidraw diagrams.
Don’t forget to set up backups for whatever wiki you do go with, and make sure you can restore them for when your wiki is broken ;)
I tried both hosting my own mail server and using paid mail hosting with my own domain, and I advise against the former.
The reason not to roll your own mail server is that your email might go to spam at many, many common mail services. Servers and domains that don’t usually send out large amounts of email are considered suspicious by spam filters, and the process of building up a sending reputation with other mail servers by gradually sending mail is called warming them up. It’s hard and it takes time… Also, why would you think you can do hosting better than a professional who is paid for that? Let someone else handle it.
With your own domain you are also not bound to one provider - you can change both domain registrar and your email hosting later without changing your email address.
Also, avoid using something too unusual. I went with firstname@lastname.email because I thought it couldn’t be simpler than that. Bad idea… I can’t count how many times people have sent mail to the wrong address because such a TLD is unfamiliar. I get told by web forms regularly that my email is not a valid address, and even people who got my email written on a piece of paper have replaced the .email with .gmail.com because “that couldn’t be right”…
> I get told by web forms regularly that my email is not a valid address and even people that got my email written on a piece of paper have replaced the .email with .gmail.com cause “that couldn’t be right”…
That’s the thing that holds me back from a non-standard TLD, as much as I’d love to get a vanity domain.
I’ve got a .org I’ve had for over 20 years now. My primary email address has been on that domain for almost as long. While I don’t have problems with web-based forms, telling people my email address is a chore at best since it’s not gmail, outlook, yahoo, etc…
I keep seeing people say this but I’ve yet to encounter it even once. I fully believe it happens with non-com/net/org TLDs but I’ve been using my .org as my daily driver for 2 decades and have never had it rejected or denied.
You can avoid the warmup by using an SMTP relay, and you can just use the one from your DNS provider if you’re not planning to send hundreds of mails per day.
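If your server runs Postfix, pointing it at a relay is only a few lines in main.cf. A sketch with a placeholder relay host (credentials would go in a separate sasl_passwd map):

```
# /etc/postfix/main.cf (sketch)
relayhost = [smtp-relay.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```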
Using CloudFlare and using the cloudflared tunnel service aren’t necessarily the same thing.
For instance, I used cloudflared to proxy my Pi-hole servers’ requests to CF’s DNS-over-HTTPS servers, for maximum DNS privacy. Yes, I’m trusting CF’s DNS servers, but I need to trust an upstream DNS somewhere, and it’s not going to be Google’s or my ISP’s.
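For anyone who wants to copy that setup: cloudflared has a proxy-dns mode, and Pi-hole then just gets pointed at it as a custom upstream. Rough sketch (port and upstream are the commonly used defaults, adjust to taste):

```
# run cloudflared as a local DNS-over-HTTPS forwarder
cloudflared proxy-dns --port 5053 --upstream https://1.1.1.1/dns-query

# then in Pi-hole: Settings -> DNS -> custom upstream 127.0.0.1#5053
```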
I used CloudFlare to proxy access to my private li’l Lemmy instance, as I don’t want to expose the IP address I host it on. That’s more about privacy than security.
For the few self-hosted services I expose on the internet (Home Assistant being a good example), I don’t even bother with CF at all. I use Nginx Proxy Manager and Authelia, providing SSL I control and enforcing a 2FA policy I administer.
Actually you don’t need to trust an upstream DNS server. Check out dnscrypt-proxy on GitHub. You can use it with Anonymized DNS relays, so no single upstream sees both your IP and your queries, and then point your machines at the dnscrypt-proxy instance’s IP as their DNS resolver.
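The relevant bits of dnscrypt-proxy.toml look roughly like this; the server and relay names here are placeholders, you’d pick real ones from the public resolver and relay lists:

```toml
# dnscrypt-proxy.toml (sketch)
listen_addresses = ['127.0.0.1:53']
server_names = ['example-dnscrypt-server']

[anonymized_dns]
routes = [
  { server_name = 'example-dnscrypt-server', via = ['anon-relay-1', 'anon-relay-2'] }
]
```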
I’m probably the biggest simpleton in this thread, but I was just looking at this earlier and TiddlyWiki still seems like the easiest of the easiest. It’s literally just an html file that requires pretty minimal setup to get going. Nothing else seems to even come close. I’ve been using it for a couple of years as a sort of internal departmental job aid, just basic information for our group and it’s pretty straight-forward.
The problem is OPNsense: the BSD kernel it uses does single-threaded network routing. So the APU can only saturate 1 Gbit with multiple connections/threads, or if you switch to a firewall with a Linux kernel like OpenWrt.
That said, an N100 probably does have enough single-thread performance to do 1.2 Gbit. Not sure about the full 2.5 Gbit though.
Thank you for the answers. I enjoy OPNsense; it’s easier to use than OpenWrt for me personally.
I was thinking of doing some testing with the new device before I replace the old one, but I wanted to hear if anyone has experience with it.
I looked at cpubenchmark.net and saw that the N100 is about 8 times faster than the AMD SoC. I’m not sure if that translates linearly into routing performance. Currently max download is about 600-700 Mbit while upload is 300-400.
How are you measuring your speeds? I think Cloudflare speed tests were more accurate for me than Ookla, but in the end downloading a large file over Usenet gives me the best picture.
Edit: and that made me realise my SSD was a bottleneck; replacing it helped me go from 500-600 to about 900-950 on my gigabit connection.
Based on my personal experience, I’d say Gmail. You only need a domain; I used Namecheap without any issue. You register with that on Google, set some things on Namecheap (it guides you all the way), and then pay the lowest monthly fee. I pay 5.20 euros per month for my company’s mail.
You set up a main email, then you can set up any number of aliases for yourself I think. You can also create group emails and assign yourself to them.
I’ve been using Wiki.js since I asked this question a few months ago. I’ve been pretty happy with it. It stores data in text files using Markdown and can synchronize with a number of backends. I’ve got mine syncing to a private GitHub repo.
Everyone’s saying fstab, but if Navidrome is in a Docker container, just mount it as a volume on your container. I found this guide that seems to document it fairly well.
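Something like this in the compose file, assuming the official image (the host paths on the left are just examples):

```yaml
services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"
    volumes:
      - /mnt/music:/music:ro   # host music folder -> container's music dir
      - ./data:/data           # Navidrome's own database/config
```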
I can’t remember what I was watching, but they said Kubernetes is designed for something so large in scale that the only reason most people have heard of it is that some product manager asked what Google uses and then demanded their team use it too, to replicate Google’s success. Hobbyists followed, and now a bunch of people are running something that’s poorly suited to such small-scale systems.
Haha yeah true, but it does come with the advantage that it’s super prevalent and so has a lot of tools and docs. Nearly every self-hosted service I use has a docs page for how to set it up with Kubernetes. (Although it’s not nearly as prevalent as plain docker)
With a basic understanding of how k8s works and an already running cluster, all one needs to know is how to run a service as a container to have it also run in k8s.
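A minimal sketch of what that jump looks like, reusing the Navidrome image mentioned elsewhere in the thread as the example (names and ports are just examples): a Deployment to run the container and a Service to expose it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navidrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: navidrome
  template:
    metadata:
      labels:
        app: navidrome
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:latest
          ports:
            - containerPort: 4533
---
apiVersion: v1
kind: Service
metadata:
  name: navidrome
spec:
  selector:
    app: navidrome
  ports:
    - port: 4533
      targetPort: 4533
```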
I would stay away from Kubernetes/k3s/k8s. Unless you want to learn it for work purposes, it’s so overkill you can spend a month before you get things running. I know from experience. My current setup gives you options and has been reliable for me.
NAS box: TrueNAS Scale - you can have Unraid fill this role.
Services hosting: Proxmox - I can spin up any VMs I need, and there’s lots of info online for things like hardware passthrough to VMs.
Containers: Debian VM - Debian makes a great server environment as it’s stable and well supported. I just make this VM a Docker Swarm host and manage things with Portainer for a web interface.
I keep data on the NAS and have containers access it over the network, usually an NFS share.
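For the NFS part, you can either mount the share on the Docker hosts via fstab, or declare it as a named volume right in the stack file so the nodes mount it themselves. A rough sketch (the NAS IP and export path are just examples):

```yaml
# top-level volumes section of the stack/compose file (sketch)
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"
      device: ":/export/media"
# services then reference it, e.g.  volumes: ["media:/media"]
```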
How do you manage your services on that, docker compose files? I’m really trying to get away from the workflow of clicking around in some UI to configure everything, only for it to glitch out and disappear, leaving me to try to remember what I clicked to get it back. That was my main problem with Portainer and what caused me to move away from it (I have separate issues with docker-compose, but that’s another thing).
I personally stepped away from compose. You mentioned that you want a more declarative setup. Give Ansible a try. It is primarily for config management, but you can easily deploy containerized apps and correlate configs, hosts etc.
I usually write roles for some more specialized setups like my HTTP reverse proxy, the arrs, etc., then keep everything in my inventory and var files. I’m really happy with it, and I really can tear things down and rebuild quickly. One thing to point out is that the compose module for Ansible is basically unusable; I use the docker_container module instead. Works well so far, and it keeps my containers running without restarting them unnecessarily.
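For anyone curious what that looks like, a task using the docker_container module is roughly this (names and paths are just examples):

```yaml
# roles/navidrome/tasks/main.yml (sketch)
- name: Run Navidrome container
  community.docker.docker_container:
    name: navidrome
    image: deluan/navidrome:latest
    restart_policy: unless-stopped
    ports:
      - "4533:4533"
    volumes:
      - /srv/navidrome/data:/data
      - /srv/music:/music:ro
```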