Using CloudFlare and using the cloudflared tunnel service aren’t necessarily the same thing.
For instance, I used cloudflared to proxy my Pi-hole servers’ upstream requests to CF’s DNS-over-HTTPS servers, for maximum DNS privacy. Yes, I’m trusting CF’s DNS servers, but I need to trust an upstream DNS somewhere, and it’s not going to be Google’s or my ISP’s.
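For anyone wanting to replicate that: cloudflared has a built-in DNS-over-HTTPS proxy mode. A minimal sketch (port and upstreams are just the common choices, adjust to taste):

```sh
# Run cloudflared as a local DoH forwarder, then point Pi-hole's
# custom upstream at 127.0.0.1#5053
cloudflared proxy-dns --port 5053 \
  --upstream https://1.1.1.1/dns-query \
  --upstream https://1.0.0.1/dns-query
```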
I used CloudFlare to proxy access to my private li’l Lemmy instance, as I don’t want to expose the IP address I host it on. That’s more about privacy than security.
For the few self-hosted services I expose on the internet (Home Assistant being a good example), I don’t bother with CF at all. I use Nginx Proxy Manager and Authelia, providing SSL I control and enforcing a 2FA policy I administer.
Actually, you don’t need to trust an upstream DNS server. Check out dnscrypt-proxy on GitHub. You can use dnscrypt-proxy with Anonymized DNS relays, then point your devices at its IP as their DNS resolver.
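A minimal sketch of what that looks like in dnscrypt-proxy.toml; the resolver and relay names here are placeholders, pick real ones from the project’s public-resolvers and relays lists:

```toml
listen_addresses = ['127.0.0.1:5353']   # point your clients here
server_names = ['example-resolver']     # placeholder resolver name

[anonymized_dns]
# Route every query through a relay so the resolver never sees
# your IP together with your queries
routes = [
  { server_name = '*', via = ['anon-relay-a', 'anon-relay-b'] }
]
```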
I second Obsidian. I was on the verge of jumping to Logseq, but found its way of handling notes to be… different. I also disliked that with Anytype I don’t really have control over my notes. Obsidian clicked with me from the start and felt right. So I went with it, even though it’s not FOSS (which is usually a hard requirement for me).
But jokes aside, the self-hosting part is mostly about syncing notes. You either go with the official offering (not self-hosted, costs money), use a community plugin (self-hosted), or use a third program like Syncthing (self-hosted).
Syncthing is the way. I tried setting up syncing on Nextcloud but could never get it to store things how I wanted; Syncthing was ridiculously easy and should work for anything that uses a folder.
There is a plugin for Obsidian to work with Syncthing, but it seems to still be in development. Setting it up through the Syncthing app and selecting the folders also gave me a reason to sync my camera as well, and it was super easy; no port forwarding or anything required.
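One tip if you go this route: Syncthing accepts an .stignore file in the folder root, which helps keep Obsidian’s per-device state from generating sync conflicts. A sketch (exact file names depend on your Obsidian version):

```
// .stignore in the vault root -- lines starting with // are comments
.obsidian/workspace.json
.obsidian/workspace-mobile.json
.trash
```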
I also switched from Joplin to Obsidian after about half a year. There’s an open-source plugin that lets you self-host a syncing server.
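If that’s the Self-hosted LiveSync plugin I’m thinking of, the server side is just a CouchDB instance; something along these lines (container name, credentials, and port are placeholders):

```sh
docker run -d --name obsidian-couchdb \
  -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=change-me \
  -p 5984:5984 \
  -v couchdb-data:/opt/couchdb/data \
  couchdb:3
```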
What I found paradoxical is how easy it is to mod and write plugins for Obsidian compared to Joplin. I would’ve thought that modifying the open-source candidate would’ve been easier, but nope.
It is not a unique feature. But even as a non-FOSS program, its notes are not hidden behind proprietary file formats, so any time you want you can still switch if they go in a direction the user does not like.
Not every app stores the files as plain-text markdown the way Obsidian does. Logseq does, I believe, but Joplin stores it all in database files, which require an export should you decide to leave that app in favor of another. With Obsidian you just point the new app at the folders full of .md files and away you go. That was the main selling point for me.
I don’t know where you’re getting that from. Here is my Joplin folder on my NC server, stuffed with md files from my notes. There are some database driven references in them if you do things like add pictures, and obviously the filename is a UID format, but it’s markdown all the way, baby.
Have you looked at the contents of those md files? In addition to giving each note its own hexadecimal file name, it appends a bunch of metadata to the text. If you were to take that folder of notes to any other markdown editor like Obsidian, it would be a mess to organize. That is why I’m a stickler for file-format agnosticism: there is no vendor lock-in and, more importantly, no manipulation of the filenames or contents.
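For anyone who hasn’t seen it, a Joplin note file looks roughly like this (reconstructed from memory; field values are placeholders):

```
Note title

The actual markdown body of the note.

id: 0a1b2c3d4e5f60718293a4b5c6d7e8f9
parent_id: f9e8d7c6b5a49382716f5e4d3c2b1a00
created_time: 2023-07-01T12:00:00.000Z
type_: 1
```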
Screenshot of my phone copy of the Obsidian vault directory as an example:
I tried a bunch, but the current state of the art is text-generation-webui, which can load multiple models and has a workflow similar to stable-diffusion-webui.
Agreed. Easy to set up on my Synology NAS, and it works so well.
My only issue, which is not related to FreshRSS, is getting RSS feeds for Twitter to work reliably. Nitter hasn’t been reliable at all over the last year.
For your Proxmox cluster, shoot for three devices. With three devices you can do high availability, which is a bonus, but not something I thought to do when I built my setup.
I need to re-IP both of my Proxmox hosts and ran into a wall due to quorum. This could get me over that hump.
That being said, it was a failed experiment to put them in a cluster. I don’t use any of the cluster functionality and would love to destroy the cluster config w/o having to rebuild the proxmox hosts.
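(For the quorum wall itself, `pvecm expected 1` looks like the escape hatch: telling the lone node to expect a single vote makes /etc/pve writable again. Run it on the affected node.)

```sh
pvecm expected 1
```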
You don’t have to rebuild the Proxmox hosts to remove the cluster. I made the same mistake sometime last year and was able to remove the cluster, and each of the Proxmox machines works as it should standalone. I don’t recall the exact steps, but it was very easy. A quick search for “proxmox remove cluster” gave me this result, and from what I recall these are the steps I followed as well. https://rostislavjadavan.com/posts/promox-delete-cluster
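If that link ever dies: from memory, it matches what the Proxmox docs describe for separating a node without reinstalling, roughly (back up /etc/pve first):

```sh
systemctl stop pve-cluster corosync   # stop the cluster services
pmxcfs -l                             # mount the config FS in local mode
rm /etc/pve/corosync.conf             # drop the cluster config
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster           # comes back up standalone
```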
I have looked high and low for how to delete a cluster and have never stumbled on this page, thanks! Almost everything I found said I had to destroy proxmox and reinstall it.
I believe this is old information; the restrictions around serving non-HTML content have been removed from the terms of service related to Cloudflare Tunnels.
My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I’d have to install my Docker containers inside this host, then forward any ports or resources between the hypervisor, Docker host, and Docker container.
I just don’t get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.
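For the curious, that LXC workflow really is only a few commands; a sketch (template version, VMID, and storage names are examples):

```sh
pveam update                                   # refresh the template index
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
pct create 105 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname myservice --memory 1024 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8
pct start 105
pct enter 105    # get a shell inside and install whatever you need
```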
Though this is more of a proxmox ease of use issue than docker, personally I swapped from it to pure debian server/host to run a similar manual setup with podman - so everything runs right on the host.
In theory I think you can achieve this with proxmox ssh’ing into the host and just treating it like a usual debian
I back up to an external hard disk that I keep in a fireproof, water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with borg, all into one repository. The backup is triggered by a udev rule so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it’s finished so I can put the drive back in the safe. I can share the script later if anyone is interested.
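Roughly, it fits together like this; the UUID, VG/LV names, mount points, and ntfy topic are all placeholders:

```sh
# /etc/udev/rules.d/99-backup-disk.rules (one line; udev kills long-running
# RUN+= commands, so hand off to a systemd unit instead):
#   ACTION=="add", ENV{ID_FS_UUID}=="XXXX-XXXX", TAG+="systemd", ENV{SYSTEMD_WANTS}="borg-backup.service"

#!/bin/sh
# Script run by borg-backup.service
set -eu
mount /dev/disk/by-uuid/XXXX-XXXX /mnt/backup

for lv in serviceA serviceB; do
    # snapshot the service's volume, back up the snapshot, then drop it
    lvcreate --snapshot --size 5G --name "${lv}-snap" "vg0/${lv}"
    mount -o ro "/dev/vg0/${lv}-snap" /mnt/snap
    borg create --stats "/mnt/backup/repo::${lv}-{now:%Y-%m-%d}" /mnt/snap
    umount /mnt/snap
    lvremove -y "vg0/${lv}-snap"
done

umount /mnt/backup
curl -d "Backup finished, safe to unplug the drive" http://ntfy.local/backups
```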
Fireproof safes don’t protect against heat beyond keeping the contents below paper’s combustion point. Inside temps will still probably get high enough to destroy a drive in a regular fireproof safe.
First, hire a team of energetic full-time container bros. Half of them will help architect your setup, and the other half will focus entirely on supporting the container cult.
In my opinion trying to set up a highly available fault tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process so that if anything goes wrong you can just restore from a backup or start containers on another node.
I configure and deploy all my applications with Ansible roles. It can programmatically create config files, pass secrets, build or start containers, cycle containers automatically after config changes, basically everything you could need.
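A minimal sketch of what one of those roles looks like; the module names are the real ones from ansible.builtin and community.docker, everything else (paths, image, ports) is illustrative:

```yaml
# roles/myservice/tasks/main.yml
- name: Render the service config from a template
  ansible.builtin.template:
    src: config.yml.j2
    dest: /opt/myservice/config.yml
    mode: "0600"
  notify: Restart myservice

- name: Ensure the container is running
  community.docker.docker_container:
    name: myservice
    image: ghcr.io/example/myservice:latest
    restart_policy: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - /opt/myservice:/config

# roles/myservice/handlers/main.yml
- name: Restart myservice
  community.docker.docker_container:
    name: myservice
    image: ghcr.io/example/myservice:latest
    state: started
    restart: true    # cycle the container after a config change
```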
Sure it would be neat if services could fail over automatically but things only ever tend to break when I’m making changes anyway.
This. I used to have a Kubernetes setup, but how much redundancy can you really have at home? Do you have a generator? Multiple internet lines?
The fact is most hardware is highly reliable. Having good backups to restore from is all you need and you gain a huge improvement in simplicity which adds reliability in and of itself.
Yeah, I guess that’s true. I do think the other part, about having configs done programmatically, is a lot more important anyway. If things go down but all it takes to get them back is to re-run the configs from files, then it’s not so bad.
More importantly, if you do things programmatically you will still have a record of how you did it last time when you next need to move to a new major version of something, which is particularly important in a home setting where you don’t do tasks like that often.
I would say that if you are going to host it at home, then Kubernetes is more complex. Bare-metal Kubernetes control-plane management has some pitfalls. But if you were to use a cloud provider like Linode or DigitalOcean and use their Kubernetes service, the only real extra complexity is learning how to manage Kubernetes, which is minimal.
There is a decent hardware investment needed to run kubernetes if you want it to be fully HA (which I would argue means it needs to be a minimum of 2 clusters of 3 nodes each on different continents) but you could run a single node cluster with autoscaling at a cloud provider if you don’t need HA. I will say it’s nice not to have to worry about a service failing periodically as it will just transfer to another node in a few seconds automatically.
I can’t remember what I was watching, but they said Kubernetes is designed for something so large in scale that the only reason most people have heard of it is that some product manager asked what Google uses and then demanded they use it to replicate Google’s success. Hobbyists followed suit, and now a bunch of people are running stuff that’s poorly optimized for such small-scale systems.
Haha yeah true, but it does come with the advantage that it’s super prevalent and so has a lot of tools and docs. Nearly every self-hosted service I use has a docs page for how to set it up with Kubernetes. (Although it’s not nearly as prevalent as plain docker)
With a basic understanding of how k8s works and an already-running cluster, all one needs to know is how to run a service as a Docker container to have it also run in k8s.
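E.g., a container you’d otherwise `docker run` maps to a deployment manifest like this (FreshRSS picked as an arbitrary example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: freshrss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: freshrss
  template:
    metadata:
      labels:
        app: freshrss
    spec:
      containers:
        - name: freshrss
          image: freshrss/freshrss:latest   # same image you'd hand to docker run
          ports:
            - containerPort: 80
```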
You should try out all the options you listed and the other recommendations and find what works best for you.
I personally use Kubernetes. It can be overwhelming, but if you’re willing to learn some new jargon, try a managed Kubernetes cluster, like AKS or DigitalOcean Kubernetes. I would avoid managing a Kubernetes cluster yourself.
Kubernetes gets a lot of flak for being overly complicated, but what that statement overlooks is all the things Kubernetes does for you.
If you can spin up Kubernetes with cert-manager, external-dns, and an ingress controller like Istio, then you’ve got a whole automated data center for your Docker containers.
Thanks. Yeah, I’m tempted to try Kubernetes because of what you mentioned. I really like that every part I need (ingress controller, certs, etc.) is considered part of the core service and built in. Right now I have to run that stuff as its own service and wire everything up by hand. I don’t think I’d mind the extra overhead of Kubernetes either; I love to tinker with that sort of thing anyway!
I think I will try a couple of things though. Maybe find a set of services to deploy with each and compare the experiences.
Well, the Kubernetes API has most of the necessary parts built in, although sometimes you may want to install a custom resource, which often comes with more complex service installs.
But I think the biggest strength of kubernetes is all the foss projects that are available for it. Specifically external-dns, cert-manager, and istio. These are separate projects and will have to be installed after the cluster is up.
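To make that concrete, here’s roughly how they cooperate on a single ingress; the hostname, issuer, and service names are made up, but the annotation keys are the ones cert-manager and external-dns actually watch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod            # cert-manager issues the TLS cert
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com # external-dns publishes the DNS record
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```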
Caution: not all cloud providers support Istio. I know that Google’s GKE doesn’t; they make you use their own fork of it.
I would also recommend you avoid helm if possible as it obfuscates what the cluster is doing and might make learning harder. Try to just stick to using kubectl if possible.
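For example, working resource-by-resource with kubectl keeps everything visible (the pod name below is a placeholder):

```sh
kubectl apply -f myapp.yaml        # apply manifests you wrote yourself
kubectl get pods --watch           # watch them come up
kubectl describe pod myapp-xyz     # read the events when something is stuck
kubectl explain ingress.spec       # built-in schema docs for any field
```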
I have heard good things about nomad too but I have yet to try it.