I did a similar search a few months ago. I tried DokuWiki and Wiki.js and ended up with Wiki.js. It's very easy to set up with docker-compose. Everything is stored in Postgres, but it also exports to the local filesystem in Markdown. Its advanced built-in search is pretty good.
Honestly, for personal use I just switched to straight Markdown that I edit with Vim (w/ Vimwiki plugin) or Markor on Android and synchronize with Syncthing. Simple, low effort, portable, does enough of what I need to get the job done.
And if I wanna publish a read-only copy online I can always use an SSG.
A tool that can automatically track price changes on any website is difficult to build, since there isn't a standard way prices are presented on a website. As has already been said, changedetection is your best bet.
This sounds like a dream to me. What I found worked even better was making a Slickdeals account and setting up an alert for exactly what I needed. That way I wasn't mindlessly shopping and buying unnecessary things! Following this thread though, cuz I'm interested!
Same. Slickdeals and forget it. The website is a bit of a privacy nightmare w/ inserted tracking/referral links for every deal though. I’ve stopped logging in entirely and just use it for emailed alerts.
I used the browser extension Distill in the past; it's pretty easy to use and works well for detecting/tracking changes to specific elements on a page. I think the free version allows 25 local monitors.
I also just found the extension Automa. I've never used it, but it seems cool; it looks like Tasker for your browser. There are also workflows that people share. I saw one randomly, "Scrap Google Suggest to SpreadSheet", so I guess you could do a similar thing for prices.
My NAS is a low-powered Atom board that runs Unraid.
My Docker containers run on a Ryzen CPU with Proxmox. I don't have a cluster, just one node.
In Proxmox I run a VM that hosts all my Docker containers.
I use Portainer to run all my services as stacks, so the arr stack has all the arrs together in one docker-compose file. The compose files are stored in Gitea (one of the few things I still run on Unraid), and every time I push a change to the repo, I press one button in Portainer and it pulls down the latest compose file.
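For anyone curious what that looks like, here's a minimal sketch of such a stack (not the actual file; images, ports, and paths are placeholders):

```yaml
version: "3.8"
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports:
      - "8989:8989"
    volumes:
      - /opt/appdata/sonarr:/config   # app config on the local SSD/ZFS pool
      - /mnt/media:/media             # media from the NFS mount (see below)
    restart: unless-stopped
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    ports:
      - "7878:7878"
    volumes:
      - /opt/appdata/radarr:/config
      - /mnt/media:/media
    restart: unless-stopped
```

Portainer stacks are just compose files like this, so pointing a stack at the Gitea repo and redeploying pulls in whatever changed.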
For storage on Proxmox I use ZFS with SSDs only. The only thing that needs HDDs is the media on my Unraid box.
When a container needs to access the media, it uses an NFS mount to the Unraid server.
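One way to express that in a compose file (a sketch; the server address and export path are assumptions) is a named volume backed by the local driver's NFS support:

```yaml
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"   # placeholder Unraid IP, mounted read-only
      device: ":/mnt/user/media"            # typical Unraid share path (assumption)
```

Services then mount `media:/media` instead of a host bind mount, and Docker handles the NFS mount itself.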
Everything else is on my ZFS array on Proxmox. I have automatic ZFS snapshots every hour. Borg also takes hourly incremental backups of the ZFS array and sends them to the Unraid server locally and to BorgBase for off-site backup.
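As a rough illustration of the backup half, here's a hypothetical borgmatic config (borgmatic is a YAML wrapper around borg; the poster may well drive borg directly, and all paths and repo URLs here are placeholders):

```yaml
# /etc/borgmatic/config.yaml (hypothetical), run hourly via cron or a systemd timer
source_directories:
  - /tank/appdata                                 # dataset on the local ZFS pool
repositories:
  - path: ssh://backup@unraid.local/./borg-repo   # local copy on the Unraid box
  - path: ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo   # BorgBase off-site
keep_hourly: 24
keep_daily: 7
keep_weekly: 4
```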
The whole setup works very well and is very stable.
The flexibility of using Proxmox means that things that work better in a VM (like HAOS) can be installed as VMs. Everything else is Docker.
Recently went through this. Needed a quick and dirty knowledge repository for work. Ended up running BookStack, Wiki.js and DokuWiki in Docker containers, building the same wiki in each, and then messing around with them. Landed on BookStack and Wiki.js: BookStack because I liked the end-user UI, and Wiki.js because of the backend. I think most of my co-workers use the BookStack one.
BookStack looks really cool, and I'm considering it for a project at work, but I don't like how you can't have pages outside of books. We're looking to put together a general knowledge base that could span many different types of equipment and manufacturers. For that reason I'm leaning towards Wiki.js due to the search and tag browsing, but I'm basically planning to do the same thing and install those same three to check them out.
Another thing with BookStack: if your local IP changes for any reason, it breaks all the images, and it's pretty frustrating to get them working again. They added a command to try to fix this, but I could never get it to work correctly.
I ended up switching to Wiki.js and haven't had a single problem since, but I do miss the super sleek look of BookStack sometimes.
You should try out all the options you listed and the other recommendations and find what works best for you.
I personally use Kubernetes. It can be overwhelming, but if you're willing to learn some new jargon, try a managed Kubernetes cluster, like AKS or DigitalOcean Kubernetes. I would avoid managing a Kubernetes cluster yourself.
Kubernetes gets a lot of flak for being overly complicated, but what that statement overlooks is everything Kubernetes does for you.
If you can spin up Kubernetes with cert-manager, external-dns, and an ingress controller like Istio, then you've got a whole automated data center for your Docker containers.
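For a concrete flavor, here's a sketch of how those pieces meet in a single Ingress (assumptions: a ClusterIssuer named `letsencrypt`, an existing Service named `whoami`, and Istio installed as the ingress class):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt            # assumed ClusterIssuer name
    external-dns.alpha.kubernetes.io/hostname: whoami.example.com
spec:
  ingressClassName: istio
  tls:
    - hosts:
        - whoami.example.com
      secretName: whoami-tls   # cert-manager creates and renews this secret
  rules:
    - host: whoami.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami   # assumed existing Service
                port:
                  number: 80
```

From that one object, external-dns publishes the DNS record, cert-manager provisions the TLS certificate, and the ingress controller routes the traffic.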
Thanks. Yeah, I'm tempted to try Kubernetes because of what you mentioned. I really like that every part I need (ingress controller, certs, etc.) is considered part of the core service and built in. Right now I have to run that stuff as its own service and wire everything up by hand. I don't think I mind the extra overhead of Kubernetes either; I love to tinker with that sort of thing anyway!
I think I will try a couple of things though. Maybe find a set of services to deploy with each and compare the experiences.
Well, the Kubernetes API mostly has all the necessary parts built in, although sometimes you may need to install custom resources, which often come along with more complex service installs.
But I think the biggest strength of Kubernetes is all the FOSS projects available for it, specifically external-dns, cert-manager, and Istio. These are separate projects and have to be installed after the cluster is up.
Caution: not all cloud providers support Istio. I know that Google's GKE doesn't; they make you use their own fork of it.
I would also recommend avoiding Helm if possible, as it obfuscates what the cluster is doing and might make learning harder. Try to stick to plain kubectl.
I have heard good things about Nomad too, but I have yet to try it.
In my opinion trying to set up a highly available fault tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process so that if anything goes wrong you can just restore from a backup or start containers on another node.
I configure and deploy all my applications with Ansible roles. A role can programmatically create config files, pass secrets, build or start containers, and cycle containers automatically after config changes; basically everything you could need.
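As a minimal sketch of what one such role might look like (names, paths, and the image are hypothetical):

```yaml
# roles/app/tasks/main.yml (hypothetical)
- name: Render app config from a template
  ansible.builtin.template:
    src: app.conf.j2
    dest: /opt/app/app.conf
    mode: "0640"
  notify: Restart app   # cycles the container only when the rendered config changed

- name: Ensure app container is running
  community.docker.docker_container:
    name: app
    image: ghcr.io/example/app:latest
    restart_policy: unless-stopped
    volumes:
      - /opt/app/app.conf:/etc/app/app.conf:ro

# roles/app/handlers/main.yml (hypothetical)
- name: Restart app
  community.docker.docker_container:
    name: app
    state: started
    restart: true
```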
Sure it would be neat if services could fail over automatically but things only ever tend to break when I’m making changes anyway.
This. I used to have a Kubernetes setup, but how much redundancy can you really have at home? Do you have a generator? Multiple internet lines?
The fact is, most hardware is highly reliable. Having good backups to restore from is all you need, and you gain a huge improvement in simplicity, which adds reliability in and of itself.
Yeah, I guess that's true. I do think the other part, having configs done programmatically, is a lot more important anyway. If things go down but all it takes to get them back is to re-run the configs from files, then it's not so bad.
More importantly, if you do things programmatically, you will still have a record of how you did it last time when you next need to move to a new major version of something. That is particularly important in a home setting, where you don't do tasks like that often.
I would say that if you are going to host it at home, then Kubernetes is more complex; bare-metal Kubernetes control-plane management has some pitfalls. But if you use a cloud provider like Linode or DigitalOcean and their managed Kubernetes service, then the only real extra complexity is learning how to manage Kubernetes, which is minimal.
There is a decent hardware investment needed to run Kubernetes if you want it to be fully HA (which I would argue means a minimum of two clusters of three nodes each, on different continents), but you could run a single-node cluster with autoscaling at a cloud provider if you don't need HA. I will say it's nice not to have to worry about a service failing periodically, as it just transfers to another node in a few seconds automatically.