I still have drawings I made in MS Paint on Windows 95 when it had just come out, my first text document, and the first report I ever typed in grade school.
Btrfs snapshots of the root volume in a RAID1 configuration, keeping 8 hourly, 7 daily, and 3 weekly snapshots, plus automated rsync backups to a NAS, with primary and secondary offsite, physically disconnected backups stored in sealed, airtight, waterproof containers in prepaid storage at two different banks, and with an advance directive in the event of my demise.
Bit of a hobby really. I acknowledge it’s completely unnecessary. I don’t like to lose data.
You got me there! Not fireproof. In that case I’m just hoping that having two off-site backups at different locations has me covered, but that’s a good idea. I should consider fireproof foil.
I have system images of machines of relatives who have died. Many of the photos I’ve retained are the only copies that still exist. However, that was more an emergent utility than a motivating one.
I would stay away from Kubernetes/k3s/k8s. Unless you want to learn it for work purposes, it’s so overkill you can spend a month before you get things running. I know from experience. My current setup gives you options and has been reliable for me.
NAS Box: TrueNAS SCALE - You can have Unraid fill this role.
Services Hosting: Proxmox - I can spin up any VMs I need, and there’s lots of info online on how to do things like hardware passthrough to VMs.
Containers: Debian VM - Debian makes a great server environment as it’s stable and well supported. I just make this VM a Docker Swarm host. I manage things with Portainer as a web interface.
I keep data on the NAS and have containers access it over the network, usually as an NFS share.
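Something like this, if it helps. A minimal sketch of how an NFS export from the NAS can be declared as a named volume in a compose file; the NAS address, export path, and the Jellyfin service are placeholder examples rather than my actual setup:

```yaml
# docker-compose.yml (sketch) - mount an NFS export from the NAS as a named volume
# 192.168.1.50 and /mnt/tank/media are example values; adjust for your NAS
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - media:/media:ro

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,nfsvers=4,ro"
      device: ":/mnt/tank/media"
```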
How do you manage your services on that, docker compose files? I’m really trying to get away from the workflow of clicking around in some UI to configure everything, only for it to glitch out and disappear so that I have to try to remember what to click to get it back. That was my main problem with Portainer and what caused me to move away from it (I have separate issues with docker-compose, but that’s another thing).
I personally stepped away from compose. You mentioned that you want a more declarative setup, so give Ansible a try. It is primarily for config management, but you can easily deploy containerized apps and correlate configs, hosts, etc.
I usually write roles for the more specialized setups like my HTTP reverse proxy, the *arrs, etc., and then I keep everything in my inventory and var files. I’m really happy with it, and I really can tear things down and rebuild quickly. One thing to point out is that the compose module for Ansible is basically unusable; I use the docker_container module instead. It has worked well so far and keeps my containers running without restarting them unnecessarily.
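For a rough idea, a task in one of my roles looks something like this sketch using community.docker.docker_container; the image, port, and the media/config variables here are made-up examples rather than my real config:

```yaml
# roles/jellyfin/tasks/main.yml (sketch) - one task per container instead of compose
# jellyfin_version, media_path and config_path are hypothetical inventory vars
- name: Deploy Jellyfin container
  community.docker.docker_container:
    name: jellyfin
    image: "jellyfin/jellyfin:{{ jellyfin_version | default('latest') }}"
    restart_policy: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - "{{ media_path }}:/media:ro"
      - "{{ config_path }}/jellyfin:/config"
    # The module only recreates the container when the desired state actually
    # changed, so re-running the playbook doesn't restart things needlessly
```

Everything specific to a host lives in the inventory and var files, so a rebuild is just running the playbook again.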
I use it to manage my subdomains: something like notes.mywebsite.com would point at my Trilium instance, while photos.mywebsite.com would point at my Immich container. It has more uses, but that’s the extent of mine. I just have an instance of a Cloudflare DNS updater keeping my domain in sync with my IP so I don’t have to do that manually when it changes.
So in my scenario Cloudflare is just part of my setup.
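The updater itself is just another tiny container, roughly like this. I’m using favonia/cloudflare-ddns as one example image; the environment variable names differ between the various DDNS images, so check the docs of whichever you pick:

```yaml
# docker-compose.yml (sketch) - keep Cloudflare DNS records pointed at the current public IP
services:
  cloudflare-ddns:
    image: favonia/cloudflare-ddns   # one of several Cloudflare DDNS images
    restart: unless-stopped
    environment:
      # Scoped API token with DNS edit permission for the zone
      CLOUDFLARE_API_TOKEN: "<api-token>"
      # Records to keep in sync with the current public IP
      DOMAINS: "mywebsite.com,notes.mywebsite.com,photos.mywebsite.com"
```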
Agree with others here. Ansible isn’t for beginners and neither is a Lemmy instance.
Try some other projects first, maybe some docker containers that involve a reverse proxy.
For example, NextCloud is a very useful thing to set up as a project, but I would say that you specifically need the new Pi 5 with plenty of RAM for that. The Pi 4 doesn’t handle a full NextCloud installation well.
I just don’t want to keep running an entire VM with their image. Something simpler that could be used in an LXC / systemd-nspawn container, or directly on a base system, would be nicer.
What is weird is having to waste almost 700MB of RAM + 10GB of storage for a simple web UI that charts sensor data and only keeps it for 10 days. As a comparison, my NAS container runs Samba4, FileBrowser, Syncthing, Transmission, and a few others under 300MB of RAM, with occasional spikes during operations.
There’s a lot of difference between a container and a VM. You can install HA in a container; all you have to do is set it up according to the manual install instructions and work around any hardware interfacing issues that come up. You’ll save 200MB of RAM and will have to do any upgrades manually. Doesn’t seem worth it to me, but to each their own.
What I’m going to do is set up HA Core in a container manually and run it without add-ons / Docker. That amounts to installing Python and should waste far fewer resources.
You need to edit your configuration.yaml file to exclude certain sensors or values. I excluded some of the chattier sensors that I didn’t need, and my disk use went from around 40GB to 150MB.
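For reference, that goes under the recorder section of configuration.yaml. A rough sketch, with example entities standing in for the chatty ones I actually excluded:

```yaml
# configuration.yaml (sketch) - stop the recorder from logging noisy entities
recorder:
  purge_keep_days: 10          # how long history is kept before being purged
  exclude:
    domains:
      - update
    entity_globs:
      - sensor.*_signal_strength   # example: per-device RSSI sensors
    entities:
      - sensor.time                # example chatty entities, swap in your own
      - sensor.date
```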
I’m just going to say it: I’ve shit on them all along. ARM is relatively expensive and bespoke, and difficult to compile for because of that. Anyone can puke out a binary for amd64 that works everywhere, and it’s way, way faster than some sad little SoC. Especially weird is spending $1000 on a cluster board with compute modules that have half the power of a 5-year-old x86 SFF desktop you could pick up for $75 and attach some actual storage to.
Maybe RISC-V will change all that, but I doubt it. Sure hope so though. The price factor has already leaned the right way to make it worthwhile.
2-8 watts of power for a Pi vs. 9-150 watts for an x86 system. There are definitely use cases.
I use a Pi for DHCP, DNS with Pi-hole, a Tailscale subnet router, a RustDesk server, Vaultwarden, Syncthing (it connects to local device shares rather than running Syncthing on each device), ArchiveBox, and I’m working on instant messaging (maybe SimpleX, not sure yet). It’s kind of maxed out.
But all this runs under 8 watts (actually it’s so low my smart switch doesn’t even register the consumption).
New x86 processors are as efficient as the Apple M series. They are far more power efficient than a Pi under load, though they will consume slightly more at idle. But not nearly as much as you’re suggesting.
Uh, my server is x86, it’s fanless, and the CPU idles at 9 W and maxes at 12 W. It’s much faster than my Pi and has Quick Sync.
I run Plex, Jellyfin, SMB shares, Mealie, Tailscale and rerouting, notes, and books.
I like my Pi, but the performance-per-watt gap isn’t as drastic with x86 if you build for it. Did I mention it’s also fanless? Passive cooling that just works on the CPU.
Yea, I’ve been eyeing a box like that, looks like it could be useful.
Yep, it’s all tradeoffs; gotta know what you’re shooting for. My Pi cost $5, I’m using an old phone charger (I have many), and an old microSD card. If anything fails, I just grab another from the junk box.
All I know with my current use case is that I can’t measure the power consumption with the tools I use. I imagine that means under a 5 W draw (I’m not really sure what it’s capable of measuring).
I’m glad the threat of being on a FOSS Hall of Shame is effective for some companies, and that they can’t just frivolous-lawsuit away a hobby developer without consequences to their bottom line, which would have set a bad precedent against small-time FOSS developers everywhere.
Now their status, to me, has moved from “Shitlist” to “Shitlist Pending”: they’ve talked the talk, so now it’s time to see them walk the walk. Best would be to allow users to control their Haier products from their own servers rather than Haier’s. That would reduce Haier’s cloud computing bills from third-party users, and they could still offer “compelling value” in their walled-garden ecosystem as a simple one-and-done setup. Win-win, right?
GE has been a garbage company for a very long time. It’s a shell of its former self. If I’m paying for a product, I am going to do whatever I want with it because it’s my money, and if a company has a problem with that, it sounds like the company needs to fix it on their end. If it’s possible for a plugin to cost your company millions of dollars, then obviously you’re not running your company properly.
That’s what LCARS means: it’s the name of the computer console in Star Trek. In the show it stands for “Library Computer Access and Retrieval System,” although it’s often used for stuff other than the library computer too.
Docker is messy and not ideal, but it was born out of necessity: getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind that gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.
I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.
As for your user & permissions concern, are you aware that Docker these days can be configured to map “root” in the container to a different user? Personally I prefer to use Podman though, which doesn’t have that problem to begin with.
Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. I didn’t have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services from my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables, and ran docker compose up -d.
If my server ever crashes, I can just copy it over and start from scratch.
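For anyone curious, the edit was roughly this shape. A trimmed sketch based on the Woodpecker docs’ compose example; the host, ports, and secrets are placeholders, and the Gitea values come from my existing .env file:

```yaml
# docker-compose.yml (sketch) - Drone services commented out, Woodpecker added in their place
services:
  # drone:
  #   image: drone/drone
  #   ...

  woodpecker-server:
    image: woodpeckerci/woodpecker-server
    restart: unless-stopped
    ports:
      - "8000:8000"                       # web UI / API
    environment:
      WOODPECKER_HOST: "https://ci.example.com"   # placeholder public URL
      WOODPECKER_GITEA: "true"
      WOODPECKER_GITEA_URL: "${GITEA_URL}"
      WOODPECKER_GITEA_CLIENT: "${GITEA_CLIENT}"
      WOODPECKER_GITEA_SECRET: "${GITEA_SECRET}"
      WOODPECKER_AGENT_SECRET: "${AGENT_SECRET}"
    volumes:
      - woodpecker-data:/var/lib/woodpecker

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent
    restart: unless-stopped
    environment:
      WOODPECKER_SERVER: "woodpecker-server:9000"  # gRPC endpoint of the server
      WOODPECKER_AGENT_SECRET: "${AGENT_SECRET}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # agent runs pipeline steps via Docker

volumes:
  woodpecker-data:
```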