I recently migrated most of my homelab to Proxmox running on a pair of x86 boxes. I did it because I was cutting the streaming cord, and wanted to build a beefy Plex capability for myself. I also wanted to virtualise my router/firewall with OPNsense.
Once I mastered Proxmox, and truly came to appreciate both the clean separation of services and the rapid prototyping capability it gave me, I migrated a lot of my homelab over.
But, I still use RasPis for a few purposes: Frigate server, second Pi-hole instance, backup Wireguard server. I even have one dedicated to hosting temperature sensors, reed switches, and webcams for our pet lizard’s enclosure.
Same feeling, except rather than a lizard enclosure, I’m waiting to see how long that Pi will last in the heat and dust of a chicken coop while serving the sole purpose of a “do we have eggs?” and/or “WTF happened/WTF did the chickens do?” web stream.
Why no real db? Those other 2 features make sense, but if the only option you can use sacrifices the 3rd one, it hardly seems like a win. Postgres is awesome and easy to back up: a single command can dump the whole thing to a file, making it easy to restore.
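For example (a sketch; the database name, user, and file names here are placeholders):

```
# dump the whole database to a single file
pg_dump -U myuser -d mydb -f mydb_backup.sql

# restore it into a fresh, empty database
psql -U myuser -d mydb -f mydb_backup.sql
```

`pg_dumpall` does the same for every database in the cluster at once.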
I think oCIS spoiled me with regards to the database issue xD. You bring up a good point - I’ll try reinstalling Nextcloud with Postgres, removing unneeded bloat, and using it until oCIS has a “native” backend.
SBCs are neat and the RasPi is still cool imo. I guess people just started to realise that mini x86s exist too, and the recent releases with 6, 8, or 12 cores are enticing to a group of people. Really depends on what you want to do; right tool for the right job, etc.
I guess people just started to realise that mini x86s exist too
People always knew x86s existed. I think the main culprit is that the price gap between them and Pis is shrinking. Pis used to be around $35; prices skyrocketed to 3-5x MSRP, and they were unavailable for a long time. Now the Pi’s price-to-performance ratio isn’t justifiable to most, so people pay a little more for the x86 and get so much more capability.
So… and this is probably debatable, the point of a dedicated seed box is that there are a metric-shitton of other seed boxes on the local network (at the datacenter).
I’d argue the point of self hosting is to be able to set it up however you please. It sounds like you know what to do to be safe.
I use Mullvad for general VPN duty; though I can’t personally speak to its torrent support/speed, I do see many recommend it in combination with a WireGuard-supporting container image. Spin a few up and let us know which ones you like and why.
I will definitely document it when I reach a decision about it all. That will hopefully help lots of people later on too, but at least I’ve already decided on the client, and everything is configured there, so that’s half the battle. I just wonder about recommendations around here, and absolutely I would self-host it all!
Hell yeah! I bought a front end called LaunchBox a long time ago, but I haven’t gotten around to installing Moonlight streaming either :) For MIDIs, that’s an awesome idea too. Maybe one Raspberry Pi for all music stuff… that’s one way to organise things too.
Pi 4s were hard to get for a while. Pi 5s are expensive. A lot of other SBCs are also expensive, as in not all that much cheaper than a 2-3 generations old low-end x86. That makes them less attractive for special-purpose computing, especially among people who have a lot of old hardware lying around.
Any desktop from the last decade can easily host multiple single-household computer services, and it’s easier to maintain just one box than a half dozen SBCs, with a half dozen power supplies, a half dozen network connections, etc. Selfhosters often have a ‘real’ computer running 24/7 for video transcoding or something, so hosting a bunch of minimal-use services on it doesn’t even increase the electric bill.
For me, the most interesting aspect of those SBCs was GPIO and access to raw sensor data. In the last few years, ‘smart home’ technology seems to have really exploded, to where many of the sensors I was interested in 10 years ago are now available with Zigbee, Bluetooth or even WiFi connectivity, so you don’t need that GPIO anymore. There are still some specific control applications where, for me, Pis make sense, but I’m more likely to migrate towards Pi-0 than Pi-5.
SBCs were also an attractive solution for media/home theater displays, as clients for plex/jellyfin/mythtv servers, but modern smart-TVs seem mostly to have built-in clients for most of those. Personally, I’m still happy with kodi running on a pi-4 and a 15 year old dumb TV.
I would much rather have a single machine running VMs, which I can easily snapshot and back up, than a dozen small machines with power supplies and networking to deal with.
SBCs have specific use cases, usually where they need to interact with hardware. That’s what made the RPi so great, with its GPIO and HATs. But that’s a rather small use case.
I have a Pi 4 with OpenMediaVault for SMB shares and videos to the TV; it has Docker and Portainer add-ons, so that single Pi runs CUPS, Trilium Notes, Paperless-ng, Home Assistant, Kanboard, a pdftk converter, and Syncthing. It could have more; I just ran out of applications I might need. No issues with performance.
Man, my home server IDLES at 76 watts running x86. Now mind you, I need the x86 to perform some of the functions I want. This thing works as a NAS, Nextcloud, media server, Kiwix, security camera server (ZoneMinder), remote desktop (xrdp), runs Home Assistant, does GPU AI upscaling for photos, and finally screeches along running a virtual pipe organ I built that takes 69 GB of RAM to run.
If I could do that with Raspberry Pis I would in a heartbeat! The power savings alone would eventually pay for them. If it’s doing what you want, then don’t worry about them. My Pi 400 works as a remote desktop client, and one day I hope more of this stuff will work well on it/a future generation so I can ditch the tower, energy usage, and noise.
It is software (GrandOrgue) that pretends to be a pipe organ (the instrument). To run fast enough, it needs to load every sound sample into memory, usually along with multiple kinds of sound endings. I play professionally on a “small to mid sized” pipe organ with 1,438 pipes. The one I load for use at home has more than that!
The instrument was from the 1960s and I rebuilt it with a Pi Pico, which you can see here; you can hear the before (analog sound cards) versus one of the organs I’ve loaded into it here.
I’ve been recently bingeing Look Mum No Computer’s rescue/re-build/midi-fication of an organ that had been shoehorned into an organist’s home, after the church had been converted. I’m more of an engineer than musician, but it’s amazing how much goes into the layering of sounds from so many different pipes.
My 6 yo loves learning with such a cool soundtrack too.
Gonna second Silverbullet. I’m a current logseq user, but I’m really liking the direction of this. Mainly because of the ease of accessing from multiple devices such as desktop, laptop, and mobile. I’m currently opening my logseq graph in sb on my android phone. Once I switch over fully, I won’t have to worry about syncing my logseq graph.
I kind of get it. Note apps are normally horribly cumbersome data-serialization ecosystems you have to invest a lot of time into before you really feel like it’s doing anything more than a standard text editor could.
I meant beast in the figurative sense. It’s not a desktop app, which perhaps doesn’t make that much of a difference. I wrote it so I think I’m entitled to call my own software a beast 😂
If you’ve got an OpenWRT-compatible router, why are you thinking about pfSense? There isn’t much to gain there; your OpenWRT will do NAT and also has a firewall.
I like this device since 3 ports would allow me to create a physically separate DMZ.
OpenWRT can do this as well. What are your plans with the DMZ tho?
Be careful with the use of the acronym DMZ, as in the context of typical routers and ISPs it has a different meaning from what you’re implying here. DMZ is usually used in the context of a single host that sits “outside” the ISP router’s firewall: all requests coming into the ISP router will be forwarded to that device.
With my current diagram, it seems like it is not possible for the NAS to receive updates from the internet.
Your NAS will never “receive updates”; it will ask for updates. Maybe add a firewall rule that allows traffic from the NAS to the internet but not the other way around (this is usually the default state of any router: it will allow local devices to go out to the internet but not incoming connections to those devices).
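If you want to make that explicit on OpenWRT, something like this in /etc/config/firewall would do it (a sketch; the NAS IP is an example):

```
# allow the NAS out to the internet; incoming connections
# are already rejected by the default wan zone policy
config rule
	option name 'Allow-NAS-Out'
	option src 'lan'
	option src_ip '192.168.1.10'
	option dest 'wan'
	option proto 'any'
	option target 'ACCEPT'
```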
My TrueNAS has 2x 2.5Gb ports. Can I connect each NIC to a different network? Would this have any benefit?
You can, but is it really worth it? If someone hacks the device they’ll access the rest of the network. The same applies to your computers and games consoles; they can be used to jump to the other side and vice versa.
Frankly I don’t see the usefulness of your setup as you’ll end up with weak points somewhere. Just get a single OpenWRT router and throw everything into the same network. Apply firewall restrictions as needed.
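That said, if you do want to experiment with the dual-NIC idea, the general shape on a Linux box looks like this (a netplan sketch; interface names and subnets are made up, and TrueNAS itself manages networking through its own UI rather than netplan):

```
# /etc/netplan/01-two-nics.yaml (hypothetical)
network:
  version: 2
  ethernets:
    enp1s0:                          # NIC 1: main LAN, carries the default route
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1
    enp2s0:                          # NIC 2: isolated network, no default route
      addresses: [192.168.2.10/24]
```

The key detail is that only one NIC gets a default route; the other is reachable only from its own subnet.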
The idea of “self-hosting” git is so incredibly weird to me. Somehow GitHub managed to convince everyone that Git requires some kind of backend service. Meanwhile, I just push private code to bare repositories on my NAS via SSH.
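The whole setup is about two commands (the hostname and paths here are just examples):

```
# on the NAS: create a bare repository (one-time)
ssh nas 'git init --bare /volume1/git/myproject.git'

# on the workstation: point the repo at it and push
git remote add origin nas:/volume1/git/myproject.git
git push -u origin main
```

After that, clone/push/pull over SSH work exactly as they would against GitHub.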
You’re completely missing the point. Even Gitea (much simpler than GitHub, nevermind GitLab) is much more than a git backend. It’s viewable in a browser, renders markdown, has integrated CI functionality, and so on.
Even for my meager self-host use-case, being able to view markdown docs in the browser is useful from time to time, even on my phone.
As for the things I use (a self-hosted) GitLab instance at work for… that doesn’t even scratch the surface.
Do you honestly think they’re “completely missing the point”? Read the meme. There’s no mention of Gitea. Self-hosting git is nothing to wiggle your tie over. Maybe setting up the things you are talking about is, but git?
The title of the post is literally “I love my Gitea”.
The content of the meme does conflate “git” with its various frontends (like Gitea), but it’s an incredibly common misnomer so who cares?
The person I responded to then went on a weird rant about how “git by itself is distributed” which is completely irrelevant to the point since OP’s Gitea provides a whole lot more.
I said “read the meme” because that is all I was addressing. The title is just engagement-bait as far as I’m concerned. It’s either a meme or a question. I’m sure others are here for the question but not the meme, and therefore I’m being engagement-baited. Who knows, but I was clear about what I was talking about.
I just think saying “you’re completely missing the point” to a comment that is perfectly on topic is completely uncalled for.
The reason I think git is dead-simple to “self-host” is because I do it. I’m not a computer guy. I just used svn to version control some papers with fellow grad students (it didn’t last; I was the only one that liked it), so now I use git for some notes I archive. I’m not saying there aren’t tools that considerably upgrade the ease-of-use factor, which would require some tech skills I don’t possess, but I stand by my point.
They didn’t convince anyone of anything; they just have a great free-tier service, so people prefer using it to self-hosting something. You can also self-host GitHub if you want the features they offer besides Git.
This post is about “self-hosting” a service, not using GitHub. That’s what I’m responding to.
I’m not saying GitHub isn’t valuable. I use it myself. And in any situation involving multiple collaborators I’d probably recommend that kind of tool–whether GitHub or some self-hosted option–for ease of user administration, familiar PR workflows, issue tracking, etc.
But if you’re a solo developer storing your code locally with no intention to share or collaborate, and you don’t want to use GitHub (as, again, is the case with this post) a self-hosted service adds a ton of complexity for only incremental value.
I suspect a ton of folks simply don’t realize that you don’t need anything more than ssh and git to push/pull remote git repositories because they largely cargo cult their way through source control.
Absolutely. Every service you run, whether containerized or not, is software you have to upgrade, maintain, and back up. Containers don’t magically alleviate the need for basic software/service maintenance.
Yes, but doesn’t that also apply for a machine running bare git?
Not using containers also adds some challenges, with possible dependency problems. I’d say running bare git is not a lot easier than having a container with, say, Forgejo.
Right now I have a NAS. I have to upgrade and maintain my NAS. That’s table stakes already. But that alone is sufficient to use bare git repos.
If I add Gitea or whatever, I have to maintain my NAS, and a container running some additional software, and some sort of web proxy to access it. And in a disaster recovery scenario I’m no longer just restoring some files on disk; I have to rebuild an entire service, restore its config and whatever backing store it uses, etc.
Even if you don’t already have a NAS, setting up a server with some storage running SSH is already necessary before you layer in an additional service like Gitea, whereas it’s all you need to store and interact with bare git repos. Put the other way, Gitea (for example) requires me to deploy all the things I need to host bare repos plus a bunch of additional complexity. It’s a strict (and non-trivial) superset.
I don’t know 😆 I’m really just trying to get it in case, for example, I need to advise someone in such a situation :) My confusion probably comes from the fact that I have never hosted anything outside containers.
I still see it a bit differently. A well-structured container setup, with configs as files instead of bare commands and backed-up volumes, would be the same effort… but who knows. Regarding the rest, like proxies, well, you don’t really need one.
Honestly the issue here may be a lack of familiarity with how bare repos work? If that’s right, it could be worth experimenting with them if only to learn something new and fun, even if you never plan to use them. If anything it’s a good way to learn about git internals!
Anyway, apologies for the pissy coda at the end, I’ve deleted it as it was unnecessary. Keep on having fun!
Bare repos with multiple users are a bit of a hassle because of file permissions. It works, and works well, as long as you set things up right and have clear processes. But god help you if you don’t.
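For anyone trying it, git has a flag for exactly this; the usual recipe looks something like the following (the group name is an example):

```
# create the bare repo group-writable from the start
git init --bare --shared=group project.git
chgrp -R developers project.git

# setgid on directories so new objects inherit the group
find project.git -type d -exec chmod g+s {} +
```

Skip a step (or have one user push with a restrictive umask) and you hit exactly the permission mess described above.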
I find that with multiple users the safest way is to set up/use a service. Plus you get a lot of extra features like issue tracking and stuff.
Agreed, which is why you’ll find in a subsequent comment I allow for the fact that in a multi-user scenario, a support service on top of Git makes real sense.
Given this post is joking about being ashamed of their code, I can only surmise that, like I’m betting most self-hosters, they’re not dealing with a multi-user use case.
Well, that or they want to limit their shame to their close friends and/or colleagues…
Would this let me do something like SSH to a bastion host, elevate privs with sudo, and SSH forward from there, then elevate privs again on the final target I’m trying to get to? Maybe do that on 100 servers at the same time?
Half a decade back, my team of DBAs and I would have killed for something like that.
Sorry if I’m the “can it do this weird and unnecessary thing” guy, but it really looks like a dream come true if it’s what I think it is
You always have to fiddle around a bit with SSH jumps and forwards, as there are two different ways in xpipe to handle that. You also have to take care of your authentication, maybe with agent forwarding etc. if you use keys. But I’m confident that you can make this work with the new custom SSH connections in xpipe, as that allows you to do basically anything with SSH.
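For reference, the plain-OpenSSH way to express the multi-hop part is a ProxyJump config like this (hostnames and users are examples; the sudo elevation on each hop is a separate step that ProxyJump doesn’t handle):

```
# ~/.ssh/config
Host bastion
    HostName bastion.example.com
    User dba

# anything matching this pattern is reached through the bastion
Host db-*
    ProxyJump bastion
    User dba
```

Then `ssh db-042` hops transparently, and fanning out to 100 servers is just a shell loop or a tool like pssh on top.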
This. Save yourself some time and just go with Trilium. It does not have a native mobile app yet, but when it does, there’ll be nothing to compare! :P