I tried a bunch, but the current state of the art is text-generation-webui, which can load multiple models and has a workflow similar to stablediffusion-webui.
Keep your Apple TV and use it as a streaming client for whatever you stand up on the backend. Personally I have a Synology NAS that I love and I use the net to get all my content. Use the net. 😉
Appreciate your comment, and that seems like a common setup. If you didn’t have the ATV, what would you front end the Plex server with? I have a Synology router and would probably buy a Synology NAS, if I went that route.
Actually, with a Synology NAS you don’t need Plex - they have a built-in equivalent called DS Video, with apps for Apple TV, iOS, Android, etc.!
I’ve had an Nvidia Shield in the past too and it works reasonably well, but the video experience is definitely better on the Apple TV. The Android boxes make more sense if you want a place to install emulators that also occasionally streams.
Thank you for this! I’ll look more at the Synology NAS devices and see what that’s all about. I’m probably the other way around, stream more, and emulate once in a while.
I know it’s not technically “self” hosted, but I’d get a cheap yearly VPS somewhere and run a webserver off of that. For me it’s worth the peace of mind to keep my network a temple instead of a bus terminal. I paid $13 USD for the year for mine.
I believe Oracle is still offering to slice off a bit of compute for free that should accomplish OP’s goal. I’ve used it to test a Jellyfin host among other things and for the price it can’t be beat!
I’ve been running a script every 60 seconds for 2 months now as a cron job and it still hasn’t been able to create a VM in their US datacenter. I just have a log full of “insufficient host capacity” errors.
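The script is nothing fancy - roughly this kind of thing, run from a crontab entry like “* * * * * /home/me/launch-a1.sh >> /home/me/oci.log 2>&1” (a sketch: the OCIDs and availability domain are placeholders, and it assumes a configured OCI CLI):
#!/bin/sh
# try to grab the free-tier ARM shape; OCI rejects this with "insufficient host capacity" until a host frees up
oci compute instance launch \
  --availability-domain "YOUR-AD-NAME" \
  --compartment-id "ocid1.compartment.oc1..xxxx" \
  --shape "VM.Standard.A1.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGbs": 24}' \
  --image-id "ocid1.image.oc1..xxxx" \
  --subnet-id "ocid1.subnet.oc1..xxxx" \
  && crontab -r   # stop retrying once a launch finally succeeds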
A VPS makes sense insofar as keeping things thoroughly isolated from my own systems, but the overhead of maintaining a box that’s directly connected to the Internet like that isn’t something I’m keen on and I’m not convinced I’d have the expertise to do it right from the outset.
The Oracle Cloud VPS only has SSH key authentication enabled by default. You can also set it to only allow SSH from your home IP in the virtual firewall before the machine is ever spun up.
Their current free ARM offering is 1 machine with 4 cores and 24 GB RAM, for life. You can also add another 2 AMD machines with 1 core and 1 GB RAM each and still be in their free tier.
If you’re going to set it up and take advantage of the ARM machine, make sure you pick a home location for your account that has multiple availability zones. San Fran right now only has 1 zone, so if the shared ARM instances are all used up, you’ll have to wait a few days and try again. Phoenix I think has 3, so you can try with another zone right away.
I guess I’m extremely paranoid then; my home IP doesn’t change much, and I just expose the port only to it from Oracle’s site. I rarely touch mine, though.
Changing the port is security through obscurity, and it doesn’t take botnets much time to scan the entire IPv4 space on all ports. See for example the ever-updated list that’s available on Shodan.
Disable password login and use certificates as you’ve suggested already, add fail2ban to block random drive-bys, and you’re off to the races.
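On a Debian/Ubuntu box that hardening is only a few commands (a sketch - the service is named “sshd” on some distros):
# keys only: disable password logins, then reload the SSH daemon
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
# fail2ban's stock config already jails repeated SSH failures
sudo apt install fail2ban
sudo fail2ban-client status sshd   # confirm the jail is active, list banned IPs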
I just restrict SSH to an internal VPN IP on all my servers (ZeroTier). 100% impossible to even try logging into them unless you’ve managed to crack into my network first.
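With ufw that restriction is a single rule, e.g. (the subnet here is made up - use whatever range your ZeroTier network hands out):
sudo ufw default deny incoming
sudo ufw allow from 192.168.191.0/24 to any port 22 proto tcp   # SSH from the VPN subnet only
sudo ufw enable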
+1 for VPS - the IONOS ones are $2/mo and have unlimited bandwidth at 400 Mbps. That’s basically the cost of electricity for a home server, with orders of magnitude better reliability.
You’re going to get a lot of bad or basic advice with no reasoning (“use a firewall”) in here… And as you surmised, this is a very big topic, and you haven’t provided a lot of context about what you intend to do. I don’t have any specific links, but I do have some advice for you:
First - keep in mind that security is a process, not a thing. 90% of your security will come from being diligent about applying patches, keeping software up-to-date, and paying attention to security news. If you’re not willing to apply regular patches then don’t expose anything to the internet - there are automated systems out there that simply scan the internet for known vulnerabilities. Self-hosting is NOT “set it and forget it”. Figuring out ways to automate this helps make it easy to do and thus more likely to get done. Check out things like Ansible for that.
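On Debian/Ubuntu, even before you learn Ansible, a low-effort baseline is unattended-upgrades (a sketch; package names differ on other distros):
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # enables the daily automatic security-update job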
Second is good authentication hygiene. Choose good passwords - better yet, long passphrases - or enable MFA and other additional protections. And BE SURE TO CHANGE ANY DEFAULT PASSWORDS for software you set up. Often there is some default ‘admin’ user.
Beyond that, your approach is “security in depth” - you take a layered approach to security, understanding what your exposure is and what will happen should one of your services/systems be hacked.
Examples of security in depth:
Proper firewalling will ensure that you don’t accidentally expose services you don’t intend to expose (adding a layer of protection). Sometimes there are services running that you didn’t expect.
Use things like “fail2ban” that will add IP addresses to temporary blocklists if they start trying usernames/passwords that don’t work. This could stop a bot from finding that “admin/password” user on your Nextcloud server that you haven’t changed yet…
Minimize your attack surface area. If it doesn’t need to be exposed to the internet, then don’t expose it. VPNs can help with the “I want to connect to my home server while I’m away” problem and are easy to set up (Tailscale and WireGuard being two popular options; see the sketch after these examples). If your service needs to be “public” to the internet, understand that this is a bigger step and that everything here should be taken more seriously.
Minimize your exposure. Think through the question of “if a malicious person got this password, what would happen and how would I handle it?” Would they have access to files from other services running on the same server (having separation between services can help with this)? Would they have access to unencrypted files with sensitive data? It’s all theoretical, until it isn’t…
If you do expose services to the internet, monitor your logs to see if anything “unusual” is happening. Be prepared to see lots of bots attempting to hack your services. It may be scary at first, but it’s relatively harmless if you’ve followed the above recommendations. “Failed logins” by the thousands are fine, and fail2ban can help cut that down a bit.
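As promised above, a sketch of how little the VPN option takes - Tailscale’s official install one-liner plus one command per machine:
curl -fsSL https://tailscale.com/install.sh | sh   # official install script from tailscale.com
sudo tailscale up   # authenticate in the browser; the machine joins your private tailnet
After that you reach your server over its tailnet address from anywhere, with nothing exposed to the public internet.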
Overall I’d say start small and start “internal” (nothing exposed to the internet). Get through a few update/upgrade cycles to see how things go. And ask questions! Especially about any specific services and how to deploy them securely. Some are more risky than others.
Going off of what you said, I am going to take what I currently have, scale it back, and attempt to get more separation between services.
Containerization and virtualization can help with the separation of services - especially in an environment where you can’t throw hardware at the problem. Containers like Docker/Podman and LXD/LXC aren’t “perfect” (isolation-wise) but do provide a layer of isolation between what runs in the container and the host (as well as other services). A compromised service would still need to find a way out of the container (adding a layer of protection). But they all still share the same physical resources and kernel, so any kernel vulnerability potentially affects everything on the host (keep your systems up-to-date). A full VM like VirtualBox or VMware will provide greater separation at the cost of using more resources.
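As a sketch of what that layering looks like in practice, here’s a service confined to one published port, a named config volume, and read-only media (the image and paths are just an example):
# Jellyfin sees only /config, read-only /media, and port 8096 - nothing else on the host
docker run -d --name jellyfin --restart unless-stopped \
  -p 8096:8096 \
  -v jellyfin-config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin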
Docker’s isolation is generally “good enough” for the most part, though. Your aggressors are more likely to be botnets scanning for low-hanging fruit (poorly configured services, known exploits, default admin passwords, etc.) than targeted attacks by state-funded hackers anyway.
I think $700-800 for a server with SFP ports sounds like good value in terms of price relative to capability, but the absolute price and capability are probably overkill for a residential use-case (even a homelab one). It’s a no-brainer if you’re the Other Linus (the Tech Tips one) and have unlimited budget for all the latest electro-bling in your house, but if you’re any sort of normal person you don’t need 10 gig networking yet.
Does Minisforum make anything with 4 ethernet ports and a <100W TDP in the <$300 range? If so, get that instead.
If this can handle routing 10G, it’s a great choice to use as a router. It’s actually quite difficult to find a gateway at around this price, and ISPs (at least in my part of Canada) are offering internet over 1 Gbps at the same price as gigabit, but their routers are awful.
I use 10G between my main PC and my NAS. It’s amazing. I use NVMe drives for AAA and other intensive games, and almost everything else gets installed on the NAS. There are great use cases for 10G.
I have a similar setup. Even for hard drives and slower SSDs on a NAS, 10G has been beneficial. 2.5G would probably be sufficient for most of what I do, but even a few years ago when I bought my used Mellanox SFP+ cards on eBay it was basically just as cheap to go full 10G (although 2.5G Ethernet ports are a bit more common to find built-in these days, so depending on your hardware, that might be a cheaper place to start). But even from a network congestion standpoint, having my own private link to my NAS is really nice.
Also, for media creation, using my main PC and its NVMe as a staging area and moving finished work and archived projects to my NAS is really helped along by the 10G connection. Six 7200 RPM Exos drives easily saturate it.
I don’t think you’ll be able to build anything with €100, but you might be able to buy an old PC or laptop locally and use it as is. I’ve never run Nextcloud myself, but from what I’ve read it’ll be the most taxing service on your list. Everything else seems pretty minimal, though I don’t know anything about PhotoPrism.
Yeah, for that price you won’t find anything new. For illustration: when I bought a new Athlon 3000G, the very lowest CPU in their AM4 lineup, it was 55€ without anything else.
Older thinkpads in this price range will not perform well as servers. They will be pretty limited in specs. Better to go with a used SFF or other form-factor business model desktop.
Yep. Just install Linux, plug it into your router, set a static IP, and install the NAS software ya want.
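For the static IP step, on a distro using NetworkManager it’s a couple of nmcli commands (a sketch - the connection name and addresses are examples for a typical home LAN):
sudo nmcli con mod "Wired connection 1" ipv4.method manual \
  ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
sudo nmcli con up "Wired connection 1"
(Or just give the box a DHCP reservation in your router, which accomplishes the same thing.)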
There are plenty of approaches. ChatGPT is great at debugging issues and helping ya through the setup. I did this with a raspberry pi and external usb drive the other week.
Some people even use Raspberry Pis as their NAS. I use an old MacBook (5th gen i5) as a home server with 2 external hard drives as a NAS, which also runs a few docker containers like Jellyfin. Before that, I was using an old PC with 1st gen i3 for all these things.
There are a few different components to RSS: the feed (in this case Instagram or something like Webtoons), an aggregator (the software that pulls in all the feeds you’re interested in and keeps track of things like read status), and a client (the actual interface you interact with to read your feeds).
A lot of the time the aggregator will include a web client you can use, so those two can come bundled together - Miniflux and FreshRSS are examples of this. But because RSS is an open specification, you could also use a client other than the one that comes with your aggregator. If you’re interested in Nextcloud, there is an RSS plugin for that too.
The other part, the feed, is often provided by a website directly. Webtoons does this, for instance: for each comic, there is a URL that points to the feed. Some sites will have a little RSS icon that directs you to the feed, while other sites will have you manually append something like “/rss” or “/atom.xml” to find it.
But other sites, like Instagram, don’t provide feeds directly. To get those feeds, you’ll need some kind of service that scrapes content from Instagram and creates a feed from it. I’m sure there are selfhosted options for this, but because the original content has to come from a third party anyway, I don’t mind using a public service to create feeds for me. I personally use openrss.org, which doesn’t require an account, though I’m sure there are others as well. It has support for Instagram and a bunch of other sites too. I will warn that, by the nature of one service scraping another, things may break sometimes. I don’t follow any Instagram feeds through openrss, but I do get some other sites/feeds through them and am generally happy with it.
TLDR: Put something like Miniflux on your server and add the Instagram feeds you want through openrss.org to Miniflux
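If you go that route, a minimal sketch of running Miniflux with Docker (the env vars are from Miniflux’s docs; the passwords here are obviously placeholders):
docker network create rss
docker run -d --name miniflux-db --network rss \
  -e POSTGRES_USER=miniflux -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=miniflux \
  -v miniflux-db:/var/lib/postgresql/data postgres:16
docker run -d --name miniflux --network rss -p 8080:8080 \
  -e DATABASE_URL="postgres://miniflux:secret@miniflux-db/miniflux?sslmode=disable" \
  -e RUN_MIGRATIONS=1 -e CREATE_ADMIN=1 \
  -e ADMIN_USERNAME=admin -e ADMIN_PASSWORD=change-me \
  miniflux/miniflux:latest
Then log into the web UI on port 8080 and paste in the openrss.org feed URLs you want to follow.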
Yeah, once you have the drives, building the rest of the system can be done dirt cheap if you look at used workstations that some company or school is offloading - and it would still probably be a more capable system all around.
What happened is that people realized what I’ve been saying since forever - that the RPi and others are a money grab because of all the required accessories, while a MiniPC will get you way more power, stable hardware, a case, a power supply and everything in between for the same price (if you go second-hand). Here are examples of such posts: lemmy.world/comment/5357961 , lemmy.world/comment/4696545
For example, for 100€ you can find an HP Mini with an 8th-gen i5 + 16 GB of RAM + a 256 GB NVMe drive that obviously has a case, a LOT of I/O, PCIe (M.2), comes with a power adapter, and outperforms an RPi 5 in all possible ways. Note that the RPi 5 with 8 GB of RAM will cost you 80€ + case + power adapter + cable + bullshit adapter + SD card + whatever other money grab - the Pi just isn’t a good option.
Either way, Pis have their use cases; however, in my opinion it was an overhyped product that sits in the middle of a market:
They tried to make the Arduino easy by adding an operating system and high-level programming languages such as Python. It never made much sense - why would you want GPIOs directly on a “computer”? Not reasonable at all. Nowadays we’re seeing a rise of ESP32 devices that have 30-40 GPIOs and WiFi for $2 each - cheap, easy to develop for and deploy, and eating away at the Pi’s market.
Another typical use case for a Pi is a low-power server, but while that’s great in theory, it lacks the CPU performance required for the container-based absurdities people want to run, and the I/O sucks. USB was never a good way to connect storage, let alone the shared USB/network bus we had in the past. The new PCIe option is questionable (look at the NanoPi M4v2 from 2018) and requires… more adapters;
Price-wise it doesn’t make much sense either, because a second-hand x86 machine will be 10x faster at the same price point… and way more stable, with more room for expansion.
Now it’s all gone x86 and Proxmox
Proxmox isn’t a new thing, in fact it is a pile of crap and questionable open-source that people still run because they haven’t discovered LXC/LXD yet. Read more here: lemmy.world/comment/6507871. FYI you can run LXD on your Pis and get both containers and virtual machines with it in the same way Proxmox people do with x86.
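For example, the container/VM distinction in LXD is a single flag (a sketch; image aliases vary by remote):
lxc launch images:debian/12 web-ct         # system container
lxc launch images:debian/12 web-vm --vm    # full virtual machine, same workflow
lxc list                                   # both show up side by side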
The irony of this comment is that people will shit on me about replacing Proxmox with LXD in the same way they used to when I said that Pis were a money grab and x86 MiniPCs were way better.
I would agree to a certain point. If you get a 10th-gen CPU it is power-efficient, and there are a lot of gamers and whatnot selling those. There are also a lot of MiniPCs that come with mobile “T” CPUs that are very decent at idle.
But at idle it would still draw much more than 15 W. There’s a very good Google Sheets compilation of the most efficient x86 CPUs, but once you start factoring in HDDs and SSDs, it’s only natural to go higher (20-30 W at least). That’s at least double an RPi.
I don’t recommend it (especially DDR3-era stuff), because old server hardware is way more expensive, won’t give you any particular advantage, and, compared to new stuff, will use a LOT of power.
Instead, use regular desktop/laptop machines, as they’ll probably be more than enough for a homelab. You can get a good 9th/10th-gen Intel CPU and motherboard that is perfect for running servers (very high performance) but that people don’t want because it isn’t good for playing the latest games. Modern hardware = less power consumption, cheaper, more performance.
If you go really low end, say an i5-6500, it will probably cost around 80€ second-hand with RAM. You can use www.cpubenchmark.net/compare/ to compare the server hardware you’re considering against modern hardware if you’re interested.
Most DDR3-era server hardware comes with RAID controllers/cards and other things that nobody uses anymore - people have moved on to software RAID, be it BTRFS or ZFS, and you will want to do the same. Servers make a lot of noise - impractical for a home - and a CPU from that era will draw around 150-200 W; you can get a recent i5 with more performance that runs at around 50 W.
Another thing to consider if you’re trying to build a NAS: get a basic motherboard with 4 SATA ports and then add a PCIe card with 5 more SATA ports - it will be much cheaper than any server hardware. Use BTRFS as your filesystem, and its RAID if needed. Now you may be thinking something like “I want a faster CPU in order to have fast SMB” - just don’t: your gigabit network will saturate before an i5-6500 or any mechanical drive does, and when that happens you’ll be at something like 10-20% CPU usage. Don’t waste your money.
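Setting that up is short too - e.g. a two-disk BTRFS RAID1 (a sketch: the device names and mount point are examples, and mkfs wipes those disks):
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirror both data and metadata
sudo mkdir -p /srv/nas && sudo mount /dev/sdb /srv/nas
sudo btrfs filesystem usage /srv/nas                  # confirm the raid1 profiles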
Thank you, I really appreciate your advice. I was just struggling to install Proxmox on a new machine, and you made me take a step back. The kernel is messed up - do I really want this? Why am I jumping through hoops for this when Debian installs with zero issues? I’ll be trying the container software you mentioned instead.
I’ve done the same thing as the person you replied to is suggesting for around 10 years now. It works very well for a home user because parts etc. are readily available, and most hypervisors will run on x86/amd64 hardware without issue. Check out something other than Proxmox - LXC is one suggestion. If you’re going to stick with Debian, look into Samba with BIND to ensure ease of sharing and cross-platform integration.
Another reason to not get an old server is power, noise and thermals. They’re designed to live in an air conditioned room. Anyone who works in server rooms for any length of time will tell you to wear ear protection.
people will shit on me about replacing Proxmox with LXD
From reading your comments I understand why. It’s in your delivery. You’re abrasive and you don’t explain why. You’re also telling people not to use something they know, to use something they don’t know, and not explaining how that would be beneficial. As far as I can see, you’ve only explained how LXD, when set up correctly, can do what Proxmox does.
You’re essentially telling people to use something that is at best a side grade for reasons, and being salty about it.
I wrote dozens of posts replying to every single question people had about LXD/Incus. I shared screenshots, explained how it works and what it does, described useful features, and pointed out multiple issues with Proxmox. I can show you what roads you can take and why, but you must do the work yourself.
The same applies to the MiniPC vs Raspberry Pi discussion, where my price, performance and feature breakdowns proved countless times that for a large number of use cases a MiniPC is better. Unsurprisingly, this is the first of those breakdowns to get upvotes - and do you know why? Because a known YouTuber in this space recently came out with a video saying the exact same things I’ve been saying, and now it has become “acceptable” to criticize the Raspberry Pi money grab.
“to use something they don’t know, and not explaining how that would be beneficial”
“you’ve only explained how LXD, when set up correctly, can do what Proxmox does.”
Even if that were true, what’s the issue then? Isn’t it obvious that a true open-source solution that is available in Debian’s repos from a fresh install is better than a half-proprietary solution that asks you to buy a license at every turn? Use your common sense.
Besides, my comments aren’t a marketing campaign - there’s no “LXD will make you rich today and solve all your family drama” as soon as you complete our three-step formula:
apt install lxd
lxd init
lxc launch debian debian-container
The advantages of using LXD/Incus are in the details, not in some flashy and shiny feature. It’s about running a clean Debian system with a kernel that isn’t twisted and mangled to the point of conflicting with everything and failing to run stuff like OVPN properly; it’s about the license, the tools, not depending on a company, not having to wait 3x as long before your cluster is online. It’s about having a decent API for once, and so many other things.
Most people say they don’t want to be put in the same situation they were in with the CentOS/Red Hat licensing change, but then they proceed to replace CentOS with Ubuntu and still use Proxmox. All questionable open-source that is as likely to fuck you over as Red Hat did.
So eventually there will be a video from some YouTuber stating that LXD/Incus is much better than Proxmox, and people will flock to it without questioning anything. :)
For me it’s the opposite. I tried to use Nextcloud for years, installing it the normal way, and it always broke for no reason. I just started running it in Docker and it has been perfect - fingers crossed.
Interesting - when I used Docker on a Proxmox build, it would give me trouble. Once I installed it the normal way on an Ubuntu build, it was good to go.
I wonder why that is?
Fingers crossed that it continues to work for you in the current configuration!
Sir, this is Lemmy. People treat the applications and hardware you use as a matter of ethical alignment, and switching to FOSS literally gets approval on the level of a religious conversion.
It’s no wonder people around here care so much about random people’s opinions, the place practically filters for it.