selfhosted


nbafantest, in I want to set up plex server, no windows.. any simple options?

I run Plex on a Raspberry Pi 3; it can support two simultaneous 1080p streams on my local Wi-Fi. It can’t handle 2K or 4K video at all, and can’t serve video outside the local network.

“use your favorite Unix then install Plex” or “Here are 56 perfect versions of Unix to install for your Plex server”

What part of this do you think is hard?

Each step can be scary at first, but it’s not hard if you break it into pieces.

Booting Ubuntu or some other Linux OS is a fun first step if you actually have a spare computer handy.

Fashtas,

What part of this do you think is hard?

Not hard… essentially worthless however…

The articles are saying “Here is a list of almost all known versions of Linux, these are good for you to use” when you ask what the best option is… hardly narrowing anything down. Likewise, saying “Use your favorite… then install the product you want to use” is also useless information if you are asking the question I was… I have no favorite, obviously, since I know nothing about it… and OBVIOUSLY I am going to install the product I just asked about installing…

The pages I was looking at answered the question “How do I install Linux?” by saying “First, Install Linux”

Not to say there aren’t better sources, but all the first ones I found were along that theme.

nbafantest,

Ah yeah, I know what you mean. That can be overwhelming. There are a loooooot of choices, and the differences might be things I’d never even heard of before.

I think a lot of these articles are written with the expectation that you will try several different versions after you learn to flash/boot. I think I ended up with 4 different distros I could boot from.

When I started, I went with Ubuntu first just because it seemed pretty stable and had support from a large company, but once I learned how to boot Ubuntu it was easy to follow the same steps to try out the other versions.

Fashtas,

I actually went back and had a look at a few of the top results, and I have a feeling a lot of them were AI-written sand traps. Several were very similar: “Install your favorite Linux, then <copy and paste from the Plex web site on how to install Plex>”.

Makes it hard for a newbie who doesn’t know what they don’t know, and so can’t ask the right questions.

The Mint install works fine now. I made a lot of mistakes and it took a while to get my head around the folder structure and permissions, but once I’m more comfortable I’ll try something a little more headless next time. Having played around with it, I reckon I’d be happy with Mint as a daily machine (if only my job wasn’t coding Windows apps :/)

nbafantest,

That’s great progress.

lumpy, in I want to set up plex server, no windows.. any simple options?

I just got into self-hosting about a week ago and started by getting a small Beelink S12 Mini. Since you have an old PC, you don’t need to worry about hardware for now.

To get going with the software, I followed this Lemmy post (lemmy.world/post/6542476) in the beginning. It took me a couple of evenings to understand some basic concepts, and after getting everything going I found the recommendation for yams.media. So I wiped everything (because I decided not to encrypt the system and to go with Ubuntu 22.04.3 LTS instead of 23.10) and was surprised how quickly I had YAMS running again.

So just follow the guide, and ask here or on the YAMS Discord if you have any questions during the installation.

Skotimusj,

Check the compatibility with Linux first, but I also used Ubuntu with very little trouble. It works flawlessly for me. I had no experience with Linux before this and was able to set it up with some googling and asking ChatGPT for useful commands. It was a fun project. The *arr suite is great.

erre, (edited) in Should I use a dedicated DHCP/DNS server hardware
@erre@programming.dev avatar

I use Pi-hole for managing DNS and DHCP. It runs via Docker, and the compose file and dnsmasq configs are version controlled, so if the Pi dies I can just bring it up on another Pi.

The Pi with pihole has a static IP to avoid some of the issues you described.
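
The compose file is roughly this shape, by the way (a trimmed sketch, not my exact config; paths and timezone are examples):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host      # simplest way to let the container answer DHCP broadcasts
    cap_add:
      - NET_ADMIN           # needed for the DHCP server
    environment:
      TZ: Europe/Berlin
    volumes:
      - ./etc-pihole:/etc/pihole          # both volumes live in the same git repo
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```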

SpaceCadet,
@SpaceCadet@feddit.nl avatar

That’s what I do. I do have a small VM that is linked to it in a keepalived cluster with a synchronized configuration that can takeover in case the rpi croaks or in case of a reboot, so that my network doesn’t completely die when the rpi is temporarily offline. A lot of services depend on proper DNS resolution being available.

erre,
@erre@programming.dev avatar

I’ve been meaning to standup another pihole on another pi for DNS redundancy. I have to research how to best keep the piholes in sync. So far I’ve found orbital-sync and gravity-sync.

SpaceCadet,
@SpaceCadet@feddit.nl avatar

For me, gravity-sync was too heavy and cumbersome. It always failed at consistently copying over the gravity sqlite3 DB file because of my slow RPi 2 and SD card, a known issue apparently.

I wrote my own script to keep the things most important to me in sync: the DHCP leases, DHCP reservations, local DNS records and CNAMEs. It’s basically just rsync-ing a couple of files. As for the blocklists, I just manually keep them the same on both Pi-holes, but that’s not a big deal because it’s mostly static information. My major concern was the Pi-hole bringing DHCP and DNS resolution down on my network if it should fail.

Now, with keepalived and my sync script running hourly, I can just reboot or temporarily shut down pihole1 and pihole2 automatically takes over DNS duties until pihole1 is back. DHCP failover still has to be done manually, but it’s just a matter of ticking the box to enable the server on pihole2, and all the leases and reservations will be carried over.
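
That script, roughly (a sketch of the idea; paths and hostnames are examples, not the actual files):

```sh
#!/bin/sh
# push the files that matter from pihole1 to pihole2, then reload its DNS
set -e
for f in /etc/pihole/custom.list \
         /etc/pihole/dhcp.leases \
         /etc/dnsmasq.d/05-pihole-custom-cname.conf; do
    rsync -a "$f" pihole2:"$f"
done
ssh pihole2 'pihole restartdns reload'   # pick up the new records without a full restart
```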

dan, (edited)
@dan@upvote.au avatar

If you ever switch to AdGuard Home, adguardhome-sync is pretty good. IMO AdGuard Home is better since it has all of Pi-hole’s features plus DNS-over-HTTPS out of the box, so your ISP can’t spy on your DNS queries (plain DNS queries are unencrypted and unauthenticated, so they can be easily intercepted and modified by your ISP even if you use a third-party DNS server).

SpaceCadet,
@SpaceCadet@feddit.nl avatar

DNS-over-HTTPS

You can also get that by running cloudflared on your Pi-hole (or go fully recursive with unbound).
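
With cloudflared, for instance, it boils down to running a local DoH forwarder and pointing Pi-hole at it (port and upstream follow Pi-hole’s DoH guide; adjust to taste):

```sh
# forward plain DNS on 127.0.0.1:5053 to an encrypted DoH upstream
cloudflared proxy-dns --port 5053 --upstream https://1.1.1.1/dns-query
# then set Pi-hole's custom upstream DNS server to 127.0.0.1#5053
```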

dan, (edited)
@dan@upvote.au avatar

Sure, but that’s extra manual setup, and the point of running something like PiHole is to have a nice UI to manage things.

AdGuard Home uses DNS-over-HTTPS by default, so it’s immediately more privacy-focused than PiHole. I’m really surprised that PiHole hasn’t done this.

Rootiest, (edited) in How often do you back up?
@Rootiest@lemmy.world avatar

It depends what I’m backing up and where it’s backing up to.

I do local/LAN backups at a much higher rate because there’s more bandwidth to spare and effectively free storage. So for those, as often as every 10 minutes if there are changes to back up.

For less critical things and/or cloud backups I have a less frequent schedule, since losing a bit more recent work there matters less and cloud storage costs more.

I use Kopia for backups on all my servers and desktop/laptop.

I’ve been very happy with it, it’s FOSS and it saved my ass when Windows Update corrupted my bitlocker disk and I lost everything. That was also the last straw that put me on Linux full-time.

remotelove, (edited) in Should I use a dedicated DHCP/DNS server hardware
@remotelove@lemmy.ca avatar

DHCP is a really stupid* service for the most part. Unless you are working with multiple subnets or have some very specific settings you need to pass to your clients, it’s probably not worth it to manage it yourself. I don’t want to discourage you though! Assigning static IP addresses by MAC can be extremely useful and is not always an option on routers. If you want static names and dynamic addresses, that is really where you need to manage both DNS and DHCP. It really depends on how and where you want names to be resolved and what you are trying to accomplish. (*stupid as in, it’s a really simple service. You want it simple because when DHCP breaks, you have other serious issues going on.)

Setting up your own DNS is worth its weight in gold. You can put it just about anywhere on your network (before your gateway, after it, in China, wherever) and your network won’t even know the difference if it’s set up correctly. You can point BIND at the root servers and bypass your ISP completely if you want. ISP DNS services suck ass, so whether you resolve names yourself or forward all queries to your anon DNS server of choice, you get a really decent level of control over your network. It is the service to learn if you want to keep an eye on where your network wants to talk.
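
For illustration, “point BIND at the root servers” is mostly the absence of config; a minimal named.conf.options sketch (the subnet is an example):

```
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-recursion { 192.168.1.0/24; localhost; };
    // no "forwarders" block: BIND resolves from the root hints itself,
    // so your ISP's resolvers never see your queries
};
```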

Your Unifi USG should play nice with your own server; that’s just how DNS works. There may be some nuances with legacy protocols like WINS, but other than that it should be just fine.

To answer your actual question: I would set up a simple VM somewhere first. It’s good practice to keep core services isolated on their own dedicated instances, to speed up recovery and minimize downtime. Even on your home network, DNS and DHCP are services you do not want going down. It’s always a pain when they do.

lemmyvore,

For the above reasons it’s very nice to use dnsmasq: the DHCP + DNS integration is really sweet, and a full-featured local DNS is gold.

OpenWRT comes with dnsmasq btw, so if you have a dodgy router that supports OpenWRT, you may be able to breathe new life into it.
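
A taste of why the integration is sweet: a few dnsmasq lines give you DHCP, a static lease by MAC, and automatic DNS names for every client (addresses and the MAC are examples):

```
# /etc/dnsmasq.conf
domain=lan
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-host=aa:bb:cc:dd:ee:ff,nas,192.168.1.10   # static lease by MAC
# every DHCP client that sends a hostname resolves as <hostname>.lan automatically
```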

dan,
@dan@upvote.au avatar

If only everyone were on IPv6: then everything could use SLAAC, and worrying about IP assignment for client systems would be a thing of the past. On a home LAN, IPv6 generally only uses DHCPv6 for configuring the DNS servers; client systems get IPs using SLAAC and learn their gateway from RAs (router advertisements).
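
For illustration, the router side of that can be as small as this radvd.conf sketch (prefix and DNS address are documentation examples; the RDNSS line even removes the need for DHCPv6 on clients that support RFC 8106):

```
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;    # clients build their own addresses (SLAAC)
    };
    RDNSS 2001:db8:1::53 {}; # DNS server delivered in the RA itself
};
```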

Ecclestoned,

Damn, I didn’t realize the amount of hate for DHCP. Ive used an already configured system with a DHCP/DNS server set up and it was super easy to manage. Want to change or add a static IP? Edit the text file, add the MAC, reload.

I didn’t know this wasn’t reflective of the overall experience.

remotelove,
@remotelove@lemmy.ca avatar

Meh, I didn’t mean to hate on DHCP. It’s just a service I have learned to keep running all by itself somewhere in a dark corner of my network. DNS and DHCP are just services that I don’t like going down. Ever.

farcaller, in Should I use a dedicated DHCP/DNS server hardware

Unifi is specific about expecting the controller address not to change. You have several options. There’s the “override controller address” setting, which you can use to point the devices at a DNS name instead of an IP address; the DNS record can then track your controller. It doesn’t exactly solve your issue, though, as the USG doesn’t assign DNS names to dynamic allocations.

Another option is to give the controller a static IP allocation. This way, if you reboot everything, the USG will come up with the latest good config, then (eventually) allocate the IP for the controller, and adopt itself.

Finally, the most bulletproof option is to just have a static IP address on the controller. It’s a special case, so it’s reasonable to do so. Just like you can only send NetFlow to a specific address and have to keep your collector in one place, basically.

I’d advise against moving dhcp and dns off unifi unless you have a better reason to do so, because then you lose a good chunk of what unifi provides in terms of the network management. USG is surprisingly robust in that regard (unlike UDMs), and can even run a nextdns forwarding resolver locally.

Ecclestoned,

So this is where I’m a little confused. The USG has the option to assign a static IP (which I’ve done), but if you ever need to CHANGE that IP… chaos. From what I understand, the USG needs to propagate that IP to all your devices, but it uses the controller to do that. Then you also run into issues with IP leases having to time out. The same problem occurs if I ever upgrade my server and the MAC address changes, because now the IP is assigned to the old MAC.

I’m not sure if there’s any way around this. But it basically locks me in to keeping the controller (and thus my server) at a single, fixed IP, without any chance of changing it.

farcaller, (edited)

Here’s how it works: unifi devices need to communicate with the controller over tcp/8080 to maintain their provisioned state. By default, the controller adopts the device with http://controller-ip:8080/inform, which means that if you ever change the controller IP, you’ll must adopt your devices again.

There are several other ways to adopt a device, most notably using DHCP option 43 or DNS. Of those, setting up DNS is generally easier: you’d provision a DNS record pointing at your controller and then update the inform address on all your devices (including the USG).

Now, there’s still a problem of keeping your controller IP and DNS address in sync. Unifi, generally, doesn’t do DNS names for its DHCP leases, and devices can’t use mDNS, so you’ll have to figure a solution for that. Or, you can just cut it short and make sure the controller has a static IP―not a static DHCP lease, but literally, a static address. It allows your controller to function autonomously from USG, as long as your devices don’t reach to it across VLANs.

Ecclestoned,

make sure the controller has a static IP, not a static DHCP lease

Ahhh that makes complete sense. I’ll look into it. Thanks!

solidgrue, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)
@solidgrue@lemmy.world avatar

I’ve got HA with Frigate + USB Coral w/4 cams, FlightRadar24 receiver/feeder, ESPHome, NodeRed, InfluxDB, Mosquitto, and Zwave-JS on a refurbished Lenovo ThinkCenter M92p Tiny, rigged with an i5 3.6GHz, 8GB RAM and 500GB spindle drive. It’s almost overkill.

Frigate monitors 2 RTSP and 2 MJPEG cams (sometimes up to 3 RTSP and 5 MJPEG, depending on whether I’m away for the weekend) with hardware video conversion. FR24 monitors a USB SDR dongle tracking several hundred aircraft per hour; I live under one of the main approaches to a major US hub.

Processor sits at 10% or less most of the time, and really only spikes when I compile new binaries for the ESP32 widgets I have around the house. It uses virtually none of the available disk. It’s an awesome platform for HA for the price.

sylverstream,

Thanks for your reply! So that’s a 3rd-gen Intel chip, if I kagi’d correctly? I was planning to get an 8th gen or later. Not sure though if it’s worth it; I’m not too familiar with the differences between all the generations.

solidgrue,
@solidgrue@lemmy.world avatar

I think the i5 is Ivy Bridge, but I couldn’t tell you what gen that is. My main use of HA aside from the automations is Frigate, which apparently needs the hardware AVX flags. This chip supports AVX, where my older AMD did not, so that’s why I went with it. It’s an i5-3470T, if that helps.

For an older SFF unit, it’s a beast for HA.
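
If anyone wants to check their own box, the AVX flags show up in /proc/cpuinfo:

```sh
# list which AVX variants the CPU advertises (Frigate needs at least plain avx)
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u
```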

sylverstream,

3470 means 3rd gen; the first digit of the model number is the generation. Good to know that also works.

namelivia, in How do you monitor your servers / VPS:es?

Prometheus, Loki and Grafana.
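
The Prometheus side of that stack starts as small as this (a minimal sketch; the targets are example hosts running node_exporter):

```yaml
# prometheus.yml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['server1.lan:9100', 'vps1.example.com:9100']
```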

johannes,

Golden! We use the same :)

SteveTech, in Hosting websites over 4g

I doubt this will be any use, but my Telstra 4G has a public IPv6.

justawittyusername,

Thanks, that’s good to know! I’ve gotten onto Tailscale and have a test lab set up with a DigitalOcean VPS for the public IP (exit node) and an Ubuntu machine with a tunnel to it. It’s working; I just need to translate that to pfSense…

johntash, in How do you monitor your servers / VPS:es?

Uptime Kuma is great. I use it for the simple “are my services up?” checks, and it’s what I pay the most attention to.

I still use Zabbix for finer-grained monitoring though, like checking RAID status, smartctl/SMART data, disk space, temperatures, etc.

I’ve been trying out librenms with more custom snmp checks too and am considering going that route instead of zabbix in the future

forwardvoid, in Hosting websites over 4g

If you’re hosting websites and not applications, perhaps you can use SSGs like Hugo/Gatsby. You could deploy your site in a bucket and put cloudflare in front. They can also be used on your own server of course. If you are hosting applications and want to keep them on 4g, you could put a CDN (CloudFlare or …) in frint of it. That would cache all static resources and greatly improve response times.

BCsven, in How often do you back up?

There should be a whitepaper you can reference for your scenario. As others have said, hourly/daily/weekly snapshots are not backups unless you also have a btrfs or zfs send that IS copying the snapshots to another, remote device.
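
For example, with zfs a snapshot only becomes a backup once it has been sent elsewhere (pool, dataset and host names are examples; the incremental base must already exist on the receiver):

```sh
zfs snapshot tank/data@2024-05-01
zfs send -i tank/data@2024-04-30 tank/data@2024-05-01 \
    | ssh backuphost zfs receive backup/data
```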

aleq, in Why docker
@aleq@lemmy.world avatar

the biggest selling point for me is that I’ll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it’s awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.

in short, it’s basically the exe-file of the server world

runs everything as root (not many well-built images with proper user management, it seems)

that’s true I guess, but for the most part shit’s stuck inside the container anyway so how much does it really matter?

you cannot really know which stuff is in the images: you must trust who built it

you kinda can, reading a Dockerfile is pretty much like reading a very basic shell script for the most part. regardless, I do trust most creators of images I use. most of the images I have running are either created by the people who made the app, or official docker images. if I trust them enough to run their apps, why wouldn’t I trust their images?
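
as an example of how readable they are, a typical Dockerfile is a handful of lines (hypothetical app):

```dockerfile
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY myapp /usr/local/bin/myapp
USER nobody                            # images *can* drop root; many just don't bother
ENTRYPOINT ["/usr/local/bin/myapp"]
```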

lots of mess in the system (mounts, fake networks, rules…)

that’s sort of the point, isn’t it? stuff is isolated

corsicanguppy, in Kubernetes? docker-compose? How should I organize my container services in 2024?

First, hire a team of energetic full-time container bros. Half of them will help architect your setup, and the other half will focus entirely on supporting the container cult.

forwardvoid,

Containers are bad hmmkay… cause… cause… they’re bad… hmmkay

dataprolet, in How do you monitor your servers / VPS:es?
@dataprolet@lemmy.dbzer0.com avatar

Uptime-Kuma
