What makes a registrar more privacy-focused than another? I just had a read of their website, but couldn’t understand why they’re better for privacy than any other.
They own the domain instead of you. They can then act as a middleman between you and any inquiries, and as a company they’re able to shield you from many third parties.
That’s interesting. So they buy the domain on your behalf and then rent it out to you. Pretty cool concept
That said, I’ve owned a fair few domains and never had to deal with 3rd parties, so I’m not sure if the added security risk (however small) of them hijacking your domain is worth it. For me, at least. YMMV
I use Moonlight Qt on a Raspberry Pi 5, and used it on a Raspberry Pi 4 before that. Both connected via Ethernet, streaming at 150 Mbps. It works very well, feels like being at the computer. There is next to no perceptible delay, and Moonlight reports around 5 ms.
Somewhere else I use a Raspberry Pi 3 A+ with Moonlight Embedded, connected via Wi-Fi, and it works pretty well, but I notice the delay a bit more. Still able to stream at 40 Mbps.
I have a 3 B+ I want to try this with; it has double the RAM and an Ethernet connection vs the 3 A+. Do you see yours hit the RAM limit, or do you think the delay could be Wi-Fi related?
I just don’t want to keep running an entire VM with their image. Something simpler that could be used in an LXC / systemd-nspawn container, or directly on a base system, would be nicer.
What is weird is having to waste almost 700 MB of RAM + 10 GB of storage for a simple web UI that charts sensor data and only keeps it for 10 days. As a comparison, my NAS container runs Samba4, FileBrowser, Syncthing, Transmission, and a few others under 300 MB of RAM, with occasional spikes during operations.
There’s a lot of difference between a container and a VM. You can install HA in a container; all you have to do is set it up according to the manual install instructions and work around any hardware-interfacing issues that come up. You’ll save 200 MB of RAM and will have to do any upgrades manually. Doesn’t seem worth it to me, but to each their own.
What I’m going to do is set up HA Core in a container manually and run it without add-ons / Docker. That’s mostly a matter of installing Python and should use way fewer resources.
You need to edit your configuration.yaml file to exclude certain sensors or values. I excluded some of the more chatty sensors that I didn’t need and my disk use went from around 40 GB to about 150 MB.
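For reference, this lives under the recorder integration. A minimal sketch, roughly what mine looks like; the entity names here are placeholders, not from my actual config:

```yaml
# configuration.yaml — trim what the recorder keeps on disk
recorder:
  purge_keep_days: 10            # how many days of history to retain
  exclude:
    entities:
      - sensor.power_meter_raw   # placeholder: a chatty sensor you don't chart
    entity_globs:
      - sensor.weather_*         # placeholder: exclude whole families at once
```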
Sending local ZFS snapshots to the remote ZFS might be problematic. Consider accidentally deleting important data locally and nuking all of your local snapshots, then sending that to the remote ZFS. You’ve lost all of your snapshots and there’s no way to recover the deleted data. Instead do what I do: keep the two ZFS systems separate and use a non-ZFS mechanism to transfer data (rsync, Syncthing, etc.). That way, even if you delete everything locally, nuke all local snapshots, and send the deletions via rsync remotely, you could still recover your data by restoring the remote ZFS to a snapshot prior to the deletions. For reference, I have two ZFS machines doing frequent snapshots and Syncthing replicating data between them on an immediate basis.
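To make that concrete, a rough sketch of the pattern (pool names and paths are made up; Syncthing in place of rsync works the same way):

```sh
# On each machine, snapshot its own pool independently,
# so neither side can destroy the other's history:
zfs snapshot tank/data@$(date +%F-%H%M)

# Replicate file contents only, never snapshot streams:
rsync -a --delete /tank/data/ backupbox:/tank/data/
```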
!selfhosted, please do critique if you find some fundamental issues with this.
Docs say this, so yeah: "send streams can either be “full”, containing all data in a given snapshot, or “incremental”, containing only the differences between two snapshots. ZFS receive reads these send streams and uses them to re-create identical snapshots on a receiving system."
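In command form that’s roughly the following (pool and snapshot names are invented):

```sh
# Full stream: everything in the snapshot
zfs send tank/data@monday | ssh backup zfs receive pool/data

# Incremental stream: only the differences between the two snapshots
# (the receiving side must already have @monday)
zfs send -i tank/data@monday tank/data@tuesday | ssh backup zfs receive pool/data
```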
I don’t know of anything built for that purpose but you could use home assistant dashboards to pull it off pretty easily if you already have an instance set up.
I think mail forwarders are still a good way to go. It’s hard to predict how Internet providers will react to email running in their networks.
These days I have an EC2 instance at AWS for my mail server and use SES for outbound mail. I’m thinking of moving “receiving” back into my network with a simple chat forwarding service but keeping SES for outbound. They handle all the SPF and DKIM things and ensure their networks aren’t on blacklists.
It’s spam they’re concerned about. Spam email is kinda “big business”, and one way spammers thrive is by using bots to scan for poorly configured or vulnerable systems to hack, then installing an app that lets them send email from your system. By compromising hundreds or thousands of individual machines, they make it hard for mail providers to block them individually. It also uses a ton of bandwidth on internet service providers’ networks.
So some time ago service providers started to simply block port 25 (SMTP, used to send email between servers) on their networks except to certain services. I think they’ve backed off a bit now, but inbound port 25 can often still be blocked. It may even be against their TOS in some cases.
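If you want to check whether your connection can reach a mail server on port 25 at all, a quick test like this works (the Gmail MX host here is just a well-known target):

```sh
# Succeeds quickly if outbound 25 is open; hangs or is refused if your ISP blocks it
nc -vz gmail-smtp-in.l.google.com 25
```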
Porkbun is sort of the darling of the self hosting community. I settled on them after doing a huge comparison of prices and features of all the different registrars available to me. Porkbun was by far the best.
I would like this for my media server, basically like a drop-in replacement for NFS shares. I still need it to be some sort of share instead of having to prompt it to send media across. Great project though, thanks
Are you talking about security for your homelab? It essentially comes down to good key hygiene, network security, and keeping everything updated.
Don’t open ports, use a good firewall at the border of the network, use a seedbox for torrenting. Use ACLs alongside VLANs in your network. Understand DNS in terms of how your requests are forwarded and how they are processed.
What does using a good firewall mean exactly? As I understand it, a port is either open or closed, right? So what does a good firewall do that a bad one doesn’t?
Projects like OpenWRT and OPNsense take care to maintain their code and address security issues in firewall/router software that can be exploited. Perhaps “firewall” might not have been the best way to put it, but companies like TP-Link aren’t really the most scrupulous with their software.
It works the same either way. Borg does a lot of different backups on my home network. I also have more than just Borg backups that I want off-site, so an rclone of everything from that NAS share once, after everything else is done, makes more sense than duplicating Borg everywhere. The rclone’d stuff can be used directly, just as if it had been put there by Borg itself.
Email is one of the only services I just gave up on (after rolling my own Exchange for over a decade). It’s too annoyingly complex and tedious to do correctly for just yourself. It’s not worth it.
Self-hosting email on a non-mission-critical domain for learning purposes might be okay if your intention is to get into the industry. Self-host email for others in a more production-like setting and you’re going to find yourself in a world of pain.
All it takes is one missed email (be it their mail not making it into the intended recipient’s inbox, or an important notice never arriving in theirs) and you’re never going to hear the end of it.
You’d also be liable for content your users send out from your servers — and I don’t mean the spam type, though if you get your IP blacklisted, your provider may want to have a word with you.
I’d strongly advise against going down this path, but if you do, be sure to have ways to legally shield yourself from any sort of potential liabilities.
I don’t understand why everyone calls hosting email difficult. It’s like five RFCs you need to read and implement. Software-wise you’ll need a mail transfer agent, something for DKIM (if it isn’t built into the agent), a local delivery agent (probably exposing it over IMAP), plus a mail reader of your choice. Nothing too complex.
It’s not complicated until your reputation drops for a multitude of reasons, many not even directly your fault.
Neighboring bad-acting IPs, too many automated emails sent out while you were testing, a compromised account, or pretty much any number of things means everyone on your domain is hosed. And email is critical.
The complex part isn’t the hosting part. It’s the security part, the reputation-management part, the uptime part, the delivery-troubleshooting part, and basically every other aspect other than running postfix+dovecot.
Hosting your own email is a bad idea. Hosting OTHER PEOPLE’S email is a REALLY BAD idea. Self-hosting mail on a vanity domain is a good exercise to learn how SMTP, DNS, IMAP and other protocols interact.
If you don’t like Google, Apple, or Microsoft then sign them up with Proton or another hosted provider. You don’t want to be the reason someone lost income because they missed out on a critical email from a client or their job application was blocked because it was sent from a host with poor reputation.
If you’re only trying to use Jellyfin at home, you don’t need any reverse proxy or domain. All you need is for both devices to be on the same network, and for the Raspberry Pi to have a fixed internal IP address (through your router settings).
On the Shield, you just give the Jellyfin app that IP address and port number (10.0.0.X:8096) to connect and you’re good to go.
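If you want to sanity-check that the server is reachable before poking at the Shield, Jellyfin answers on a simple health endpoint; from any machine on the LAN (substitute the Pi’s real address for X):

```sh
curl http://10.0.0.X:8096/health    # prints "Healthy" when the server is up
```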
Whether a device is wired or on wifi matters on some routers, because some routers have wifi and wired devices on different subnets by default. It’s unlikely, so I wouldn’t worry, unless you notice accessing it only works wired.
wlan and eth are network adapters in your Raspberry Pi, probably. Not subnets. A subnet is a range of IP addresses the router can hand out to devices. Basically, let’s assume the router / the local network has only one subnet, 192.168.1.0/24. This means the router can give out addresses from 192.168.1.1 to 192.168.1.254 (.0 is the network address itself and .255 is broadcast). If the router had two subnets, say A: 192.168.1.0/24 and B: 192.168.2.0/24, a device on subnet A wouldn’t be able to talk to a device on subnet B directly; the router would have to route between them.
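A quick way to play with this, if you have Python handy (the addresses are just the examples above):

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")
print(lan.num_addresses)                            # 256 addresses in the range
print(ipaddress.ip_address("192.168.1.50") in lan)  # True: same subnet
print(ipaddress.ip_address("192.168.2.50") in lan)  # False: would need routing
```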
Either way, in my opinion you’re overcomplicating things a lot for yourself. If you only wish to watch from home, on your couch, you don’t need reverse proxies, Cloudflare and all that jazz. Docker and a Raspberry Pi are enough. I can walk you through it if you want :)
So an IP address is divided into four sections separated by dots: 123.123.123.123. Each of those sections can go from 0 to 255, so 0.0.0.0 to 255.255.255.255. Why this number? There are 256 numbers from 0 to 255, which is exactly how many values you can make out of 8 bits. (If you’re interested in binary, please look it up, this is already long haha.) If every number between the dots can be made out of 8 bits, that means the whole IP address is 32 bits. It’s 32 bits because that’s what was convenient when it was decided, basically. Makes sense?
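The arithmetic in Python, for the curious (nothing here beyond what’s said above):

```python
print(2 ** 8)       # 256: values one 8-bit section can take (0 through 255)
print(0b11111111)   # 255: all eight bits set, the biggest single section
print(4 * 8)        # 32: four sections of 8 bits = a 32-bit address
```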
Now, the subnets. Each network can be divided into sub-networks, or subnets. Historically, IPv4 addresses fall into 5 classes: A, B, C, D and E. D (multicast) and E (reserved) aren’t used as much, so I don’t know much about them.
Class A: subnet mask is 255.0.0.0
Class B: subnet mask is 255.255.0.0
Class C: subnet mask is 255.255.255.0
A subnet mask determines how many bits are reserved for the network and how many are used for hosts (devices). Basically, each IP address is divided into a network part and a host part. The network part identifies the network (and limits how many networks you can make), while the host part identifies hosts/devices like your phone or PC (and limits how many can be connected).
In class A, with 255.0.0.0, the first section is reserved for the network and the other three for the devices, for example.
In class A you have a small number of possible networks but a big number of devices per network, and the opposite in class C.
The 24 after the slash is just a different way of writing 255.255.255.0, called CIDR notation. 255.0.0.0 is /8, 255.255.0.0 is /16, and 255.255.255.0 is /24.
So depending on the subnet class, what the numbers mean differs. Well, except the port and the CIDR suffix, which work the same regardless.
All in all, all you need to know is that your router most likely has one subnet lol
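If you’d rather not memorize the netmask/CIDR correspondence, Python’s ipaddress module again (prefix lengths picked to match the classes above):

```python
import ipaddress

# Show the netmask and usable host count for class A/B/C-sized prefixes
for prefix in (8, 16, 24):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix} -> netmask {net.netmask}, {net.num_addresses - 2} usable hosts")
```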
OK. I would still like to learn this stuff, so hopefully someone can come in and answer some of the questions, but it seems like, then, the challenge is just gluetun for now.