It doesn’t. If you’re running a laptop with a local web server for development, you wouldn’t want other devices on, say, the coffee shop WiFi to be able to connect to your (likely insecure) local web server, would you?
If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall that needs to be port forwarded?
Who is “they”? What about all the other ports?
Imagine a family member visits you and wants internet access on their Windows laptop, so you give them the WiFi password. Do you want that possibly malware-infected thing poking around at ports other than 80 on your server?
Obviously you shouldn’t have insecure things listening there in the first place, but you don’t always get to choose whether something you’re hosting is currently secure, or you may not care too much because it’s just on the local network and you didn’t expose it to the internet.
This is what defense in depth is about: making it less likely for something to happen, or making the attack less potent, even if your primary protections have failed.
#3 is a strange one – what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there’s nothing to access.
Mostly addressed by the above but also note that you likely do have applications listening on ports you didn’t know about. Take a look at sudo ss -utpnl.
#5 is the only one that makes some sense; if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device’s actions.
It’s rather the other way around; you don’t want the outside world to be able to talk to untrusted software on your computer. To be a classical “door”, the application must be able to listen for connections.
OTOH, smarter malware can of course act as something like a door by initiating the connection to the outside itself, so outbound filtering is also something you should do with untrusted applications.
People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
I’d rather liken it to a razor fence around your house, keeping thieves from even getting near it. Your windows are likely safe from intrusion but they’re known to be fragile. A razor fence can also be cut through, but not everyone will have the skill or patience to do so.
If it turned out your window could easily be opened from the outside, you’d rather have a razor fence in front until you can replace the window, wouldn’t you?
You gave them an irrevocable license to basically use your content in any way they see fit. Them not showing posts you deleted is just them being nice, not being obligated to do so. They could simply ignore your request or restore posts later.
You should have thought about that when you gave them that license to your content.
Why does it need to be public-facing? There may be solutions that don’t require exposing it to billions of people.
Security is always about layers. The more independent layers there are, the fewer the chances someone will break through all of them. There is no one technology that will make your hosting reasonably secure; it’s the combination of multiple that does.
You’ve already mentioned software run inside an unprivileged sandbox.
There’s also:
A sandbox run unprivileged inside a VM
A VM run inside an unprivileged sandbox
A firewall only allowing connections to certain ports (see the sketch after this list)
A server running all of that, hosted by someone else on their network with their own abstractions
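To make the firewall layer concrete, here’s a minimal NixOS-flavoured sketch; the options are real, the choice of ports 80/443 is just an example:

```nix
{
  # NixOS firewall: drop everything inbound except what you explicitly allow.
  networking.firewall.enable = true;                 # this is the default on NixOS
  networking.firewall.allowedTCPPorts = [ 80 443 ];  # example: only the web server is reachable
  # Anything else listening on the machine stays unreachable from other hosts.
}
```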
The best way I know of is to get yourself a VM and get into the weeds; try to configure a system to your liking.
Follow the NixOS manual. The Wiki is unofficial and often opinionated, out of date or just plain wrong; take it with a grain of salt. The canonical source of documentation is the NixOS manual, and it’s not nearly as bad as you may have heard.
Make extensive use of search.nixos.org/options or man configuration.nix. Finding and making proper use of options and the module system is the bread and butter of using NixOS (see the small sketch below).
Even though everyone and their mom will recommend them to you for nebulous reasons, ignore flakes for now. You will know when you’ll benefit from using them; namely, when you need to use something outside of NixOS/Nixpkgs. You’re going to have enough to figure out with plain old NixOS on its own though; I don’t have external dependencies in my config to this day.
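For a feel of what “making use of options” means in practice, a minimal configuration.nix might look like this; the hostname is made up, but services.openssh and environment.systemPackages are real options you can find on search.nixos.org/options:

```nix
{ config, pkgs, ... }:

{
  networking.hostName = "mybox";                   # made-up name

  # Declaratively enable and configure sshd through the module system.
  services.openssh.enable = true;
  services.openssh.settings.PasswordAuthentication = false;

  # System-wide packages.
  environment.systemPackages = [ pkgs.htop ];
}
```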
One “hammer” mitigation to most threats you could conceivably face when self-hosting is to never expose your services to the internet in the first place, by way of a firewall. “Securing” your services against a small circle of guests/friends/family members in your home network is a lot simpler than securing them against the entire world.
If you need to access your services remotely, there are ways to achieve that without permanently opening a single port to the internet such as Tailscale or ZeroTier.
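On NixOS, that approach can look roughly like this; a sketch assuming Tailscale, using real options:

```nix
{ config, ... }:

{
  services.tailscale.enable = true;

  networking.firewall = {
    enable = true;                          # default on NixOS
    allowedTCPPorts = [ ];                  # nothing reachable from the internet or LAN directly
    trustedInterfaces = [ "tailscale0" ];   # your own tailnet devices can still reach everything
    allowedUDPPorts = [ config.services.tailscale.port ];  # helps tailscale make direct connections
  };
}
```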
Otherwise, commonly used tools in self-hosting such as Docker or VMs usually offer quite decent separation even if a service is compromised.
Nothing replaces good security hygiene though. Keep your stuff up-to-date. Use secure methods of authentication such as hard-to-guess passwords or better. Make frequent backups (3-2-1). The usual.
In my case I have a number of sockets from Spotify and Steam listening on 0.0.0.0. I would assume that these are only available to connections from the LAN?
That’s exactly the kind of thing I meant :)
These are likely for things like in-house streaming, LAN game downloads and remote music playing, so you may even want to consider explicitly allowing them through the firewall, but they’re also potential security holes in applications running under your user that you have largely no control over.
If I am packaging software for Gentoo, all I have to do is translate the build instructions from the project’s documentation to Gentoo’s package recipe.
It’s the same for Nixpkgs.
In Nix, it seems that it is not that simple and you’ll have to do some exploration. Am I wrong?
For well-behaved build systems, it’s likely easier to package for Nixpkgs than for most other distros. If the build system is not as well behaved, you will have to do some “exploration” and the complexity can get quite out of control if it’s exceptionally terrible.
Here is the package for the GNU hello program, which uses a well-behaved build system:
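(Sketched from memory rather than copied verbatim; the real file in Nixpkgs pins an actual hash and carries fuller meta.)

```nix
{ lib, stdenv, fetchurl, testers, hello }:

stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";

  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-..."; # the actual derivation pins the real tarball hash here
  };

  # Run the upstream test suite ("make check") as part of the build.
  doCheck = true;

  # Optional: extra checks that can be run against the built package, separately from the build.
  passthru.tests.version = testers.testVersion { package = hello; };

  meta = {
    description = "Program that produces a familiar, friendly greeting";
    homepage = "https://www.gnu.org/software/hello/";
    license = lib.licenses.gpl3Plus;
    platforms = lib.platforms.all;
  };
}
```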
If you ignore the optional passthru.tests, this is very simple. You provide metadata, sources etc. to the generic mkDerivation function and that’s it. The most complex non-standard thing this derivation does is enable the build system’s tests.
You don’t even need to run the provided build instructions because Nixpkgs’ stdenv abstracts those away. If it finds a makefile, it’ll automatically run make and make install with the correct flags for instance. Same for other standard build systems; if you pass cmake into nativeBuildInputs, it’ll attempt to build, install, check etc. using cmake’s standardised interfaces.
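As an illustration, a hypothetical CMake-based package can be just as small; the project name, repo and hash below are made up:

```nix
{ lib, stdenv, fetchFromGitHub, cmake }:

stdenv.mkDerivation rec {
  pname = "my-tool";            # hypothetical project
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";          # hypothetical
    repo = "my-tool";
    rev = "v${version}";
    hash = lib.fakeHash;        # replace with the real hash on the first build
  };

  # The cmake setup hook takes over the configure phase; stdenv's generic
  # build/install/check phases then run make in the generated build directory.
  nativeBuildInputs = [ cmake ];
}
```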
If the build system is poorly behaved, however (like Anki’s, for instance), you will have to get into the weeds and do some rather advanced things: