Well, Docker tends to be more secure if you configure it right. As far as images go, it's really just a matter of getting your images from official sources. If there isn't an image already available, you can make one.
The big advantage to containers is that they are highly reproducible. You no longer need to worry about issues that arise when running on the host directly.
Also if you are looking for a container runtime that runs as a local user you should check out podman. Podman works very similarly to docker and can even run your containers as a systemd user service.
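For anyone curious what that looks like in practice, here's a minimal sketch assuming Podman 4.x with `podman generate systemd` available (Uptime Kuma and its port are just an example service, swap in whatever you actually run):

```bash
# Run a container as a regular user - no root, no daemon
podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma

# Generate a systemd user unit for it and install it
podman generate systemd --new --files --name uptime-kuma
mkdir -p ~/.config/systemd/user
mv container-uptime-kuma.service ~/.config/systemd/user/

# Enable it as a user service; linger keeps it running after you log out
systemctl --user daemon-reload
systemctl --user enable --now container-uptime-kuma.service
loginctl enable-linger "$USER"
```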
You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
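If anyone wants to try that, here's a rough sketch of setting up rootless Docker (assuming the docker-ce-rootless-extras package is installed; exact package names vary by distro):

```bash
# One-time setup as a regular (non-root) user; installs a per-user daemon
dockerd-rootless-setuptool.sh install

# Start it as a systemd user service and point the CLI at the user socket
systemctl --user enable --now docker
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

# Sanity check: "rootless" should show up under Security Options
docker info | grep -i rootless
```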
True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
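Rough sketch of that workflow, in case it helps anyone (the repo URL and image names are placeholders; registry:2 is the stock Docker registry image):

```bash
# Build an image from the project's own repo instead of pulling it from Docker Hub
git clone https://github.com/example/some-service.git
cd some-service
docker build -t some-service:1.0 .

# Optional: keep a private local registry and push your own build to it
docker run -d -p 5000:5000 --restart unless-stopped --name registry registry:2
docker tag some-service:1.0 localhost:5000/some-service:1.0
docker push localhost:5000/some-service:1.0
```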
It's the opposite - you don't really need to care about docker networks unless you have an explicit need to contain a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.
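For example, something like this (Jellyfin here is just a stand-in; the point is the :ro bind mount and the dedicated network):

```bash
# Give the container its own network
docker network create media-net

# /srv/media on the host appears as /media inside the container, read-only (:ro)
docker run -d --name jellyfin \
  --network media-net \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
```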
I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.
It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.
Why? I like to play.
Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.
Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).
I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
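Roughly, the CLI side of that looks like this - the CT IDs, hostnames and playbook names below are made up, I'm only sketching the idea:

```bash
# Clone a new CT from my own template CT (9000) and start it
pct clone 9000 131 --hostname immich-rival --full
pct start 131

# Provision Docker and the other bits with the Ansible playbook
ansible-playbook -i inventory/homelab.yml provision-docker.yml --limit immich-rival
```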
I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos… hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.
Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode, you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it's a good thing, but the main motivator was not "what if someone breaks out of the container" (which doesn't necessarily mean they'd get all the privileges of the running user on the host, and would require a kernel exploit anyway, which is a pretty tall order). There are many benefits to making running containers as easy as running any other kind of process on a Linux host. It also enabled some cool new features, like the ability to run only partial layers of a container, or nested containers.
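To make that concrete, a quick hedged example with rootless podman (nginx is just a stand-in image): root inside the container maps to your unprivileged UID on the host, and the container processes are ordinary children of your user session, no daemon in between.

```bash
# Run a container as a plain user process
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# Show the container user vs. the host user it actually maps to
podman top web user huser

# The processes are just regular children of your user, started via fork/exec
ps -o user,pid,ppid,cmd -u "$USER" | grep -E 'conmon|nginx'
```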
Yep, all true. I was oversimplifying in my explanation, but you’re right. There’s a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.
It's all about companies re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer.
We now have a generation of developers that doesn't understand the basics of their tech stack, about networking, about DNS, or about how to deploy a simple thing onto a server that doesn't use Docker or some 3rd-party cloud deploy-from-GitHub service.
oh but the underlying technologies aren’t proprietary
True, but this Docker hype invariably and inevitably leads people down a path that will then require some proprietary solution or dependency somewhere, one that is only required because the "new" technology alone doesn't deliver what others did in the past. In this particular case it's the Docker Hub / Kubernetes BS and all the cloud garbage around it.
oh but there are alternatives like podman
It doesn't really matter if there are truly open-source and open ecosystems of containerization technologies, because in the end people/companies will pick the proprietary/closed option just because "it's easier to use" or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/rkt/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.
lots of mess in the system (mounts, fake networks, rules…)
Yes, a total mess of devices that's hard to audit, constant RAM wasting, and worst of all, it isn't as easy to change a docker image / develop things as it used to be.
That's not true. I mean, sure, there are companies that try to lock you into their platforms, but there's no grand conspiracy of the lizard people the way OP makes it sound.
Different people want different things from software. Professionals may prefer rootless podman or whatever but a home user probably doesn’t have the same requirements and the same high bar. They can make do with regular docker or with running things on the metal. It’s up to each person to evaluate what’s best for them. There’s no “One True Way” of hosting software services.
This is a really bad take. I'm all for OSS, but that doesn't mean there isn't value in things like Docker.
Yes, developers know less about infra. I'd argue that can be a good thing. I don't need my devs to understand VLANs, the nuances of DNS, or any of that. I need them to code, and code well. That's why we have devops/infra people. If my devs do know it? Awesome. But docker and containerization allow them to focus on code and let my ops teams figure out how they want to put it in production.
As for OSS - sure, someone can come along and make an OSS solution. Until then - I don't really care. Same thing with cloud providers. It's all well and good to have opinions about OSS, but when it comes to companies being able to push code quickly and scalably, then yeah, I'm hiring the ops team who knows Kubernetes and containerization over someone who's going to spend weeks trying to spin up bare-iron machines.
Be careful, OP: after the first year you have to pay the 'renewal' price, which is generally higher than the 'register' price. A lot of cheap domain offers use that trick, expecting users to become attached to their domains.
Because I am a school student (16) from India. Here you have to account to your parents for every penny, and if I tell them I just want a domain for self-hosting my personal stuff, I won't be able to justify it.
Can you make the domain somehow personalized to you, so you can say it's for an online resume to further your education and employability? If you happen to host other personal stuff, that won't cost you anything extra; just make sure you have a fancy-looking CV at the root.
If you have a stable IP, there are also free top-level domains (.tk / .ml / .ga / .cf / .gq) over at www.freenom.com. Their frontend is down sometimes, but once you have a domain and point it to an IP, you should be dandy.
Check whatismyipaddress.com to see your IP address once you're connected to either network, but they're almost certainly different IPs. In that case, dynamic DNS is probably best.
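If you go the dynamic DNS route, the general shape is a tiny script on a cron job; the update URL and token below are purely hypothetical, since every provider (DuckDNS, deSEC, Cloudflare, your registrar's own DDNS API...) has its own format:

```bash
#!/bin/sh
# Hypothetical DDNS updater - replace the URL/params with your provider's API.
CURRENT_IP=$(curl -fsS https://ifconfig.me)
curl -fsS "https://dyndns.example.com/update?hostname=home.example.com&ip=${CURRENT_IP}&token=YOUR_TOKEN"

# Then run it every few minutes from cron:
#   */5 * * * * /usr/local/bin/update-ddns.sh
```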
But if you’re using your neighbor’s wifi, I doubt there’s a way for you to host stuff unless you have access to their routers, can open ports 80 (HTTP) and 443 (HTTPS), and forward them to your server. It’s best to use hardware you control (including the router).
Not sure which ports are required for your usage, but maybe cloudflared would work? It works on the free tier as well; you can install cloudflared on your Linux/Windows server (no BSD support afaik).
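Something along these lines, if memory serves (tunnel name and hostname are placeholders; check the Cloudflare docs for the current syntax):

```bash
cloudflared tunnel login                                       # authenticate against your Cloudflare account
cloudflared tunnel create homelab                              # create a named tunnel
cloudflared tunnel route dns homelab app.example.com           # point a public hostname at it
cloudflared tunnel run --url http://localhost:8080 homelab     # proxy it to a local service
```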
Freenom's domains are pretty unstable: they lost management of .ga domains last year, and they often reclaim users' free domains once those domains start getting heavy usage.
Though if you have an unstable network, I wouldn't suggest self-hosting fediverse stuff.
Had a really good experience with this option. Namecheap seems quite reasonable. Also, self-hosting on someone else's domain can cause a lot of issues as you try to create enough paths for everything. I have found subdomain routing to work much better, as a lot of applications get sad when their host URL is something like blarg.com/gitea.
I can't remember what I was watching, but I remember something where they said Kubernetes is designed for systems so large in scale that the only reason most people have heard of it is that some product manager asked what Google uses and then demanded they use it to replicate Google's success. Hobbyists followed suit, and now a bunch of people are running stuff that's poorly optimized for such small-scale systems.
Haha yeah true, but it does come with the advantage that it’s super prevalent and so has a lot of tools and docs. Nearly every self-hosted service I use has a docs page for how to set it up with Kubernetes. (Although it’s not nearly as prevalent as plain docker)
With a basic understanding of how k8s works and an already-running cluster, all one needs to know is how to run a service as a Docker container to have it also run in k8s.
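As a hedged sketch of how little that can be (the image and names here are just placeholders):

```bash
# Take any existing Docker image and run it on the cluster
kubectl create deployment whoami --image=docker.io/traefik/whoami
kubectl expose deployment whoami --port=80 --target-port=80

# Check that the pod came up
kubectl get pods -l app=whoami
```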
The problem is OPNsense: the BSD kernel it uses does single-threaded network routing. So the APU can only saturate 1 Gbit with multiple connections/threads, or if you switch to a firewall with a Linux kernel like OpenWrt.
That said, an N100 probably does have enough single-thread performance to do 1.2 Gbit. Not sure about the full 2.5 Gbit, though.
Thank you for the answers. I enjoy OPNsense; it's easier to use than OpenWrt for me personally.
I was thinking of doing some testing of the new device before I replace the old one, but I wanted to hear if anyone has experience with it.
I looked at cpubenchmark.net and saw that the N100 is about 8 times faster than the AMD SoC. I'm not sure if this translates linearly into routing performance. Currently max download is about 600-700 Mbit/s while upload is 300-400 Mbit/s.
How are you measuring your speeds? I think Cloudflare's speed test was more accurate for me than Ookla, but in the end downloading a large file over Usenet gives me the best picture.
Edit: and that made me realise my SSD was a bottleneck; replacing it helped me go from 500-600 to about 900-950 Mbit/s on my gigabit connection.
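For anyone wanting to reproduce that kind of check, a rough sketch (the test URL is just a commonly used big file; swap in anything large you trust):

```bash
# Average download rate for a single large file
curl -o /dev/null -w 'avg speed: %{speed_download} bytes/s\n' https://speed.hetzner.de/1GB.bin

# To rule out local bottlenecks (disk, wifi), measure raw throughput between
# two of your own machines with iperf3:
#   server:  iperf3 -s
#   client:  iperf3 -c <server-ip> -P 4
```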
GoDaddy is notorious for terrible service and NameCheap has started doing some shady stuff too lately. Luckily there are other decent registrars out there. I can recommend Netim.com or INWX.de in the EU – they also provide EU-specific TLDs which American registrars don’t.
If you need more than one mailbox you can’t beat the offers from providers like PurelyMail/MXRoute/Migadu, where you pay for the storage instead of per-mailbox. I’m using Migadu because, again, they work under EU/Swiss privacy laws.
You do not need to spin up your own mail service, and you shouldn't. Email and DNS hosting are the most abuse-prone and easiest-to-mess-up services; always go to an established provider for these.
Are there concerns tying my accounts to a service that might go under or are some “too big to fail”?
Look into their history. Generally speaking, a provider that's been around for a decade or more probably won't disappear overnight; they probably have a sustainable income model and have been around the block.
That being said nothing saves even long-established providers from being acquired. This happened for example to a French service (Gandi) with over 20 years of history.
The only answer to that is to pick providers that don’t lock you into proprietary technologies and offer standard services like IMAP, and also to keep your domain+DNS and your email providers separate. This way if the email service starts hiking prices or does anything funny you can copy your email, switch your domain(s), and be with another provider the very next day.
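As a concrete example of why standard IMAP matters: a tool like imapsync can copy a whole mailbox between two providers (the hosts and credentials below are placeholders):

```bash
# Copy every folder/message from the old provider's IMAP server to the new one
imapsync \
  --host1 imap.old-provider.example --user1 me@example.com --password1 'old-secret' \
  --host2 imap.new-provider.example --user2 me@example.com --password2 'new-secret'
```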
A general reduction in service quality, increasing domain prices (double check your renewals) and there are reports of domain name sniping (where they grab names that people are looking up).
Still much less bullshit than other providers. It has fewer dark patterns than OVH. I would also recommend their VPN service for being so cheap the first year.
I've done this in the past using Gmail. You pick a domain provider and get their email plan; most offer both services. I've used Namecheap.
Then in your regular Gmail account you can configure the IMAP settings from the domain registrar to receive the email from that inbox. Then in Gmail find the setting where you can send as another address. This lets you use that new address in your outbound mail. From there I just auto-label the incoming mail to help sort the two addresses.
Now you should have your regular Gmail and your new novelty email all in one place.
Would've loved to get one of those, but the power consumption of a Xeon is a bit higher than I'd like. This was a nice-to-have, not a need-to-have. It was a Christmas gift from my wife 🥰
I'm using a workstation board in my server: an Asus Pro WS W680M-ACE SE along with a Core i5-13500. Intel supports ECC on consumer CPUs, but only when using workstation motherboards :/. The IPMI on this board works well though.
Don't some Intel CPUs have Intel vPro, which also lets you control the PC in a KVM-over-IP manner?
I haven't tried it yet, but my research (so far) suggests it's possible, and it would be a useful feature for those repurposing old workstation PCs as servers.
I think some other CPU/MBs also have this feature.
But I would guess they are only implemented in business lines like the Pro/EliteDesk range from HP and the other SIs' equivalents.