I’ll say that as someone who stopped using docker and went back to deploying from source in lxc containers: docker is a great tool for the majority of people, and that is exactly what it aims to be, easily reusable in as many different setups as possible.
On the flip side, yes, it may happen that you would not benefit from docker for one reason or another. I don’t; in my case docker only adds another layer on top of my already containerized setup, and many of the services I deploy are already built from source in a CI/CD workflow and deployed through ansible.
I do have other issues with docker, but those are usually less with the tool and more with how some projects use docker as a means to replace proper deployment documentation.
It has been years since I played with it, but OpenStack is a suite of tools for building a data center like AWS or Azure. You can get the VM bit up and running pretty quickly with basic packages on an Ubuntu system if you want to play with it, but again, it has been years.
What is your goal? Playing with kvm may be a better path if you want to understand virtualization.
If you want to upskill for a job, I’d see if there is a certificate to work on. Even if you don’t want the cert, the curriculum might be a good starting point.
the biggest selling point for me is that I’ll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it’s awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.
in short, it’s basically the exe-file of the server world
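Something like this minimal sketch (image name and paths invented for illustration):

```sh
#!/bin/sh
# Hypothetical example: recreate the "myapp" service from its mounted folders.
# Moving to a new machine = copy ./data and ./config, then run this script.
docker run -d \
  --name myapp \
  --restart unless-stopped \
  -p 8080:8080 \
  -v "$PWD/data:/var/lib/myapp" \
  -v "$PWD/config:/etc/myapp" \
  example/myapp:latest
```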
runs everything as root (not many well-built images with proper user management, it seems)
that’s true I guess, but for the most part shit’s stuck inside the container anyway so how much does it really matter?
you cannot really know what is in the images: you have to trust whoever built them
you kinda can: reading a Dockerfile is pretty much like reading a very basic shell script, for the most part. regardless, I do trust most creators of the images I use. most of the images I have running were either created by the people who made the app, or are official docker images. if I trust them enough to run their apps, why wouldn’t I trust their images?
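for illustration, a typical small Dockerfile reads about like this (the package, config file, and image name are all made up, but the shape is representative):

```sh
# Hypothetical Dockerfile, written via heredoc to show how readable one is:
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends myapp
COPY myapp.conf /etc/myapp/myapp.conf
USER nobody
CMD ["myapp", "--config", "/etc/myapp/myapp.conf"]
EOF
docker build -t myapp .   # build it yourself instead of pulling a prebuilt image
```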
lots of mess in the system (mounts, fake networks, rules…)
that’s sort of the point, isn’t it? stuff is isolated
I am happy with my simple docker-compose setup - one root folder with one subfolder per project containing the compose file and any configuration mounted into the container. Traefik automatically exposes all services I want under a well-known URL using a single line in each compose file. Watchtower updates the containers.
This has been running stable for over two years with probably 2-3 reboots in between. If my current NUC ever breaks I’ll set it up again using Podman instead of Docker, but aside from that I couldn’t be happier!
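For the curious, one of those per-project folders boils down to something like this; a sketch assuming Traefik v2-style labels, with made-up names and domain:

```sh
# Sketch: one subfolder per project, each holding a compose file like this one.
mkdir -p ~/services/myapp && cd ~/services/myapp
cat > docker-compose.yml <<'EOF'
services:
  myapp:
    image: example/myapp:latest
    restart: unless-stopped
    volumes:
      - ./config:/etc/myapp
    labels:
      # The single line that tells Traefik to expose this service:
      - "traefik.http.routers.myapp.rule=Host(`myapp.example.com`)"
EOF
docker compose up -d
```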
Openstack is like self-hosting your own cloud provider. My 2 cents is that it’s probably way overkill for personal use. You’d probably be interested in it if you had a lot of physical servers you wanted to present as a single pooled resource for utilization.
How does one install it?
From what I heard from a former coworker - with great difficulty.
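That said, if you just want a sandbox rather than a production cluster, the DevStack project is the usual shortcut. A rough sketch, assuming a throwaway Ubuntu VM (details change between releases, so check the current docs):

```sh
# Sketch: single-node OpenStack via DevStack, for development/testing only.
# Run as a regular (non-root) user with sudo rights.
git clone https://opendev.org/openstack/devstack
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
EOF
./stack.sh   # takes a while; installs and wires up the core services
```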
What is the difference between a hypervisor / OpenStack / a container service (podman, docker)?
A hypervisor runs virtual machines. A container service runs containers, which are like virtual machines that share the host’s kernel (there’s more to it than that, but that’s the simplest explanation). OpenStack is a large ecosystem of pieces of software that runs the aforementioned components and coordinates them across a horizontally scaling number of physical servers. Here’s a chart showing all the potential components: …wikimedia.org/…/Openstack-map-v20221001.jpg
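One quick way to see the shared-kernel part for yourself, if you have Docker handy:

```sh
uname -r                         # kernel version on the host
docker run --rm alpine uname -r  # prints the same version: the container has
                                 # no kernel of its own, unlike a VM
```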
If you’re asking what the difference between a container service and a hypervisor is, then I’d really recommend against pursuing this until you get more experience.
It’s for getting acquainted with the whole software stack. Also, I have enough free time for it :) I’m very well aware of what the difference between a container service and a hypervisor is; I’m just a little overwhelmed by what OpenStack can do.
Deploying openstack seems like a very fun and frustrating experience. If you succeed, you should consider graduating from selfhosting and entering the hosting business. Then, maybe post your offering on lowendtalk. Not many providers there use openstack, so you might be able to lead the pack.
There is a lot of complexity and overhead involved in either system. But the benefits of containerizing and using Kubernetes allow you to standardize a lot of other things with your applications. With Kubernetes, you can standardize your central logging, network monitoring, and much more. And from the developer’s perspective, they usually don’t even want to deal with VMs. You can run something like Docker Desktop or Rancher Desktop on the developer’s system, and that allows them to dev against a real, compliant k8s distro. Kubernetes is also explicitly declarative, something that OpenStack was having trouble being.
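To make the “explicitly declarative” point concrete: you describe the desired state in a manifest and the cluster converges on it. A minimal sketch (names and image are placeholders I made up):

```sh
# Sketch: declare "two replicas of this app should exist" and apply it.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: example/myapp:latest
EOF
# Kubernetes now keeps two replicas running; re-applying the same file is a no-op.
```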
So there are two swim lanes, as I see it: places that need to use VMs because they are using commercial software, which may or may not explicitly support OpenStack, and companies trying to support developers, in which case the developers probably want a system that affords a faster path to production while meeting compliance requirements. OpenStack offered a path towards that latter case, but Kubernetes came in and created an even better path.
PS: I didn’t really answer your “capable” question, though. Technically, you can run a Kubernetes cluster on top of OpenStack, so by definition Kubernetes offers a subset of the capabilities of OpenStack. But it encapsulates the best subset for deploying and managing modern applications. Go look at some demos of ArgoCD, for example. Go look at Cilium and Tetragon for network and workload monitoring. Look at what Grafana and Loki are doing for logging/monitoring/instrumentation.
Because OpenStack lets you deploy nearly anything (and believe me, I was slinging OVAs for anything back in the day), you will never get to that level of standardization of workloads that allows you to do those kinds of things. By limiting what the platform can do, you can build really robust tooling around the things you need to do.
First, hire a team of energetic full-time container bros. Half of them will help architect your setup, and the other half will focus entirely on supporting the container cult.
How is this meaningfully different than using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?
In the end, Dockerfiles are just instructions for running software to set up other software. Just like every single shell script or config file in existence since the mid-seventies.
Your first sentence proves that it’s different. The developer needs to know it’s going to be a Deb package. What about RPM? What if it’s going to run on Mac? Windows? That means they’ll have to change how they develop to think about all of these different platforms. Oh, you run Windows? Well, Windows doesn’t have OpenSSL, so we need to do this vs. that.
I’d recommend reading up on docker and containerization. It is not a script for setting up software. If that’s what you think it is, then you really don’t understand containerization, and I recommend doing some learning on it. Like it or not, it’s here, and if you’re doing any dev/ops work professionally you will be left behind for not understanding it.
Apparently I was unclear; I was referring to the security implications of using different manifestations of other people’s code. Those are rather similar.
I’d recommend reading up on docker and containerization. It is not a script for setting up software.
I was referring specifically to Dockerfiles. Those are almost to the letter scripts for setting up software.
if that’s what you think it is, then you really don’t understand containerization, and I recommend doing some learning on it.
I find your attitude not just uncharitable, but also rude.
and I find misinformation about topics like this also to be rude. It’s perfectly fine if you don’t understand something, but what I don’t like is you going out of your way to dissuade people from using a product when I don’t think you understand the core concepts of it. If you have valid criticisms like security of docker then that’s a different conversation about securing containers, but it’s hard to take them as valid criticisms if the criticism is based on a fundamental misunderstanding of the product.
I don’t think anyone I have ever talked to professionally, or anything I have ever read about docker, would describe a Dockerfile as “scripts for setting up software”. It is much more nuanced than that.
So yes, I’m a bit rude about it. I do this professionally, and I’m very tired of people who don’t understand containerization explaining to me how containerization sucks.
I don’t think you understood the context of the comment you replied to. As a reply to “Here are all these drawbacks to Docker vs hosting on bare metal,” it makes perfect sense to point out that the risks are there regardless.
Unless I misread your comment and you’re suggesting that you think devs not having to deal with OS-specific code is a disadvantage of Docker. Or maybe you meant your second paragraph to be directed at OP?
1.) No one runs rooted docker in prod. Everything is run rootless.
2.) That’s just patently not true. docker inspect is your friend (see the sketch after this list). Also, you can build your own containers trusting no one, starting FROM scratch (hub.docker.com/_/scratch/).
3.) I think “mess” here is subjective. Docker folders make way more sense than Snap mounts.
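To make point 2 concrete, a rough sketch of the audit workflow (nginx is just a stand-in image, and the FROM scratch binary is hypothetical):

```sh
# Peek inside an image you didn't build:
docker image inspect nginx        # full config: user, env, entrypoint, ports...
docker history --no-trunc nginx   # the commands that produced each layer
# Or trust no one and build from an empty base image:
cat > Dockerfile <<'EOF'
FROM scratch
# /myapp is a statically linked binary you compiled yourself
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF
docker build -t myapp .
```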
It’s all about companies re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer.
We now have a generation of developers that doesn’t understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn’t use Docker or some 3rd-party cloud deploy-from-GitHub service.
oh but the underlying technologies aren’t proprietary
True, but this Docker hype invariably and inevitably leads people down a path that will then require some proprietary solution or dependency somewhere, one that is only required because the “new” technology alone doesn’t deliver as others did in the past. In this particular case it’s the Docker Hub / Kubernetes BS and all the cloud garbage around it.
oh but there are alternatives like podman
It doesn’t really matter if there are truly open-source and open ecosystems of containerization technologies, because in the end people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/RKT/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.
lots of mess in the system (mounts, fake networks, rules…)
Yes, a total mess of devices that are hard to audit, constant RAM waste, and, worst of all, it isn’t as easy to change a docker image / develop things as it used to be.
It’s not true. I mean, sure, there are companies that try to lock you into their platforms, but there’s no grand conspiracy of the lizard people the way OP makes it sound.
Different people want different things from software. Professionals may prefer rootless podman or whatever but a home user probably doesn’t have the same requirements and the same high bar. They can make do with regular docker or with running things on the metal. It’s up to each person to evaluate what’s best for them. There’s no “One True Way” of hosting software services.
This is a really bad take. I’m all for OSS, but that doesn’t mean there isn’t value in things like Docker.
Yes, developers know less about infra. I’d argue that can be a good thing. I don’t need my devs to understand VLANs, the nuances of DNS, or any of that. I need them to code, and code well. That’s why we have devops/infra people. If my devs do know it? Awesome. But docker and containerization allow them to focus on code and let my ops teams figure out how they want to put it in production.
As for OSS - sure, someone can come along and make an OSS solution. Until then - I don’t really care. Same thing with cloud providers. It’s all well and good to have opinions about OSS, but when it comes to companies being able to push code quickly and scalably, then yeah I’m hiring the ops team who knows kubernetes and containerization vs someone who’s going to spend weeks trying to spin up bare iron machines.
I’ll answer your question of why with your own frustration: bare metal is difficult. Every engineer uses a different language/framework/dependencies/whathaveyou, and usually they’ll conflict with others. Docker solves this by containing those apps in their own space. Their code, projects, and dependencies are already installed and taken care of; you don’t need to worry about it.
Take yourself out of the homelab and put yourself in a sysadmin’s shoes. Now, instead of knowing how packages may conflict with others, or whether updating this OS will break applications, you just need to know docker. If you know docker, you can run any docker app.
So, yes, volumes and environments are a bit difficult at first. But that difficulty comes from it being a standard: every docker container is going to need a couple mounts, a couple variables, a port or two open, and if you’re going crazy, maybe a GPU. It doesn’t matter if you’re running 1 or 50 containers on a system, you aren’t going to get conflicts.
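In practice that standard means nearly every container launch looks something like this (names and paths invented for illustration):

```sh
# Hypothetical example: the same handful of knobs nearly every container needs.
docker run -d --name someapp \
  -v /srv/someapp/data:/data \
  -e TZ=UTC \
  -e SOMEAPP_PORT=9000 \
  -p 9000:9000 \
  --gpus all \
  example/someapp:latest
# -v: a couple of mounts; -e: a couple of variables; -p: a port or two;
# --gpus: only if you're going crazy (needs the NVIDIA container toolkit).
```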
As for the security concerns, they are indeed security concerns. Again, imagine you’re a sysadmin: you could direct developers that they can’t use root, and that images need to be built on OSes with the latest patches. But you’re at home, so you’re at the mercy of whoever built the image.
Now that being said, since you’re at their mercy, their code isn’t going to get much safer whether you run it bare-iron or containerized. So, do you want to spend hours for each app figuring out how to run it, or spend a few hours now to learn docker and then have it standardized?
I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.
As for your user & permissions concern, are you aware that docker these days can be configured to map “root” in the container to a different user? Personally I prefer to use podman though, which doesn’t have that problem to begin with
I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.
Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. I didn’t have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services in my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables, and ran docker compose up -d.
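Roughly like this, as a sketch from memory (the image tag and variable names are assumptions; check the Woodpecker docs for the real settings):

```sh
# Sketch: the whole migration is one edit to docker-compose.yml plus one command.
cat > docker-compose.yml <<'EOF'
services:
  # drone:                    # old CI, just commented out; its data stays on disk
  #   image: drone/drone:2
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    environment:
      - WOODPECKER_GITEA=true                         # assumed variable names,
      - WOODPECKER_GITEA_URL=https://git.example.com  # verify against the docs
EOF
docker compose up -d   # the old service stops being managed, the new one comes up
```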
If my server ever crashes, I can just copy it over and start from scratch.
I previously used WikiJS, but since about a year ago I switched to Grav.
The really nice thing is not needing an additional database anymore. It’s really just markdown pages, config files, and PHP plugins.
By default it looks like a blogging platform, but with the learn2 theme it also works pretty well as a documentation website. The official docs are written using that theme.
I wasn’t completely happy with the defaults though, so I made some modifications for my own wiki. Some limited knowledge of HTML and CSS is required, and PHP or JavaScript doesn’t hurt either.
You can find the theme, plugins, and pages in my repo as well if you want to use any of it.