13,574 items totaling 1.7 GB, not too bad. Hey OP, how do you get to this view? It looks like we both use Nautilus, but when I select “properties” on the .cache folder it looks different.
I use Thunar (with the ePapirus-Dark icon theme, which is probably what makes it look like Nautilus). I liked Nautilus when I used it, but Thunar has a bit more of the functionality I like.
3 and 4 are nice, but more as something someone would set themselves. Too much character and detail to be the default, since Plasma doesn't target any specific demographic.
1, 2 and 5 are nice abstract wallpapers, but honestly a bit boring; we've had stuff like that for years.
6 is the best. It's a wallpaper with some style, but not too much character.
Edit: just my opinion and my own eye, of course.
Use Nix expressions or flakes for that - just copy a simple example of default.nix or shell.nix from a git host and tweak it to your liking. Personally, I am not a fan of how Nix handles Python, and still can’t get used to how Python packages have to be included in expressions, so I create a temporary virtual environment for the time being.
you should have no problem doing Python dev on NixOS, it’s basically made for setting up development environments like this without the need for containers. you should just be able to set up a nix shell for your project that contains python and all the necessary dependencies, and then enter the shell. then you’ll have all the right dependencies available for your project and still have access to any editors you have installed
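For example (the package names here are just placeholders, swap in whatever your project actually needs), you can spin up a throwaway environment straight from the command line, or put the equivalent expression in a shell.nix and simply run nix-shell in the project directory:

```
# Hypothetical packages – replace requests/numpy with your project's dependencies.
# Opens a shell whose python3 has those packages on its import path:
nix-shell -p "python3.withPackages (ps: with ps; [ requests numpy ])"
```

Once you exit the shell, nothing lingers in your environment beyond the Nix store.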
I think Docker is a tool, and it depends on how you implement said tool. You can use Docker in ways that make your infra more complicated, less efficient, and more bloated, with little benefit if not a net loss. You can also use it in a way that promotes high uptime, fail-overs, responsible upgrades, etc. “Docker” as-is doesn’t solve or introduce problems on its own; it’s how you use it.
Lots of people see Docker as the “just buy a Mac” of infra. It doesn’t make all your issues magically go away. Personally, I have a good understanding of what my OS is doing and what software generally needs to run well, so for personal stuff, where downtime for upgrades just means I can’t use a service while it’s upgrading, I don’t see much benefit to Docker. I’m also happy to solve problems when I run into them.
However, in high-uptime environments, I would probably set up a k8s environment with heavy use of Docker. I’d implement integration tests against new images in a CI/CD pipeline to make sure regressions aren’t introduced as things go out. I’d leverage k8s to do A/B upgrades for zero-downtime deploys, and depending on my needs, I might use an Elastic stack.
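A rough sketch of what that looks like with kubectl (the deployment, container, and image names here are made up):

```
# Hypothetical names. With the default RollingUpdate strategy, k8s brings up new
# pods before terminating the old ones, so the service stays up during the deploy:
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
kubectl rollout status deployment/myapp
# If the new image turns out to be a regression, step back to the previous revision:
kubectl rollout undo deployment/myapp
```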
So personally, my use of Docker would be for responsible shipping and deploys. Docker or not, I still have an underlying Linux OS to solve problems for; those problems are just housed inside a container. It could be argued that a first-party upstream Docker image gives you less friction, but in my experience I eventually want to tweak things, so I’d rather roll my own images.
For SoC boards, resources are already at a premium, so I prefer to run on metal for most of my personal services. I know there are very capable SoC boards available now, but I still like to take a simpler, minimalist approach with little bloat. Plus, it’s easier to keep track of things with systemd services and logs anyway, since everything works uniformly, the way it should.
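For example (the service name is a placeholder for whatever runs on the board), everything is visible with the same couple of commands:

```
# "myservice" is just a placeholder unit name.
systemctl status myservice              # is it up, when did it last restart?
journalctl -u myservice --since today   # its logs, in the same place as everything else's
journalctl -u myservice -f              # follow the logs live
```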
Just my $0.02. I know plenty of folks would think differently, and I encourage that. Just do what gives you the most success in the end 👍
Unless you make your host OS read-only, it will itself keep writing while running your Docker containers. Furthermore, slapping read-only on a Docker container won’t make the OS you’re running in it able to run correctly with an RO root fs; that OS must be able to run with an RO root fs to begin with, which is the same problem you need to solve for the host OS. So you see, it’s the same problem and Docker doesn’t solve it. It’s certainly possible to make a Linux OS that runs on an RO root fs, and that’s what you need to focus on.
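To illustrate (the image name is just an example): the --read-only flag only mounts the container's root fs read-only, and the software inside still has to tolerate that, which usually means handing it tmpfs mounts for the few paths it insists on writing to:

```
# Hypothetical image. The container's root fs is mounted read-only, but the app
# inside has to cope with that; tmpfs mounts cover the paths it still writes to:
docker run --read-only --tmpfs /tmp --tmpfs /run some-image
# The host OS underneath keeps writing to its own root fs the whole time.
```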
I'm not sure why Docker would be a particularly good (or particularly bad) fit for the scenario you're referring to.
If you're suggesting that Docker could make it easy to transfer a system onto a new SD card if one fails, then yes, that's true ... to a degree. You'd still need to have taken a backup of the system BEFORE the card failed, and if you're making regular backups then, to be honest, it makes little difference whether you've containerised the system or not: you'll still need to restore it onto a new SD card / clean OS. That might be a simpler process with a Docker app, but it very much depends on which app and how it's been set up.
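For what it's worth, the backup can be as crude as imaging the whole card from another machine (the device path below is an assumption, check with lsblk first):

```
# Assuming the SD card shows up as /dev/mmcblk0 – verify with lsblk before running.
sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress conv=fsync
```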
honestly, it’s not worth it. hard drives are cheap; just plug one in via USB 3 and put all the write operations there. that way your little SBC doesn’t suffer the performance overhead of using docker.
The point about an external drive is fine (I did that on my RPi as well), but the point about performance overhead due to containers is incorrect. The processes in the container run directly on the host; you even see them in ps. They are simply confined using namespaces and cgroups to be isolated to different degrees.
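You can see this for yourself (the container and image names here are arbitrary examples):

```
# Start any container...
docker run -d --name demo nginx
# ...and its process shows up in the host's process table like any other:
docker top demo
ps aux | grep nginx
# The "confinement" is just namespace/cgroup membership, visible under /proc:
cat /proc/$(docker inspect -f '{{.State.Pid}}' demo)/cgroup
```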
docker images have a ton of extra processes from the OS they were built from. Normally a light distro like Alpine Linux is used to build images, but you’re still executing a lot more processes than if you were installing things natively.
Of course the images don’t contain the kernel, but they still contain a lot of extra processes that would be unnecessary if you were running things natively.
To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script that does some filesystem/permission setup and then runs a single process).
Containers running multiple processes are possible, but hard to pull off and therefore rarely used.
What you’re likely thinking of are the files included in the images. Sure, some images bring along more libs and executables. But those are not started and/or running in the background (unless you explicitly start them, as the entrypoint or via, for example, docker exec).
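This is easy to check for yourself (the image and container names are just examples):

```
# Most images declare exactly one entrypoint/command:
docker inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' nginx
# Anything else shipped in the image only runs if you start it yourself, e.g.:
docker exec -it demo sh
```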