linux


Shape4985, in Reminder to clear your ~/.cache folder every now and then
@Shape4985@lemmy.ml avatar

Bleachbit is good for clearing up some space

OsrsNeedsF2P,

And deleting emails

zingo,

Even Hillary knows that one.

Come on!

/s

ArcaneSlime, (edited ) in Reminder to clear your ~/.cache folder every now and then

…yeah let me go check that…

13,574 files totaling 1.7 GB, not too bad. Hey OP, how do you get to this view? It looks like we both use nautilus, but when I select “properties” on the .cache folder it looks different.
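If the file managers' properties dialogs disagree, the same numbers are easy to get from a terminal. A quick sketch, assuming the XDG default cache location:

```shell
# Count files and total size of the user cache directory.
# Falls back to ~/.cache when XDG_CACHE_HOME is unset.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}"
echo "Files: $(find "$CACHE_DIR" -type f 2>/dev/null | wc -l)"
echo "Size:  $(du -sh "$CACHE_DIR" 2>/dev/null | cut -f1)"
```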

Zangoose,
@Zangoose@lemmy.world avatar

I use thunar (with ePapirus-Dark icons, which is probably what makes it look like nautilus). I liked nautilus when I used it, but thunar has a bit more functionality that I like.

ArcaneSlime,

Ah thanks!

kaesaecracker,

the screenshot does not look like nautilus, maybe xfce?

redd, in Reminder to clear your ~/.cache folder every now and then
@redd@discuss.tchncs.de avatar

Is it safe to clear ~/.cache/mozilla/ while Firefox is running?

Pantherina,

No.

Zangoose,
@Zangoose@lemmy.world avatar

Maybe not while it’s running, but .cache is intended for temporary files only, so expecting files to permanently be there should be treated as a bug.
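A middle ground between clearing everything and clearing nothing is to prune only stale entries. A sketch, assuming GNU find; preview first, delete only once you trust the list:

```shell
# Prune cache files not modified in over 30 days (GNU find assumed).
# The -print line is a dry run; uncomment -delete once the list looks right.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}"
find "$CACHE_DIR" -type f -mtime +30 -print
# find "$CACHE_DIR" -type f -mtime +30 -delete
```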

kariboka, in How far away is GIMP 3 from GIMP 4?
archy, in One of these 6 will become Plasma 6. Wallpaper Which one do you prefer?

The 3rd one fits the KDE style; 6 is amazing too.

smileyhead, (edited ) in One of these 6 will become Plasma 6. Wallpaper Which one do you prefer?

3 and 4 are nice, but as something someone would set themselves. They have too much character and detail to be the default when Plasma does not target any specific demographic.

1, 2 and 5 are nice abstract wallpapers, but honestly boring, as we have had stuff like that for years.

6 is the best. It is a wallpaper with some style, but not too much character.

Edit: Just my opinion and for my eye, of course.

velox_vulnus, in CLI Editors with Distrobox?

Use Nix expressions or flakes for that - just copy a simple example of default.nix or shell.nix from a git host and tweak it to your liking. Personally, I am not a fan of how Nix handles Python, and still can’t get used to how Python packages have to be included in expressions, so I create a temporary virtual environment for the time being.

Tier1BuildABear, in One of these 6 will become Plasma 6. Wallpaper Which one do you prefer?
@Tier1BuildABear@lemmy.world avatar

The one with the clock better have a moving clock otherwise I hate it. Static clocks should never be part of a wallpaper.

linuxoveruser, in CLI Editors with Distrobox?

You should have no problem doing Python dev on NixOS; it’s basically made for setting up development environments like this without the need for containers. You should just be able to set up a nix shell for your project that contains Python and all the necessary dependencies, and then enter the shell. Then you’ll have all the right dependencies installed for your project and still have access to any editors you have installed.
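As a rough sketch of what such a shell might look like, here is a hypothetical shell.nix; the Python packages listed are placeholders, not anything the project above actually needs:

```nix
# Hypothetical shell.nix: a Python dev shell.
# Package names here are assumptions; swap in your project's dependencies.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [
    (pkgs.python3.withPackages (ps: with ps; [
      requests
      numpy
    ]))
  ];
}
```

Running `nix-shell` in that directory drops you into an environment where `python3` can import those packages, while editors installed on the host stay on PATH.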

linuxoveruser,

I’d also check out poetry2nix if you’re a poetry fan and interested in building your package with nix. See www.tweag.io/blog/2020-08-12-poetry2nix/.

GnomeComedy, in I use linux for the same reason I wear fuzzy socks and sweaters

This all falls apart as a “reason” when you consider Windows Home vs Windows Enterprise.

The better reason is that Windows Home sucks.

Synthead, (edited ) in Am I wrong to assume that docker is perfect for single board computers that relies on low life expectancy drives (microsd)?

I think Docker is a tool, and it depends on how you implement said tool. You can use Docker in ways that make your infra more complicated, less efficient, and more bloated with little benefit, if not a loss of benefits. You can also use it in a way that promotes high uptime, fail-overs, responsible upgrades, etc. Just “Docker” as-is does not solve problems or introduce problems. It’s how you use it.

Lots of people see Docker as the “just buy a Mac” of infra. It doesn’t make all your issues magically go away. Me, personally, I have a good understanding of what my OS is doing, and what software generally needs to run well. So for personal stuff where downtime for upgrades means that I, myself, can’t use a service while it’s upgrading, I don’t see much benefit for Docker. I’m happy to solve problems if I run into them, also.

However, in high-uptime environments, I would probably set up a k8s environment with heavy use of Docker. I’d implement integration tests with new images and ensure that regressions aren’t being introduced as things go out with a CI/CD pipeline. I’d leverage k8s to do A-B upgrades for zero downtime deploys, and depending on my needs, I might use an elastic stack.

So personally, my use of Docker would be for responsible shipping and deploys. Docker or not, I still have an underlying Linux OS to solve problems for; they’re just housed inside a container. It could be argued that you could use a first-party upstream Docker image for less friction, but in my experience, I eventually want to tweak things, and I would rather roll my own images.

For SoC boards, resources are already at a premium, so I prefer to run on metal for most of my personal services. I understand that we have very large SoC boards that we can use now, but I still like to take a simpler, minimalist approach with little bloat. Plus, it’s easier to keep track of things with systemd services and logs anyway, since it uniformly works the way it should.

Just my $0.02. I know plenty of folks would think differently, and I encourage that. Just do what gives you the most success in the end 👍

avidamoeba, in Am I wrong to assume that docker is perfect for single board computers that relies on low life expectancy drives (microsd)?
@avidamoeba@lemmy.ca avatar

Unless you make your host OS read-only, it will keep writing while running your docker containers. Furthermore, slapping read-only onto a docker container won’t make the OS you’re running in it able to run correctly with an RO root fs. The OS must be able to run with an RO root fs to begin with, which is the same problem you need to solve for the host OS. So you see, it’s the same problem, and docker doesn’t solve it. It’s certainly possible to make a Linux OS that runs on an RO root fs, and that’s what you need to focus on.
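To make concrete what running on an RO root fs roughly involves, here is a sketch of an /etc/fstab for such a setup; the device name, sizes, and the exact set of tmpfs mounts are assumptions and vary per distro:

```
# Hypothetical /etc/fstab: read-only root, tmpfs for paths that must stay writable.
/dev/mmcblk0p2  /         ext4   ro,noatime          0  1
tmpfs           /tmp      tmpfs  defaults,size=64m   0  0
tmpfs           /var/log  tmpfs  defaults,size=32m   0  0
tmpfs           /var/tmp  tmpfs  defaults,size=16m   0  0
```

Anything the OS or services still try to write outside those tmpfs mounts will fail, which is exactly the part that needs per-distro work.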

losttourist, in Am I wrong to assume that docker is perfect for single board computers that relies on low life expectancy drives (microsd)?
@losttourist@kbin.social avatar

I'm not sure why Docker would be a particularly good (or particularly bad) fit for the scenario you're referring to.

If you're suggesting that Docker could make it easy to transfer a system onto a new SD card if one fails, then yes that's true ... to a degree. You'd still need to have taken a backup of the system BEFORE the card failed, and if you're making regular backups then to be honest it will make little difference if you've containerised the system or not, you'll still need to restore it onto a new SD card / clean OS. That might be a simpler process with a Docker app but it very much depends on which app and how it's been set up.

micke, in One of these 6 will become Plasma 6. Wallpaper Which one do you prefer?

The red tree 👍

sir_reginald, (edited ) in Am I wrong to assume that docker is perfect for single board computers that relies on low life expectancy drives (microsd)?
@sir_reginald@lemmy.world avatar

honestly, it’s not worth it. Hard drives are cheap; just plug one in via USB 3 and do all the write operations there. That way your little SBC doesn’t suffer the performance overhead of using docker.

aksdb,

The point with an external drive is fine (I did that on my RPi as well), but the point with performance overhead due to containers is incorrect. The processes in the container run directly on the host. You even see the processes in ps. They are simply confined using cgroups to be isolated to different degrees.
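This is easy to verify on any Linux box: every process, containerized or not, records its cgroup membership in /proc. A quick sketch (Linux only):

```shell
# Every Linux process lists its cgroup membership in /proc/<pid>/cgroup.
# Inside a container this shows the container's cgroup scope; on the host,
# the same file for a containerized PID shows the same thing -- it is an
# ordinary host process, just confined.
cat /proc/self/cgroup
```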

sir_reginald,
@sir_reginald@lemmy.world avatar

docker images have a ton of extra processes from the OS they were built in. Normally a light distro is used to build images, like Alpine Linux. But still, you’re executing a lot more processes than if you were installing things natively.

Of course the images do not contain the kernel, but they still contain a lot of extra processes that would be unnecessary if executing natively.

IAm_A_Complete_Idiot,

Containers don’t typically have inits; your process is the init, so no extra processes are started beyond what you care about.

aksdb,

To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script to do some filesystem/permission setup and then run a single process).

Containers running multiple processes are possible, but hard to pull off and therefore rarely used.

What you likely think of are the files included in the images. Sure, some images bring more libs and executables along. But they are not started and/or running in the background (unless you explicitly start them as the entrypoint or using for example docker exec).
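A minimal Dockerfile illustrates the distinction: the image ships many files, but only the entrypoint actually runs. The base image tag and binary name here are assumptions:

```dockerfile
# Hypothetical single-process image: the Alpine base brings hundreds of
# files along, but only the entrypoint process runs when the container starts.
FROM alpine:3.19
COPY myserver /usr/local/bin/myserver
ENTRYPOINT ["/usr/local/bin/myserver"]
```

`docker top` (or plain `ps` on the host) against a container built like this shows exactly one process, despite the filesystem contents.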
