Snaps were designed to solve dependency hell, deliver up-to-date software, and improve security, among other things. If it weren’t for the fact that Flatpak does a better job, many more people would be praising Snap.
It’s good that Canonical is trying to make the desktop better. It would be better if they focused their efforts elsewhere.
I run headless Linux machines exclusively, and snapd always managed to show up somehow. It gets pulled in through shared lib dependencies, so shit like Firefox would end up installed with snap mount points on my machine. Just a bunch of useless, noisy garbage on a headless box. I finally solved the problem by switching to Debian.
I don’t care what flatpak does or does not do, IMHO snap sucks objectively.
Flatpak is intended for end-user graphical applications, and not many terminal applications are packaged for Flatpak, so it makes sense that it wouldn’t show up on a headless machine. Snap, IIRC, was originally intended for Canonical’s embedded systems.
Tbh, if you can’t tap out Ethernet frames with a Morse key and decode the response by watching the blinking of an LED wired to the RX pair then you really don’t deserve to be on the internet. Git Gud.
State governments are usually required to put all of their computers up for sale through surplus (hard drives usually removed and destroyed). I have been through that process at a state college and a university; they aren’t just thrown away. I imagine there is a similar process for federal computers.
Yeah, back when Raspberry Pis and the like were impossible to get hold of, I knew a few people who would pick up old OptiPlex machines to use as media servers. Old Dells used to be very reliable. Throw whatever distro on there, GUI or not, and the shitty graphics cards wouldn’t matter much.
Literally just talked to my mother-in-law who was talking about throwing out her laptop because Windows 10 is losing support and she can’t upgrade to Windows 11.
It would probably run linux perfectly.
But I would never put linux on it. I am not doing tech support for my MIL, who just admitted to me that she “locked down her machine because she fixed the registry issues Windows has and turned on IPv6 on her router” and alluded to changing other settings, but can’t understand why her “wifi keeps dropping out” and thinks it’s because the neighbors installed a Ring doorbell.
A lot of businesses. I’ve stocked an entire network lab out of waste bins from buildings with tech companies in them. Laptops, monitors, network gear, cabling. I once scored a whole box of 100W USB-C chargers.
A few days ago I downgraded glibc (I’m a dumdum) because it was recommended in a Reddit thread for a problem I was having. Afterwards I couldn’t even chroot. Fortunately I could update with pacman --root.
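In case anyone ends up in the same spot, here’s a rough sketch of that recovery, assuming you boot an Arch live USB and the broken root partition is /dev/sda2 (the device name is illustrative):

```
# chroot fails because the target's glibc is broken, but pacman from
# the live environment can operate on the mounted tree directly:
mount /dev/sda2 /mnt
pacman --root /mnt -Sy glibc
```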
Yay, happy hail Satan day everyone. I remember when Intel chickened out and rounded up their 666 megahertz Pentium III processors to report as 667 megahertz. Absolute cowards; no wonder China is kicking their ass.
iirc the no Windows 9 thing was actually because a lot of software ran a compatibility check like:

```
if windows version = "windows 9*" then open legacy mode
```

This worked for software written for newer Windows like XP while still allowing a legacy mode on older versions like 95 and 98. Problem was, this same check would also put that software into legacy mode on Windows 9. So they called it Windows 10 to sidestep the compatibility issues.
It’s great to see what lengths Microsoft goes to in order to keep backwards compatibility, compared to how a minor glibc update broke Linux apps without much warning. Without supporting legacy workflows, I don’t think Microsoft would have the market share it has today.
I believe that’s apocryphal… Some people came up with that theory on Twitter, but AFAIK it has never been confirmed. And it would only matter in some edge case of an edge case.
And let’s be real, if backwards compatibility really mattered, they could have made the API return “Nine” or “IX” or whatever and used “9” everywhere else in the UI, marketing, packaging, whatever.
The real reason is probably the simplest and stupidest: Microsoft’s marketing department got impatient and went for the big round number because 10>9. Also why NVIDIA went 9xx->10xx->20xx… bigger number = better, it’s really that mind-numbingly stupid.
If it’s anything like Korean (and it probably is), there are specific contexts in which you can use each version of the word, so it’s not like you could simply swap shi for yon.
As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running.
Wow, 30 times in 3 years? I wonder if that’s down to specific packages or hardware you had. I had 5 computers (2 desktops, 3 laptops) running Manjaro for many years and still haven’t had a single system break, even while using a lot of AUR packages.

Though last year I moved all of my computers to Arch, Debian, and Proxmox - Arch mainly because I wanted to configure my systems more fully.
It’ll really depend on your local job market. I was on a serious job hunt earlier this year and I couldn’t find a single Linux job which asked for LFCS certs. There were a couple which asked for Red Hat certs though. Of course, this could be specific to where I live, so I’d recommend looking at some popular job sites for where you live (+ remote jobs too) and see how many, if any, ask for LFCS, and you’d get your answer.
Should I focus more on dev ops? Security? Straight SysAdmin?
From what I’ve seen so far, the days of “traditional” Linux sysadmin roles are numbered, if not long gone already - it’s all mostly DevOps-y stuff. Same with traditional security, these days it’s more about DevSecOps.
As a modern Linux sysadmin, the technologies you should be looking at would be Ansible, Kubernetes, Terraform, containers (Docker mainly, but also Podman/LXD), GitOps, CI/CD and Infrastructure as Code (IaC) concepts and tools.
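If those names are new to you, a few one-liners give a feel for the day-to-day (the inventory file and image names here are made up):

```
ansible all -i hosts.ini -m ping     # Ansible: ad-hoc module run against an inventory
kubectl get pods --all-namespaces    # Kubernetes: inspect workloads across a cluster
terraform plan                       # Terraform/IaC: preview infrastructure changes
docker build -t myapp:latest .       # Containers: build an image from a Dockerfile
```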
Some Red Hat shops may also ask for OpenShift, Ansible Tower, Satellite etc. experience. IBM shops also use a lot of IBM tools such as IBM Cloud Paks, Multicloud Management, and AIOps/Watson.
And finally there’s all the “cloud” stuff like AWS, Azure, GCP specific things - and they have their own terminologies that you’d need to know and understand (eg “S3”, “Lambda” etc) and they have their own certs to go with it. I suspect a “cloud” cert will net you more jobs than LFCS.
So as you’d probably be thinking by now, all of the above isn’t something you’d know from just using desktop Linux. Of course, desktop Linux experience is certainly useful for understanding some of the core concepts and how it all works under the hood, but unfortunately that experience alone just isn’t going to cut it if you’re out looking for a job.
As I mentioned before, start looking for jobs in your area/relevant to you and look at the technologies they’re asking for, note down the terms which appear most frequently and the certs they’re asking for, and start preparing for them. That is, assuming it’s something you want to work with in the future.
Personally, I’m not a big fan of all this new tech (I’m fine with Ansible and containers, but I don’t like the industry’s dependency on proprietary tech like Docker Desktop or Amazon’s and Red Hat’s stuff). I just wanted to work on pure Linux, with all the standard POSIX/GNU tools and DEs that we’re familiar with, but sadly those sorts of jobs don’t really exist anymore.
Sorry, I guess I meant Docker Desktop and some of their other proprietary business/enterprise tools (like Docker Scout) that companies have started to use - the stuff that requires a paid subscription. The Docker engine itself remains open source, of course, but a lot of their stuff that’s targeted at enterprises isn’t. These days when companies say “Docker” they don’t mean just the engine; they’re referring to the entire ecosystem.
Also, I have a problem with Docker itself. My main issue is that, on Linux, native container tech like Podman/LXD work, perform and integrate better (at least, from my limited experience), but the industry prefers Docker (no surprises there). As a Linux guy, naturally I want to use the best tool for Linux, not what’s cross-platform (when I don’t care about other platforms). But I can understand why companies would prefer Docker.
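For what it’s worth, Podman’s CLI tracks Docker’s closely, so a common setup on Linux (it won’t cover every Docker-specific feature, but it works for a lot of workflows) is just:

```
# Treat podman as a drop-in for the docker CLI:
alias docker=podman
# Fully-qualified image names avoid registry ambiguity under Podman:
podman run --rm -it docker.io/library/alpine:latest sh
```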
Ah I see what you mean, that stuff is pretty annoying.
Well, at least the core tech remains open. Though I agree, I’d like to see more agnosticism from the industry in regard to the tool implementing the containers, since they’re pretty much all interoperable to a certain extent, as I understand it.
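Right - they all speak OCI images, so in principle you can mix tools. Something like this (the image name is made up) should round-trip fine:

```
# Build with Docker, export the image archive, run it with Podman:
docker build -t localhost/demo:1 .
docker save localhost/demo:1 -o demo.tar
podman load -i demo.tar
podman run --rm localhost/demo:1
```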
Probably a good idea to look for a different client, call me tinfoil but I wouldn’t want to touch a very old mechanism that is supported/pushed by a very recognisable 3 letter agency
A surprising number of services (including Azure, last I tried) can only handle RSA keys, so after trying ECDSA-only for a while I ended up adding an RSA key again.
With that said - it’s 2023; in almost all cases you should have your keys in a hardware module nowadays, in which case you’d use a different command for key generation.
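For reference, the generation commands I mean look roughly like this (the file paths are just examples; the -sk type needs OpenSSH 8.2+ plus a FIDO2 token):

```
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa        # RSA, for services that accept nothing else
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519        # modern default where supported
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk  # private key stays on the hardware token
```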
Actually, it’s the same story with TLS 1.3 and TLS 1.2. A bunch of sites still don’t support TLS 1.3 (e.g. arstechnica.com, startpage.com) and some only support TLS 1.2 with RSA (e.g. startpage.com).
You can try this yourself in Firefox by disabling ciphers (search for security.ssl3 in about:config) or by setting the minimum TLS version to 1.3 (security.tls.version.min = 4 in about:config).
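You can also probe a server from the command line with OpenSSL, assuming a build recent enough to know TLS 1.3:

```
# Succeeds only if the server negotiates TLS 1.3:
openssl s_client -connect startpage.com:443 -tls1_3 </dev/null
# Compare against a forced TLS 1.2 handshake:
openssl s_client -connect startpage.com:443 -tls1_2 </dev/null
```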
Strangely enough, TLS 1.3 still doesn’t support signed ed25519 certificates :| The NIST P-256, P-384 and P-521 curves are suspected by some of being “backdoored”, i.e. of having deliberately chosen mathematical weaknesses. I’m not an expert, just a noob security/selfhosting enthusiast, but I don’t want to depend on curves made by the NSA or other spy agencies!
I’m also wondering whether the EU is going to implement something similar with all their new spying laws currently being discussed…
I’d recommend Arch, because with NixOS you end up having to tinker too much. Besides, if you need to use Linux for development purposes, Arch follows the usual Linux/Unix conventions, while with NixOS you’d end up tinkering… And you can always use the Nix package manager on Arch.

Just use Arch with GNOME or KDE; that will save you a ton of time.
Huh, I never expected anyone to recommend Arch to me because you have to tinker too much with an alternative distro. I thought simplicity was the reason why people liked NixOS, no?
I set up my Arch Linux system in a weekend with btrfs snapshots and everything I need. About once a quarter I tinker with it for 30 minutes, either to fix a broken update or to build some custom solution to a minor problem. It has been running like this for 5 years. And snapshots allow me to roll back any fuckups in a minute.
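The rollback really is that quick. A sketch of what it looks like, assuming snapper manages the btrfs snapshots (the snapshot number is illustrative):

```
snapper list              # find the last known-good snapshot
sudo snapper rollback 42  # make a writable copy of it the new default subvolume
sudo reboot
```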
I tried to set up NixOS twice, because I love the concept. Both times I tinkered with it for 1 to 2 weeks and had to take paid leave. In the end, some stuff still didn’t work the way I wanted. Any customization that isn’t already natively implemented in Nix is a huge pain in the ass to add. Things that would be a 5-minute config edit on Arch took hours on Nix to make them rEpRoDuCiBLe. I have experienced no additional benefit over btrfs snapshots.

Tldr: If I could pay somebody $100 to set up NixOS just the way I want it, I’d use it. But since I have to do it in my own free time, I won’t.
Nix is a pain; not everything works. Example: networkd. You’re supposed to be able to put its options in the config. Sure, most of them work, but I have two in my config that Nix will not accept. And they’re valid - I’m running them on my current OS - but my NixOS config refuses to build with them. Another example: nftables rules. Again, you’re supposed to be able to put them in the config file, but I have some completely valid rules that Nix does not like, and it won’t build with them. I’ll keep tinkering with it, but it still needs work.
Comparatively, NixOS is complex while Arch is simple. NixOS diverges a great deal from traditional Linux distributions, beginning with its different filesystem hierarchy, which breaks a ton of apps and requires workarounds like patching binaries or simulating a standard filesystem… In the long run, you will have to deal with many NixOS-specific issues.
Because you’re going to uni, it’s better to focus on a mostly-just-works distro with an up-to-date repository, and that’s Arch. In your free time later on, maybe try NixOS in a VM just to get a feel for it. And again, you can use Nix on Arch to get apps from Nixpkgs.
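Getting Nix onto Arch is pretty painless; roughly (package and unit names per the Arch repos, and you may need to re-login for the group changes to apply):

```
sudo pacman -S nix
sudo systemctl enable --now nix-daemon.service
# Add a Nixpkgs channel, then try a package without touching the system:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
nix-shell -p hello --run hello
```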
This all comes from an originally-Arch user turned experienced NixOS user.
This is my experience as well. I went back to Arch after trying NixOS for a few weeks; I just ended up spending way too much time tinkering with the system instead of using it. Also, I feel like a major advantage of NixOS is only realized if you have multiple machines, and I only have a main desktop.
I’m not sure I agree with this… I’m using Nix on several different generations of ThinkPads, two older MacBooks (one Air and one Pro), two older-generation iMacs, as well as my home-built PC and an OEM-built PC… all with little to no tinkering whatsoever.

All my tinkering was in first setting Nix up and figuring out how to use it… Then I saved my config and use the same one on all the machines (albeit with subtle changes on first install).

I’ve used Arch a handful of times over the years, and it is, without question, significantly more “needy” over time, IMO.
Guess you never had to package unusual or hard-to-package software, like things that need fixed-output derivations or come from under-supported ecosystems, or tried to use a common Python development environment under NixOS, or to run prebuilt binaries under NixOS… the list goes on.
I have not… and in fairness to me, OP didn’t mention needing any of those things. OP mentions not having installed anything from the AUR on Arch, which to me just means they’re looking for something stable out of the box - which Nix has been for me across many platforms.