I find the concept of NixOS incredibly cool, and among immutable operating systems it's the one I'd in theory be most interested in!
But the last time I tried it, I found that I was constantly fighting the system, and the documentation is scattered and confusing. There's a lot of "Oh hey, use Flakes!", but most of the documentation doesn't actually cover Flakes because they're still considered experimental, even though it feels like the majority of the community uses them.
I also had software that would just randomly break, and when trying to track down the changes in Nixpkgs I couldn't find anything that would explain why it broke. Which… seems counterproductive to one of Nix's strong points.
One example I ran into is OpenRazer: the service is no longer being exposed, which was reported 7 months ago. I did my best to track down the changes that broke it, but I suspect it's possibly a lower-level change outside the OpenRazer package/module that caused it.
I get the impression that if I wanted to try to fix it, I'd have to run the massive gauntlet of understanding how all of NixOS' internals work, and while yes, someday I'd love to have a better understanding, right now I'm more focused on making sure the things I'd like (or even need in some cases, like software for my job) just work.
These comments really speak to me as someone who is comfortable in Arch but mildly interested in NixOS. The concept seems great, and it seems to work very smoothly when it works. Yet there are always these war stories where people have had to fight the system, to debug some misbehaving hack that is nonetheless required to smash a particular package into the NixOS mould. It is discouraging. The impression I get is that NixOS involves more time doing OS curation chores than Arch does, and Arch already hits the limit of my willingness.
Flakes are another issue. The pre-Flakes way seems to be de facto deprecated, yet the new, Flakes-based way is experimental. I don't want to waste time learning a doomed paradigm, and I don't want to depend on anything experimental.
For me, configuration files in git plus btrfs snapshots is just so straightforward. I want to see NixOS as a better way, but I can’t.
Pretty much, unfortunately. It sucks, because in order for Nix to accomplish its vision, things have to be like this - I don’t really see a way around it.
I am amazed by what the Nix[OS] community has accomplished and have great respect for them, but I can't do it. If the documentation (and procedures, e.g. Flakes) were a bit more structured, I'd probably be more willing to put time into figuring it out, but… that's just not the case currently.
I have similar feelings about immutable distros: it's a very intriguing concept, but every single time I've tried one out, I've run into some issue that requires hacks to work around. If I did end up using one long-term, it'd probably be something from Universal Blue, because it seems fairly easy to just modify the image. However, it's still a massive paradigm shift getting used to making changes at build time (of the image), rather than making changes to your system at runtime.
For now, I just do pretty much the same thing you do, important dotfiles go into git, and btrfs snapshots for “Uh oh, something broke and I need things to work right now” moments (which is thankfully quite rare).
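The git half of that setup can be as small as a bare repo plus a one-line wrapper, so no symlink farm is needed. A minimal sketch (the repo path, the `config` name, and the identity are my own example choices, not any standard):

```shell
#!/bin/sh
# Sketch: track dotfiles with a bare git repo and a wrapper function.
# Runs in a throwaway directory so it's safe to try as-is.
set -e
demo_home=$(mktemp -d)          # sandboxed stand-in for $HOME
cd "$demo_home"

git init --quiet --bare "$demo_home/.dotfiles"
config() { git --git-dir="$demo_home/.dotfiles" --work-tree="$demo_home" "$@"; }
config config status.showUntrackedFiles no   # keep `config status` readable

echo 'export EDITOR=vim' > .bashrc
config add .bashrc
config -c user.name=demo -c user.email=demo@example.com \
       commit --quiet -m "track .bashrc"
config log --oneline            # prints: <hash> track .bashrc
```

The btrfs half is just a read-only snapshot before risky changes, e.g. `btrfs subvolume snapshot -r / /.snapshots/pre-update` as root, which the sketch skips since it needs a btrfs filesystem.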
Sorry, I don't know if it is documented anywhere, but in summary: the project started with bcache (block cache), from a single developer (Kent Overstreet, a.k.a. Evil Pie Pirate) who in 2010 explained he was building a module for the Linux kernel.
Bcache is a method of using a fast SSD as a caching layer for slow but large HDDs. Even as such, the project was quite ambitious, but then, while the developer was working on an evolution of bcache (a kind of lessons-learned re-implementation), the project grew into a general-purpose POSIX filesystem.
Considering the origins of the most popular filesystem implementations, expecting a single individual to succeed at creating a general-purpose one sounded overambitious.
Then in 2013, out of the blue, Kent left Google to work solely on this project. (In reality, though, he also spent two years at Datera along the way.)
So how do you finance a single developer working on a filesystem from 2013 up to today, when it finally got merged into the kernel?
Patreon. The whole thing was financed through it.
That said, there are other collaborators, like Daniel Hill, Dave Chinner, or Brian Foster. Still, what's surprising is how this started as a side project and, through Patreon funding, eventually became the main competitor of corporate-developed filesystems.
Note: a bit of hype control here: btrfs, which would be the main "competitor", was merged into the kernel 14 years ago, so bcachefs still has a long way to go before we can trust it with our data.
apk is no more and no less than dpkg used by itself, or opkg. As for what I use: Arch at home, and Ubuntu on my virtual machines (because it's officially supported by my hosting provider). They work for me. I like them.
I think a lot of what drives the creation of redundant open source tools is that the urge to address a matter of personal taste meets the urge to start a new project, so people create new things that are different in key ways from older ones, but not necessarily better, and not necessarily even different enough to justify the amount of work that goes into them.
In some ways it feels a lot easier to start a new project than to build on an existing one:
You don’t have to familiarize yourself with the old code, which may be in a language you don’t know or don’t like
You don’t have to deal with the existing maintainers, who may or may not be supportive of the changes you want to make
You don’t have to support use cases that don’t matter to you personally
Built-in encryption in bcachefs sounds great, that’s the only thing that BTRFS has been missing for me so far.
Bonus points if it can be unlocked at boot like LUKS, and double bonus points if it's scriptable like cryptsetup (retrieve the key from a hardware device, the network, a flash stick, etc.).
Yeppp, this is what I currently do, and IMO it offers the best performance compared to using something like gocryptfs in userspace on top of BTRFS. Pretty happy with it, except for a few small things…
It can be a bit of a faff to mount on a new machine if its file manager doesn't support encrypted volumes natively ☹️. On your daily driver you can have it all sorted in your crypttab and fstab, so it's not an issue there.
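For reference, the crypttab/fstab pairing amounts to just two entries. A sketch with placeholder names and UUIDs (on Debian-family systems a `keyscript=` option can also fetch the key from a device or the network, which covers the scriptable case above):

```
# /etc/crypttab  (name, UUID, and keyfile are placeholders)
# <name>    <device>                                    <keyfile>       <options>
cryptdata   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /root/data.key  luks,discard

# /etc/fstab: mount the mapper device that crypttab unlocks at boot
/dev/mapper/cryptdata  /mnt/data  btrfs  defaults,noatime  0  2
```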
My main problem, though, is with an external USB device encrypted with LUKS: the device-mapper handles and devices stay around after an unexpected USB disconnect… so you can't actually unmount or remount the dm-crypt device after that happens. Any time you try, the kernel blocks you saying the device is busy; the only fix I'm aware of is a reboot.
If the encryption is managed by the filesystem itself, one would probably assume this kind of mounting and unexpected-disconnect scenario would be handled as gracefully as possible.
I have also experienced dangling devices breaking remounting, but I think there's a quicker solution: dmsetup remove insert_device_name_here.
It's still a manual thing, but two steps better than a reboot. Maybe it can be automated somehow; I haven't looked into that yet.
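For anyone hitting the same thing, the cleanup is roughly this (run as root; `cryptusb` is an example mapping name, so list yours first):

```
dmsetup ls --target crypt    # find the dangling dm-crypt mapping
dmsetup remove cryptusb      # drop it so the device can be unlocked again
```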
I tried out the beta version of 545 last week, and I swear it made the rendering issue with XWayland apps worse. Even if it's just back to the 535 state, it still makes using Wayland on Nvidia very difficult unless every application you plan to use is Wayland-native, and it'll be a while before that's the case for me.
I plan to just pick up a 6700 XT next week. I’m tired of being a second class citizen in Nvidia’s eyes.
That being said, I appreciate the devs themselves who’ve been working on improving what they can (there’s a couple that I’ve even seen participating in the Freedesktop GitLab). I assume the lackluster Linux support comes from the management side of things. I may not like the company, but I obviously don’t have disdain for every single person there.
I replaced my 3080 Ti with a 7900 XTX, reinstalled Tumbleweed to start fresh, and KDE on Wayland has been running great so far. Before, it was visual glitches galore: the GPU refusing to output a signal unless the iGPU was blacklisted, hardware video decoding outright not working, etc.
Now, with AMD, I haven't experienced any graphics-related issues in weeks, fingers crossed.
Yeah, it's absolutely ridiculous. The "stable" release is out in the extra-testing repo for Arch, and I just had an absolute nightmare trying to get it to work. Installed it, added the suggested nvidia-drm.modeset=1 nvidia-drm.fbdev=1 kernel parameters to systemd-boot, and ensured all of the Nvidia kernel modules were present in the initrd for early KMS loading. Then I tried to start a KDE Wayland session, and the desktop ran at maybe 5 FPS; I wish I were exaggerating. A very similar issue was reported on their forums, but the error I'm getting from kwin_wayland_drm is slightly different.
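For anyone comparing notes, the setup described above amounts to roughly this on Arch (a sketch; the entry path and the root= value are placeholders for your own):

```
# /boot/loader/entries/arch.conf  (systemd-boot)
options root=UUID=... rw nvidia-drm.modeset=1 nvidia-drm.fbdev=1

# /etc/mkinitcpio.conf: early KMS modules for the proprietary driver,
# then regenerate the initramfs with `mkinitcpio -P`
MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)
```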
Tried installing GNOME, but its Wayland session wouldn't launch at all. Loaded into its X11 session, and it seemed not to be using accelerated graphics whatsoever.
Now of course, part of the blame goes to me for opting into the testing repo… but at the same time, I shouldn't have to jump through those hoops just to potentially get a working Wayland desktop (and I suspect that even if I had succeeded, the same issues would still have been present). As far as I understand, AMD/Intel's userspace drivers are just part of Mesa and their kernel drivers ship with the kernel itself: no modifying your initrd, no worrying about DKMS, no messing with .run files…
I have a Windows partition on one of my SSDs for the few occasions that I need to do something that can only be done from Windows, and I think I’m just going to use that till my GPU comes in. Funnily enough, Nvidia’s drivers aren’t even that great on Windows either - I still get a screen flicker issue whenever (I believe) the power state of the GPU changes, so for example playing a YouTube video, or even Steam popping a toast notification saying that a friend has launched some game. And plenty of my friends have tales of nightmares with trying to install and manage the Nvidia driver on Windows.
I would never have bought an Nvidia GPU in the first place if I had known how bad it was on Linux, and my current Nvidia GPU (a 2080) wasn't actually purchased by me, but handed down by a very gracious friend at the beginning of the year, since times have been really tough for me. Thankfully, this last month I was able to put in some extra hours and set aside some money for a used 6700 XT, because if I have to deal with this any longer I'm going to lose my sanity.
Some package managers do have differences that justify a separate project (Nix, Gentoo's Portage, etc.).
For others, the package managers are very similar feature-wise, but some developers would rather remake the thing because they understand their own code far better than someone else's, or because it's far easier for them to customize their own project than to extend another one.
IMO it is developer laziness. Being able to build on other people's work is a valuable skill. But then again, this is open source, and people are free to develop the software they want, the way they want.