Not very user friendly, especially for new users; I wouldn’t use it. Too complicated for me
BlendOS
Doesn’t offer much new stuff for me; nothing they offer feels substantial to me.
Small dev team
VanillaOS
"The new Linux Mint"
Huge focus on usability and user friendliness
Apx is basically only a wrapper for distrobox
Small dev team (the same one that also develops Bottles)
Huge potential, but not quite there yet
Will recommend it to new users when it’s updated to 2.0
Silverblue
My recommendation
Is one of the oldest immutables and very well thought out
Biggest dev and userbase
You can not only install Flatpaks, but also everything else with Distrobox and rpm-ostree
Best feature: you can easily rebase to its other spins or the custom ones from uBlue. I just rebased this weekend from Silverblue to the Kinoite spin with a single command. I was able to “change distro” without reinstalling, and KDE was installed very cleanly without leftovers.
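For reference, the rebase really is one command plus a reboot; something like this, where the release number is just an example, and rpm-ostree rollback undoes it if anything goes wrong:
# switch the deployment from Silverblue to the Kinoite variant
rpm-ostree rebase fedora:fedora/40/x86_64/kinoite
# boot into the new deployment
systemctl reboot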
I mean, seeing how people here act after having been on NixOS for a few weeks, I would say it’s an apt comparison. I swear we weren’t that obnoxious when I started using the distro in 2019 D:
I don’t think it’s an apt comparison of the distros, but I agree that both have a cult-like following. I also feel like there’s a bit of a difference in the evangelism of both distros… I don’t really understand why people evangelize Arch, and my impression is largely that (1) people mention that they’re on Arch so others know they might be having different configuration issues, or less charitably (2) people mention Arch as a weird brag because it’s seen as an “advanced” distro. In contrast people seem to recommend nix and NixOS because it solves a frankly ridiculous amount of real problems that people experience with development environments, package managers, and system management. I.e., we bring up nix and NixOS because we care about you and think it might actually be useful for you. I don’t really want to dictate what other people use or brag about using nix / NixOS, but people complain to me about different problems constantly that are just resolved by nix, so it feels wrong not to mention it. It’s frustrating because it definitely makes you seem like you’re in a cult, but it really is the right level of abstraction for package management, and as a result it solves so many problems and little frustrations.
Honestly, it’s kind of frustrating to watch people not use nix. I have nix set up for the projects at work because I got tired of them not building and people randomly changing dependencies and it taking 3-4 weeks for somebody new to the project to get the thing to compile. Everybody new that I have set up with nix gets the project working instantly, and everybody else ends up spending weeks flailing around with installation. Unfortunately, I’ve given up on recommending people use nix for the project because a number of senior people have decided that they don’t like nix and there’s a bizarre amount of drama whenever I recommend a newbie just use it to get set up (even though it has always worked out better for them). It’s just not worth the headache for me to stick my neck out, but I feel bad and it’s really frustrating how literally everybody else takes 3-4 weeks to get up and running without nix :|.
I tried NixOS and it was quite frustrating when I needed community help / documentation. I guess that’s the “new Arch” aspect: the community will go “not my problem, fix it yourself”. I’ve seen some good tutorials pop up on YT since then, so I’ll try it again once I get college vacation. It’s hard for me as a non-programmer / psychology student; my field doesn’t overlap with programming, not even a little, lmao. I think to recommend nix you also need a standard way for people to do things. Like, a nix flake? You can get it to work 100 ways, and nix uses its own language and way of declaring things. That’s one thing that made me go “I just need a working system, and I already have an Arch install script done”. I like to fiddle around with things, but when you’re stuck on something and there isn’t a clear path to do it, it gets frustrating. The 100 ways to do 1 thing make copying others difficult, because you have to copy the same person, who won’t have covered all of your needs, or find people who did their config the same way (which is really hard). Like overlays, packaging programs, making modules: even Arch has a “this is how you get things done” wiki. I really think Nix and NixOS are really good and I will try them again in some months.
Yeah, I don’t have good answers for you… I honestly don’t know what the best way to get people into it is. The resources really are not great.
FWIW I think when it does end up clicking everything is a LOT less complicated than it seems at first. Nix is sort of all about building up these attribute sets and then once that really sinks in everything starts to make a lot more sense and you start to realize that there aren’t that many moving parts and there isn’t much magic going on… but getting there is tricky. A lot of people recommend the nix pills, and honestly I think it’s the best way to understand nix itself. If you do earnestly read through them I think there is a good chance you will come out enlightened… they just start so slow and so boringly that it’s tempting to skip ahead and then you’re doomed. They also have a bit of a bad habit of introducing simple examples that don’t work at first which can be confusing, and eventually some of the later stuff seems like “ugh, I thought we already solved this” but it’s building up nicer abstractions. The nix pills give a pretty good overview of best practices in that sense, I think… so maybe it’s the source of truth you’re looking for (or part of it anyway). I think the nix pills are a bit more “how the sausage is made” than is necessary to use nix, but it’s probably the best way to understand what all of these weird mkDerivation functions you keep seeing are actually doing, and having an understanding of the internals of nix makes it a lot easier to understand what’s going on.
ah I think that’s where I’m at odds with a lot of lemmy NixOS users then 😅, since I am and have always been pretty hesitant to recommend NixOS to anyone in particular. I find the upfront costs of NixOS too big to recommend the OS to anyone who isn’t already looking into it and aware of its downsides and upsides.
I do agree, however, that using nix is purely beneficial. It doesn’t hurt to just add a .nix file to your project, since it does no harm to an already existing project. It can just install your build tools and then consider itself done, and if you don’t happen to like nix after all, the new installer makes uninstalling easier than ever. There is pretty much no downside to downloading the package manager, something I can’t say about the OS.
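As a concrete taste of how low the commitment is (package names below are just examples): an ad-hoc shell pulls tools in without installing anything system-wide, and everything can be garbage-collected afterwards.
# temporary shell with these packages on PATH; nothing touches the host install
nix-shell -p python3 nodejs
# once you exit, clean up anything no longer referenced
nix-collect-garbage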
Having said that, I don’t think nix should be the end-all, be-all standard in package management. I’m sure there will be other package managers that are better than “nix but with yaml sprinkled in” and capable of improving the state of the art; at least, that’s something I hope happens. For example, I have reservations about using a full-blown programming language for project configuration (see people’s problems with Gradle for why you might not want that). I think a Maven-style approach, where you have only limited config options but can expand the package manager’s capabilities by telling it to install certain plugins (in the same config file!), could be worth looking into, and I’d be lying if I said I wasn’t on the lookout for a potentially better nix alternative.
For sure! I don’t think we’re actually in disagreement at all, just the limits of text communication :). NixOS is certainly less important to me and I don’t really care if people use it or not at all (it’s nice but there’s enough differences that you have to be aware of that it’d be frustrating to some people — even if ultimately those differences are something that can be worked around… If you’re well versed in nix and Linux NixOS is kind of a no brainer, though). Nix for development (or something like it) is legitimately enough of a game changer to warrant some of the evangelism in my opinion, particularly since as you mention it’s pretty much free to try on any (non-windows) system, and adding nix to a project doesn’t harm non-nix users (more than they’re already harmed anyway, haha). I’ll admit that I worry about how “nix ugly and unintuitive” seems to be a huge problem for adoption, and frankly I don’t blame people for bouncing off of nix (I bounced off of nix in 2011 or so and didn’t come back to it for like 10 years — though it was a bit of a brain worm nagging at me the whole time). That said I think the impression people have of nix being this horrible and completely ugly language (an impression I’ve had in the past as well) is also somewhat untrue. The nix language itself isn’t so bad, but the expectation is for it to just be yaml because “I just want to list dependencies”, which is fair and it might be nice if we had some better abstractions to make that more clear. All of the phases in a nix derivation are confusing and poorly documented, and some operations on attribute sets should probably just have nice special syntax instead of these fancy update fixpoints that the average developer isn’t going to understand… ultimately I’m a little unclear on how much of this is “the nix language sucks and needs to be thrown out” and how much is “we really need a better introduction to what this is and how to use it, especially with some beginner examples and best practices for different languages”. I worry a bit about non-nix nix package managers just from the perspective that it’s really nice to have the one tool to rule all development environments, but maybe fragmentation won’t be a huge problem.
Edit: Fedora got an upgrade today and vm-manager works again without any issue. Docker remains broken; maybe it’s a matter of time. Thank you for your response!!!
Features include caching,[4] full file-system encryption using the ChaCha20 and Poly1305 algorithms,[5] native compression[4] via LZ4, gzip[6] and Zstandard,[7] snapshots,[4] CRC-32C and 64-bit checksumming.[3] It can span block devices, including in RAID configurations.
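Creating a filesystem with those features turned on looks roughly like this; flag spellings may differ between bcachefs-tools versions and the device names are placeholders, so check bcachefs format --help first:
# format two devices as a single encrypted, lz4-compressed bcachefs filesystem
bcachefs format --encrypted --compression=lz4 /dev/sdb /dev/sdc
# unlock with the passphrase set at format time, then mount the multi-device fs
bcachefs unlock /dev/sdb
mount -t bcachefs /dev/sdb:/dev/sdc /mnt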
The main takeaway from the article is that the developer’s name is Kent Overstreet, who beat his bitter rival Surrey Underpath, who are both canonically related to famed developer Cornwall Midroad.
As someone else said, it’s similar to btrfs. bcachefs has a lot of functional overlap with btrfs, which is great. There have also been a few benchmarks showing that bcachefs is faster in some situations (cold-cache warming, IIRC). One of the big advantages over btrfs is that bcachefs’s RAID is more robust - several of btrfs’s RAID levels have been marked as experimental and prone to data loss for years. There’s been improvement in btrfs RAID lately; the skeptic in me believes this is directly a result of pressure from bcachefs, which is in a position to become a favored fs in Linux.
I really hope it will be a working one, not like xfs, where your files may just disappear without a trace (never happened on IRIX, never on any other fs), or like btrfs, which may suddenly go read-only and be dead on reboot, with no fsck and all data unreachable.
How hard is it to get the basics right? Doesn’t matter how much rice there is if it keeps blowing up.
Me too. I’ve run 30 years with ext and bsd filesystems with no failure. Many years with various UNIX native fs as well. But Linux xfs, reiserfs, btrfs all have resulted in catastrophic failure within a year on several machines. They’re permanently off my list, but I have some hope that someone will get a new fs right.
A lot of the time it obviously takes a little while for userland tools to catch up, and for distros to include both the new kernel and the userland tools for it in their latest versions, but once that is done average users certainly do notice differences. Literally all the features that get talked about a lot, like BPF or io_uring, or all the features that make containers possible, were introduced in a kernel release at some point.
I think they did that because of old disks, to avoid fragmentation, and because if one partition is corrupted you can always recover the important files on /home, things like that. Not sure either. 🫤
It’s not outdated, just less necessary now. With SSDs, you can just copy your /home back from your daily backup after reinstallation, which takes all of 5 minutes.
OpenSUSE (and probably some other distros) have it built in; you just have to activate it. If yours doesn’t, you have to install a program that does it or configure one manually.
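For example, snapper (the tool openSUSE ships) can be set up by hand elsewhere; a rough sketch, assuming / is on btrfs and using the conventional “root” config name:
# create a snapper config for the root filesystem, then take a manual snapshot
sudo snapper -c root create-config /
sudo snapper -c root create --description "before update"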
I have daily backups on btrfs, but only for my /, via Linux Mint’s Timeshift. I do manual backups of some of my home folders every week. I take it the backups you mention would be lost over a reinstall?
How long that takes depends entirely on the size of your home, the number of files in there, and how you store your backups. Not everyone has tiny home directories.
If your home is smaller than 2TB, it’s not an issue.
And if it’s larger than 2TB, then why the hell is all that data on your /home SSD and not a separate HDD, NAS or file server?
It’s not wrong, as such, but simply not right. Since you’re using btrfs, having a separate partition for home makes little sense. I, personally, also prefer using a swapfile to a swap partition, but that’s potato/potato.
Alright, but actually I don’t think I’m maximizing my use of btrfs. I only use btrfs because of its compatibility with Linux Mint’s Timeshift tool. Are you implying that if I used btrfs for the whole partition, I could reinstall / without overwriting /home?
BTRFS has a concept called a subvolume. You are allowed to mount it just like any other device. This is an example /etc/fstab I’ve copied from somewhere some time ago.
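It went something like this; the UUID and exact subvolume names below are placeholders rather than the real file:
# one btrfs partition, mounted several times with a different subvol= each time
UUID=xxxx-xxxx  /            btrfs  subvol=@,compress=zstd,noatime           0  0
UUID=xxxx-xxxx  /home        btrfs  subvol=@home,compress=zstd,noatime       0  0
UUID=xxxx-xxxx  /var         btrfs  subvol=@var,compress=zstd,noatime        0  0
UUID=xxxx-xxxx  /.snapshots  btrfs  subvol=@snapshots,compress=zstd,noatime  0  0
# the ESP stays a separate vfat partition
UUID=yyyy-yyyy  /efi         vfat   defaults                                 0  2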
/efi (or /boot, or /boot/efi, whatever floats your boat) still has to be a separate vfat partition, but all the other mounts are, technically speaking, the same partition mounted many times with a different subvolume set as the target.
Obviously, you don’t need to have all of them separated like this, but it allows you to fine-tune which parts of the system actually get snapshotted.
I routinely fill my root volume to 100% by accident (thanks, docker), but my machine has never crashed; it does tend to cause other issues though. Does having a full /usr, /var or /tmp not cause other issues, if not full crashes?
Of course it does, it’s actually filling those that crashes the machine, not /.
When space runs out it runs out, there’s no magical solution. Separating partitions like that is done for other reasons, not to prevent runaway fill: filesystems with special properties, mounting network filesystems remotely etc.
It depends; if your docker installation uses /var, it will surely help to keep it separated.
For my home systems, I have: UEFI, /boot, /, home, swap.
For my work systems, we additionally have separate /opt, /var, /tmp and /usr.
/usr will only grow when you add more software to your system. /var and /tmp are where applications and services store temporary files, log files and caches, so they can vary wildly depending on what is running. /opt is for third-party stuff, so it depends if you use it or not.
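If you’re trying to size them, it helps to check what they actually use on a comparable running system first, e.g.:
# summarize current usage of each directory without crossing filesystem boundaries
sudo du -xsh /usr /var /tmp /opt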
Managing all that seems like a lot of effort, and given that my disk issues haven’t yet been fatal, I’ll probably not worry about going that far. Thanks for the info though.
The last time I used LVM was way back in the Fedora 8 days, when it was the default partitioning scheme. It was super annoying to use, as GParted didn’t support it, and live CDs often had trouble with it. Having to read doco just to resize it was not great for a newbie to Linux. Has it improved since?
Why do you have a btrfs volume and an ext4 volume? I went btrfs and used subvolumes to split up my root and home, but I’m not sure if that’s the best way to do it or not.
I use btrfs for my / because I can use Linux Mint’s Timeshift tool to make snapshots, but I don’t want snapshots of /home to be included. Am I doing this wrong?
As long as you don't re-format the partition. Not all installers are created equal, so it might be more complicated to re-install the OS without wiping the partition entirely. Or it might be just fine. I don't really install Linux often enough to know. ¯\_(ツ)_/¯
Not sure if that’s wrong or not tbh, I use snapper instead of timeshift and I wanted /home included in the snapshots anyway (I think it let me set them up as 2 separate jobs). The reason I went with subvolumes instead of separate partitions is that I didn’t have to worry about sizing. I also know I can reinstall to my root subvolume without affecting the others, depending on the installer for your distro I don’t know how easy that is vs just having separate partitions. I played around with it in a VM for a while to see what the backup and restore process is like before I actually committed to anything!
I have BTRFS on /, which lives on an SSD, and ext4 on an HDD, which is /home. BTRFS can do snapshots, which is very useful in case an update (or my own stupidity) bricks the system. Meanwhile, /home is filled with junk like cache files, games, etc., which doesn't really make sense to snapshot, but that's, actually, secondary. Spinning rust is slow and BTRFS makes it even worse (at least on my hardware), which, in itself, is enough to avoid using it.
I have a 120 gig SSD. The system takes up around 60 gigs + BTRFS snapshots and their overhead. I have around 15 gigs of wiggle room, on average. Trying to squeeze some /home stuff in there doesn't really seem that reasonable, to be honest.
When I started with Linux, I was happy to learn that I didn’t need a bunch of separate partitions, and have installed all-in-one (except for boot of course!) since. Whatever works fine for you (-and- is easiest) is the right way! (What you’re doing was once common practice, and serves just as well. No disadvantage in staying with the familiar.)
After I got up to 8GB of memory, I stopped using swap … easier on the hard drive -and- the SSD. (I move most data to the HD … including TimeShift … except what I use regularly.)
I use Mint as well; for me this keeps things as simple as possible. When I install a new OS version (always with the same XFCE DE) I do put THAT on a new partition (rather than try the upgrade route and risk damaging my daily driver) using the same UserName. A new Home is created within the install partition (does nothing but hold the User folder.)
To keep from having to reconfigure -almost everything- in the new OS all over again, I evolved a system. First I verify that the new install boots properly; I then use a Live USB to copy the old User’s .config folder (and the apps and their support folders I keep in the user folder) to the new User folder. Saves hours of reconfiguring most things. The new, up-to-date OS mostly resembles and works like the old one … without the upgrade risks.
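From the live session that’s essentially one copy per user; the mount points and USERNAME below are just examples of where the old and new partitions might end up:
# copy the old user's .config (preserving permissions and timestamps) into the fresh install
rsync -a /mnt/old-root/home/USERNAME/.config/ /mnt/new-root/home/USERNAME/.config/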
In my next reinstall, can I combine the / and swap partitions (they’re next to each other so I can do this) and will swap files just be automatically created instead?
They won’t be automatically created but you can create your own swap file on /, no need for a dedicated partition:
Use dd to create a file filled with zeros of appropriate size.
Format the file with mkswap.
Activate the swap file instantly with swapon.
Add it to /etc/fstab so it will be automatically used on reboot.
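Concretely, something like this; 1 GiB is just an example size, and if / is btrfs (as it is here) the file also needs copy-on-write disabled before it’s filled:
# create the file empty and mark it NoCOW (required on btrfs), then fill and enable it
sudo truncate -s 0 /swapfile
sudo chattr +C /swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 status=progress
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it permanent across reboots
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab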
The appropriate size will vary, but I suggest starting with something like 100 MB and checking once in a while to see how much is actually used. If it fills up you can replace it with a larger swap file, or you can simply create another one and use it alongside the first.
Well technically, if you’re using BTRFS, you might want to check out subvolumes. Here’s my subvolume setup:
Subvolume 1, named @ (root subvol)
Subvolume 2, named @home (/home subvol)
Subvolume 3, named @srv (/srv subvol)
Subvolume 4, named @opt (/opt subvol)
Subvolume 5, named @swap (which is - you guessed it - the swap subvol)
You then set up fstab to reflect each of the subvolumes, using the subvol= option. Here’s the kicker: they are all in one partition. Yes, even the swap. One caveat: swap still has to be a swapfile, but in its own separate subvolume. Don’t ask me why, it’s just the way to do it.
The great thing about subvolumes is that there’s no size provisioning unless you specify it. All subvolumes share the space available within the partition. This means you won’t have to do any soul searching about how to divide up the space when setting up the partitions.
This also means that if I want to nuke and pave, I only need to run a BTRFS command on my @ subvolume (which contains /usr, /share, /bin), because it won’t be touching the contents of @home, @srv, or @opt. What’s extra cool here is that I’ll lose 0% of FS metadata or permission setup, since you’re technically just disassociating some blocks from a subvolume. You’re not really “formatting”… which is neat as hell.
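The nuke-and-pave itself, from a live USB or the installer environment, goes roughly like this; the device name is a placeholder and the subvolume names match the layout above:
# mount the top-level of the btrfs partition, drop only the root subvolume, recreate it empty
sudo mount -o subvolid=5 /dev/nvme0n1p2 /mnt
sudo btrfs subvolume delete /mnt/@
sudo btrfs subvolume create /mnt/@
# @home, @srv and @opt are untouched; reinstall or restore into the fresh @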
The only extra partitions I have are the EFI partition and an ext4 partition for the /boot folder, since I use LUKS2.
Have you had any luck with hibernation with a BTRFS swapfile? My computer still does not start from hibernation, and I am not sure why, even though I followed the Arch wiki to set it up.
Can’t say I have. Haven’t used hibernation mode for years even. Sleep mode is just too good nowadays for me to use it, so I guess we could chalk that up to a fault of the setup.
According to ReadTheDocs (BTRFS, swapfile) it’s possible under certain circumstances, but requires the 6.1 kernel to do it in a relatively easy way.
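If I’m reading those docs right, with btrfs-progs/kernel 6.1+ the resume offset can be read out directly and passed on the kernel command line; a rough sketch, assuming the swapfile is at /swapfile:
# print the physical offset to use as resume_offset=
sudo btrfs inspect-internal map-swapfile -r /swapfile
# then boot with resume=UUID=<uuid of the btrfs partition> resume_offset=<that number>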
In tools like lsblk? Nope. They appear as directories, usually in the top-level subvolume, which typically isn’t mounted anywhere in the system.
Then you just create mount entries in /etc/fstab like you would with partitions, this time using the subvol= option as mentioned above. I don’t know if there are any installers that do this for you. The Arch Wiki – as usual – has good documentation on this.
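If you just want to see what subvolumes exist (lsblk won’t show them), listing them on the mounted filesystem is enough:
# list all subvolumes of the btrfs filesystem mounted at /
sudo btrfs subvolume list /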
So, it doesn’t sound like it would be useful for me, since the reason why I have separate partitions in the first place is so that I can re-install a distro or install a new distro without having to back up /home first.