The principled “old” way of adding fancy features to your filesystem was through block-level technologies, like LVM and LUKS. Both of those are filesystem-agnostic, meaning you can use them with any filesystem. They just act as block devices, and you can put any filesystem on top of them.
You want to be able to dynamically grow and shrink partitions without moving them around? LVM has you covered! You want to do RAID? mdadm has you covered! You want to do encryption? LUKS has you covered! You want snapshotting? Uh, well…technically LVM can do that…it’s kind of awkward to manage, though.
Anyway, the point is, all of them can be mixed and matched in any configuration you want. You want a RAID6 where one device is encrypted split up into an ext4 and two XFS partitions where one of the XFS partitions is in RAID10 with another drive for some stupid reason? Do it up, man. Nothing stopping you.
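To make the layering concrete, here’s a minimal sketch of one such stack (Python shelling out to the usual tools): mdadm at the bottom, LUKS in the middle, LVM and a plain filesystem on top. The device names and sizes are hypothetical placeholders, and every step needs root and will prompt you along the way, so treat it as an illustration of the ordering rather than something to paste in.

```python
# Sketch of the classic block-layer stack: mdadm RAID at the bottom, LUKS
# encryption in the middle, LVM on top, and an ordinary filesystem last.
# /dev/sdX and /dev/sdY are hypothetical spare disks; do not run this as-is.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. RAID1 array from two raw disks (mdadm layer)
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sdX", "/dev/sdY"])

# 2. Encrypt the array (LUKS layer) and open it as /dev/mapper/secure
run(["cryptsetup", "luksFormat", "/dev/md0"])
run(["cryptsetup", "open", "/dev/md0", "secure"])

# 3. LVM on top of the encrypted device: physical volume, volume group, logical volume
run(["pvcreate", "/dev/mapper/secure"])
run(["vgcreate", "vg0", "/dev/mapper/secure"])
run(["lvcreate", "-L", "20G", "-n", "data", "vg0"])

# 4. Any filesystem goes on the logical volume; it just sees a block device
run(["mkfs.ext4", "/dev/vg0/data"])
```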
For some reason (I’m actually not sure of the reason), this stagnated. Red Hat’s Stratis project has tried to continue pushing in this direction, kind of, but in general, I guess developers just didn’t find this kind of work that sexy. I mentioned LVM can do snapshotting “kind of awkwardly”. Nobody’s made it as sexy and easy to use as the cool new COW filesystems.
So, ZFS was an absolute bombshell when it landed in the mid 2000s. It did everything LVM did, but way way way better. It did everything mdadm did, but way way way better. It did everything XFS did, but way way way better. Okay, it didn’t do LUKS stuff (yet), but that was promised to be coming. It was Copy-On-Write and B-tree-everywhere. It did everything that (almost) every block-level tool and filesystem made before it had ever done, but better. It was just…the best. And it shit all over that block-layer stuff.
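For a rough feel of why copy-on-write makes snapshots so cheap (and why LVM’s block-level snapshots feel clunky by comparison), here’s a toy Python sketch. It is nothing like how ZFS is actually implemented; it only shows the core idea that a snapshot shares references, and writes allocate new blocks instead of overwriting old ones.

```python
# Toy copy-on-write illustration: a "filesystem" is a mapping from file names
# to block IDs, and a snapshot is just a copy of that mapping. Blocks are never
# overwritten in place, so the snapshot keeps the old data for free.
blocks = {}        # block_id -> data
next_id = 0

def write(fs, name, data):
    global next_id
    blocks[next_id] = data      # allocate a new block; the old one stays untouched
    fs[name] = next_id
    next_id += 1

live = {}
write(live, "notes.txt", "v1")

snapshot = dict(live)           # snapshot = copy the references, not the data

write(live, "notes.txt", "v2")  # the live tree now points at a new block

print(blocks[live["notes.txt"]])      # v2
print(blocks[snapshot["notes.txt"]])  # v1, still intact, nothing was copied
```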
But…well…it needed a lot of RAM, and it was licensed in a way that Linux couldn’t get it right away, and when Linux did get ZFS support, it wasn’t the native, in-the-kernel kind of thing people were used to.
But it was so good that it inspired other people to copy it. They looked at ZFS and said “hey why don’t we throw away all this block-level layered stuff? Why don’t we just do every possible thing in one filesystem?”.
And so BtrFS was born. (I don’t know why it’s pronounced “butter” either).
And now we have bcachefs, too.
What’s the difference between them all? Honestly mostly licensing, developer energy, and maturity. ZFS has been around for ages and is the most mature. bcachefs is brand spanking new. BtrFS is in the middle. Technically speaking, all of them either do each other’s features or have each other’s features on their TODO list. LUKS in particular is still very commonly used because encryption is still missing in most (all?) of them, but will be done eventually.
Real question: I have a Steam Deck and am incredibly pleased with the playability. I also have a desktop with a newer Nvidia card. Does Linux have support for DLSS yet? It makes a huge difference in performance and honestly it’s the only thing holding me back.
That depends on which DLSS. In my testing, DLSS 1 and 2 work fine in the games I tried, and with recent Proton, enabling them as well as ray tracing shouldn’t require extra steps anymore (it used to be experimental and opt-in via environment variables). DLSS 3 with frame generation is a known no-go so far, and it’s unfortunately on NVIDIA to provide support for it, as it’s very much locked-down, guarded proprietary stuff.
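For reference, the old opt-in route looked something like the wrapper below, which you would point Steam’s launch options at so the game starts with PROTON_ENABLE_NVAPI and vkd3d-proton’s VKD3D_CONFIG=dxr set. Those are the variables I believe were involved, and recent Proton shouldn’t need them anymore, so take it as a hedged illustration rather than current advice.

```python
#!/usr/bin/env python3
# dlss_wrapper.py (hypothetical name): run the game command with the old
# opt-in environment variables set. In Steam's launch options you would use:
#   /path/to/dlss_wrapper.py %command%
import os
import subprocess
import sys

if len(sys.argv) < 2:
    sys.exit("usage: dlss_wrapper.py <game command>")

env = dict(os.environ)
env.setdefault("PROTON_ENABLE_NVAPI", "1")  # expose NVAPI so DLSS can be used
env.setdefault("VKD3D_CONFIG", "dxr")       # enable DirectX Raytracing in vkd3d-proton

sys.exit(subprocess.run(sys.argv[1:], env=env).returncode)
```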
It should support DLSS unless you have an older video card, which the drivers don’t work well with. I heard the newer Nvidia cards work better, though. Of course, it’s all up to you whether you like it or not, so just try out Linux and see. If you don’t like it, just reinstall Windows. Make a Windows recovery USB beforehand though; it makes reinstalling easier.
Linux and Nvidia don’t mix well, at least not until Nvidia’s official open-source kernel module has been upstreamed into the Linux kernel, which will take years.
Breakages, and workarounds for those breakages, are common occurrences, especially when you want to run a modern desktop on Wayland.
Other than being completely unable to run Wayland, issues with Secure Boot, and being forced to use a proprietary driver, what kinds of things are specifically wrong with Nvidia on Linux? Maybe it’s because I switched to Linux fairly recently, but I haven’t noticed many Nvidia-specific issues yet.
One rule of thumb I discovered when doing research about a year ago is that AMD chips are generally way better than Intel chips when it comes to power consumption.
Does anyone have one of these that could confirm if that’s realistic? I’ve seen many laptops with similar specs and claims that come out to significantly lower battery life unless you do nothing but stare at an empty desktop.
The optimization might just be the rather large battery. Laptops with U-series processors usually have 40-60 Wh batteries; the spec sheet shows a 73 Wh battery in there.
Where the Lemur Pro really shines is battery life. System76 claims 14 hours, and I managed 11 hours in our battery drain test (looping a 1080p video). In real-world use, I frequently eked out over 13 hours. That’s off the charts better than any other Linux laptop I’ve tested recently.
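As a sanity check on those numbers, you can back out the average power draw they imply from the 73 Wh battery; anything in the 5-7 W range is plausible for a U-series machine doing light work, which is roughly where these land.

```python
# Back-of-the-envelope check: average draw implied by a 73 Wh battery.
battery_wh = 73

for label, hours in [("claimed 14 h", 14), ("video-loop 11 h", 11), ("real-world 13 h", 13)]:
    print(f"{label}: ~{battery_wh / hours:.1f} W average draw")

# claimed 14 h: ~5.2 W average draw
# video-loop 11 h: ~6.6 W average draw
# real-world 13 h: ~5.6 W average draw
```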
But at the same time, they do offer increased security when they work correctly. It’s like saying we shouldn’t use virtualization anymore because historically some virtual devices have been exploitable in a way that you could escape the VM. Or lately, Spectre/Meltdown. Or a bit of an older one, Rowhammer.
Sometimes, security measures open a hole while closing many others. That’s how software works unfortunately, especially in something as complex as the Linux kernel.
Using namespaces and keeping your system up to date is the best you can do as a user. Or maybe add a layer of VM. But no solution is foolproof; if you really need that much security, use multiple devices, ideally air-gapped ones whenever possible.
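If you want to see what “using namespaces” means in practice without pulling in a full container runtime, util-linux’s unshare is enough for a quick demo. A minimal sketch (the flags are standard unshare options, but check your version’s man page):

```python
# Drop into a shell that has its own user, PID, mount, and network namespaces.
# Inside, you appear to be root (mapped to your real UID), `ps` only shows your
# own processes, and there is no network access beyond a down loopback device.
import subprocess

subprocess.run([
    "unshare",
    "--user", "--map-root-user",   # new user namespace, map current user to root
    "--pid", "--fork",             # new PID namespace (requires a fork to take effect)
    "--mount-proc",                # mount a fresh /proc so `ps` reflects the namespace
    "--net",                       # empty network namespace: no outside connectivity
    "bash",
])
```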
I think the easiest way is to take them from the ‘experimental’ branch of Debian’s own repository. But read about the consequences of enabling experimental first.
Quoting the Debian FAQ: “project/experimental/: This directory contains packages and tools which are still being developed, and are still in the alpha testing stage. Users shouldn’t be using packages from here, because they can be dangerous and harmful even for the most experienced people.”
Yeah, you’re right. If you absolutely need the latest Nvidia drivers, you kind of have to choose between the devil and the deep blue sea. You can pull them from some random place on the internet, or use whatever script Nvidia provides and do it under your own responsibility… or use experimental, though that may be untested or incompatible with your kernel version. None of those options is recommended. I’ve had some success with experimental. Debian has high standards, and at least it’s packaged and tied into the distribution at all. But there is no guarantee. (I’m not sure if you can mix that with the stable version of Debian, though. I use Debian Testing…) (Their backports are a better option for Debian Stable.)
Maybe somebody else has an idea, I don’t know any better way to do it. The proper way is to wait until it’s tested and becomes available in Debian.
I don’t know when that’s going to happen. It usually takes quite some time with Debian. Probably some more months. You can have a look at the Package tracker.
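For completeness, the experimental route mentioned above usually amounts to adding the experimental suite (it is marked NotAutomatic, so nothing gets pulled from it unless you ask) and then installing a single package from it with -t. A hedged sketch, assuming Debian’s usual nvidia-driver package name and your existing component list:

```python
# Sketch: enable Debian experimental and install one package from it.
# Needs root; adjust the components to match what you already use.
import subprocess

line = "deb http://deb.debian.org/debian experimental main contrib non-free non-free-firmware\n"
with open("/etc/apt/sources.list.d/experimental.list", "w") as f:
    f.write(line)

subprocess.run(["apt", "update"], check=True)
# Only this package (plus whatever it drags in) comes from experimental:
subprocess.run(["apt", "install", "-t", "experimental", "nvidia-driver"], check=True)
```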
Copying back the files to the right partition/directory works, but if you didn’t back up the owner and permissions for each file, it’s gonna be a pain to restore those.
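If you’re doing a plain file copy anyway, saving a small metadata manifest next to the backup avoids that pain. A minimal sketch with a made-up manifest format; in practice rsync -a or tar with -p already preserve ownership and permissions for you:

```python
# Record owner/group/mode for everything under a tree, and put them back after
# the files have been copied into place. The JSON manifest format is only for
# illustration. Restoring arbitrary owners requires root.
import json
import os

def save_metadata(root, manifest_path):
    meta = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            meta[os.path.relpath(path, root)] = (st.st_uid, st.st_gid, st.st_mode)
    with open(manifest_path, "w") as f:
        json.dump(meta, f)

def restore_metadata(root, manifest_path):
    with open(manifest_path) as f:
        meta = json.load(f)
    for rel, (uid, gid, mode) in meta.items():
        path = os.path.join(root, rel)
        if not os.path.lexists(path):
            continue
        os.lchown(path, uid, gid)
        if not os.path.islink(path):
            os.chmod(path, mode & 0o7777)   # permission bits only

# save_metadata("/home/me", "/backup/home-meta.json")
# restore_metadata("/home/me", "/backup/home-meta.json")
```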
After reinstalling, you can compare your new system with your backup to see what changes/configs you had made.
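For that comparison, Python’s filecmp.dircmp over something like /etc is often enough to spot the configs you had changed; the paths here are hypothetical:

```python
# Compare the backed-up /etc with the freshly installed one to find configs
# you had changed or added. filecmp's default comparison is shallow (stat-based).
import filecmp

def report(cmp, prefix=""):
    for name in cmp.diff_files:    # in both trees, but flagged as differing
        print("changed:", prefix + name)
    for name in cmp.left_only:     # only in the backup: something you added
        print("only in backup:", prefix + name)
    for name, sub in cmp.subdirs.items():
        report(sub, prefix + name + "/")

report(filecmp.dircmp("/backup/etc", "/etc"))
```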