So let me get this straight: you want other people to work for free on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?
Honestly, a colour picker is the last piece of software you should be translating names for. Even everyday colour names don’t have a direct translation. The line between “blue” and “green” is slightly different from the line between “bleu” and “vert”, and the same goes for any other two languages. If you’re serious about your colour picker’s accuracy and you want to localize to another language, it would actually be more correct to have a completely different set of colour values, rather than trying to translate the names. (Though “Liquid Nyquil” may be perceived the same across languages. I haven’t seen any studies on that one.)
I don’t know about this specific program, but pretty much every other time I’ve seen something like this it’s been treated as another language and is a way for developers to test that that feature actually works.
iirc sudo has a bunch of quotes to spit out when an incorrect password is typed. Gentoo exposes that feature with the offensive USE flag.
Argh, why tho?
Like, I get that it is sometimes fun to throw in some humor and things like that, but it is just too much trouble. It looks unprofessional and makes translation more of a pain than it needs to be. And that isn’t even opening the can of worms that insults actually are.
Oftentimes, projects like this aren’t necessarily going for “professional” - it’s something the developer has made for themselves and is just being nice enough to share it and the source with the world.
Also, sometimes that sort of thing is directly related to making sure translations actually work. While I doubt that was the case here, I remember Red Hat Linux for a while had a specific language option that changed the phrasing quite a bit (I believe it was in relation to how one of the devs on the team commonly spoke), and it was done to make sure that translations were working.
IIRC it was added because too many people had been hacking together such a feature in their configurations, more often than not compromising their security. They added the option to reduce the amount of damage such a stupid but much-asked-for feature can deal.
P.S.: Honestly, I have used the feature before. While it’s usually funny, it can be brutal from time to time.
So usually people do install Linux software from trusted software repositories. Linux practically invented the idea of the app store a full ten years before the first iPhone came out and popularized the term “app.”
The problem with the Mullvad VPN is that their app is not in the trusted software repositories of most Linux distributions. So you are required to go through a few extra steps to first trust the Mullvad software repositories, and then install their VPN app the usual way using apt install or from the software center.
You could just download the “.deb” file and double click on it, but you will have to download and install all software security updates by hand. By going through the extra steps to add Mullvad to your trusted software repository list, you will get software security updates automatically whenever you install all other software updates on your computer.
Most Linux distros don’t bother to make it easy for you to add other trusted software repositories because it can be a major security risk if you trust the wrong people. So I suppose it is for the best that the easiest way to install third-party software is to follow the steps you saw on the website.
...What does the writer think “end of support” means? That Microsoft bricks the PC as soon as the support period ends?
They're going to just keep using Windows 10, security be damned. Probably a good number of users who weren't keeping their PC up to date even when Microsoft was forcing updates on them.
I work in the behavioral health field as an IT security admin and network with hospitals/health clinics all the time. The amount of them using XP and 7 in some capacity should scare everyone. The other security admins know it’s an issue, but they just laugh it off.
I tell them if I were an immoral man, their company would be compromised just based off of that information.
Yeah, I work for an emergency management SaaS company, and we block outdated OSs and browsers because of the security risk. It’s wild how we will occasionally get pushback from potential new customers who are surprised we don’t support their outdated IT infrastructure.
This. A lot of our lab’s instruments are proprietary garbage. I wish the people who buy these extremely expensive instruments would actually research whether there are open source alternatives, or help pressure governments into forcing the code to be open. A lot of (public) research spending goes to this sort of BS, “instruments which only work with their own proprietary software”, btw. The other good portion is eaten up by scumbags like Elsevier and other publishers.
As long as that machine is disconnected from the internet it’s OK but as soon as you connect it you are cooked.
It’s been getting absolutely worse and worse with hardware as they shovel crap at you and then also expect you to buy subscriptions to make it usable. Keysight/Agilent/whoever they are now have been really annoying about this.
This has Systemd vs Runit vibes. No matter how many anti-systemd folks scream to me about how horrible it is for XYZ technical reasons, every Linux distro I’ve ever used for years, desktop and server, has used systemd and I’ve never experienced a single problem that those users claim I will.
Same here with Wayland. All the major desktop environments and distros have implemented or are implementing Wayland support and are phasing out X. The only reason I’m not on Wayland on my main computer already is a few minor bugs that should be ironed out in the next 6-12 months with the newest release of Plasma.
It’s not because Wayland is unusable. I try switching to Wayland about every 6-9 months, and every time there have been fewer bugs and the bugs that exist are less and less intrusive.
Any time you get hardcore enthusiasts and technical people together in a large community, this will happen. The mechanical keyboard community is the same way, with people arguing about what specific formula of dielectric grease is optimal to lube your switches with and what specific method of applying it is best.
At a certain point, it becomes fundamentalism, like comic book enthusiasts arguing about timeline forks between series or theology majors fighting about some minutiae in a 4th-century manuscript fragment. Neither person is going to change their views; they are just practicing their arguments back and forth in ever-narrowing scopes of pros and cons, technical jargon, and the like.
Meanwhile the vast majority of users couldn’t care less, and just want to play games, browse the web, and chat with friends, all of which is completely functional in Wayland and has been for a while.
This has Systemd vs Runit vibes. No matter how many anti-systemd folks scream to me about how horrible it is for XYZ technical reasons, every Linux distro I’ve ever used for years, desktop and server, has used systemd
You’ll one day learn the difference between Popular and Correct.
Trump is popular, for instance.
and I’ve never experienced a single problem that those users claim I will.
This is an “everyone tells me to get smoke detectors and I’ve never had a fire in all my 23 years of life” comment.
There’s a reason we have building codes, seat belts, traffic lights, emergency brakes, FDIC, and pilots’ licenses.
Systemd isn’t “correct”? What does that even mean? If you don’t agree with the standards and practices of the systemd project, that’s fine, but don’t act like there is some golden tablet of divine standards for system process management frameworks.
I wasn’t making an argument that systemd is perfect or that other frameworks like runit are inferior. My argument was that I’ve been running a lot of Linux servers and desktop systems for years and I’ve never experienced the “huge stability problems and nightmare daemon management” that multiple systemd haters claim I will inevitably experience.
Maybe I’m incredibly lucky, maybe I’m not actually getting deep enough into the guts of Linux for it to matter, or maybe systemd isn’t the devil incarnate that some people make it out to be.
And also, free software is a thing. So I absolutely support and encourage alternatives like runit to exist. If you want your distros and servers to only use runit, that’s totally fine. If it makes you happy, or you have some super niche edge-case that makes systemd a bad solution, go for something else, you have my blessing, not that you need it.
All of the technically-minded posts I’ve read about systemd have been positive. The only detractors seem to be the ones with less technical knowledge, complaining about “the Unix philosophy” and parroting half-understood ideas, or worse, claiming that it’s bad because they have to learn it.
I know xorg has problems, but it was good to get some insight into why Wayland is falling short. Every argument I’ve seen in favor of Wayland has been “xorg bad”.
X code is convoluted, so much so that the maintainers didn’t want to continue. AFAIK, no commercial entity has put any significant money behind Xorg and friends. Potentially unmaintained code with known bugs, unknown CVEs, and demands for a permission system for privacy made continuing with Xorg a near impossibility.
If you don’t want new features and don’t care about the CVEs that will be discovered in the future, as well as the bugs (present and future), then you can continue using Xorg and ignore all this. If not, then you need to find an alternative, which doesn’t need to be Wayland.
Oh, and you might need to manage Xorg yourself while other people and software, including your distro, move on to something else.
So yeah, “xorg bad” is literally the short summary of why Mir and Wayland were created.
Meanwhile the vast majority of users couldn’t care less, and just want to play games, browse the web, and chat with friends, all of which is completely functional in Wayland and has been for a while.
The last couple of times I tried Wayland, it broke my desktop so badly that I couldn’t even use it.
Granted, that was “a while” ago, so my experience might be better now, but it’s made me very wary of it.
What does “broke my desktop so badly that I couldn’t even use it” even mean? Such an over-the-top statement lol, makes it seem as if Wayland is malware or something.
I think the vast majority of users won’t change their display server without doing a fresh install, so I’m not sure if that’s a fair comparison to the average use case. That being said, you experiencing that issue is a fair reason for staying wary.
Been using Wayland through Fedora for years on multiple systems and it’s all been transparent. I’m not even sure what “it broke my desktop” could even mean, except that you were using KDE, and that has been a buggy mess for a while when using their Wayland fork. That’s not Wayland, that’s KDE, as Sway and GNOME have been stable for me for a very long time.
every Linux distro I’ve ever used for years, desktop and server, has used systemd and I’ve never experienced single problem that those users claim I will.
That simply means that you have never used systemd. You have only used a Linux distro.
When you only use a car from the backseat and have a driver driving it for you, you aren’t going to have any complaints about the engine.
The closer you get to systemd, the more horrible it becomes.
I run a bunch of Linux servers, multiple desktop instances, manage multiple IT clients, and took my first Linux certs working with Systemd management, all for years now.
But I’ll be sure to switch away from systemd when it becomes an issue…
With pipes/sockets, each program has to coordinate the establishment of the connection with the other program. This is especially problematic if you want to have modular daemons, e.g. to support drop-in replacements with alternative implementations, or if you have multiple programs that you need to communicate with (each with a potentially different protocol).
To solve this problem, you want to standardize the connection establishment and message delivery, which is what dbus does.
With dbus, you just write your message to the bus. Dbus will handle delivering the message to the right program. It can even start the receiving daemon if it is not yet running.
It’s a bit similar to the role of an intermediate representation in compilers.
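For a concrete picture, here’s a rough sketch of what “just write your message to the bus” looks like from Python, assuming the dbus-python bindings and a desktop notification daemon are available (the freedesktop Notifications service is just a convenient, well-known example):

    import dbus

    # Connect to the per-user session bus (the system bus works the same way).
    bus = dbus.SessionBus()

    # Address the service by its well-known bus name and object path;
    # the bus routes the call, and can even start the daemon if it isn't running.
    obj = bus.get_object("org.freedesktop.Notifications",
                         "/org/freedesktop/Notifications")
    notifications = dbus.Interface(obj, dbus_interface="org.freedesktop.Notifications")

    # Notify(app_name, replaces_id, app_icon, summary, body, actions, hints, timeout)
    notifications.Notify("demo", 0, "", "Hello",
                         "Delivered over the session bus", [], {}, 5000)

The caller doesn’t care which notification daemon is actually running or where its socket lives; the well-known bus name and interface are the only contract.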
A message bus won’t magically remove the need for developers to sit down together and agree on how some API would work. And not having a message bus also doesn’t magically prevent you from allowing for alternative implementations. Pipewire is an alternative implementation of pulseaudio, and neither of those rely on dbus (pulse can optionally use dbus, but not for its core features). When using dbus, developers have to agree on which path the service owns and which methods it exposes. When using unix sockets, they have to agree where the socket lives and what data format it uses. It’s all the same.
It can even start the receiving daemon if it is not yet running.
We have a tool for that, it’s called an init system. Init systems offer a large degree of control over daemons (centralized logging? making sure things are started in the correct order? letting the user disable and enable different daemons?). Dbus’ autostart mechanism is a poor substitute. Want to run daemons per-user instead of as root? Many init systems let you do that too (I know systemd and runit do).
It can even start the receiving daemon if it is not yet running.
We have a tool for that, it’s called an init system.
The init system is for trusted system services that can talk directly to hardware. Unless you are working on a single-user system with no security concerns of any kind, you shouldn’t be using init to launch persistent userland or GUI processes.
DBus is for establishing a standard publish/subscribe communication protocol between user applications, and in particular, GUI applications. And because it is standard, app developers using different GUI frameworks (Gtk, Qt, WxWidgets, FLTK, SDL2) can all publish/subscribe to each other using a common protocol.
It would certainly be possible to establish a standard place in the /tmp directory and a standard naming scheme for sockets and temporary files, so that applications can obtain a view of other running applications and request to receive messages from them using ordinary filesystem APIs, but DBus does this without needing the /tmp directory. A few simple C APIs replace the need for naming and creating your temporary files and sockets.
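Just to illustrate the comparison, here’s a minimal sketch of that hypothetical /tmp convention using nothing but Unix sockets and the Python standard library (the “appbus-” naming scheme is made up for the example):

    import os, socket

    SOCK_DIR = "/tmp"                       # agreed-upon rendezvous directory

    def serve(name):
        path = os.path.join(SOCK_DIR, f"appbus-{name}.sock")
        if os.path.exists(path):
            os.unlink(path)                 # clean up a stale socket left by a crash
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)                      # other apps find us by this path
        srv.listen()
        return srv

    def running_apps():
        # "Service discovery" is just a directory listing under this convention.
        return [f[len("appbus-"):-len(".sock")] for f in os.listdir(SOCK_DIR)
                if f.startswith("appbus-") and f.endswith(".sock")]

It works, but every app has to follow the convention perfectly, clean up after itself, and get permissions right; that housekeeping is exactly what DBus centralizes.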
…systemd very much does use the init system to launch userland and GUI processes. That’s how GNOME works.
Dbus is for interprocess communication. The fact that its primary use case is communication between desktop applications is hardly relevant to its design. I don’t see how GUI frameworks are at all relevant, or how it would be possible to create an interprocess communication mechanism that only worked with one GUI framework without some heroic levels of abstraction violation (which I would not put past Qt, but that’s another story).
I don’t see why having an entire dbus daemon running in the background is better than having a cluttered /tmp or /run directory.
“Bro just use sockets lol” completely misses the point. When you decide you want message based IPC, you need to then design and implement:
Message formatting
Service addressing
Data marshalling
Subscriptions and publishing
Method calling, marshalling of arguments and responses
Broadcast and 1:1 messaging
And before you know it you’ve reimplemented dbus, but your solution is undocumented, full of bugs, has no library, no introspection, no debugging tools, can only be used from one language, and in general is most likely pure and complete garbage.
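To make that concrete, here’s a minimal sketch (my own toy example, stdlib Python only) of just the framing and marshalling items from that list, a length-prefixed JSON protocol over a socket; everything else on the list is still left to do:

    import json, socket, struct

    def send_msg(sock, obj):
        # Ad-hoc framing: 4-byte big-endian length prefix, then a JSON body.
        body = json.dumps(obj).encode()
        sock.sendall(struct.pack(">I", len(body)) + body)

    def recv_msg(sock):
        (length,) = struct.unpack(">I", _read_exactly(sock, 4))
        return json.loads(_read_exactly(sock, length))

    def _read_exactly(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    # A "method call" is then just a dict shape you both have to agree on, e.g.:
    # send_msg(conn, {"method": "Ping", "args": []})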
Well said. There are so many details to making code work that can so often be avoided by using the right tooling. OP said it was harder to get started, which implies they did not handle those details and ended up with code that isn’t actually robust to all kinds of edge cases. Maybe they don’t need it, but they probably do.
They’re Unix sockets, dude, they’re file paths in /run
Data marshalling
Still have to do that with dbus, also that’s the same thing as message formatting
Pubsub
Again, sockets. One application binds and many can connect (how often do you really need more than one application to respond to a method call? That’s a valid reason to use dbus in lieu of sockets, but how often do you need it?)
Method calling, marshalling of arguments and responses
They’re called “unix doors”, and that’s the third time you’ve said marshalling. As for that, language agnostic data marshalling is kind of a solved problem. I’m personally a fan of msgpack but JSON works too if you want to maximize compatibility. Or XML if you really want to piss off the people who interact with your API.
Broadcast and 1:1 messaging
Sockets and doors can only do 1:1, and that’s true enough, but it occurs to me that 99% of use cases don’t need that and thus don’t need dbus. dbus can still be used for those cases, but less load on dbus daemon = less load on system. Also you said that already with pubsub.
As for that blob at the bottom, again, who said anything about there not being a language agnostic library? It’d be a lot of work to make one, sure, but that doesn’t mean it’s impossible. Besides, most of the work has been done for you in the form of language agnostic marshalling libraries which as you said are like 50% of the problem. The rest is just syscalls and minor protocol standardization (how to reference FDs passed through the door in the msgpack data etc.)
And what I’ve just described isn’t a reimplementation of dbus without any of the good parts, it’s a reimplementation of dbus on top of the kernel instead of on top of a daemon that doesn’t need to be there.
This reminds me of Qt’s signal/slot system. I.e. instead of calling functions directly, you just emit a signal, and then any number of functions with connected slots may receive it.
Lots of similar systems exist in other frameworks too, I’m sure.
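For anyone who hasn’t used it, a tiny sketch of the idea (assuming PyQt5 is installed; the classes here are made up for the example):

    import sys
    from PyQt5.QtCore import QCoreApplication, QObject, pyqtSignal

    app = QCoreApplication(sys.argv)      # not needed for direct connections, but harmless

    class Sender(QObject):
        message = pyqtSignal(str)         # the signal; any number of slots can connect

    def on_message(text):                 # a plain function works as a slot
        print("received:", text)

    sender = Sender()
    sender.message.connect(on_message)            # subscribe
    sender.message.emit("hello from the sender")  # emit: every connected slot is called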
It occurs to me that sendmsg() is already kind of a standard, and the problem of drop in replacements could be solved by just making the replacement bind to the same file path and emulate the same protocol, and the problem of automatically starting the daemon could be handled by a systemd socket (or even inetd if you wanna go old school). The only advantage that I can see dbus really having over Unix sockets is allowing multiple programs to respond to the same message, which is a definite advantage but AFAIK not many things take advantage of that.
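For reference, the socket-activation side is pretty small too. A rough sketch of how a daemon could pick up a systemd-passed socket in Python, going by systemd’s documented LISTEN_FDS/LISTEN_PID convention (passed fds start at 3), and assuming the unit declared a Unix stream socket:

    import os, socket

    LISTEN_FDS_START = 3   # first fd systemd hands over (SD_LISTEN_FDS_START)

    def activated_sockets():
        # Only trust the environment if it was addressed to this process.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(socket.AF_UNIX, socket.SOCK_STREAM,
                              fileno=LISTEN_FDS_START + i)
                for i in range(count)]

    sockets = activated_sockets()
    if not sockets:
        pass  # not socket-activated: fall back to binding the socket ourselves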
I remember in 1995-ish or something when I used the internet for the first time using the Netscape browser… And I was asking a friend if he had tried all the web sites yet. Just got a weird look back… :) I didn’t know what the internet was back then at first.
There are portals: docs.flatpak.org/en/…/desktop-integration.html#po… . They allow secure access to many features. Also, any flatpak app still has access to a private app-specific filesystem, just not to the host.
Doesn’t work for all applications, but for many, sandboxing is possible without a loss of features.
No filesystem access for a flatpak app just means it can’t read host system files on its own, without user permission. You can still give it files or directories of files through the file explorer for the app to work with; it’s just much safer since it can otherwise only view files in its sandbox.
[…] aren’t there some folks who want flatpak/snap/appimage to basically replace traditional package managers?
There might be people who think that, but that isn’t realistic. Flatpak is a package manager for user facing apps, mostly gui apps.
The core system apps will still be installed by a system package manager, e.g. rpm-ostree on immutable Fedora or transactional-update/zypper on openSUSE MicroOS.
Snap can do both system apps and user-facing apps, and a fully snap-based Ubuntu might come in the future.
But this won’t force people to use them. Traditional package managers will keep existing for system apps, and maintainers will probably keep their GUI packages in the repos.
There’s Obfuscate, an image redactor, and Metadata Cleaner, which is self-descriptive. Both work properly without any filesystem access at all, because they use the file picker portal to ask the user for the files to be processed.
It’s not just a percentage thing. Going from 1 person yesterday to 2 people today is a 100% increase, but not much of a surge, at least in terms of newsworthiness. Going from 6% to 10% sounds more newsworthy than going from 1% to 2%, despite the latter being a much larger percentage increase.
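(A quick sketch of the arithmetic, for anyone who wants to check the relative vs. absolute distinction:)

    def pct_change(old, new):
        return (new - old) / old * 100

    print(pct_change(1, 2))    # 100.0 -> huge relative jump, tiny absolute change (+1)
    print(pct_change(6, 10))   # ~66.7 -> smaller relative jump, bigger absolute change (+4 points)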
That’s why we’re talking about relative percentages.
In your example we would need to know how many trees existed on your road/city before. If there were fewer than 3 or 4 trees in your city before this, saying there was a surge is likely fine.
I gave you that information: I said “from 1 to 2” and added the context of “a tree” (singular).
My terribly made point is that, although technically correct when talking about relative increase, it’s dumb as fuck to say trees “surged in population” after adding just one more on one street. It’s a drop in the ocean.
I feel like the term “surge” reflects the final total relative to what its maximum could be, as well as the relative increase. But obviously language is regional and up for interpretation.
If you meant OnlyOffice, then I think it promises better compatibility with MS Office stuff, and its interface is also closer to it compared to LibreOffice.
Collabora is a company; they funded some work on OnlyOffice, which is a FOSS office suite like LibreOffice. I think they also worked on making it web-hostable like Google Docs (through Nextcloud?).
Edit: Apparently now there’s also collabora office suite?
OnlyOffice and LibreOffice are both very good. The former promises better compatibility with MS Office files and has an easier interface imo. LibreOffice seems way more featureful.
As for why fewer distros have OnlyOffice in their repositories, maybe because it’s relatively new? Anyway, it’s available through Flatpak and that’s how I use it. I haven’t tried the Collabora online stuff.
Is AbiWord FOSS?
It is the most reasonable editor/word processor I have found; LO gives me a headache looking at 1000 menus/items.
The GTK2 version is stable as a rock, despite some bad rap it got in the last few years.
You can look up beer recipes and buy equipment and ingredients from it, though. And use web-based or spreadsheet calculators on it to do beer-related calculations.
That beer is also not free, but assuming you make beer for a long time the price per pint (half litre to split the difference between UK and US pints) tends toward about 20c (though highly hopped beers like hazy pale ale can get towards a dollar a pint) which is pretty cheap
But can it run the proprietary software used in industry? From Excel to Photoshop, if you are in a collaborative professional environment, you can’t run away from those, and don’t tell me you can use the alternatives on Linux, because no, you can’t. This is not Linux’s fault, but it’s still an issue you can’t handwave away.
I love linux, but you can’t expect people to adopt it just because it’s objectively better than windows.
Microsoft has built a proprietary moat around their operating system. The reason why it’s hard to switch from Windows is by corporate design. A mix of early adoption, network effects, and just plain cold hard cash makes them dominate the operating system market. Of course it’s infeasible for your 60yo coworker to switch; but KDE presents an alternate reality, an opportunity, for people fed up with big tech’s bullshit. Yes, figure out how to run and use alternatives you fucking nut. Way to go disparaging countless volunteer hours spent on open source projects so that people like me can switch to linux.
Comments like these make me irrationally angry. Why complain about open source software and give bad PR? It’s open source; contribute.
Read my other replies. 1 and 2 don’t really work; the performance of using Wine, or the alternatives, is just not there. If you do amateur work, maybe that’s fine, but for professional collaborative work, good luck using FreeCAD instead of AutoCAD.
Personally, I use 3 and 4, but you have to understand that the regular user is not going to go through that much hassle to set up a virtual machine.
@desconectado@glibg10b Wine exists... And that's all I have to say. There is a good installer in Lutris for Creative Cloud that works pretty well if you own it. And if you have an NVIDIA graphics card, it works even better, almost like on Windows. It's not 1:1 but we're getting close. For Excel you have Wine again, or a great free alternative is WPS, or SoftMaker if you want to buy one.
I wish Wine worked well enough to use Excel. We are not talking about adding up numbers in a cell; once you include macros, or a reference manager in Word, Wine is not good enough. The same can be said about proprietary software like AutoCAD, or software used to control equipment. Also, good luck convincing a regular user to get familiar with Wine.
WPS is great for simple files. Again, not good enough for complex files, especially in a corporate collaboration environment. I have lost count of the number of PPT files that didn’t display well when I used WPS.
Every other year I try all the alternatives you mention, hoping they got better, and I always come back to use a dual boot or a virtual machine, which is not a thing your regular user wants to do.
You just gotta make an effort. The ones who are too lazy will never be free of Microsoft’s clutches. Which probably just means pretty much everyone will stick to Windows.
It depends on your industry. I’m in an agile development team, working in AWS in Java. I’m not a dev, so my work is in spreadsheets, word processor documents, and web utilities like Azure DevOps.
All that is platform independent, though we have to work on the organisation’s computers, so we work in the office on Windows PCs or from home on whatever, remoted into a Windows machine or VM.
The devs work in VMs which are variously windows or GNU/Linux depending on what the person’s previous project was.
Work use. There are hardware requirements (XRD machines, potentiostats, CNC machining) and software requirements (3D design). My workshop asks for files in Autodesk Inventor format; if I send them in any other format, they just won’t fabricate my pieces, and I completely understand. Who am I to change the workflow of an entire department just because I refuse to use Inventor (which is provided at work)?
Meh, I had a dual-boot machine ages ago. It’s still here collecting dust. Basically I only switched to Linux for downtime, movies, and study; for most day-to-day tasks, from engineering software to anything I considered important enough that I didn’t want the results hacked or broken, I would use Windows.
I think of modern machines kind of like a hammer. These days almost nobody actually remembers the guy who made the first hammer, or who discovered fire, but there’s a price tag for the bow, the paper and the hammer, not so much the making of the hammer, because the actual skill involved or required to learn about it has become challenged if not cheapened to the degree that there are now multiple paths to obtain or create a hammer, yet the benchmark quality of the hammer as well as the process for creation itself as a whole is now more of an authority than the actual original statue or monolith of “hammer man” himself.
This is why I think the many flavours of Ubuntu including the many esoteric Linux distros are still interesting but still lack the diversity of use and specialization. The fact that whole blockchains are built for XYZ while sitting around pumped then dumped to trading at cents with no use goes to show how cloud computing systems and lower level computing is still very disconnected and becoming further thrown aside to uphold ponzi schemes.
I’ll give you an example, more money is wasted on onlyfans per year than for people trying to use system XYZ for solving problem A, or curing cancer. Consider that to be one of the “good” reasons many men and women are so misogynistic, even without looking down on sex workers.
Look. Everything is like a hammer, in terms of specialization. From Linux distros to gender roles, if you want to understand the world, just look at the hammer. We live in the Hammer Age. It is hammer time.
Nothing. Linus doesn’t personally do coding on the kernel, he has a team who do that and he oversees it and makes the hard decisions.
There are others who will take his place and the work will continue.
If somehow the entire kernel team shut down, Google, Samsung or some other large corporation would take it over and continue development because at this point many, many, many servers, phones, smart devices, iot, and other appliances rely on the Linux kernel to function.
“Today here at Microsoft we are celebrating the legacy of the late Linus Torvalds by releasing a new kernel, re-written entirely in Golang using Copilot. No GPL code was touched, merely re-written, and we will offer ISOs to the coding community for free! Stay tuned for more updates, as we will be exclusively developing on this kernel going forward! This is a great day for open source!”
Nothing. Linus doesn’t personally do coding on the kernel, he has a team who do that and he oversees it and makes the hard decisions.
Even that is not really the truth. There are dozens and dozens of teams that actually do the development; then there are people who coordinate and maintain certain parts of the kernel, merge in patches, and make decisions. And then there’s Linus, who coordinates these people.
There are others who will take his place and the work will continue.
And most likely Greg K-H will take over the position that Linus has right now. He has been one of the most active maintainers and is probably “the number 2” behind Linus.
Mark Ewing used to wear a red Cornell lacrosse cap, and when he would help in computer labs people would look for the man in the red hat. The company was called Red Hat after Mark, but their logo has been a person in a fedora for a long time.
Fedora is a community continuation of Red Hat Linux, which was discontinued in favor of Red Hat Enterprise Linux. Back when I was starting out Fedora wasn’t a thing, you downloaded Red Hat Linux for free directly from the company or could buy it in a box.
As with every software/product: they have different features.
ZFS is not really hip. It’s pretty old. But also pretty solid. Unfortunately it’s licensed in a way that is maybe incompatible with the GPL, so no one wants to take the risk of trying to get it into Linux. So in the Linux world it is always a third-party-addon. In the BSD or Solaris world though …
btrfs has similar goals to ZFS (more on that soon) but has been developed right inside the kernel all along, so it typically works out of the box. It has a bit of a complicated history with its stability/reliability, from which it still suffers (the history, not the stability). Many/most people run it with zero problems, some will still cite problems they had in the past, and some apparently also still have problems.
bcachefs is also looming around the corner and might tackle problems differently, bringing us all the nice features with fewer bugs (optimism, yay). But it’s an even younger FS than btrfs, so only time will tell.
ext4 is an iteration on ext3 on ext2. So it’s pretty fucking stable and heavily battle tested.
Now why even care? ZFS, btrfs, and bcachefs are filesystems following the COW philosophy (copy on write), meaning you might lose a bit of performance but win on reliability. It also allows easily enabling snapshots, which all three bring you out of the box. So you can basically say “mark the current state of the filesystem with tag/label/whatever ‘x’”, and every subsequent change (since changes are copies) will not touch the old snapshots, allowing you to easily roll back a whole partition. (Of course that takes up space, but only incrementally.)
They also bring native support for different RAID levels, making additional layers like mdadm unnecessary. In the case of ZFS and bcachefs, you also have native encryption, making LUKS obsolete.
For typical desktop use: ext4 is totally fine. Snapshots are extremely convenient if something breaks, since you can basically revert the changes in a single command. They don’t replace a backup strategy, so in the end you should have some data security measures in place anyway.
It likely has an edge. But I think on SSDs the advantage is negligible. Also games have the most performance critical stuff in-memory anyway so the only thing you could optimize is read performance when changing scenes.
But again … practically you can likely ignore the difference for desktop usage (also gaming). The workloads where it matters are typically on servers with high throughput where latencies accumulate quickly.
I remember reading somewhere that btrfs has good performance for gaming because of deduplication. I’m using btrfs, haven’t benchmarked it or anything, but it seems to work fine.
Having tried NTFS, ext4 and btrfs, the difference is not noticeable (though NTFS is buggy on Linux)
Btrfs, I believe, has compression built in, so it’s good for large libraries, but realistically ext4 is the easiest and simplest way to go, so I just use that nowadays.
Perhaps I’m guilty of good luck, but is the trade off of performance for reliability worth it? How often is reliability a problem?
As a different use case altogether, suppose I was setting up a NAS over a couple drives. Does choosing something with COW have anything to do with redundancy?
Maybe my question is, are there applications where zfs/btrfs is more or less appropriate than ext4 or even FAT?
are there applications where zfs/btrfs is more or less appropriate than ext4 or even FAT?
Neither of them likes to deal with very low amounts of free space, so don’t use them in places where that is often a scarcity. ZFS gets really slow when there is almost no free space left, and as for BTRFS, I don’t know about nowadays, but a few years ago filling the partition caused data corruption.
For fileservers, ZFS (and by extension btrfs) has a clear advantage. The main thing is that you can relatively easily extend and section off storage pools. For ext4 you would need LVM to achieve something somewhat similar, but it’s still not as mighty as what ZFS (and btrfs) offer out of the box.
ZFS also has a lot of caching strategies specifically optimized for storage boxes. Means: it will eat your RAM, but become pretty fast. That’s not a trade-off you want on a desktop (or a multi purpose server), since you typically also need RAM for applications running. But on a NAS, that is completely fine. AFAIK TrueNAS defaults to ZFS. Synology uses btrfs by default. Proxmox runs on ZFS.
ZFS cache will mark itself as such, so if the kernel needs more RAM for applications it can just dump some of the ZFS cache and use whatever it needs.
I see lots of threads on homelab where new users are like “HELP MY ZFS IS USING 100% MEMORY” and we have to talk them off that ledge: unused RAM is wasted RAM, ZFS is making sure you’re running fast AF.
In theory. But the way it is implemented in current systems, reserved memory cannot be used by other processes, and those other processes cannot just ask the hog to give up some space. Eventually, the hog gets OOM-killed or the system freezes.
ZFS is not really hip. It’s pretty old. But also pretty solid. Unfortunately it’s licensed in a way that is maybe incompatible with the GPL, so no one wants to take the risk of trying to get it into Linux. So in the Linux world it is always a third-party-addon. In the BSD or Solaris world though …
Also, ZFS has a tendency to have HIGH (really HIGH) hardware/CPU/memory requirements.
It was originally designed for massive storage servers (“zettabyte” file system) rather than personal laptops and desktops. It was before the current convergence trend too, so allocating all of the system resources to the file system was considered very beneficial if it could improve performance.
I didn’t mean it as a criticism of ZFS. It is just how it is, and perhaps there were good reasons for it. Now (especially with the convergence trend) it hurts.
In case of ZFS and bcachefs, you also have native encryption, making LUKS obsolete.
I don’t think that it makes LUKS obsolete. LUKS encrypts the entire partition, but ZFS (and BTRFS too, as far as I know) only encrypts the data and some of the metadata; the rest is kept as it is.
Data that is not encrypted can be modified from the outside (the checksums have to be updated of course), which can mean from a virus on a dual booted OS to an intruder/thief/whatever.
If you have read recently about the LogoFAIL attack, the same could happen by modifying the technical data of a filesystem, but it may be bad enough if they just swap the names of two of your snapshots, if they just want to cause trouble.
Btw COW isn’t necessarily (and isn’t at least for ZFS) a performance trade-off. Data isn’t really copied, new data is simply written elsewhere on the disk (and the old data is not marked as free space).
Ultimately it actually means “the data behaves as though it was copied,” which can be achieved in many ways without actually copying anything.
So let me give an example, and you tell me if I understand. If you change 1MB in the middle of a 1GB file, the filesystem is smart enough to only allocate a new 1MB chunk and update its tables to say “the first 0.5GB lives in the same old place, then 1MB is over here at this new location, and the remaining 0.5GB is still at the old location”?
If that’s how it works, would this over time result in a single file being spread out in different physical blocks all over the place? I assume sequential reads of a file stored contiguously would usually have better performance than random reads of a file stored all over the place, right? Maybe not for modern SSDs…but also fragmentation could become a problem, because now you have a bunch of random 1MB chunks that are free.
I know ZFS encourages regular “scrubs” that I thought just checked for data integrity, but maybe it also takes the opportunity to defrag and re-serialize? I also don’t know if the other filesystems have a similar operation.
Not OP, but yes, that’s pretty much how it works. (ZFS scrubs do not defragment data, however.)
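If it helps, here’s a toy Python model of the extent-remapping idea (purely illustrative, not how any real filesystem lays out its metadata): the file is a list of (file_offset, disk_block, length) extents, and an overwrite writes the new data elsewhere and splits the old extent around it.

    def cow_write(extents, offset, length, new_block):
        updated = []
        for (f_off, blk, ln) in extents:
            f_end = f_off + ln
            if f_end <= offset or f_off >= offset + length:
                updated.append((f_off, blk, ln))                 # untouched extent
                continue
            if f_off < offset:                                   # keep the head of the old extent
                updated.append((f_off, blk, offset - f_off))
            if f_end > offset + length:                          # keep the tail of the old extent
                keep = f_end - (offset + length)
                updated.append((offset + length, blk + (ln - keep), keep))
        updated.append((offset, new_block, length))              # the rewritten range, placed elsewhere
        return sorted(updated)

    # 1 GiB file as one extent; overwrite 1 MiB in the middle (units: MiB)
    print(cow_write([(0, 0, 1024)], offset=512, length=1, new_block=5000))
    # -> [(0, 0, 512), (512, 5000, 1), (513, 513, 511)]

The old blocks aren’t rewritten at all; only the extent table changes, which is also why snapshots are cheap: a snapshot just keeps referencing the old extents.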
Fragmentation isn’t really a problem for several reasons.
Some (most?) COW filesystems have mechanisms to mitigate fragmentation. ZFS, for instance, uses a special allocation strategy to minimize fragmentation and can reallocate data during certain operations like resilvering or rebalancing.
ZFS doesn’t even have a traditional defrag command. Because of its design and the way it handles file storage, a typical defrag process is not applicable or even necessary in the same way it is with other traditional filesystems
Btrfs too handles chunk allocation efficiently and generally doesn’t require defragmentation, and although it does have a defrag command, it’s almost never used by anyone, unless you have a special reason to (e.g. maybe you have a program that is reading raw sectors of a file and needs the data to be contiguous).
Fragmentation is only really an issue for spinning disks. However, that is no longer a concern for most spinning-disk users because:
Most home users who still have spinning disks use it for archival/long term storage/media that rarely changes (eg: photos, movies, other infrequently accessed data), so fragmentation rarely occurs here and even if it does, it’s not a concern.
Power users typically have a DAS or NAS setup where spinning disks are in a RAID config with striping, so the spread of data across multiple sectors actually has an advantage for averaging out read times (so no file is completely stuck in the slow regions of a disk), but also, any performance loss is also generally negated because a single file can typically be read from two or more drives simultaneously, depending on the redundancy config.
Enterprise users also almost always use a RAID (or similar) setup, so the same as above applies. They also use filesystems like ZFS which employs heavy caching mechanisms, typically backed by SSDs/NVMes, so again, fragmentation isn’t really an issue.
Cool, good to know. I’d be interested to learn how they mitigate fragmentation, though. It’s not clear to me how COW could mitigate the copy cost without fragmentation, but I’m certain people smarter than me have been thinking about the problem for my whole life. I know spinning disks have their own set of limitations, but even SSDs perform better on sequential reads than on random reads, so it seems like the preference would still be to not split a file up too much.