I have just started trying to use Linux and I find it very hard to actually recommend it to anyone. And the problem isn't really anything mentioned in the video, it's just that the UX is not great. You have to google so much to get things working, and the answers almost always involve typing some cryptic stuff into the terminal. I am technically minded enough to get by, but Linux ends up feeling more like a hobby to me rather than something I can actually get work done in.
That said, I really like Linux and am gonna stick with it. I just don't see it being widely adopted until it becomes a bit more straightforward.
I have tried quite a few now. Fedora, Mint, Debian - none could detect my wifi card, so I had to do a bunch of googling to try to get them working. I found what driver I needed but was never able to figure out how to actually install it, other than some terminal commands from forums that didn't end up working. I stuck with Endeavour OS because it detected the card without any problems.
I have a keyboard that I configure with an online tool called VIA, which needs access to something called HID. On Windows it just works, but on Endeavour I had to enable something through the terminal.
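The "something" turned out to be a udev rule that gives your user access to the hidraw device. Roughly like this, though the exact file name and rule are from memory and vary by distro, so treat it as a sketch:

# /etc/udev/rules.d/92-via-hidraw.rules (path is an example)
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", MODE="0660", TAG+="uaccess"

# then reload the rules
sudo udevadm control --reload-rules && sudo udevadm trigger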
I have a shared data drive and in order to make it mount when I start the computer I had to go and edit some fstab file?
I couldn't even figure out how to install a dual boot with Fedora and Mint because it asked me about the root, home, swap and boot partitions and didn't explain how to set any of them up or what they did.
I needed a program for work that wasn't in a repository and I had to google how to launch an .sh file because clicking doesn't work, haha. Also through the terminal.
I'm not saying these are crazy insurmountable problems, and Windows definitely has some similar issues; getting my tablet working was so much smoother on Linux, for example. But I've had to learn so much more about how my computer works to actually use Linux, and I'm just not sure the majority of people will have that patience.
I wholeheartedly agree with you regarding the general lack of UX quality and the lack of an introduction for new users.
I have used Linux exclusively for about 5 years now, and whenever a team member at work tries it, I have to give advice about once a day, drawing on cryptic info that has accumulated in my head and that they couldn't find through a 20-minute internet search, to solve an endless stream of tiny issues.
It is an OS that I definitely could never recommend to people like my parents, who are by no means tech illiterate.
Regarding the specific point of launching .sh files:
On KDE Plasma I can double click sh files and a popup shows asking me whether I want to execute the program or edit the file in a text editor.
Oh, that's weird, this one might actually just be user error then, haha. I'll have to try again since I'm also using Plasma.
I actually think it might be better for less tech-literate people in some cases, supposing it's pre-installed or they have someone to set things up for them. If you're just using it to browse the web or write some documents, the general experience is pretty good. It's only when you start trying to do a bit more with it that things get complicated.
I agree Linux can be very difficult, but also easy if you don't have "exotic" needs. If more people were using Linux, especially more non-techies, a lot would change, but we'll get there, just slowly.
To respond to your points:
I initially thought you meant that you had to use commands in order to tinker with the UI - that's my bad!
Wifi cards, drivers, etc. can be a real pain. That's neither Linux's fault nor yours; it's just that no one before you wanted to use that hardware, which is why it wasn't supported yet. Most systems are just plug and play. Compare it to macOS, and you'll find that Linux is easy to install on most systems.
Auto mount is done using fstab, right. You can also auto mount from the file explorer or the disk utility; it always depends on the system. There are a lot of different ways and it's not perfect.
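For reference, the fstab route is usually a single line; something like this (the UUID, mount point and filesystem are just examples):

# /etc/fstab - mount a data partition automatically at boot
UUID=1234-abcd  /mnt/data  ext4  defaults  0  2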
Dual booting is, in my opinion, something for advanced users. I have no idea why anyone would ever suggest it to a newcomer! It's a pain in the ass if you deviate from the standard protocol.
You can double click a shell script (.sh) after making it executable with chmod +x file.sh, or via right click > Properties > execute as program.
I have the same issue with Windows. I’ve been using Linux since I got my first PC. Trying to navigate Windows is a pain in the ass. It’s just old programs somehow put together. When I find some solutions online it’s often opening who knows what via Windows+R or better yet, changing something I have no idea about in regedit.
And even the most basic things are hidden away by many steps.
I feel you, I’m sure a lot of it comes down to familiarity. I just very recently did a fresh reinstall of windows and endeavour in a dual boot. And honestly the Calamares installer is a lot nicer than the windows one. But doing simple things like just writing to a secondary hard drive is a non-issue in windows whereas in Linux it was a whole learning adventure.
But doing simple things like just writing to a secondary hard drive is a non-issue in windows whereas in Linux it was a whole learning adventure.
What do you mean by that? Are you talking about RAID, having some partitions on separate drive or something else? Because if you mean just using secondary drive for files that’s just as easy as on Windows with most distributions.
Or did you mean installing programs to a secondary drive? Yeah… I have no idea how that can be done. From a quick 4-minute search it seems… that it's a problem.
So yeah, I can see a problem here. So many computers have something like 128GB SSD + 1TB HDD.
No, just a secondary hard drive. I use it for both Windows and Linux, so it's NTFS. I was just trying to save a file to it but it said I didn't have access; turns out I needed to specify ntfs-3g in the fstab file before I could write to it.
NTFS is a proprietary FS that works on Linux thanks to great reverse-engineering efforts. To make this a fairer comparison, try accessing an ext4 partition from Windows. Oh, it can't even recognize it. And ext4 is open source, so supporting it wouldn't even require reverse engineering.
That said, have you fully shut down Windows? You generally get write access out of the box nowadays, but only if Windows is fully shut down. And clicking "Shut down" does not properly shut it down unless you disable fast startup.
Another method is to choose a “Restart” in Windows, and then instead of continuing with the restart, choose Linux on bootloader screen after you get there.
I'm mostly just speaking to the process. I can right click and mount the drive without a problem, but there's no way to auto mount it on startup without editing the fstab file and finding the UUID of the drive through the terminal (at least as far as I could tell). All of the functionality is there, which is rather laudable, but the process is unapproachable for a lot of people.
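For anyone curious, the process I ended up with looked roughly like this (the UUID, mount point and options are made up, so adapt them):

# find the UUID of the partition
lsblk -f        # or: sudo blkid /dev/sdb1

# /etc/fstab - auto-mount the NTFS data partition at boot
UUID=0123456789ABCDEF  /mnt/shared  ntfs-3g  defaults,uid=1000,gid=1000  0  0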
Oh, and yeah, I did have to disable some fast startup setting in Windows to get write access; I forgot about that. But yeah, that one's on Windows.
edit: sorry, this was actually pretty irrelevant to what I actually said, which was just about the write access which you pointed out was a windows issue. I got mixed up with my replies.
I'm not sorry that the CIA is using your closed-source software. You mistakenly thought you owned anything because you paid way too much for anyone else to actually control your shit, you ignorant slave.
Edit: You're a bunch of ignorant fuckwads. You can't read shit and know who's on what side because you're sensitive to sarcasm and blatant…never mind. Fuck off.
As with every software/product: they have different features.
ZFS is not really hip. It’s pretty old. But also pretty solid. Unfortunately it’s licensed in a way that is maybe incompatible with the GPL, so no one wants to take the risk of trying to get it into Linux. So in the Linux world it is always a third-party-addon. In the BSD or Solaris world though …
btrfs has similar goals as ZFS (more on that soon) but has been developed right inside the kernel all along, so it typically works out of the box. It has a bit of a complicated history with its stability/reliability, from which it still suffers (the history, not the stability). Many/most people run it with zero problems, some will still cite problems they had in the past, and some apparently still have problems.
bcachefs is also looming around the corner and might tackle problems differently, bringing us all the nice features with fewer bugs (optimism, yay). But it's an even younger FS than btrfs, so only time will tell.
ext4 is an iteration on ext3, which was an iteration on ext2. So it's pretty fucking stable and heavily battle tested.
Now why even care? ZFS, btrfs and bcachefs are filesystems following the COW philosophy (copy on write), meaning you might lose a bit of performance but win on reliability. It also allows easily enabling snapshots, which all three bring you out of the box. So you can basically say "mark the current state of the filesystem with tag/label/whatever 'x'", and every subsequent change (since it is written as a copy) will not touch the old snapshot, allowing you to easily roll back a whole partition. (Of course that takes up space, but only incrementally.)
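To make that concrete, a snapshot and a rollback are each a single command; a rough sketch (pool, dataset and subvolume names are examples):

# ZFS: snapshot a dataset, then roll it back
sudo zfs snapshot tank/home@before-upgrade
sudo zfs rollback tank/home@before-upgrade

# btrfs: snapshots are just subvolumes (assuming a /.snapshots directory exists);
# restore by copying files out of the snapshot or booting into it
sudo btrfs subvolume snapshot -r /home /.snapshots/home-before-upgrade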
They also bring native support for different RAID levels making additional layers like mdadm unnecessary. In case of ZFS and bcachefs, you also have native encryption, making LUKS obsolete.
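As a sketch of what "native" means here (device and pool names are examples):

# btrfs: two-disk mirror for data and metadata, no mdadm needed
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# ZFS: mirrored pool with native encryption, no LUKS needed
sudo zpool create -O encryption=on -O keyformat=passphrase tank mirror /dev/sdb /dev/sdc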
For typical desktop use: ext4 is totally fine. Snapshots are extremely convenient if something breaks, since you can basically revert the changes with a single command. They don't replace a backup strategy, though, so in the end you should have some data security measures in place anyway.
It likely has an edge. But I think on SSDs the advantage is negligible. Also, games keep their most performance-critical stuff in memory anyway, so the only thing you could optimize is read performance when changing scenes.
But again … practically you can likely ignore the difference for desktop usage (also gaming). The workloads where it matters are typically on servers with high throughput where latencies accumulate quickly.
I remember reading somewhere that btrfs has good performance for gaming because of deduplication. I’m using btrfs, haven’t benchmarked it or anything, but it seems to work fine.
Having tried NTFS, ext4 and btrfs, the difference is not noticeable (though NTFS is buggy on Linux)
Btrfs, I believe, has compression built in, so it's good for large libraries, but realistically ext4 is the easiest and simplest option, so I just use that nowadays.
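If you do want it, compression on btrfs is just a mount option; something like this (mount point and compression level are examples):

# compress new writes transparently with zstd
sudo mount -o remount,compress=zstd /data
# or make it permanent by adding compress=zstd to the options field of that line in /etc/fstab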
Perhaps I'm guilty of good luck, but is the trade-off of performance for reliability worth it? How often is reliability a problem?
As a different use case altogether, suppose I was setting up a NAS over a couple drives. Does choosing something with COW have anything to do with redundancy?
Maybe my question is, are there applications where zfs/btrfs is more or less appropriate than ext4 or even FAT?
are there applications where zfs/btrfs is more or less appropriate than ext4 or even FAT?
Neither of them likes to deal with very low amounts of free space, so don't use them where free space is often scarce. ZFS gets really slow when there is almost no free space left, and I don't know about current BTRFS, but a few years ago filling the partition caused data corruption there.
For file servers, ZFS (and by extension btrfs) has a clear advantage. The main thing is that you can relatively easily extend and section off storage pools. For ext4 you would need LVM to achieve something similar, and it's still not as mighty as what ZFS (and btrfs) offer out of the box.
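A sketch of what "extend and section off" looks like in practice (pool names, mount points and devices are examples):

# ZFS: grow a pool by adding another mirrored pair, then carve out a dataset with a quota
sudo zpool add tank mirror /dev/sdd /dev/sde
sudo zfs create -o quota=500G tank/media

# btrfs: add a device to a mounted filesystem and spread existing data onto it
sudo btrfs device add /dev/sdd /mnt/data
sudo btrfs balance start /mnt/data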
ZFS also has a lot of caching strategies specifically optimized for storage boxes. Means: it will eat your RAM, but become pretty fast. That’s not a trade-off you want on a desktop (or a multi purpose server), since you typically also need RAM for applications running. But on a NAS, that is completely fine. AFAIK TrueNAS defaults to ZFS. Synology uses btrfs by default. Proxmox runs on ZFS.
ZFS cache will mark itself as such, so if the kernel needs more RAM for applications it can just dump some of the ZFS cache and use whatever it needs.
I see lots of threads on homelab where new users are like “HELP MY ZFS IS USING 100% MEMORY” and we have to talk them off that ledge: unused RAM is wasted RAM, ZFS is making sure you’re running fast AF.
In theory. But the way it is implemented in current systems, reserved memory cannot be used by other processes, and those other processes cannot just ask the hog to give up some space. Eventually the hog gets OOM-killed or the system freezes.
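In ZFS's case the ARC can at least be capped if it does cause pressure; a sketch (the 8 GiB figure is just an example):

# cap the ZFS ARC at 8 GiB at module load time, via /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592

# or change it on a running system
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max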
ZFS is not really hip. It’s pretty old. But also pretty solid. Unfortunately it’s licensed in a way that is maybe incompatible with the GPL, so no one wants to take the risk of trying to get it into Linux. So in the Linux world it is always a third-party-addon. In the BSD or Solaris world though …
Also, ZFS has a tendency toward HIGH (really HIGH) hardware/CPU/memory requirements.
It was originally designed for massive storage servers (the "zettabyte" file system) rather than personal laptops and desktops. That was before the current convergence trend too, so allocating all of the system's resources to the file system was considered very beneficial if it could improve performance.
I didn't mean it as criticism of ZFS. It is just how it is, and perhaps there were good reasons for it. Now (especially with the convergence trend) it hurts.
In case of ZFS and bcachefs, you also have native encryption, making LUKS obsolete.
I don't think it makes LUKS obsolete. LUKS encrypts the entire partition, but ZFS (and BTRFS too, as far as I know) only encrypts the data and some of the metadata; the rest is kept as it is.
Data that is not encrypted can be modified from the outside (the checksums have to be updated, of course), by anything from a virus on a dual-booted OS to an intruder/thief/whatever.
If you have read recently about the LogoFAIL attack, the same could happen by modifying the technical data of a filesystem, and it may be bad enough if they just swap the names of two of your snapshots to cause trouble.
Btw, COW isn't necessarily (and at least for ZFS isn't) a performance trade-off. Data isn't really copied; new data is simply written elsewhere on the disk (and the old data is just not marked as free space).
Ultimately it actually means "the data behaves as though it was copied," which can be achieved in many ways without actually copying anything.
So let me give an example, and you tell me if I understand. If you change 1MB in the middle of a 1GB file, the filesystem is smart enough to only allocate a new 1MB chunk and update its tables to say “the first 0.5GB lives in the same old place, then 1MB is over here at this new location, and the remaining 0.5GB is still at the old location”?
If that’s how it works, would this over time result in a single file being spread out in different physical blocks all over the place? I assume sequential reads of a file stored contiguously would usually have better performance than random reads of a file stored all over the place, right? Maybe not for modern SSDs…but also fragmentation could become a problem, because now you have a bunch of random 1MB chunks that are free.
I know ZFS encourages regular “scrubs” that I thought just checked for data integrity, but maybe it also takes the opportunity to defrag and re-serialize? I also don’t know if the other filesystems have a similar operation.
Not OP, but yes, that's pretty much how it works. (ZFS scrubs do not defragment data, however.)
Fragmentation isn’t really a problem for several reasons.
Some (most?) COW filesystems have mechanisms to mitigate fragmentation. ZFS, for instance, uses a special allocation strategy to minimize fragmentation and can reallocate data during certain operations like resilvering or rebalancing.
ZFS doesn't even have a traditional defrag command. Because of its design and the way it handles file storage, a typical defrag process is not applicable or even necessary in the same way it is with other traditional filesystems.
Btrfs too handles chunk allocation efficiently and generally doesn't require defragmentation, and although it does have a defrag command, it's almost never used by anyone unless you have a special reason to (e.g. maybe you have a program that reads raw sectors of a file and needs the data to be contiguous).
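For completeness, that rarely-needed command looks roughly like this (path and compression option are examples):

# defragment a directory tree, recompressing it with zstd along the way
sudo btrfs filesystem defragment -r -v -czstd /mnt/data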
Fragmentation is only really an issue for spinning disks; however, it is no longer a concern for most spinning-disk users because:
Most home users who still have spinning disks use it for archival/long term storage/media that rarely changes (eg: photos, movies, other infrequently accessed data), so fragmentation rarely occurs here and even if it does, it’s not a concern.
Power users typically have a DAS or NAS setup where spinning disks are in a RAID config with striping, so the spread of data across multiple sectors actually has an advantage for averaging out read times (no file is completely stuck in the slow regions of a disk). Any performance loss is also generally negated because a single file can typically be read from two or more drives simultaneously, depending on the redundancy config.
Enterprise users also almost always use a RAID (or similar) setup, so the same as above applies. They also use filesystems like ZFS which employs heavy caching mechanisms, typically backed by SSDs/NVMes, so again, fragmentation isn’t really an issue.
Cool, good to know. I’d be interested to learn how they mitigate fragmentation, though. It’s not clear to me how COW could mitigate the copy cost without fragmentation, but I’m certain people smarter than me have been thinking about the problem for my whole life. I know spinning disks have their own set of limitations, but even SSDs perform better on sequential reads over random reads, so it seems like the preference would still be to not split a file up too much.
My laptop is a cast-off from a member of my staff who said it was too slow - per dmidecode, Product Name: HP 255 G6 Notebook PC. It now runs Arch (actually).
It previously slogged along with Win 10, Outlook n O365 n that. Now it does Libre Office, Evolution and much more. I use KDE, which isn’t known for a light touch on the resources. I also do light CAD and other stuff.
My office desktop is even older - it was a customer cast-off, due to be skipped around six years ago. I did slap an SSD into it and I think I upped the RAM to 8GB. It's a (ssh, dmidecode) Product Name: Lenovo H330, and the BIOS is dated 2012! I run two 23" screens off it and again, it runs Arch (actually) and KDE for pretty stuff. I run containers on it - at the moment a test Vikunja instance. I have apache, nginx and caddy fronting various experiments, backed by postgres and mariadb.
Both devices are “domain joined” and I auth to Exchange via Kerberos, via Samba winbind. File access (drive letters for the Windows mindset) is currently via autofs. I have a project on at a member of staff’s request to switch from Windows to Linux. I’m going to take my time and get it right. My current thinking is the Fedora KDE spin and this: Closed In Directory
If you have an old laptop or PC why not give it a go? You could start here: www.linuxmint.com Another option is to install something like Virtual Box on your existing machine and try out running it as a virtual machine or two. 2 CPUs, 4GB of RAM and 20GB of virty disc will work for any Linux distro as a VM to start off with. There’s also VMware Workstation - there’s a free version. Do discover the joy of snapshots/checkpoints which allow you to roll back failed changes!
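If you prefer the command line, VirtualBox's VBoxManage can set up a VM roughly matching those specs (the name, OS type and paths are just examples; the GUI does exactly the same job):

VBoxManage createvm --name linux-test --ostype Linux_64 --register
VBoxManage modifyvm linux-test --cpus 2 --memory 4096 --vram 64
VBoxManage createmedium disk --filename linux-test.vdi --size 20480
VBoxManage storagectl linux-test --name SATA --add sata
VBoxManage storageattach linux-test --storagectl SATA --port 0 --device 0 --type hdd --medium linux-test.vdi
# attach the install ISO the same way with --type dvddrive, or just use the GUI
# snapshots are one command each
VBoxManage snapshot linux-test take clean-install
VBoxManage snapshot linux-test restore clean-install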
25 years ago the options were rather more limited. I started off dual booting Windows and Linux but I don’t really recommend that these days, unless you want to run a gaming rig with both. Few people can afford two lots of top end hardware! I left Windows behind completely around 2004 or 5.
And it still kinda breaks my brain when I look at an expression. At first glance it looks like utter gibberish, but when I say to myself, "okay, what's this doing?" and go through it character by character, it turns into something I can comprehend.
I live and die by ssh and scp. Sometimes rsync for larger moves.
Once you've got ssh for terminals (it used to be X sessions too!), then port forwarding and SOCKS proxies, add in scp for file moves, and layer in sshfs for whole filesystem mounts, it's a potent combo for remote work and network tunnels. Such a phenomenal toolkit.
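A few one-liners that cover most of that toolkit (hosts, ports and paths are placeholders):

ssh -L 8080:localhost:80 user@server        # forward local port 8080 to port 80 on the server
ssh -D 1080 user@server                     # SOCKS proxy through the server
scp report.pdf user@server:/tmp/            # copy a file across
rsync -avz ./project/ user@server:project/  # sync a directory, only sending changes
sshfs user@server:/srv/data /mnt/data       # mount a remote directory locally (fusermount -u /mnt/data to unmount)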
SSHFS is shipped by all major Linux distributions and has been in production use across a wide range of systems for many years. However, at present SSHFS does not have any active, regular contributors, and there are a number of known issues (see the bugtracker).
The current maintainer continues to apply pull requests and makes regular releases, but unfortunately has no capacity to do any development beyond addressing high-impact issues.
When reporting bugs, please understand that unless you are including a pull request or are reporting a critical issue, you will probably not get a response.
I've thought about it, but right now everything works exactly the way I need it, and the only complaint I have is the occasional pop-ups from MS trying to get me to upgrade to Win 11 or switch my browser. My main uses for my devices are games, and I just started back to school, so MS Office is nice to have. So it's hard to justify putting in the effort to change things now, especially when I know how to use MS products very well, particularly for modding games.
Yeah, I feel ya. I still have Windows on dual boot for certain things and it's been a struggle at times, but I gotta say I dread the times I need to boot Windows! So much slower and more annoying.
If I need to rename a file, yeah, I can do that by right-clicking it in the file explorer and selecting 'rename' from the menu. Two files? Painful but doable. Three files? Oh hell no, I'm switching to my always-open-in-the-background terminal window and writing a quick c=1; for f in *.jpeg; do mv "$f" "$c.jpeg"; c=$((c + 1)); done and it takes half the time of clicking things through with the mouse.
And yes, I wrote that shell command off the top of my head on the first try and without edits.
I’m sorry, I’m too old to learn emacs over my perfect knowledge of Midnight Commander.
The point of this topic was to explain why we use the terminal, and emacs is kind of a terminal on steroids: there are like 1000 key bindings and the mouse is totally optional, so you are proving the point even further.
The Thunar bulk renamer is relatively good, but recently I wanted to name images based on their capture date. Probably very tedious without the right GUI tool, while it's just one line using exiftool in the terminal. (I don't know it off the top of my head.)
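(For the record, it's roughly the line below; the date format and the %%-c/%%e bits are taken from the exiftool docs, so double-check it on copies first.)

# rename images to their EXIF capture date, e.g. 2024-01-31_142501.jpg
exiftool '-FileName<DateTimeOriginal' -d '%Y-%m-%d_%H%M%S%%-c.%%e' *.jpg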
Similarly, I just extracted the audio only from a video using ffmpeg in like 10s. ffmpeg -i video.mkv -c:a copy out.mka
I do a bit of programming. Git documentation is all about terminal commands. There are graphical front ends, but I'd have to learn how to use them. I use the terminal for package management too, for the same reasons.
I'd say it's similar with any source control software. It's the same with me and Fossil. (And, granted, there are fewer plugins that support Fossil in IDEs; the one in Visual Studio Code/Codium does OK.)