If you’re looking for something that won’t break, Debian and openSUSE Tumbleweed are two good options. Both offer the Plasma desktop, though openSUSE may have the easier installer for some. Note that on openSUSE some video codecs aren’t (officially) available, so if playing “differently acquired” media is a concern, Debian would probably be easier.
If you wanna try Arch, consider EndeavourOS. It simplifies the installation process significantly, though it doesn’t do much to help you maintain the system afterwards; that’s on you (see the sketch at the end of this comment). Avoid Manjaro like the plague.
2. The Arch Wiki is universally considered the best resource. 99% of what you’ll find there will work on any Linux distro, so don’t worry about the name. Beyond that, your favourite search engine is your best bet.
3. No clue what FALGCS means, but wallhaven.cc is a great place for wallpapers.
Edit: seems the Manjarno site is down. Shame, it was a genuinely useful site for times like this.
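Since “maintenance is on you” can sound scary, here’s a minimal sketch of the routine chores on an Arch-family system using plain pacman (nothing EndeavourOS-specific, and the orphan-removal line is optional):

```bash
# Full system upgrade. On Arch-family distros, always upgrade everything
# at once; partial upgrades are unsupported and a classic way to break things.
sudo pacman -Syu

# Remove orphaned packages (dependencies nothing needs anymore).
# pacman -Qtdq lists them and exits non-zero when there are none.
sudo pacman -Rns $(pacman -Qtdq)
```

On top of that: skim the Arch news feed before big upgrades and merge any *.pacnew config files.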
I’ve started using Btrfs on my laptop with openSUSE and on my Steam Deck. It does two things I’m interested in. On openSUSE it takes a snapshot before every system update, so if anything goes wrong I can easily roll back.
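For anyone curious, on openSUSE that rollback is a couple of snapper commands (the snapshot number 42 below is just a placeholder; pick one from the list):

```bash
# List existing snapshots; openSUSE creates pre/post pairs around zypper runs
sudo snapper list

# Roll the root filesystem back to snapshot 42, then boot into it
sudo snapper rollback 42
sudo reboot
```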
On the Steam Deck I love the deduplication. It’s really great for the ton of Windows games that each need their own little “Windows” environment, which amounts to a GB or two per game. With Btrfs I only use that space once.
Can you elaborate on deduplication? Is this a feature you set up, or does it sort of work out of the box? This is a new concept to me, but it sounds incredibly useful, especially in that scenario.
I used a script that did everything for me, so I’m not 100% sure. But as far as I know, you enable the feature at mount time, and then every time you copy something only a reference is copied until you actually change the new or old file.
For everything else, a cron job runs every week or so to search for unnecessary duplicates.
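I’m not sure which tool your script uses, but duperemove is one common choice for that kind of scheduled dedup pass. A sketch (the Steam compatdata path is just illustrative):

```bash
# Scan recursively (-r) and actually submit dedup requests (-d) for
# identical extents; suitable for a weekly cron job or systemd timer.
duperemove -dr /home/deck/.local/share/Steam/steamapps/compatdata
```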
And if a copied file is changed, Btrfs only stores the difference instead of two complete files. E.g. if the 1 GB file1 is copied to file2, they take 1 GB total; if 100 MB is appended to file2, the total storage usage is 1.1 GB.
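You can try this yourself with a copy-on-write copy; compsize (a separate tool) shows how much of the data is shared:

```bash
# Reflink copy: file2 initially shares all of file1's extents (no new data)
cp --reflink=always file1 file2

# Appending 100 MB to file2 only allocates extents for the new data
dd if=/dev/urandom bs=1M count=100 >> file2

# Show shared vs. exclusive on-disk usage for both files
sudo compsize file1 file2
```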
CLIs are likely not specifically the target. I suspect the CLI is just the “low-hanging fruit”: the core set of software that needs to be supported before you can build up to fully functional GUI apps.
but LXDE should effectively be considered “end of life” – the developer is in the process of porting everything over to Qt and working on LXQt releases
with that, for a full DE – Xfce if you like GTK, LXQt if you like Qt
or a minimal setup with a WM plus utilities (like Openbox or one of the large selection of tiling window managers)
along those lines though, there are still a LOT of lightweight Linux distros to choose from
Crunchbangplusplus or BunsenLabs – successors to Crunchbang Linux – usually just Openbox WM and a few utils rather than a full DE
plain old Debian stable – proprietary drivers are now part of the installer, no more hunting for a special ISO – can choose your DE or WM during install
Did anyone really think that making UEFI systems the equivalent of a mini OS was a good idea? Or having them be accessible to the proper OS? Was there really no pushback, when UEFI was being standardized, to say “images that an OS can write to are not critical to initializing hardware functionality, don’t include that”? Was that question not asked for every single piece of functionality in the standard?
It breaks the cardinal rule of executing privileged code: only code that absolutely needs to be privileged should be privileged.
If they really wanted to have their logo in the boot screen, why couldn’t they just provide the image to the OS and request through some API that it be displayed? The UEFI firmware and the OS do a ton of back-and-forth communication at boot, so why can’t this be a part of that? (It’s not because then the OS, and by extension the user, could much more easily refuse to display what is essentially an ad for the hardware vendor, right? They’d never put “features” in privileged code just to stop the user from doing anything about it… right?)
Did anyone really think that making UEFI systems the equivalent of a mini OS was a good idea
UEFI and Secure Boot were pushed forcibly by MS. That’s why FAT32 is the ESP filesystem.
If I had to guess, a brief was drafted at MS to improve on BIOS, which is pretty shit, it has to be said. It was probably engineering-led and not an embrace-extend-extinguish thing. A budget, a dev team, and a crack team of lawyers would have been whistled up and given a couple of years to deliver. The other usual suspects (Intel and co) would be strong-armed into taking whatever was produced, and off we trot. No doubt the best and brightest were employed, but they only had a couple of years and they were only a few people.
UEFI and its flaws are testament to the sheer arrogance of a huge company that thinks it can put a man on the moon with a Clapham omnibus style budget and approach. Management identify a snag and say “fiat” (let it be). Well it was and is and it has a few problems.
The fundamental problem with UEFI is that it was largely designed by one team. The Wikipedia page (en.wikipedia.org/wiki/UEFI) is hilarious in describing it as open. Yes, it is open… per se… provided you decide that FAT32 (patent-encumbered) is a suitable filesystem for the foundations of an open standard.
You may be surprised to learn that the FAT patents didn’t all run out until 2013. UEFI had been around for seven years by then, and Microsoft was pursuing patent enforcement actions against TomTom during that period.
Sure, they’re expired now, but they weren’t then, and it was supposed to be an open standard at the time.
FWUPD/LVFS (the Linux Vendor Firmware Service) has made it remarkably easy to update a lot of system firmware and device/peripheral firmware under Linux. Before widespread LVFS support, updating device firmware was often a daunting chore for Linux users: booting into a Microsoft Windows installation, resorting to FreeDOS for system BIOS updates in the olden days, or just going without firmware updates entirely.
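For anyone who hasn’t tried it, the whole flow is three fwupdmgr commands:

```bash
# Refresh update metadata from LVFS
fwupdmgr refresh

# List devices with pending firmware updates
fwupdmgr get-updates

# Download and apply them (some devices need a reboot to finish)
fwupdmgr update
```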
My ad blocker turned off at some point and I loaded that link without it… holy shit was that obnoxious. The ads moved things around, blocked the article, autoplayed videos… and that’s what we’re supposed to be appreciative of and turn off our ad blockers for? 🤢
This is what a ton of links end up being. I don’t know how anyone thinks that’s an even remotely acceptable user experience. Stuff just popping up and covering the actual content; the actual content spread out so you have to hunt for where the next paragraph is. The current state of most websites is just absolute shit. I end up going in and turning off JavaScript, since that seems to be the herpes behind most of this. I’m not sure who is making money off this advertising or how, but it needs to end (without users having to resort to ad blockers or disabling JavaScript). Ads have just absolutely fucked most web browsing, and that’s sad.
I 110% agree with you on all points. I have been complaining and telling people about privacy till I am blue in the face, and it just seems like the general public doesn’t care. They just do not care that every second of their life is under surveillance and the data is sold to the highest bidder.
I said this about 2 weeks ago. I was trying to support Phoronix and was browsing on mobile. The site was unusable. They need to get control of that. I have websites and I refuse to ever run ads. Then again, I am not in it for the money.
On the other hand, the number of websites using ChatGPT to generate content, images, and links solely to push ads for profit is growing every day. It doesn’t take any effort, either: you can pay a monthly fee to have a service auto-update your site, and the ads are automatic too.
I would do it by manually splitting the data into sets and writing a script to back up each of those sets (see the sketch below). Then you only have to figure out the split once.
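A minimal sketch of that idea; the paths and drive mount points are made up, and the set definitions are the one-time manual step:

```bash
#!/bin/sh
# One rsync invocation per destination drive.
# -a preserves permissions, timestamps, symlinks, etc.
rsync -a /data/photos/ /mnt/drive1/photos/
rsync -a /data/videos/ /mnt/drive2/videos/
rsync -a /data/music /data/documents /mnt/drive3/
```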
I wonder if rsync has an option to do what you are asking for?
It also sounds like the kind of thing the old tape backup software would do. Maybe look into something that can pretend the drives are tapes.
Made infinitely more entertaining by the fact that they’re all more or less the same under the hood, with minor differences packed on top of the same Linux kernel.
A typical Linux distro, especially a lightweight and simpler one like Arch, will of course be better than a bloated OS like Pop or Windows. The only problem with Linux distros might be the choice of tools: X and AMD will work much better overall than Wayland and Nvidia.
It’s just that many people may already have an Nvidia GPU before deciding to use Linux, and some people just prefer Wayland over X for literally everything else.
My PC with Wayland + Nvidia has so many problems with gaming (especially flickering and performance), while my laptop with Wayland + integrated Intel graphics has no problems at all, even in games, that I wonder whether Nvidia + Wayland still really sucks ass or my GPU is just broken. Currently there’s a bug where frames get ‘switched’ somehow: instead of Frame 1, Frame 2, … Frame n, it’s Frame 1, Frame 3, Frame 2, Frame 5, Frame 6, Frame 4, etc.
I expect it to be fixed by an Nvidia driver update eventually, but there are always bugs like this.
If my experience is any indicator, your GPU is fine :(. Any chance you’re using mixed display scaling? I’ve got an RTX 3050 eGPU for my Plasma/Wayland laptop, and for the most part it actually works fairly smoothly (albeit more slowly than on Windows), but if I try to run a game at a higher resolution than my monitor’s (used by Plasma for mixed scaling) I get constant flashing/frame shifting; when I drop it down to the native 1080p it starts working again.
As a side note, X and eGPUs do not play well together, but Wayland is literally plug and play after installing the drivers – I can even hot-plug/unplug as long as nothing’s using the GPU!
That frame issue is because Nvidia uses “explicit sync” while AMD/Intel use “implicit sync”, and XWayland is built to only support implicit sync for now (Nvidia is trying to get that changed). And since most games right now run under XWayland… along with a ton of apps, of course.
Until then, that issue sadly won’t be resolved. It’s what finally pushed me to get an AMD card, since the issue has been open for over a year with a ton of back and forth.
Not sure about the other ones, but I use Btrfs because of subvolumes and backups.
Subvolumes are like special folders inside your partition that mount separately. E.g. in my btrfs partition, I have a @home subvolume that is mounted at /home.
This makes it easier to choose what you are backing up, because you can say, “just copy everything in @home to the backup location” (see the sketch below).
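A minimal sketch of how that’s set up, assuming the btrfs top level is mounted at /mnt and /dev/sdXn is a placeholder for the actual partition:

```bash
# Create the subvolume in the btrfs top level
sudo btrfs subvolume create /mnt/@home

# Mount just that subvolume at /home; the equivalent fstab line would be:
#   UUID=<fs-uuid>  /home  btrfs  subvol=@home  0  0
sudo mount -o subvol=@home /dev/sdXn /home
```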
If I got any of that wrong, feel free to correct me!
I mean, is it actually easier to copy everything in @home than it is to copy everything in /home? Btrfs has always kinda felt like it’s a bunch of extra steps to solve problems I don’t have.
The real power of btrfs for me is incremental backups: you can take a snapshot of your home subvolume and send it to a backup device, then take a second snapshot a week later and send just the differences between them. I do my weekly backups like this. You can keep many snapshots to roll back to if need be, since only the differences between snapshots take up space. This is the tutorial that got me started.
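The core of it is btrfs send/receive with a parent snapshot; roughly this (path names are placeholders, and the destination has to be btrfs too for receive to work):

```bash
# Week 1: read-only snapshot (required for send), sent in full
sudo btrfs subvolume snapshot -r /home /home/.snapshots/week1
sudo btrfs send /home/.snapshots/week1 | sudo btrfs receive /mnt/backup

# Week 2: new snapshot, send only the delta relative to week1 (-p = parent)
sudo btrfs subvolume snapshot -r /home /home/.snapshots/week2
sudo btrfs send -p /home/.snapshots/week1 /home/.snapshots/week2 | sudo btrfs receive /mnt/backup
```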
Yeah, alright, I see how that could be useful for someone who isn’t me. I don’t have much that’s important on my computer, and what little there is I just drag and drop onto a second SSD. That one has Mint installed on it in case I do something stupid to my main drive, because I routinely do stupid things to my main drive.
I suppose it depends on how much stuff you have; doing a full backup of my home every week would be too time-consuming to be practical, but it takes a couple of minutes with this method.
Keeping multiple past snapshots is overkill for me, but I do it because I can, more or less. It would be useful if I accidentally deleted a file and only remembered it months later.
Kinda. You can copy your snapshots from @home too, meaning a restore from backup also restores your local file-version history. There are also tools to push snapshots around as one large archive instead of dealing with the smaller files directly.
The CoW can also reduce the chances of running rsync on a large file that is currently being written, and getting a partial file in your backup. Or I suck at rsync 🤷‍♂️
You’re right, atomic snapshots are a big advantage of CoW fs.
Rsync backups done while the system is running have a chance of being inconsistent, while CoW snapshots are instantaneous: the result is basically as if the system had suddenly lost power.
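That also suggests a nice hybrid if you want to keep rsync: snapshot first, then rsync from the snapshot, so the source can’t change mid-transfer. A sketch (paths are illustrative):

```bash
# Atomic, read-only snapshot of /home
sudo btrfs subvolume snapshot -r /home /home/.rsync-src

# rsync from the frozen snapshot instead of the live filesystem
rsync -a /home/.rsync-src/ /mnt/backup/home/

# Clean up the snapshot afterwards
sudo btrfs subvolume delete /home/.rsync-src
```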
I still prefer to run everything built directly from reliable deb sources.
As an end user… sure, flatpaks and appimages and snaps are, I guess, neat if you’re constantly distro-hopping or something, at least in theory.
But uh, I’ve already found I can play games, develop games and other software, use basic daily software for everyday needs, and have a stable and predictable OS that doesn’t crash or develop insane misconfigurations caused by some esoteric conflict, just by basing everything directly off of deb sources.
Every once in a while I have to compile my own build, but this is rare and usually only happens when I’m trying out something experimental or (also rare) something that doesn’t have an actively and well-maintained deb source. In that case it’s just a matter of doing a build from GitHub when a new version comes out.
And I can do builds from GitHub because I’ve saved a lot of storage space by not using bundled installers for all my software, which lets me store the sources. This is also neat because it allows me to quickly /use/ one of those sources in a project, after I’ve already seen that it’s stable via the software I use that’s built on it.
Finally, there’s the security angle. Using a myriad of containerized installers for everything is convenient in that you don’t have to directly worry about source management… until you do, when a source lib is discovered to have a critical flaw.
When a serious flaw is found in a source library, what’s going to get updated faster? A containerized installer, where you have to wait for devs who are busy managing tons of cross-platform dependency issues and have to do a new safe, stable build every time any of their many dependencies on their many supported platforms changes? Or an app built specifically from source libs, which either doesn’t focus on cross-platform support or has separate teams maintaining its different supported flavors?
In my experience, literally all of the time, the ‘direct from source’ software gets updated more quickly than the cross platform bundled installer.
Further, this whole approach gives you experience with software built on source packages that, as you grow familiar with them and tinker yourself, gives you insight into which source libs are well coded in terms of CPU/GPU/RAM optimization and which are resource hogs to be avoided if you’re interested in promoting and using software built on efficient code. I enjoy learning from the good coding techniques of stable, lean, and fast programs, and avoiding code that is comparatively unstable, bloated, or slow.
I’m afraid this isn’t really the place to ask about such a device. If you ask for any kind of laptop, people here pretty much only tell you to buy a ThinkPad X1 Carbon. Sure, it’s probably a great laptop, but not for every use case.
You’re probably not going to find a tablet with modular RAM, but 4 GB will barely run a web browser these days. You’ll be using swap a lot, and that will put a lot of wear on the non-replaceable SSD.
Oof. Didn’t even think of the SSD bit. The Surface devices are generally good quality, but the repairability is apparently atrocious with all these soldered chips.
I’m using a ThinkPad X1 Yoga with touchscreen and stylus support. Works flawlessly with Fedora. Bought the 8 GB version with an 8th-gen i7 from a local shop in Vienna for 200€, but there’s also a 16 GB one.
I would say that Surface is too old for proper use, especially at that price. Keep looking around and better deals will just fall into your lap.
I was specifically hoping for something with a 10" form factor. I already have a ThinkPad as a laptop and was hoping for something smaller.
Unlike RAM, SSDs die after some use, so the lifespan of this device depends on its flash storage. eMMC is basically a soldered SD card: a bad-quality SSD. I have 3 old tablets with dead eMMC; they are otherwise perfectly fine devices, but unusable for anything.
I’m not too familiar with the Surface lineup, but IIRC there are higher-end models with replaceable SSDs. I think soldered RAM is not a big deal in this form factor if it’s enough for the expected use case, but soldered storage lowers the lifespan of your device.