I’ve always experienced weird annoying bugs, and I much prefer the UX and minimalism of gnome. It’s better for productivity. I don’t just use my desktop for gaming.
This is my complaint as well. KDE certainly has advantages and neat design ideas, but I always come across some major bugs that make my device unusable.
Definitely not saying GNOME doesn’t have bugs as well though.
Well, Nobara is a gaming-oriented distribution and, as the changelog outlines, Plasma currently offers technological benefits. As with everything, this isn't set in stone and might change at some point, but right now the main target audience for paid development work on Gnome is corporate users, whereas for Plasma it's the Desktop Mode of the Steam Deck.
But why? Not being the default doesn't mean that Gnome isn't available. As both are Fedora derivatives, both should have good Gnome support inherited from Fedora anyway. As the changelog says, only a handful of Gnome Shell extensions will no longer be provided in the Nobara repository, but installing them manually from extensions.gnome.org is a breeze.
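In case anyone hasn't done the manual route before, a minimal sketch (the file name and UUID below are placeholders; use whatever the extension's download page actually gives you):

```bash
# Install an extension zip downloaded from extensions.gnome.org, then enable it by its UUID.
gnome-extensions install ~/Downloads/some-extension.shell-extension.zip
gnome-extensions enable some-extension@example.com
# Log out and back in (or restart GNOME Shell with Alt+F2, r on X11) so it gets loaded.
```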
They still have a gnome ISO, but you're going to throw it out because it isn't the "official" version. That sounds like a hissy fit.
The previous non-official KDE version was fine, since the point of the distro is the backend optimizations for gaming and editing. The DE isn't really the point, since you can add whatever Fedora has in the repos, which is pretty much everything.
While I don't know the term hissy fit, switching distributions just because the default of a user-changeable setting is different is definitely a bit over the top.
Speaking on Bazzite, KDE is our default to match SteamOS, but we put more effort into the GNOME release if anything due to us trying to maintain feature parity with Valve’s KDE, including being able to right click and add to steam, use the desktop nested, enable VRR, add custom themes based on the ones Valve shipped, and add the steam deck wallpapers ported to GNOME.
That being said, GE's points about GNOME are very real, and they have a lot of catching up to do in regards to gaming. KDE has DRM Leasing, VRR and HDR right now.
As I understand it, that's not even needed because the current DE choice is preserved during an upgrade. The only thing that needs manual tweaking is reinstalling the extensions from the Gnome website, which isn't really an issue at all.
you have faulty hardware, whether it’s RAM or cooling or storage related, no way to tell but crashes like that don’t happen nowadays.
edit: I recall having some issues with a 7490 a few years back, it needed some special module for the fan or the sensors, not sure. don’t know if that’s your issue, but look it up.
I think you mistyped the model, if it’s a 7390 it should be the same hardware as the 7490 I’ve mentioned. the module I needed was i8k, check if your model needs it.
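In case it helps, a rough sketch of trying it out; on recent kernels the old i8k driver is shipped as dell-smm-hwmon:

```bash
sudo modprobe dell-smm-hwmon     # load it once to test (the old i8k driver under its current name)
sensors                          # from lm-sensors; fan RPM and temps should show up if it's the right module
# Make it persistent if it helps:
echo dell-smm-hwmon | sudo tee /etc/modules-load.d/dell-smm-hwmon.conf
```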
The RAM is fine (Memtest ran 4 times without faults), and cooling seems to work well enough. Storage is ok and I used two different SSDs through this whole process and saw the same problems on both.
I tried the previous known-good kernel options on the Manjaro install and it seems to be OK now. According to the Arch Wiki the Intel 8th Gen mobile CPUs and especially iGPUs are known to be a little problematic on Linux so the kernel options to disable some power saving options are basically non-optional. It’s weird though that it works now and didn’t on the Tumbleweed reinstall.
I have an issue involving similar hardware; can you share the mandatory stuff for 8th gen iGPUs? I read through the intel_graphics article but found no direct mention.
I linked the specific wiki page section in an edit to the main post. It’s in the troubleshooting part at the end.
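For anyone following along, the mechanics of applying such options look roughly like this; the actual flags are the ones from that wiki section, i915.enable_psr=0 below is only an illustration:

```bash
# Append the options to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_psr=0"
sudoedit /etc/default/grub
sudo grub-mkconfig -o /boot/grub/grub.cfg   # or `sudo update-grub` on Manjaro/Ubuntu
# Reboot, then confirm the options took effect with: cat /proc/cmdline
```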
I didn’t try the i8k module but looking at a couple things it looks like the issue was more apparent around Linux kernel 4.15 from a few years ago. I also don’t have any specific complaints with temperature control. The fans only ramp up in the 70-80C range which seems to be quite reasonable.
I honestly don’t understand why recent Ubuntu releases are popular. However, I enjoyed it in the early 2000s. There was another popular release a few years ago that had zero hotkeys enabled and I have never felt more disgusted by a release in my life. I can’t even remember what it’s called, it traumatized me hahaha.
my dell runs kubuntu, but i plan to move it to arch as well (after i back up my data)
i liked it for a while and suddenly had tons of issues with snap, especially with firefox, and webusb breaking constantly on chromium (i use android flash tool a lot)
Honestly, I don't know. Though I'd reckon there wouldn't be any significant difference between distros.
stability
Depends on what you mean by stability. If you meant it like how "stable" is used in "Debian stable", then it would be any distro with a release cycle that chooses not to continuously deliver packages, but instead freezes them and holds off updates (besides those related to security) for the sake of offering a relatively polished experience in which the behavior of the distro is relatively predictable. Distros that score well on this would be Debian stable and openSUSE Leap. It's worth noting that Distrobox, Flatpak and Nix allow one to have newer packages on these systems if desired.
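A quick illustration of that last point; the container image and package names here are just examples:

```bash
# Newer userland on top of a frozen base, without touching the base system:
distrobox create --name arch --image docker.io/library/archlinux:latest   # rolling toolbox in a container
distrobox enter arch                                                      # use pacman inside for current tooling
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox                               # up-to-date Firefox from Flathub
```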
If, instead, you meant that the distro is less likely to break upon an update, then it’s important to note the following:
While you shouldn’t expect breakage to happen in the first place, unfortunately it’s realistic to expect it every so often (read: 0-2 times a year on non-stable distros).
If you have a lot of packages, then it’s more likely that at least one of them causes some breakage.
Technically, every update is a potential ‘breakage-moment’.
Packages that haven’t been installed through the official/native repos are more likely to cause breakage.
Relying on Distrobox, Flatpak and Nix for (at least some of) your packages should benefit the stability of your base system.
(GRUB-)Btrfs + Timeshift/Snapper allows one to create snapshots that one can easily roll back to in case of breakage (see the sketch after this list). Therefore it's worth seeking out a distro that configures this by default, or setting it up yourself on whichever distro you end up using (if it isn't included by default).
So-called 'atomic'^[1]^ distros are (generally speaking) more resistant to breakage, but (arguably) less straightforward than traditional distros. They're still worth considering if you're adventurous, or if your setup is relatively simple and you don't feel the need to tinker a lot. Don't get me wrong; these atomic distros should be able to satisfy one's customization needs, it's just that it might not be as straightforward to accomplish, which at times is more a matter of lackluster documentation than anything else.^[2]^
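Regarding the snapshot point above, a rough sketch of what a rollback looks like with Snapper on a distro that preconfigures it (openSUSE does); the snapshot number is a placeholder:

```bash
sudo snapper -c root create --description "before big update"   # manual safety snapshot
sudo snapper list                                               # find the snapshot number to go back to
sudo snapper rollback 42                                        # make that snapshot the new default
# Reboot afterwards to land in the rolled-back system.
```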
As for recommendations, you shouldn't look beyond unadulterated distros like (Arch^[3]^), Debian, Fedora, openSUSE (and Ubuntu^[4]^). These are (in almost all cases^[5]^) more polished than their respective derivatives.
speed
Most of the distros mentioned in this comment should perform close enough to one another that it shouldn’t matter in most cases.
If you’re still lost, then just pick Linux Mint and call it a day.
^[1]^: More commonly referred to as 'immutable'. Atomic, however, is in most cases a better name.
^[2]^: If you're still interested, I'd recommend Fedora Silverblue for newcomers and NixOS for those that actually know what they're getting into.
^[3]^: I believe that one should be able to engage with Arch as long as they educate themselves on the excellent ArchWiki. It might not be for everyone, though. Furthermore, its installation (even with archinstall) might be too much for a complete newbie if they haven't seen a video guide on it.
^[4]^: Ubuntu is interesting. It has some strange quirks due to its over-reliance on Snap. But it's worth mentioning if you don't feel like tinkering.
^[5]^: With Linux Mint (and Pop!_OS) being the clear exception(s).
For just two VMs, any Linux distro is enough; virt-manager makes it easy to bring those VMs up and down. The default network will allow them to communicate over its NAT. Proxmox sounds like too many complications for just some testing or development work.
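For what that looks like in practice, a sketch on a Debian/Ubuntu-flavoured host; package names differ slightly elsewhere:

```bash
sudo apt install virt-manager qemu-system-x86 libvirt-daemon-system
sudo systemctl enable --now libvirtd
sudo virsh net-autostart default && sudo virsh net-start default   # the shared NAT network (may already be active)
# Then create the two VMs from the virt-manager GUI, both attached to the "default" network.
```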
Other posters are right in that KVM is the same on just about every distro. Proxmox comes with extra tools for management and I think that makes it especially well suited.
I know this isn't the answer you were looking for, but they're all the same. Arch, Debian, Ubuntu, Fedora, I've tried them all, and there isn't a discernible difference.
Well, I’m currently using VMware on Ubuntu to run Win 10 and Kali Linux. I don’t know what exactly caused the problem, it was either Ubuntu’s updates or VMware’s updates, but now Win 10 is unusable because it crashes (same with Kali Linux)
Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.
I would second Debian for stability; it's what I use for all my VM servers. I have always preferred KVM, however, as I had a lot of trouble with VMware hogging my CPU years ago. KVM has the Virtual Machine Manager (virt-manager) available for GUI monitoring, but I'm not sure how far it goes for creating new VMs, as I've always handled the setup directly from the command line.
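For the command-line route, a hedged example with virt-install; the ISO path, names and sizes are placeholders:

```bash
virt-install \
  --name debian-test \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom ~/isos/debian-netinst.iso \
  --os-variant generic \
  --network network=default \
  --graphics spice
# Pick a more specific --os-variant from `osinfo-query os` if you have libosinfo installed.
```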
Since you’ve been on Ubuntu, I would suggest Debian. The commands are pretty much the same across the board, and it’s one of the most stable distros in the wild.
Well there’s your mistake: using VMware on a Linux host.
QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host which leads to fewer problems.
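A quick way to confirm the in-kernel side is actually usable on a given box:

```bash
lsmod | grep kvm                      # should list kvm_intel or kvm_amd
ls -l /dev/kvm                        # the device QEMU opens to get hardware acceleration
grep -Ec '(vmx|svm)' /proc/cpuinfo    # >0 means the CPU advertises virtualization extensions
```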
Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.
Maybe, but it’s still Linux. There’s always an escape hatch if the Ubuntu packages don’t cut it. But I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they also run Ubuntu and work just fine.
It'll definitely run Kali well. Windows will be left without hardware acceleration for 2D/3D, so it'll be a little laggy, but it's usable.
VMware has its own driver that converts enough DirectX for Windows to run smoother and not fall back to the basic VGA path.
But VMware being proprietary software, changing distro won't make it better, so either you deal with the VMware bugs or you deal with stable but slow software-rendered Windows.
That said, on the QEMU side it's possible to attach one of your host's GPUs to the VM, where it will get full 3D acceleration. Many people are straight up gaming in competitive online games in a VM with QEMU. If you have more than one GPU, even if it's an integrated GPU plus a dedicated one, as is common with most Intel consumer non-F CPUs, you can make that happen and it's really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Be sure your CPU and motherboard support it properly before investing though; it can be finicky, but so awesome when it works.
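A very rough outline of the VFIO side on an Intel host; the PCI IDs are placeholders you would replace with your card's IDs from lspci -nn:

```bash
# 1. Enable the IOMMU via kernel parameters, then regenerate the GRUB config and reboot:
#      intel_iommu=on iommu=pt
# 2. Bind the guest GPU (video function plus its HDMI audio function) to vfio-pci at boot:
echo 'options vfio-pci ids=xxxx:xxxx,xxxx:xxxx' | sudo tee /etc/modprobe.d/vfio.conf
# 3. Rebuild the initramfs, reboot, then attach the PCI device to the VM in virt-manager.
```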
On Vista and up, there's only the Display Only Driver (DOD), which gets resolutions and auto-resizing to work, but it provides no graphical acceleration in itself.
I use the virt-manager GUI to control KVM easily, but you can control everything just as easily with virsh on the command line. I dislike VMware and VirtualBox; neither is needed. Also, with the virsh terminal client you can do many more configurations than with virt-manager alone.
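A few everyday virsh equivalents of what the GUI does; the domain name win10 is just an example:

```bash
virsh list --all                            # all defined VMs and their state
virsh start win10                           # boot a VM
virsh shutdown win10                        # clean ACPI shutdown
virsh edit win10                            # tweak the full libvirt XML, beyond what the GUI exposes
virsh snapshot-create-as win10 pre-update   # quick snapshot before experimenting
```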
Remember that the Desktop and Server editions are very different in terms of stability. Ubuntu has got to be one of the, if not the, most widely used Linux distros for servers; that's where the money really is for them, so it's more deeply tested before release to the public at large. But in my experience, over the last decade or so, Ubuntu has been painfully lacking on too many fronts in its desktop versions.
My only issue with QEMU is that folder sharing is not a great experience with Windows guests. Other than that I've had a great experience, especially using it with AQEMU.
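One workaround that needs nothing special installed in the Windows guest is a plain Samba share on the host, mounted from Windows as a network drive; a sketch for a Debian/Ubuntu host, with path and share name as placeholders:

```bash
sudo apt install samba
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[vmshare]
   path = /home/me/vmshare
   read only = no
EOF
sudo smbpasswd -a "$USER"          # give your user an SMB password
sudo systemctl restart smbd
# In the Windows guest, map \\<host-IP-on-the-libvirt-network>\vmshare as a network drive.
```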
PopOS and Ubuntu - really just found that I don’t like gnome. Nothing against it, I know some people love it but it is not for me. This would likely apply to any gnome distro, but those were the two I tried and immediately moved on.
Honorable mention: Manjaro because “it just breaks™” but it wasn’t something I noticed immediately and initially liked the os…
You are aware that you can have multiple DEs installed at once, right? Also many distros have multiple different choices for the default DE. I haven’t used it for probably over a decade, but I’m sure Kubuntu, the KDE version of Ubuntu, still exists.
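For example, on an existing Ubuntu install you don't even need a different ISO; kubuntu-desktop is the stock metapackage:

```bash
sudo apt install kubuntu-desktop   # pulls in Plasma alongside GNOME
# Afterwards, pick the session from the gear/session menu on the login screen.
```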
I am aware the DE can be changed, but it was just an honest answer to OP’s question. I downloaded like 8 different distros and put them on flash drives and tried them all out and that was what caused me to move on. I didn’t have kubuntu downloaded to try, probably because canonical seems to treat them as entirely different distros.
I.e., some distros list the DE options right on the download page, or have you choose which to use during the live boot and include multiple in one ISO. Ubuntu makes no mention of those separate downloads unless you explore their site a bit further than the download page. It's a minor difference, but it matters when you're grabbing a handful of ISOs to try out; you might miss it and assume the one ISO has all the options available when it doesn't, or that it's the only option they provide.
As for PopOS, I actually did look into changing to KDE, and the popular wisdom on message boards at the time was that changing to KDE would possibly or likely undo most of the benefits of the tweaks and changes System76 made. I don't have any idea if that is even true, just what came up when searching a few years back.
I get your reasoning, a lot of “re-spins” are hidden away on many distros download pages, but saying something like “I don’t like Ubuntu because it uses Gnome” is like saying “I don’t like Fords because they come with radios”.
Regarding PopOS, it probably is true, because it's probably all GUI-specific things set up for new users; anything system-level wouldn't be changed.
Ubuntu when they first switched to Unity. I had been running Ubuntu for 2 or 3 years at that point, but I was already thinking about switching to Debian at the time. I hobbled along for a few weeks on that first version of Unity, but I didn’t like what I was seeing. I took the plunge into Debian, thinking, “If I’m going to have to learn something new anyways, I might as well try switching.”