Upgrading/tinkering doesn’t void your warranty. Explicitly.
And their customer service is top notch. I thought I bricked my gazelle when I upgraded the memory, but their customer service walked me through how to fix it - didn’t even bat an eye.
Upgrading/tinkering doesn’t void your warranty. Explicitly.
This is generally true with everything in the USA (covered by the Magnuson-Moss Warranty Act) even though companies are sketchy about it and try to convince people that it’ll void their warranty. The manufacturer has to prove that your upgraded part was the direct cause of the issue you’re trying to claim under warranty.
I did not know that - my point is that system76 is not at all sketchy about it. They actively encourage tinkering, make it clear that you won’t void your warranty, and have extensive technical documentation to explain how to do upgrades etc
I can’t get over how much more they cost than a similarly spec’d Mac, with Macs being superior in every single benchmark (except privacy and customizability)
Macs are only competitive at the smallest configuration; as you start to add the same options to each, the Mac’s price goes through the roof while this one’s only increases by a bit.
I’m curious. What do you prefer, some larger res with resolution scaling? How’s the scaling situation on DEs/WMs nowadays? Last I tried it, it was pretty abysmal. Admittedly it was years ago, but it used to be that mixed scaling wasn’t possible, so if my laptop was higher DPI and needed scaling, I’d need to run any external monitor with display scaling as well. I’ve avoided high DPI/display scaling on purpose for a while at this point because of it, and tend to prioritize usable pixel real estate.
That’s the odd part. I run Pop!_OS on a ThinkPad with a 4K touch screen at 175% scaling and it looks beautiful. The scaling on the DE is superb. I don’t understand why they don’t offer a HiDPI option on their laptops.
And it works fine with multiple monitors at different scaling ratios, or does it scale them all the same? That’s the actual part that didn’t work correctly for me, back then.
Also a great way to get more performance and increase battery life. On a laptop, most folks would be hard pressed to see the difference between 1080p and a higher resolution.
I’m using OpenSUSE Tumbleweed on a Dell XPS 13 9360 with a 3300x1800 13" screen and Wayland, and it works fine. There was one application (Sublime Merge) where I had to edit some scaling configuration settings, and there’s one tray-based tool (Jetbrains Toolbox) that comes up tiny, but for everything else the global scaling setting in KDE has done a fine job. It also handles dual monitors with different resolutions.
I don’t like 1080 screens because small text becomes unreadable more quickly on them. It’s less of an issue with a small screen, but it still counts against a machine for me.
Whatever works for you haha. Admittedly, I’m the kind of guy that’s running a 34" ultra wide + two 22" monitors on top, and is looking at replacing them with a single 42-43" 4k monitor right now just to have the equivalent of a bezelless 2x2 grid of 21" monitors lol. And they’re all budget/business monitors. So I may not be a reference on display quality… I’m obsessed with having tons of things on screen at once. The ADHD object permanence issues (“out of sight, out of mind” is my default state) might have something to do with it…
I’ll have to check it out again then, if display scaling got better since.
The awful screen is one big reason I don’t use my System76 laptop more often. It’s the worst laptop screen I’ve ever seen, has terrible light bleed, and has a pink tint. And this is the warranty replacement they tried to charge me for. The first one had the same awful screen, but kept freezing on me randomly.
And the damn thing STILL has hardware features that only work on Windows 10, five years later (like multi-finger trackpad gestures). I’ll take System76 seriously when they start putting good screens in their laptops and get rid of nvidia.
Great. I’m not using a Dell. I have a laptop from a company that supposedly supports Linux first. A company I will not be buying anything from in the future either.
I didn't care for PuppyLinux as it didn't run well with even just Firefox open. I also didn't care for the update model, which seemed to be that you just don't update packages between releases. I could be wrong on that, but that was what I got from reading up on how to keep things current. I did like how small it is and how it loads into memory on boot.
AntiX wouldn't let me install any packages or update. It would keep telling me I needed to wait a few hours to access the repos. I did like how you could swap between several desktop environments easily.
The lack of systemd was something I couldn’t get over. I mean the alternative service managers are good but a few apps I really need have a strong dependency on systemd and the adapter packages just weren’t working. Otherwise I highly recommend AntiX. It made my old netbook feel useful again.
AntiX wouldn’t let me install any packages or update. It would keep telling me I needed to wait a few hours to access the repos. I did like how you could swap between several desktop environments easily.
Just manually change the repo and the problem is solved…
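For anyone wondering what “change the repo” looks like in practice, here’s a hedged sketch. On a real antiX install the sources file lives under /etc/apt/ and editing it needs sudo; this operates on a local copy just to show the edit itself, and the mirror names are made up for illustration:

```shell
# Hypothetical example: swap the mirror in an apt-style sources file.
# Real path would be /etc/apt/sources.list.d/antix.list (needs sudo).
echo "deb http://old-mirror.example/antix stable main" > sources.list
sed -i 's|old-mirror.example|new-mirror.example|g' sources.list
cat sources.list
```

After editing the real file, an `apt update` refreshes the package lists from the new mirror.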
And I should clarify: in my experience, AntiX IMO falls under the permacomputing category, offering low power consumption without sacrificing too much functionality compared to others.
I feel like you guys aren’t really “explaining like I’m 5”. Let me show you: Sometimes, when a mommyboard and a daddy graphics card fall in love, the daddy graphics card puts his connector pins inside the mommyboard’s expansion slot. Then when they both get turned on, millions of tiny electrons surge out of his connector pins and into her expansion slot, where they travel up through mommyboard’s data bus, and into one of her memory cards. Meanwhile, there are thousands of image files inside mommy’s storage drives waiting to come to life, and every once in a while one of them ventures out of the storage drive and into her memory card. And if the electrons and the image file happen to meet at the same time, then 9 milliseconds later, a picture of a baby appears on the monitor!
And wayland represents the overly friendly postman that left the house with a satisfied smirk just as the daddy board came home from his hard workday as an xorg liaison.
In short, the maintainers have made questionable decisions over the years, and the Arch Linux packages are held back by two weeks on Manjaro for… basically no reason.
If you want an out-of-the-box solution to Arch Linux, just use EndeavourOS.
So. I’m a happy Manjaro user. I don’t install a lot of things, and I have had AUR updates break stuff, likely due to the two-week delay Manjaro adds to its packages.
I’m still using it on multiple devices and I’m really happy. I considered moving to EndeavourOS but I wasn’t sure how it would handle hardware updates. I mean, my understanding is that Manjaro is more “noob” friendly and I don’t consider myself an expert. I used the Manjaro hardware helper to fix my video driver several times and I like the simplicity of the command. Does EndeavourOS require a more advanced user? Does it have the “easy to use” troubleshooting tools that Manjaro has?
Ah. What about the kernel updater? I think the Manjaro one is unique to Manjaro, right? Is there another one for regular Arch/EndeavourOS?
Endeavour has plenty of “beginner” tools, including a kernel manager (literally called A Kernel Manager) and a friendly GUI Welcome app that helps you update your system and your mirrors.
I think this is by design. I once contributed to git, and it required sending a patch to the mailing list. It certainly forced you to be sure your code was spot on.
And it will discourage new users from contributing. Thus, only boomers and corpos will contribute, and over time Linux becomes a de facto corporate owned committee.
There are two options here, given that the OS seems to rely heavily on React Native to work: having the streaming APKs converted to React Native apps, or simply using the web browser and PWAs.
Hopefully Fedora and others forcing users onto Wayland will help push Wayland devs to fix the stuff that’s breaking compatibility for everyone still stuck on X11.
Yeah, I share the feeling. Not sure if the problem lies with Wayland or Nvidia, but hopefully if Wayland becomes the standard they’ll address the elephant in the room!
Wayland is just a set of protocols, which work fine (albeit with limitations) when implemented properly. So if KDE’s implementation of its share of the APIs works correctly with Intel and AMD GPUs, but not with Nvidia ones, the culprit is extremely likely to be the latter.
I had a quick go at it yesterday (the latest 535 broke DDC/CI for one of my monitors, making plasma-powerdevil unable to start) and for whatever reason KWin ran at something like 3 seconds per frame. No, that’s not a typo, I mean it. I hope it’s fixed before it gets to Arch’s repo.
EDIT: It works! I had to switch to the DKMS driver (the main one isn’t in the repos yet) but other than that my Wayland session didn’t die a horrible death. Well smooth. I still didn’t test much, but at least night light works.
This is what I don’t get. AMD has driver issues on Windows because of a combination of their own incompetence and Windows updates doing stupid Windows things - people squarely lay the blame on AMD. NVIDIA releases bad closed-source drivers causing issues on Linux - somehow it’s the fault of Linux and the open source communities.
These people should be hounding NVIDIA to fix their issues instead of crying to DE developers to fix issues caused by NVIDIA.
Because on Windows, blame doesn’t solve problems. You can blame Microsoft, or you can blame AMD, but either way nothing will change. On Linux, there’s some level of accountability because almost all software has maintainers (if not, you can step up personally). Similarly, you can’t hold Nvidia accountable on Linux - the best you can do is not buy their GPUs.
Completely agree. As an NVIDIA user (for now), I’m screwed if I’m required to use Wayland. I mean, I’ve used Wayland for a long time and it works well with NVIDIA, but there are many things that don’t quite work, like many emulators (Yuzu/RPCS3) that for some reason show strange tearing, or some programs that simply won’t open in XWayland.
Completely agree. I regularly try opening a new session on a clean new user to check if it works, and it’s absolutely horrible. Three days ago, after updating the system and seeing some of the latest KDE versions come in, I tried again and noped right out of it in a few minutes. The fonts and scaling in so many places are very bad.
I keep reading about great improvements in version 6 and am really hopeful it will be usable.
Or the problem is just that no developers have normal, regular 14’’ 1080p laptops, so they can’t imagine that proper scaling at 125% and 150% needs to work out of the box.
Edit: I don’t even have nvidia hardware, it’s just regular intel stuff. Can’t imagine the struggle of nvidia folks.
Many apps are designed with bitmap icons (png, jpg) instead of svg, so fractional scaling requires manual changes.
Also, frameworks like GTK don’t have enough development resources to quickly make changes to support anything besides integer scaling. It’s difficult to change to fractions if everything assumes integers.
PS: “making stuff show up bigger on a screen” already works, it’s just not perfect. Windows is, as far as I know, the only OS that comes close to doing scaling perfectly, except for Android and similar OSes that were designed with fractional scaling in mind.
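The “everything assumes integers” point above can be made concrete with a toy calculation (the numbers are illustrative, not from any real toolkit): an integer factor always maps logical sizes onto whole physical pixels, while a fractional factor lands between pixels and forces rounding somewhere.

```python
# Sketch of why fractional scaling is harder than integer scaling:
# an integer factor keeps every logical size on whole physical pixels,
# a fractional factor does not, so something must be rounded (hence blur
# or misaligned borders in apps that assume integer scaling).
def scale(logical_px, factor):
    physical = logical_px * factor
    return physical, physical == int(physical)

print(scale(13, 2))     # 200%: 26.0 physical px, lands exactly on the grid
print(scale(13, 1.25))  # 125%: 16.25 physical px, must be rounded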
What is your background? “A job using Linux” is super broad, and remote work only narrows it further. If you don’t have plenty of experience, it’ll be hard to get a remote position.
HIP is amazing. For everyone saying “nah it can’t be the same, CUDA rulez”, just try it, it works on NVidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls) so if you code for HIP you’re basically targeting at least two GPU vendors. ROCm is the only framework that allows me to do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I don’t do GPU stuff. With CUDA, in terms of mobility, the only choices you get are a beefy and expensive gaming laptop with a pathetic battery life and heating issues, or a light laptop + SSHing into a server with an NVidia GPU.
The problem with ROCm is that it’s very unstable and a ton of applications break on it. Darktable only renders half an image on my Radeon 680M laptop. HIP in Blender is also much slower than OptiX. We’re still waiting on HIP-RT.
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD and that makes a difference if your code makes use of warp intrinsics). If however you just use AMD’s CUDA-to-HIP porting tool, then yeah chances are things won’t work on the first run and you need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
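The warp-size point above can be illustrated with a CPU-side sketch of the classic shuffle-down tree reduction (a simulation for illustration, not real HIP code): the loop bound is derived from the warp size, which is exactly what breaks if you hard-code 32 and then run on AMD’s 64-wide wavefronts.

```python
# Simulation of a warp-level tree reduction (the __shfl_down pattern).
# warp_size is taken from the input rather than hard-coded: 32 on NVidia,
# 32 or 64 on AMD, which is why warp-intrinsic code must parameterize it.
def warp_reduce_sum(lane_values):
    warp_size = len(lane_values)   # must be a power of two
    vals = list(lane_values)
    offset = warp_size // 2
    while offset > 0:
        # all lanes "read" their neighbor's pre-pass value; iterating in
        # increasing lane order preserves that, since vals[lane + offset]
        # is only overwritten after lane has read it
        for lane in range(warp_size - offset):
            vals[lane] += vals[lane + offset]
        offset //= 2
    return vals[0]                 # lane 0 ends up holding the warp's sum

print(warp_reduce_sum([1] * 32))         # 32-wide warp
print(warp_reduce_sum(list(range(64))))  # 64-wide AMD wavefront
```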
How is the situation with ROCm using consumer GPUs for AI/DL and pytorch? Is it usable or should I stick to NVIDIA? I am planning to buy a GPU in the next 2-3 months and so far I am thinking of getting either 7900XTX or the 4070 Ti Super, and wait to see how the reviews and the AMD pricing will progress.
Works out of the box on my laptop (the export below is to force ROCm to accept my APU since it’s not officially supported yet, but the 7900XTX should have official support):
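The actual export wasn’t quoted in this thread, so here is a hedged reconstruction: the knob ROCm users typically set to make an unsupported APU accepted is `HSA_OVERRIDE_GFX_VERSION`, and the value shown is only an example that must match your GPU generation:

```shell
# Hypothetical reconstruction of the missing export.
# The value must match your hardware generation
# (e.g. "10.3.0" spoofs a gfx1030-class target).
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "$HSA_OVERRIDE_GFX_VERSION"
```

With the variable set before launching Python, PyTorch’s ROCm build will then treat the APU as a supported device.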
Anything that is still broken or works better on CUDA? It is really hard to get the whole picture of how things are on ROCm, as the majority of people are not using it, and in the past I did some tests and it wasn’t working well.
Hard to tell as it’s really dependent on your use. I’m mostly writing my own kernels (so, as if you’re doing CUDA basically), and doing “scientific ML” (SciML) stuff that doesn’t need anything beyond doing backprop on stuff with matrix multiplications and elementwise nonlinearities and some convolutions, and so far everything works. If you want some specific simple examples from computer vision: ResNet18 and VGG19 work fine.
I have the S10+ and it’s actually useful, as you can remap the double-click on that button to open any app you like. But yeah, a single click never happened intentionally.
EDIT: F yeah, I just checked the settings and you can decide if you want bixby activation on single or double-click. Now I’ve set bixby to double click and on single-click it opens my password manager. If you don’t select anything, it will do nothing on a single click.
The setting is under “Advanced Features” -> “Bixby Key” for me.
Because most people are unaware of keybindings, and when they inevitably tap on the new dedicated key they’ll probably be shown a subscription screen for Copilot Premium or whatever they call it.
IMO it’s a very disgusting and intrusive way of fishing for subscriptions to the AI thing they’ve invested so much money in.
It’s a mess, but honestly so are a lot of critical FOSS projects (e.g. OpenSSH, GnuPG, sudo). Curmudgeons gonna curmudgeon. There was a point of no return, and that was years ago – now that Wayland’s finally becoming usable despite itself, it’s probably time to come to terms with the fact that better alternatives would have arisen had anyone thought they could truly manage it.
it’s probably time to come to terms with the fact that better alternatives would have arisen had anyone thought they could truly manage it.
This is the most important takeaway. There’s a lot of people whining about Wayland, but Wayland devs are currently the only people actually willing to put in the work. Nobody wants to work on X and nobody wants to make an alternative to Wayland, so why do we keep wasting time on this topic?