Does anyone have one of these that could confirm if that’s realistic? I’ve seen many laptops with similar specs and claims that come out to significantly lower battery life unless you do nothing but stare at an empty desktop.
The optimization might just be the rather large battery. Usually laptops with U-series processors have 40-60Wh batteries; the spec sheet shows a 73Wh battery in there.
Where the Lemur Pro really shines is battery life. System76 claims 14 hours, and I managed 11 hours in our battery drain test (looping a 1080p video). In real-world use, I frequently eked out over 13 hours. That’s off the charts better than any other Linux laptop I’ve tested recently.
FWUPD/LVFS (Linux Vendor Firmware Service) has made it remarkably easy to update a lot of system firmware and device/peripheral firmware under Linux. Prior to widespread LVFS support, updating device firmware was often a daunting chore for Linux users: it frequently meant booting into a Microsoft Windows installation, resorting to FreeDOS for system BIOS updates in the olden days, or simply going without firmware updates.
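For anyone who hasn't tried it, the whole workflow is a handful of `fwupdmgr` commands (sketch below; actual results depend on your hardware having LVFS support):

```shell
fwupdmgr refresh       # pull the latest firmware metadata from LVFS
fwupdmgr get-devices   # show the devices fwupd can see on this machine
fwupdmgr get-updates   # list pending firmware updates, if any
fwupdmgr update        # download and apply them (may require a reboot)
```

On a lot of desktops you never even need the CLI, since GNOME Software and KDE Discover surface the same updates through fwupd.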
My ad blocker turned off at some point and I loaded that link without it… holy shit was that obnoxious. The ads moved things around, blocked the article, autoplayed videos… and that’s what we’re supposed to be appreciative of and turn off our ad blockers for? 🤢
This is what a ton of links end up being. I don’t know how anyone thinks that is an even acceptable user experience. Shit just popping up and covering the actual content. Actual content spread out like you are supposed to search for where the next paragraph is. The current state of most websites is just absolute shit. I end up going in and turning off javascript, since that seems to be the herpes that is behind most of this. I am not sure who or how people are making money off this advertising, but it needs to end (without having to resort to adblockers or disabling javascript). Ads have just absolutely fucked most web browsing and that’s sad.
I 110% agree with you on all points. I have been complaining and telling people about privacy till I am blue in the face, and it just seems like the general public doesn’t care. They just do not care that every second of their life is under surveillance and the data is sold to the highest bidder.
I said this about 2 weeks ago. I was trying to support Phoronix and was browsing on mobile. The site was unusable. They need to get control of that. I have websites and I refuse to ever run ads. Then again, I am not in it for the money.
On the other hand the number of websites that are using ChatGPT to create content, images, and links solely to push ads for profit is getting larger every day. It does not take any effort either. You can pay a monthly fee to have a service auto update your site and the ads are automatic too.
“I use A, so A must be better than B. Otherwise I’m wrong, and I don’t like that.”
The reality, of course, is that there is no “Best distro” for all use cases, and personal choice is absolutely a qualifier in defining those use cases. If your personal requirement is for a neon pink desktop and rather aged theming aimed at little girls, then you’ve absolutely chosen “The best distro” for you and don’t let anyone tell you differently.
Sorry for being a bit of a dick, I think you mean that the file “is 3.8MB”.
“mb” would mean millibit; 3.8 millibits is an impossibly small file size and would never exist in practice (though I am sure that with some clever maths a zip bomb could be designed so that one bit of data could be compressed into 3.8 millibits).
MB is the proper shorthand for megabyte, a decent file size for a high-quality picture, depending on the format and compression.
Unless we analyze the image and determine its format and compression settings, we have no idea whether 3.8MB is a reasonable file size, or whether the mods have hidden a rar file inside the picture file. The latter is highly improbable, however.
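To make the unit pedantry concrete, here's a tiny sketch of what the two readings actually work out to (using SI decimal megabytes, i.e. 10^6 bytes, not 2^20-byte mebibytes):

```python
# "MB" = megabyte = 10**6 bytes; "mb" read literally = millibit = 10**-3 bits.
size_MB = 3.8
size_bytes = size_MB * 10**6   # 3.8 MB -> 3,800,000 bytes
size_bits = size_bytes * 8     # -> 30,400,000 bits

millibits = 3.8 * 10**-3       # "3.8 mb" taken literally: 0.0038 of a bit

print(f"{size_MB} MB = {size_bytes:,.0f} bytes = {size_bits:,.0f} bits")
print(f"'3.8 mb' read literally would be {millibits:.4f} bits")
```

So the two interpretations differ by about ten orders of magnitude, which is why the capitalization nitpick exists at all.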
Most of the hate is because of the maintainers not maintaining their security certificates. Another similar distro is EndeavourOS, which I personally prefer. But either way, find what works for you.
Just out of curiosity, I looked for that a couple of months ago and found that it’s relatively easy to transform a Manjaro installation into Arch or EndeavourOS. IIRC it was just a matter of adding new repo keys and changing the repos. Anyone attempting that should look up the guide for details.
I’m not sure what you mean. Artists use Photoshop for drawing, yet Adobe advertises Photoshop mostly for image editing. Even though Adobe advertises Photoshop for image editing, which should include fully editing your own photographs imo, the only proper Denoise AI is built into Lightroom lol. Photopea also supports pressure sensitivity, so it should work just fine for drawing. Tools aren’t that big of a deal. People who design beautiful presentation decks use PowerPoint after all… with the default system fonts.
Did a search for ubuntu “integrity: problem loading x.509 certificate” and the first result indicates out of date bios certificates needed for secure boot on older laptops. Disabling secure boot seems to be the suggested fix.
You might check your BIOS clock time too, if the certs are ‘expired’, it might be the future, or more likely, the past. Certs have validity timers that specify start and end.
It’s more likely that your BIOS is just old, and you’ll have to keep secure boot disabled from now on.
So essentially it’s running a single computer as if it were two separate workstations?
I could see an implementation similar to those running a VM with a dGPU for gaming. User A could run a login against the primary GPU and OS. User B could run a VM with several cores allocated and the secondary GPU dedicated to the VM. If any shared file resources in the primary OS are needed, KVM has ways to do that as well.
Not entirely sure why this reply is being panned (was at -6 when I first saw it).
OP is in the process of upgrading their PC to a Ryzen 9. If we make the assumption that this Ryzen 9 is on the AM5 platform, the CPU comes equipped with an iGPU, meaning the RTX 3060s are no longer needed by the bare metal. So, installing a stable, minimal point release OS as a base would minimize resource utilization on the hardware side. This could be something like Debian Bookworm or Proxmox VE with the no-subscription repo enabled. There’s no need for the NVIDIA GPUs to be supported by the bare metal OS.
Once the base OS is installed, the VMs can be created, and the GPUs and peripherals can be passed through. This step effectively removes the devices from the host OS – they don’t show up in lsusb or lspci anymore – and “gives” them to the VMs when they start. You get pretty close to native performance with setups of this nature, to the point that users have set up Windows 10/11 VMs in this way to play Cyberpunk 2077 on RTX 4090s with all the eye candy, including ray reconstruction.
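For anyone curious what "removing the devices from the host" looks like in practice, a typical VFIO setup boils down to two small config changes (sketch only; the PCI vendor:device IDs below are illustrative for one RTX 3060 variant, so check your own with `lspci -nn`):

```
# /etc/default/grub -- enable the IOMMU (Intel shown; use amd_iommu=on on AMD)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- claim the guest GPU (and its audio function)
# for vfio-pci at boot, before the nvidia driver can grab it
options vfio-pci ids=10de:2503,10de:228e
softdep nvidia pre: vfio-pci
```

After a rebuild of the initramfs and a reboot, the GPU is invisible to the host and can be attached to a VM as a PCI host device in virt-manager or Proxmox's hardware tab.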
Downsides:
Three operating systems to maintain: bare metal, yours, and your partner’s.
Two sets of applications/games to maintain: yours and your partner’s.
May need to edit VM configs somewhat regularly to stay ahead of anti-cheat measures targeted at users of VMs.
Performance is not identical to bare metal, but is pretty close.
VM storage is isolated, so file sharing requires additional setup.
Upsides:
If you don’t know a lot about Linux, you’ll know a bunch more when you’re done with this.
Once you get the setup ironed out, it won’t need to change much going forward.
Each VM’s memory space is isolated, so applications won’t “step on each other” – that is, you can both run the same application or game simultaneously.
Each user can run their own distro, or even their own OS if they wish. You can run Fedora and your partner can run Mint, or even Windows if they really, really want to. This includes Windows 11 as you can pass an emulated TPM through to meet the hardware requirements.
Host OS can be managed via web interface (cockpit + cockpit-machines) or GUI application (virt-manager).
It’s not exactly what OP is looking for, but it’s definitely a valid approach to solving the problem.
I came to the comment section to recommend Proxmox or another hypervisor as well. If it were a system with just one GPU, I wouldn’t, as splitting a single GPU between two VMs can be difficult. With two GPUs, though, I think the VM route is definitely the cleaner and easier way to go; trying to run two seats under one OS can be a lot worse. One caveat I’ll add is that resources are more strictly assigned to each seat, so memory and CPU can’t be shifted to whoever needs them more as readily. Another positive is that it would be super simple to create a third VM with a small amount of resources for running a small self-hosted server of some kind on the same box.
honestly, it’s not worth it. hard drives are cheap, just plug one in via USB 3 and do all the write operations there. that way your little SBC doesn’t suffer the performance overhead of using docker.
The point with an external drive is fine (I did that on my RPi as well), but the point with performance overhead due to containers is incorrect. The processes in the container run directly on the host. You even see the processes in ps. They are simply confined using cgroups to be isolated to different degrees.
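This is easy to see for yourself, assuming Docker is installed (the container name `demo` below is just a placeholder):

```shell
# Start a container running a single long-lived process
docker run -d --rm --name demo alpine sleep 300

# The containerized process shows up in the *host's* process table:
ps aux | grep '[s]leep 300'

# ...and its cgroup membership is what actually confines it
# (pgrep -f may match more than one PID if you run this repeatedly):
cat /proc/$(pgrep -f 'sleep 300' | head -n1)/cgroup

docker stop demo
```

There's no hypervisor layer here: it's the same kernel scheduling the same process, just inside namespaces and cgroups.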
docker images have a ton of extra processes from the OS they were built in. Normally a light distro is used to build images, like Alpine Linux. but still, you’re executing a lot more processes than if you were installing things natively.
Of course the images do not contain the kernel, but they still contain a lot of extra processes that would be unnecessary if you were running things natively.
To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script to do some filesystem/permission setup and then run a single process).
Containers running multiple processes are possible, but hard to pull off and therefore rarely used.
What you likely think of are the files included in the images. Sure, some images bring more libs and executables along. But they are not started and/or running in the background (unless you explicitly start them as the entrypoint or using for example docker exec).
The Tumbleweed installer is great, the general feel of the distro is polished, modern, up-to-date and efficient.
As other people have said, use the terminal to update both flatpaks and packages.
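On Tumbleweed that amounts to two commands (the `dup`/dist-upgrade form is the recommended way to update a rolling release, rather than plain `zypper up`):

```shell
# Update all distribution packages (Tumbleweed's recommended update path)
sudo zypper dup

# Then update Flatpak apps and runtimes
flatpak update
```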
One main reason I went back to Arch BTW is that, contrary to Suse’s old self-declaration, there isn’t that much software available for my use case. So I ended up with tons of ppa’s, sorry, Suse vendor repos, which relied on each other for libraries, and it eventually broke my system when some required package wasn’t available, while others were available from 4 different private repos.
So I found software management a nightmare: where to find things, and which one to choose? Looking for stuff in YaST, then in gnome-software, then on software.opensuse.org, then on the Build Service… Clicking blindly to trust keys from people with personal repos titled “Use At Your Own Risk”. Updating that mess was complicated, and slow, because gnome-software would lock YaST while checking stuff in the background. I had to kill it even just to relaunch it to search for stuff.
But Tumbleweed installs Snapper on Btrfs by default, so rolling back shouldn’t be a problem? True, and I did it and it’s just delicious: fuck up your system, wind back in two clicks… That is, until the btrfs snapshots got unruly and, with the default settings, ate up all my disk space, forcing me to destroy that great system.
What annoyed me most here wasn’t the software all-over-the-place mess, but that the default factory settings of a great system they themselves contributed to the Linux world wouldn’t keep working 6 months down the line on a small disk (30GB). Thanks to the Arch Wiki I know better now, and it is easily manageable, but it was too late for me.
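For reference, the knobs that tame this live in snapper's per-filesystem config; something like the following in `/etc/snapper/configs/root` keeps snapshots bounded on a small disk (values are illustrative, tune to taste):

```
# /etc/snapper/configs/root -- illustrative limits for a small (~30GB) disk
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="5"            # keep at most 5 pre/post snapshot pairs
NUMBER_LIMIT_IMPORTANT="3"
TIMELINE_CREATE="no"        # skip hourly timeline snapshots entirely
SPACE_LIMIT="0.3"           # cleanup aims for snapshots using <=30% of the fs
```

With those set, the regular `snapper cleanup` runs actually keep disk usage in check instead of letting snapshots accumulate until the filesystem fills.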
Went back to Arch, with snapper, snap-pac, grub-btrfs, snapper-rollback. Can’t yet wind back like in Suse at all, currently at VM number 9, trying again, wish me luck.
TL;DR: a rolling release from a reputable company with one-click rollback is a perfect solution if you keep your system relatively standard.
That would be my only complaint about openSUSE: the default size for the root partition is too conservative. It would default to 30GB, and even though there is a snapper cleanup, scheduled updates would fill it up. I now double the recommended size, and the partition always hovers around 70%. The scheduled btrfs and snapper cleanups keep it managed.
One main reason I went back to Arch BTW is that, contrary to Suse’s old self-declaration, there isn’t that much software available for my use case. So I ended up with tons of ppa’s, sorry, Suse vendor repos, which relied on each other for libraries, and it eventually broke my system when some required package wasn’t available, while others were available from 4 different private repos.
This is the reason I abandoned both Fedora and openSUSE when I tried them. I like plenty about both of them but things are just simpler on Arch. Despite Arch having less software than most distributions, it tends to be the software I actually want or need to use. The few programs not present can be installed from the AUR. Writing new PKGBUILDs is simple and there is no bureaucracy.
Arch is a pain upfront but I’ve found it tends to save you time later on. It’s not without its downsides, though; the primary one being that I’m the one responsible for managing everything and there are plenty of things I don’t know.
Love your analogy. However I must say windows looks terrible. Then again so do suits, so it holds up. I had to run a win10 VM a while back in order to flip the developer bit on the oculus (don’t even get me started on that PoS). Hadn’t used it in years. Felt like some kind of money grab freak show. I couldn’t even mount an iso without having to visit several pushy sites and use one of those creepy installers. That’s when it hit me how digitally spoiled I truly am.
While I too like the analogy, and agree that Windows is becoming increasingly money grabby, I feel the need to be fair: as an OS it has supported native ISO mounting since Win7, just right click an ISO file and choose “Mount”…
I think they’re happy with just a slightly silly foot logo. “Gnome Network Object Model Environment” is a serious sounding name and I don’t think an animal mascot is what they had in mind for branding (seeing as, they don’t have one). An anime gnome might even be the exact opposite of what their intent is. GNOME is looking to be seen as a professional alternative to MacOS and Windows. Speaking of which, note that Windows and OSX don’t have a mascot either.
Just stumbled upon this comment. Actually, Windows has been using anime mascots for years in Asia. Notably, they haven’t made a new one for Windows 11.