Kinda. Redox uses a microkernel architecture and tries to keep only the most essential functionality in ring 0, pushing everything else into userspace. It’s great.
No other company will contribute to LXD now. This is 100% a Canonical tool. Were the big clouds looking at deploying LXD so Canonical tried to block them?
These are 3D filesystem browser apps that let you navigate your filesystem as though files and folders were trees in a forest that you’re walking/flying through.
They were practically useless aside from the wow factor. I believe one of them, SGI’s fsn, made it into a movie scene in Jurassic Park.
For context, LDAC is one of the few wireless audio codecs stamped Hi-Res by the Japan Audio Society, and its encoder has been open source since Android 8.
LDAC is great, but simply stating that the encoder is “open source” is quite misleading (while technically correct). The codec is owned by Sony and heavily licensed. It’s a savvy business move by Sony to make the encoder free to use, though: everyone else can support their standard while Sony charges the manufacturers who want to integrate it into their headphones.
If we want a really free and open high-quality codec, we should push for Opus support over Bluetooth.
Yes… that’s why I made double sure to say ‘encoder’ there.
Xiph really won the lossy codec scene with Opus, and I transcoded all my junk to that format. Hitting (my personal) transparency at 128k VBR is flat-out impressive, and it warms my heart that corpos won’t have a reason to collect taxes on basic things like audio codecs. It’s a different story with Bluetooth audio codecs, though, which I hope will change.
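For anyone who wants to try the same thing, a minimal transcode sketch using opusenc from the opus-tools package (the file names are placeholders):

```
# Encode a FLAC file to Opus at a ~128 kbps VBR target.
opusenc --bitrate 128 --vbr input.flac output.opus
```

ffmpeg with -c:a libopus works just as well if you already have it installed.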
Can’t say I have any interesting stories. Most of mine are just the head-scratching “I don’t know why that didn’t work; guess I need to reinstall” kind of story. Like enabling encrypted LVM on install and suddenly nothing is visible to UEFI. Or trying to switch desktop environments using tasksel and now I have a blank screen on next reboot. That lame kind of stuff.
My coworker though… he was mindlessly copy/pasting commands and did the classic rm -rf $UNSETVARIABLE while in / and nuked months of migrated data on his newly built system. He hadn’t even set up backups yet. Management was upset but lenient.
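For anyone who hasn’t been bitten by this yet: with the variable unset, the path collapses and rm walks whatever is left. A quick sketch of the footgun and two guards (BUILD_DIR is a made-up name):

```
# The footgun: with BUILD_DIR unset, "$BUILD_DIR"/* expands to /* and
# rm eats the whole filesystem (GNU rm only protects the bare "/").
rm -rf "$BUILD_DIR"/*

# Guard 1: make the shell abort on any unset variable.
set -u

# Guard 2: fail loudly if this specific variable is empty or unset.
rm -rf "${BUILD_DIR:?BUILD_DIR is not set}"/*
```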
I was on Windows and got forced into an update, which bricked my computer, so I had to reinstall Windows, except when I did, it asked me for a license key. I tried everything to recover my key but wasn’t able to.
This was around the time Linus Tech Tips was teasing his upcoming month-on-Linux series, so I was like, fuck it, I’ll give it a go. Spent a week on Mint with broken wifi, then tried EndeavourOS, Garuda, and Fedora, and settled on Manjaro. Manjaro was amazing to me. Everything worked out of the box, KDE Plasma looked so clean, and I could set it up exactly how I wanted.
Then I watched the Linus Tech Tips videos on Linux and I was like, wtf, how did he have such a bad experience? Is he dumb?
He’s pretty much the quintessential QA tester. He wants to do things his way, regardless of whether or not the OS wants him to. He’s usually skilled enough to fix anything he messes up, but he doesn’t know enough about Linux to do that here, so he ends up breaking things. I feel like most people have a better experience than he did, but his approach uncovered a ton of bugs and usability issues whose fixes have significantly improved the Linux desktop.
Love those videos, mostly because they’re my perfect argument for why the Linux desktop isn’t ready yet.
Was Linus an idiot in those videos? Yes; Luke even said so, stating that he installed it and then for the month chose not to use his machine (recent WAN Show).
However, it shows just how easy it is for a novice to break the distro, and how much work is needed to get it to Windows-level usability for the general population. Granted, the issues Linus had with Pop!_OS were dumb and shouldn’t have happened. But it showed me that Manjaro existed, which I am using to this day.
I think the Linux desktop is ready for open-minded people who are interested in a new way of doing things. I don’t think it’s ready for people who can’t use a computer or troubleshoot. Windows breaks often, so I’m not as harsh when I see Linux break.
Agreed. Since moving to Linux, I’m surprised how often File Explorer crashes on Win 10, or how often I need to restart Windows for random reasons. It’s to the point that I want to gut my desktop and put Manjaro on it too.
Compared to when I started using Linux in the late 2000s, it has matured to an unbelievable point. To someone who is even slightly interested in learning, it’s perfectly usable as a Windows replacement… depending on your distro, desktop environment, etc.
It’s this “depending” that makes recommending Linux hard for me, since when a problem occurs, I find it’s not as easy to troubleshoot, especially with how many flavours of Linux exist.
ngl i consistently have a better experience running games through wine than using their native versions. linux ports are often completely dysfunctional and it sucks ass
Since there are no replies yet I will take a stab that I hope will help get you closer and not send you off on a wild goose chase because of me being an idiot or something.
You may already know all of the following; if so, sorry.
I usually run into “undefined symbol” errors when some software tries to call a function it expects to find in a library, but the function isn’t there because I have the wrong version of the library.
So yeah I think it may be a dependency issue.
The next thing is to figure out which package holds libharfbuzz, so you can determine what version of that package you have vs. what version is required by DaVinci Resolve.
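If it helps, here’s roughly how I’d poke at it on a Fedora-based system like Nobara (replace hb_whatever with the symbol actually named in your error message):

```
# Which package owns the harfbuzz shared library?
dnf provides '*/libharfbuzz.so.0'

# What version is installed right now?
rpm -q harfbuzz

# Does the installed library actually export the symbol Resolve wants?
nm -D /usr/lib64/libharfbuzz.so.0 | grep hb_whatever
```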
Something I learned about Nobara is that updating software has to be done either in the Nobara package manager or with a specific set of commands on the command line, or you might break stuff (like I just did). It seems plausible that if you updated with a plain dnf update on the command line, that may have caused issues.
Also… Are you installing Resolve from a Flatpak or …?
I would then go look for it in the Nobara package manager, see if it needs updating or not, and see what version it is running.
Another thing I would check is what version is standard in the supported distros. I’m guessing that list includes Fedora and Ubuntu and/or Debian. If the version they expect is different from what you have, then we need a way to make the right version available. I’m not sure what the best answer is, but I have a few ideas I would try if it were me.
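For that comparison, something like this should show what each repo ships (libharfbuzz0b is the Debian/Ubuntu package name, as far as I know):

```
# Fedora / Nobara: version available in the repos
dnf info harfbuzz

# Ubuntu / Debian: version of the runtime library package
apt policy libharfbuzz0b
```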
Man, I tell you, apps that have one-off install approaches like this are so annoying. I’ve run into this a few times. It just leaves me feeling dirty. Why can’t they just distribute an AppImage or rpm, for Pete’s sake?
I just checked on Pop!_OS to see what version I’m running there because I know everything works fine. harfbuzz isn’t even installed…
When I’m finished work later today I’ll have a look on Nobara for the version installed vs. expected, etc.
EDIT: I solved the issue. I came across a similar issue on the GitHub repo for harfbuzz, and while reading the log file again I had an ah-ha moment. More details in the post.
I used to work at a place that had a gigantic QNX install. I don’t know if the QNX we used back then has any relation to QNX now; they certainly don’t look very close.
It was in the ’90s, and they had it set up so that particular nodes handled particular jobs. One node handled boot images and served as the netboot provider, one node handled all of the ARCNET-to-Ethernet communication, one node handled all the serial traffic to the mainframe, and a number of nodes were the main worker nodes that collected data and operated machinery and diverters. All of these primary systems were upper-end 386s or 486s, and they all had local hard disks.
The last class of node they called slave nodes. They were mainly designed for user data ingest, data scanning stations, touch screen terminals, simple things that weren’t very high priority.
These nodes could have hard disks in them, and if they did, they would attempt to boot from them, saving the netboot server a few cycles.
If for some reason they were unable to boot from their local hard drive, they would netboot, format their local hard drive, and rewrite their local filesystem.
If they were unable to rewrite their local filesystem, they could still operate perfectly fine purely off the netboot. The Achilles heel of the system was that you had no idea a node had netbooted unless you looked in the log files. If you booted off your local hard drive, your root filesystem was of course on your local hard drive. But if you had netbooted and it could not rebuild your local filesystem, your root / was literally a partition on the boot server. Because of the design of the network boot, nothing looked like it was remotely mounted.
SOP for problems on one of the slave nodes was to wipe the hard disk and reboot; in the process it would format the hard drive and either fix itself or show up as unreliable, and you could then replace the disk or just leave it out entirely. Of course, if the local disk had failed and the box had already rebooted off netboot without a technician standing there to witness it, that rm -Rf would wipe out the master boot node.
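The cruel part is that the obvious sanity check wouldn’t have saved them, since the netboot made the remote root look local. On a modern Linux box you’d catch it with something like this (purely illustrative; nothing like the actual QNX tooling of the era):

```
# Before any destructive wipe, confirm / is actually a local block device.
root_src=$(findmnt -n -o SOURCE /)
case "$root_src" in
  /dev/sd*|/dev/nvme*) echo "root is local ($root_src), OK to wipe" ;;
  *) echo "root is $root_src, refusing to wipe"; exit 1 ;;
esac
```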
I wasn’t the one that wiped it, but I fully understand why the guy did.
Turns out we were on a really old version of QNX; we were kind of a remote, mostly automated warehouse. They just shut us down for about a week, flew a team out, rebuilt the system on newer software, and set up backups.
I can’t remember what I did to break it, but back when I was in high school I was tinkering right before class and rendered my laptop unbootable. I booted into an Arch Linux USB, chrooted into my install, found the config file I had messed with, and reverted it. I booted back into my system and started the bell-ringer assignment as quickly as I could. I had one minute left when the teacher walked by, looked at it, and said that I did a really good job. She never knew my laptop was unbootable just two minutes earlier.
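For anyone who hasn’t done that rescue dance, it goes roughly like this (the partition names are made up; arch-chroot ships with the Arch live USB):

```
# From the Arch live USB: mount the installed system and chroot into it.
mount /dev/sda2 /mnt          # root partition (yours will differ)
mount /dev/sda1 /mnt/boot     # boot partition, if separate
arch-chroot /mnt

# ...revert the offending config file, then back out and reboot.
exit
umount -R /mnt
reboot
```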