That’s the Nvidia drivers. DKMS just rebuilds the module to match your kernel whenever the kernel updates. Intel and AMD contribute their driver code upstream so you don’t have to do anything extra, but Nvidia doesn’t do that because they are shits.
As far as not needing it for X11: you’re either using nouveau, the reverse-engineered drivers (which, last I tried, were effectively useless for any modern workload), or a non-DKMS build of the Nvidia driver provided by your distro maintainers or someone else, and just didn’t notice.
Probably NVK, which I think just recently completed the Vulkan feature set for the newest cards, but isn’t reaching Windows fps yet. I still haven’t gotten over Nvidia betraying me by dropping the GTX 460 from their drivers at a time when it was still more than enough for me, so I don’t follow it very closely.
Follow the blog posts for Home Assistant’s Year of the Voice initiative. There are lots of pointers from the journey they’ve taken this year getting TTS and STT working in HA.
No, Xorg still ships with up-to-date distros like Debian, Arch, Ubuntu, etc., but several of them have switched to Wayland by default. It’s deprecated because it’s no longer actively developed and is only maintained by a small group of devs, and even that only because those devs work for companies like Red Hat, Oracle, etc. that have a vested interest in fixing those bugs.
I’m guessing you’ve been using kernels from packages provided by your distribution, and its maintainers simply haven’t decided yet that Wayland is widely used enough to enable what it needs in the default kernel config. But that’s just a matter of time.
On the distribution I use, for example, I did not have to compile my own kernel when I decided to check Wayland out, but that’s only because my distribution’s kernel package maintainers decided to enable it earlier.
Debian is my go-to. So long as you’re already comfortable with Linux, you can get gaming working with a tiny bit of elbow grease… and unlike some other distros, Debian is rock-solid.
Maybe this functionality was replaced by the next thing?
Automatic root filesystem soft-reboot: systemctl automatically reboots into a new root filesystem located at /run/nextroot/.
I just gave it a try on my system and it worked just like it did before! I’ll have to change my scripts to mount to /run/nextroot instead of /mnt, but I’m very relieved that it’s still possible. I was having trouble with it all morning. Thank you so much for your reply, it’s much appreciated!
I did this way back in the day on my Mandrake installation with a 1.44 MB floppy. The only tricky part was that I had to run cp from the floppy instead of from the normal $PATH, as I’d wiped out /bin.
Double and triple buffering are techniques in GPU rendering (also used in GPU computing, though only up to double buffering there, since triple buffering is pointless when running headless).
Without them, if you want to do some number crunching on your GPU and your data lives in host (“CPU”) memory, you’d basically transfer a chunk of that data from the host to a buffer in device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer, your GPU is idle because it’s waiting for the copy to finish, so you’re wasting precious GPU compute.
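To make that concrete, here’s a rough sketch of that naive, fully synchronous flow in CUDA (toy kernel, made-up names, no error handling): every chunk is copied and then processed strictly one after the other, so the compute units sit idle during every copy.

```
// Naive synchronous version: copy a chunk, crunch it, repeat.
// The GPU does no useful compute while each blocking copy is in flight.
#include <cuda_runtime.h>

__global__ void process(float *data, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;          // stand-in for the real number crunching
}

void run_sync(const float *host, size_t n_chunks, size_t chunk) {
    float *buf;
    cudaMalloc(&buf, chunk * sizeof(float));
    for (size_t i = 0; i < n_chunks; ++i) {
        // Blocking host->device copy: the GPU waits here.
        cudaMemcpy(buf, host + i * chunk, chunk * sizeof(float),
                   cudaMemcpyHostToDevice);
        process<<<(chunk + 255) / 256, 256>>>(buf, chunk);
        cudaDeviceSynchronize();         // wait for the kernel before reusing buf
    }
    cudaFree(buf);
}
```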
So GPU programmers came up with a trick to reduce or even hide that latency: double buffering. As the name suggests, the idea is to allocate not one but two buffers of the same size on your GPU; call them buffer_0 and buffer_1. If your algorithm is iterative and you have a bunch of chunks in host memory that you want to run the same GPU code on, then at the first iteration you take a chunk from host memory and send it to buffer_0, then launch your GPU kernel on that buffer asynchronously. While it’s running, your CPU gets control back and can do something else, so you immediately prepare the next iteration: you pick another chunk and send it asynchronously to buffer_1. When the previous kernel run finishes, you launch the same kernel on buffer_1, again asynchronously. Then you copy, asynchronously again, another chunk from the host into buffer_0, and you keep swapping the buffers like this for the rest of your loop.
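Here’s roughly what that looks like in CUDA using two streams to get the overlap (again just a sketch with made-up names; process is the same toy kernel as in the previous snippet, and the host buffer would need to be pinned with cudaMallocHost for the copies to really run in the background):

```
// Double buffering sketch: while the kernel crunches one buffer, the next
// chunk is already being copied into the other buffer on the other stream.
#include <cuda_runtime.h>

__global__ void process(float *data, size_t n);   // defined in the previous snippet

void run_double_buffered(const float *host, size_t n_chunks, size_t chunk) {
    float *buf[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&buf[b], chunk * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }
    for (size_t i = 0; i < n_chunks; ++i) {
        int b = i % 2;                   // swap buffers every iteration
        // Async copy into buffer b; control returns to the CPU immediately.
        cudaMemcpyAsync(buf[b], host + i * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        // The kernel queues behind its own copy on the same stream, but runs
        // concurrently with whatever the *other* stream is doing.
        process<<<(chunk + 255) / 256, 256, 0, stream[b]>>>(buf[b], chunk);
        // No sync here: the next iteration immediately starts feeding the
        // other buffer, which is what hides the transfer latency.
    }
    for (int b = 0; b < 2; ++b) {
        cudaStreamSynchronize(stream[b]);
        cudaStreamDestroy(stream[b]);
        cudaFree(buf[b]);
    }
}
```

The key point is that the copy for chunk i+1 is issued on one stream while the kernel for chunk i is still running on the other, so the transfer latency hides behind the compute.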
Now some GPU programmers don’t want to just compute stuff; they might also want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel can keep overwriting buffers before they’re done being displayed, which causes tearing.
So those programmers pushed the double buffering idea a bit further: just add one more buffer to hide the latency of sending things to the screen, and that gives us triple buffering. You can guess how this one works, because it’s exactly the same principle.
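Continuing the same hypothetical CUDA sketch: rotate three buffers so that, at any moment, one can be receiving the next chunk, one can be in the kernel, and one can be handing its results off. Real presentation to the screen would go through the graphics API, so a device-to-host copy of the results stands in for the “send it to the screen” step here.

```
// Triple buffering sketch: three buffers/streams rotate through the stages
// "copy in" -> "compute" -> "copy out" (the copy-out stands in for scanout).
// Work queued on the same stream serializes, so a buffer is never overwritten
// before its previous copy-out has finished.
#include <cuda_runtime.h>

__global__ void process(float *data, size_t n);   // same toy kernel as before

void run_triple_buffered(const float *in, float *out,
                         size_t n_chunks, size_t chunk) {
    float *buf[3];
    cudaStream_t stream[3];
    for (int b = 0; b < 3; ++b) {
        cudaMalloc(&buf[b], chunk * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }
    for (size_t i = 0; i < n_chunks; ++i) {
        int b = i % 3;                   // rotate through the three buffers
        cudaMemcpyAsync(buf[b], in + i * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        process<<<(chunk + 255) / 256, 256, 0, stream[b]>>>(buf[b], chunk);
        cudaMemcpyAsync(out + i * chunk, buf[b], chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[b]);
    }
    for (int b = 0; b < 3; ++b) {
        cudaStreamSynchronize(stream[b]);
        cudaStreamDestroy(stream[b]);
        cudaFree(buf[b]);
    }
}
```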
Lol, why own up to adding animations the system can’t handle when you can blame app and web devs? Gnome users always know where the blame should be laid, and it’s never Gnome.
If the system can’t keep up with an animation (e.g. GNOME’s overview), the fps halves for a moment because of double-buffered vsync: at 60 Hz, a frame that takes even slightly longer than 16.7 ms has to wait for the next vblank, so it’s shown at an effective 30 fps. This is perceived as stutter.
With triple-buffered vsync the fps only drops a little (e.g. 60 fps -> 55 fps), because the GPU can keep rendering into the third buffer instead of waiting, so the stutter is much smaller (if it’s even noticeable).
Biased opinion here, as I haven’t used GNOME since the switch to version 3 and I dislike it a lot: the animations are so heavy that they demand a good GPU with fast VRAM to hide it, and thus GNOME needs to borrow techniques from game/GPU programming to stay fluid for users with less beefy cards.
Not only slow, it drops frames constantly. Doesn’t matter how good your hardware is.
There’s always the Android route, why fix the animations when you can just add high framerate screens to all the hardware to hide the jank. Ah, who am I kidding, Gnome wouldn’t know how to properly support high framerates across multiple monitors either. How many years did fractional scaling take?
After he got a handle on it, Torvalds returned to the kernel. He’s been much more mild-tempered since then. As he mentioned in Tokyo, he won’t be “giving some company the finger. I learned my lesson.”
This is probably a good thing.
Looking ahead, Hohndel said, we must talk about “artificial intelligence large language models (LLM). I typically say artificial intelligence is autocorrect on steroids. Because all a large language model does is it predicts what’s the most likely next word that you’re going to use, and then it extrapolates from there, so not really very intelligent, but obviously, the impact that it has on our lives and the reality we live in is significant. Do you think we will see LLM written code that is submitted to you?”
Torvalds replied, “I’m convinced it’s gonna happen. And it may well be happening already, maybe on a smaller scale where people use it more to help write code.” But, unlike many people, Torvalds isn’t too worried about AI. “It’s clearly something where automation has always helped people write code. This is not anything new at all.”
Indeed, Torvalds hopes that AI might really help by being able “to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don’t need any kind of higher intelligence to find them. But having tools that warn about more subtle cases where, for example, it may just say ‘this pattern does not look like the regular pattern. Are you sure this is what you need?’ And the answer may be ‘No, that was not at all what I meant. You found an obvious bug. Thank you very much.’ We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do.”
But, “What about hallucinations?” asked Hohndel. Torvalds, who will never stop being a little snarky, said, “I see the bugs that happen without AI every day. So that’s why I’m not so worried. I think we’re doing just fine at making mistakes on our own.”
There were no questions about whether maintainers would start using LLMs themselves; the questions focused on how maintainers would respond to LLM-generated (or -assisted) patches being submitted to them. That focus seems perfectly reasonable to me, but it would have been more interesting to ask whether maintainers will adopt LLMs in their own work. Torvalds might have given a more interesting answer.
I’m a newbie programmer (and have been for quite a few years), but I’ve recently started trying to build useful programs. They’re small ones (under 1000 lines of code), but they accomplish the general task well enough. I’m also really busy, so as much as I like learning this stuff, I don’t have a lot of time to dedicate to it. The first program, which was 300 lines of code, took me about a week to build. I did it all myself in Python. It was a really good learning experience. I learned everything from how to read technical specifications to how to package the program for others to easily install.
The second program I built was about 500 lines of code, a little smaller in scope, and prototyped entirely in ChatGPT. I needed to get this done in a weekend, and so I got it done in 6 hours. It used SQLite and a lot of database queries that I didn’t know much about before starting the project, which surely would have taken hours to research. I spent about 4 hours fixing the things ChatGPT screwed up myself. I think I still learned a lot from the project, though I obviously would have learned more if I had to do it myself. One thing I asked it to do was to generate a man page, because I don’t know Groff. I was able to improve it afterward by glancing at the Groff docs, and I’m pretty happy with it. I still have yet to write a man page for the first program, despite wanting to do it over a year ago.
I was not particularly concerned about my programs being used as training data because they used a free license anyway. LLMs seem great for doing the work you don’t want to do, or don’t want to do right now. In a completely unrelated example, I sometimes ask ChatGPT to generate names for countries/continents because I really don’t care that much about that stuff in my story. The ones it comes up with are a lot better than any half-assed stuff I could have thought of, which probably says more about me than anything else.
On the other hand, I really don’t like how LLMs seem to be mainly controlled by large corporations. Most don’t even meet the open source definition, but even if they did, they’re not something a much smaller business can run. I almost want to reject LLMs for that reason on principle. I think we’re also likely to see a dramatic increase in pricing and enshittification in the next few years, once the excitement dies down. I want to avoid becoming dependent on this stuff, so I don’t use it much.
I think LLMs would be great for automating a lot of the junk work away, as you say. The problem I see is they aren’t reliable, and reliability is a crucial aspect of automation. You never really know what you’re going to get out of an LLM. Despite that, they’ll probably save you time anyway.
I’m no expert, but neither is most of the workforce (although kernel work is, again, much more in the expert realm).
I think experts are the ones who would benefit from LLMs the most, despite LLMs consistently producing average work in my experience. They know enough to tell when the output is wrong, and they’re not so close to the code that they miss the obvious. For years, translators have been using machine translation tools to speed up their work, basically relegating them to being translation checkers. Of course, you mostly see this at companies that contract translators at pitiful per-word rates, where you have to work really hard to get decent pay. That means the company now expects everyone to perform at that level, which means everyone needs machine translation tools to keep up, which means efficiency gets prioritized over quality.
This is a very different scenario to kernel work. Translation has kind of been like that for a while from what I know, so LLMs are just the latest thing to exacerbate the issues.
I’m still pretty undecided on where I fall on the issue of LLMs. Ugh, nothing in life can ever be simple. Sorry for jumping all over the place, lol. That’s why I would have been interested in Linus Torvalds’ opinion :)