Many people do dual drives, but if you install Linux second and it's a distro that uses GRUB with "probe foreign OS" enabled, then you don't really need two drives. Make space on the Windows drive, and in the Linux installer create another boot partition, plus root and home. Set the BIOS to boot Linux's GRUB. GRUB will launch and give you Linux, or the choice to chainload to Windows. Windows is unaware it's being kicked off by GRUB, so the Windows and Linux boot partitions leave each other alone. I can't vouch for every distro letting you set up like this, but this is how my openSUSE has been since 2017.
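One caveat: on newer GRUB (2.06+) the foreign-OS probing is disabled by default, so if Windows doesn't show up in the menu you may have to switch it on yourself. Roughly something like this (exact paths vary by distro; openSUSE, for example, puts the config under /boot/grub2/):

    # /etc/default/grub
    GRUB_DISABLE_OS_PROBER=false

    # then regenerate the menu
    sudo grub-mkconfig -o /boot/grub/grub.cfg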
Also, I’d say install Windows first, then Linux. Windows assumes it’s the only OS in the universe and tends to steamroll over the whole boot setup, so I’ve found it much easier to just let Windows do whatever it wants first, then fix it with Linux afterwards.
That is a good option as well, but for experienced users only, and only if you have a lot of RAM and a UPS (or a laptop with a working battery). Otherwise, power failures mess that setup up.
NTFS is the standard for Windows. Nowadays Linux can handle reading/writing NTFS pretty well, but you should probably use the well-established ext4, or maybe btrfs, for the Linux partition.
For Linux, if you’re a beginner, EXT4. Experienced users - BTRFS.
And ntfs-3g is even better at writing to NTFS than Windows is. There are fragmentation comparisons online: Windows makes a fragmented mess, while ntfs-3g takes great care to avoid fragmentation. Plus it reads/writes a lot faster than Windows does.
Yep, use NTFS. You can access it in both Windows and Linux. You'll need ntfs-3g in Linux; it comes bundled with most mainstream distros, but check just in case.
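If it doesn't get mounted automatically, mounting by hand is one line (the device name here is just an example, substitute your Windows partition):

    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows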
Sounds great, and I went the same way for a while. Just be aware that Steam on Linux can have issues with NTFS partitions, so I ended up going the two-drive route after all; much less of a headache.
Yes. But since we’re in Linux land, you may be able to replay the journal and un-dirty your disk by mounting with the ntfs3 driver documented at docs.kernel.org/filesystems/ntfs3.html. Alternatively, you could try ‘ntfsfix -d [your device]’ from the ntfs-3g package to clear the journal and the dirty bit, although whatever the last operation on the filesystem was may be left in an incomplete state, since the journal is not replayed.
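Concretely, the two options look something like this (device name is a placeholder):

    # mount with the in-kernel ntfs3 driver (the journal-replay route described above)
    sudo mount -t ntfs3 /dev/sdXn /mnt

    # or clear the journal and dirty bit without replaying (last operation may be lost)
    sudo ntfsfix -d /dev/sdXn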
I haven’t done it in a while, but with VirtualBox I have used direct disk access by creating a special VMDK with VBoxManage to give a VM access to real partitions.
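From memory, the incantation was along these lines (disk and partition numbers are just examples; you need read/write access to the device):

    # create a VMDK that maps partitions 2 and 3 of the real disk into the VM
    VBoxManage internalcommands createrawvmdk -filename ~/win.vmdk \
        -rawdisk /dev/sda -partitions 2,3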
I never had huge problems with rolling-release distros and Nvidia, but recently got an AMD card and boy… everything (hardware acceleration etc.) basically worked out of the box with a very simple Arch installation. Never again going with Nvidia on Linux.
I would check what your laptop supports, especially if it’s somewhat new. I have one that dual-boots from an M.2 NVMe drive and a SATA SSD. Even if it didn’t have room for both, mine has panels that pop off easily for when I want to swap drives.
After he got a handle on it, Torvalds returned to the kernel. He’s been much more mild-tempered since then. As he mentioned in Tokyo, he won’t be “giving some company the finger. I learned my lesson.”
This is probably a good thing.
Looking ahead, Hohndel said, we must talk about “artificial intelligence large language models (LLM). I typically say artificial intelligence is autocorrect on steroids. Because all a large language model does is it predicts what’s the most likely next word that you’re going to use, and then it extrapolates from there, so not really very intelligent, but obviously, the impact that it has on our lives and the reality we live in is significant. Do you think we will see LLM written code that is submitted to you?”
Torvalds replied, “I’m convinced it’s gonna happen. And it may well be happening already, maybe on a smaller scale where people use it more to help write code.” But, unlike many people, Torvalds isn’t too worried about AI. “It’s clearly something where automation has always helped people write code. This is not anything new at all.”
Indeed, Torvalds hopes that AI might really help by being able “to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don’t need any kind of higher intelligence to find them. But having tools that warn about more subtle cases where, for example, it may just say ‘this pattern does not look like the regular pattern. Are you sure this is what you need?’ And the answer may be ‘No, that was not at all what I meant. You found an obvious bug. Thank you very much.’ We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do.”
But, “What about hallucinations?” asked Hohndel. Torvalds, who will never stop being a little snarky, said, “I see the bugs that happen without AI every day. So that’s why I’m not so worried. I think we’re doing just fine at making mistakes on our own.”
There were no questions about whether maintainers would start using LLMs themselves; the questions focused on how maintainers would respond to LLM-generated (or -assisted) patches being submitted to them. That focus seems perfectly reasonable to me, but the first question would have been the more interesting one to ask, and Torvalds might have given a more interesting answer.
I’m a newbie programmer (and have been for quite a few years), but I’ve recently started trying to build useful programs. They’re small ones (under 1000 lines of code), but they accomplish the general task well enough. I’m also really busy, so as much as I like learning this stuff, I don’t have a lot of time to dedicate to it. The first program, which was 300 lines of code, took me about a week to build. I did it all myself in Python. It was a really good learning experience. I learned everything from how to read technical specifications to how to package the program for others to easily install.
The second program I built was about 500 lines of code, a little smaller in scope, and prototyped entirely in ChatGPT. I needed to get this done in a weekend, and so I got it done in 6 hours. It used SQLite and a lot of database queries that I didn’t know much about before starting the project, which surely would have taken hours to research. I then spent about 4 hours fixing the things ChatGPT screwed up. I think I still learned a lot from the project, though I obviously would have learned more if I had done it myself. One thing I asked it to do was generate a man page, because I don’t know Groff. I was able to improve it afterward by glancing at the Groff docs, and I’m pretty happy with it. I still have yet to write a man page for the first program, despite having wanted to for over a year.
I was not particularly concerned about my programs being used as training data because they used a free license anyway. LLMs seem great for doing the work you don’t want to do, or don’t want to do right now. In a completely unrelated example, I sometimes ask ChatGPT to generate names for countries/continents because I really don’t care that much about that stuff in my story. The ones it comes up with are a lot better than any half-assed stuff I could have thought of, which probably says more about me than anything else.
On the other hand, I really don’t like how LLMs seem to be mainly controlled by large corporations. Most don’t even meet the open source definition, but even if they did, they’re not something a much smaller business can run. I almost want to reject LLMs for that reason on principle. I think we’re also likely to see a dramatic increase in pricing and enshittification in the next few years, once the excitement dies down. I want to avoid becoming dependent on this stuff, so I don’t use it much.
I think LLMs would be great for automating a lot of the junk work away, as you say. The problem I see is they aren’t reliable, and reliability is a crucial aspect of automation. You never really know what you’re going to get out of an LLM. Despite that, they’ll probably save you time anyway.
I’m no expert, but neither is most of the workforce (although kernel work is, again, much more in the expert realm).
I think experts are the ones who would benefit from LLMs the most, despite LLMs consistently producing average work in my experience. They know enough to tell when it’s wrong, and they’re not so close to the code that they miss the obvious. For years, translators have been using machine translation tools to speed up their work, basically relegating them to being translation checkers. Of course, you’d probably see a lot of this with companies that contract translators at pitiful rates per word, who need to work really hard to get decent pay. The company then expects everyone to perform at that level, so everyone needs to use machine translation tools to keep up, and efficiency ends up prioritized over quality.
This is a very different scenario to kernel work. Translation has kind of been like that for a while from what I know, so LLMs are just the latest thing to exacerbate the issues.
I’m still pretty undecided on where I fall on the issue of LLMs. Ugh, nothing in life can ever be simple. Sorry for jumping all over the place, lol. That’s why I would have been interested in Linus Torvalds’ opinion :)
Double and triple buffering are techniques in GPU rendering (double buffering is also used in GPU compute, but only up to double: triple buffering is pointless when running headless).
Without them, if you want to do some number crunching on your GPU and your data lives in host (“CPU”) memory, you’d basically transfer a chunk of that data from the host to a buffer in device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer, your GPU is idle while you wait for the copy to finish, so you’re wasting precious GPU compute.
So GPU programmers came up with a trick to reduce or even hide that latency: double buffering. As the name suggests, the idea is to allocate not one but two buffers of the same size on your GPU; call them buffer_0 and buffer_1. If your algorithm is iterative and you have a bunch of chunks in host memory to run the same GPU code on, then on the first iteration you take a chunk from host memory and send it to buffer_0, then run your GPU code asynchronously on that buffer. While it’s running, your CPU gets control back and can do something else, so you immediately prepare for the next iteration: pick another chunk and send it asynchronously to buffer_1. When the previous asynchronous kernel run is finished, you rerun the same kernel, this time on buffer_1, again asynchronously. Then you copy, asynchronously again, another chunk from the host into buffer_0, and you keep swapping the buffers like this for the rest of your loop (see the sketch below).
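To make the choreography concrete, here’s a minimal CUDA sketch of that loop. It’s my own toy illustration, not from any real codebase: the process kernel is a stand-in for the real per-chunk work, and the sizes are arbitrary.

    // toy double-buffering loop: overlap host->GPU copies with kernel runs
    #include <cuda_runtime.h>

    __global__ void process(float *buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 2.0f;  // stand-in for the real work
    }

    int main() {
        const int CHUNK = 1 << 20, CHUNKS = 8;
        float *host;  // pinned host memory, required for truly async copies
        cudaMallocHost(&host, (size_t)CHUNKS * CHUNK * sizeof(float));

        float *buf[2];
        cudaStream_t stream[2];  // one stream per buffer so work can overlap
        for (int b = 0; b < 2; ++b) {
            cudaMalloc(&buf[b], CHUNK * sizeof(float));
            cudaStreamCreate(&stream[b]);
        }

        for (int c = 0; c < CHUNKS; ++c) {
            int b = c & 1;                     // alternate buffer_0 / buffer_1
            cudaStreamSynchronize(stream[b]);  // wait for this buffer's previous round
            cudaMemcpyAsync(buf[b], host + (size_t)c * CHUNK,
                            CHUNK * sizeof(float), cudaMemcpyHostToDevice, stream[b]);
            process<<<(CHUNK + 255) / 256, 256, 0, stream[b]>>>(buf[b], CHUNK);
            // the loop continues immediately and feeds the other buffer while this one works
        }
        cudaDeviceSynchronize();

        for (int b = 0; b < 2; ++b) {
            cudaFree(buf[b]);
            cudaStreamDestroy(stream[b]);
        }
        cudaFreeHost(host);
        return 0;
    }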
Now some GPU programmers don’t just want to compute stuff; they might also want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel can keep overwriting buffers before they finish rendering to the screen, which causes tearing.
So those programmers pushed the double-buffering idea a bit further: just add one more buffer to hide the latency of sending things to the screen, and that gives us triple buffering. You can guess how it works, because it’s exactly the same principle.
Lol, why own up to adding animations the system can’t handle when you can blame app and web devs? Gnome users always know where the blame should be laid, and it’s never Gnome.
If the system can’t keep up with an animation, e.g. Gnome’s overview, the fps momentarily halves with double-buffered vsync: a frame that misses the deadline has to wait for the next vblank, so at 60 Hz you drop straight to 30 fps. This is perceived as stutter.
With triple-buffered vsync the fps only drops a little (e.g. 60 fps -> 55 fps), which is a much smaller drop, so the stutter isn’t as big (if it’s even noticeable).
Biased opinion here, as I haven’t used GNOME since they made the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with fast VRAM to hide it, and thus GNOME has to borrow techniques from game/GPU programming to be fluid for users with less beefy cards.
Not only slow, it drops frames constantly. Doesn’t matter how good your hardware is.
There’s always the Android route, why fix the animations when you can just add high framerate screens to all the hardware to hide the jank. Ah, who am I kidding, Gnome wouldn’t know how to properly support high framerates across multiple monitors either. How many years did fractional scaling take?
I did this way back in the day on my Mandrake installation with a 1.44 MB floppy. The only tricky part was that I had to run cp from the floppy instead of from the normal $PATH, as I’d wiped out /bin.
Maybe this functionality was replaced by the next thing?
Automatic root filesystem soft-reboot: systemctl automatically reboots into a new root filesystem located at /run/nextroot/.
I just gave it a try on my system and it worked just like it did before! I’ll have to change my scripts to mount to /run/nextroot instead of /mnt, but I am very relieved that it is still possible. I was having trouble with it all morning. Thank you so much for your reply! It is much appreciated!
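For anyone finding this later, the whole flow on systemd 254+ is basically (device name is a placeholder):

    # stage the new root, then pivot into it without a firmware reboot
    sudo mkdir -p /run/nextroot
    sudo mount /dev/sdXn /run/nextroot
    sudo systemctl soft-reboot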
Debian is my go-to. So long as you’re already comfortable with Linux, you can get gaming working with a tiny bit of elbow grease… and unlike some other distros, Debian is rock-solid.