That was quite the read, but well worth it. He talks about a lot of the mistakes he made, not just in relation to his crime, but as a developer, project leader, and human being in general.
He discusses what he would have done differently, and how he thinks that could have changed things not only for him but for his software as well.
He mentions multiple times how much he wishes that the conflict-handling and social classes he has access to in prison had been available to him in school. He ends the letter with a call to action for exactly that, asking people to try to influence legislation so more youth get access to this kind of information and cases such as his can be avoided.
Yes, which is why it is a little odd for the article author to include it without context, because we all immediately think of one social mistake that has nothing to do with Linux.
He did mention the murder of his wife and said he would detail his regret to anyone who asked. The rest of the letter describes the “social mistakes” in dealing with co-workers and the Linux community. He even asks that those co-workers’ names be added to the credits and his negative comments about them be deleted. There’s no forgiving what he did to his wife but there’s at least some evidence he’s changed since that happened.
He did mention the murder of his wife and said he would detail his regret to anyone who asked.
This is true - I’m reacting more to the title than the content. It’s a very peculiar choice of words.
There’s no forgiving what he did to his wife but there’s at least some evidence he’s changed since that happened.
Perhaps - it’s hard to tell. It still reads a lot like one of his standard narcissistic rants even as he’s complimenting others. It’s still all about his “dream”.
I’m not a doctor but he certainly seems neurodivergent based on his writing. It’s hard to imagine him ever changing in some significant way and being “rehabilitated” enough to be allowed back into society, hence the “some evidence”. It might be best he remains in jail rather than be paroled.
Yeah - I mean - I don’t want to get into the business of analyzing somebody’s mental state, but he definitely seems to have issues with fixation. But I also don’t want to cross the line into saying that he’s necessarily dangerous because of that. He’s dangerous for other reasons, though. I agree with your “some evidence” line in that he does seem to be focusing on the part of his personality that seems the most dangerous - his inability to manage conflict. Prison does provide for that conflict - but it also provides many rules and structures that he wouldn’t have on the outside. Dunno. I have a difficult time saying that anybody who has murdered their wife should ever see freedom again at all - “reformed” or not.
If it means I won’t have to do a ritual dance under the full moon, facing towards Finland, just to get it installed correctly, I welcome my new gentleman overlords.
I never understood why AMD themselves don’t work on integration in Debian and Fedora. That way Ubuntu and RHEL would automatically inherit it. At worst it would end up in Universe/EPEL.
I think end-to-end refers to the “open source”, not the GPU acceleration. GPUs have always been black magic to get working, and so you often have to use proprietary, closed-source blobs from the manufacturer to get them to work.
The revolution this is bringing seems to be that all of that black magic has now been implemented in open-source software.
Could be wrong though, that’s just how I interpreted the article.
Yup, it’s definitely about the “open-source” part. That’s in contrast with Nvidia’s ecosystem: CUDA and the drivers are proprietary, and the drivers’ EULA prohibits you from using your gaming GPU for datacenter use.
HIP is amazing. For everyone saying “nah, it can’t be the same, CUDA rulez”, just try it: it works on NVidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls), so if you code for HIP you’re basically targeting at least two GPU vendors. ROCm is the only framework that allows me to do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I’m not doing GPU stuff. With CUDA, in terms of mobility, your only choices are a beefy and expensive gaming laptop with pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an NVidia GPU.
The problem with ROCm is that it’s very unstable and a ton of applications break on it. Darktable only renders half an image on my Radeon 680M laptop. HIP in Blender is also much slower than OptiX. We’re still waiting on HIP-RT.
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD and that makes a difference if your code makes use of warp intrinsics). If however you just use AMD’s CUDA-to-HIP porting tool, then yeah chances are things won’t work on the first run and you need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
How is the situation with ROCm when using consumer GPUs for AI/DL and PyTorch? Is it usable, or should I stick with NVIDIA? I am planning to buy a GPU in the next 2-3 months, and so far I am thinking of getting either the 7900XTX or the 4070 Ti Super and waiting to see how the reviews and AMD pricing progress.
Works out of the box on my laptop (the export below is to force ROCm to accept my APU since it’s not officially supported yet, but the 7900XTX should have official support):
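Roughly this, if you want to reproduce it. HSA_OVERRIDE_GFX_VERSION is the usual knob for making the ROCm runtime accept a not-officially-supported part; the value below is just an example, so substitute whatever matches your APU (and skip the override entirely on a supported card like the 7900XTX):

```python
import os

# Make the ROCm runtime treat the APU as a supported gfx target.
# The value is only an example; it must be set before torch initializes HIP.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")

import torch

print(torch.cuda.is_available())      # ROCm builds of PyTorch reuse the torch.cuda API
print(torch.cuda.get_device_name(0))
x = torch.randn(1024, 1024, device="cuda")
print((x @ x).sum().item())
```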
Anything that is still broken or works better on CUDA? It is really hard to get the whole picture of how things are on ROCm, as the majority of people are not using it, and when I did some tests in the past it wasn’t working well.
Hard to tell, as it really depends on your use. I’m mostly writing my own kernels (so, as if you’re doing CUDA basically) and doing “scientific ML” (SciML) stuff that doesn’t need anything beyond backprop through matrix multiplications, elementwise nonlinearities, and some convolutions, and so far everything works. If you want some specific simple examples from computer vision: ResNet18 and VGG19 work fine.
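If you want a quick smoke test for that, this is about all it takes on a ROCm build of torch/torchvision (nothing here is AMD-specific, which is kind of the point):

```python
import torch
import torchvision

# "cuda" is the ROCm/HIP device on AMD builds of PyTorch.
model = torchvision.models.resnet18(weights=None).to("cuda").eval()
x = torch.randn(8, 3, 224, 224, device="cuda")
with torch.no_grad():
    print(model(x).shape)  # expect torch.Size([8, 1000])
```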
Don’t a lot of CPUs like Snapdragons already have “performance cores” and “efficiency cores” that the kernel has to be able to recognize in order to switch between them? This sounds neat but I’m just curious what’s different between these situations.
The only difference is the hardware. Intel has their own version that has been in the kernel for a long time; AMD has been struggling to land the concept.
Even Intel has these. I think this patch set goes a bit further and takes into account the silicon-lottery differences between cores (according to the patch series).
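If you’re curious whether your own chip reports per-core differences, the plain cpufreq sysfs files are enough to eyeball it; whether the per-core max frequencies actually differ depends on the driver and kernel, so treat this as a rough check rather than anything definitive:

```python
from pathlib import Path

# Print each core's reported maximum frequency; on favoured-core designs
# these values often differ slightly from core to core.
cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
              key=lambda p: int(p.name[3:]))
for cpu in cpus:
    f = cpu / "cpufreq" / "cpuinfo_max_freq"
    if f.exists():
        print(f"{cpu.name}: {int(f.read_text()) // 1000} MHz")
```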
I’m using the patch set on my Framework 7840U and didn’t notice a difference, though this is really YMMV.
Did you do benchmarks? It probably doesn’t help much for heavily multi-threaded apps, as they should use all cores anyway. And most apps aren’t performance-critical, although it might stabilize FPS in games.
Opening the connections is one thing, but resends and stream ordering can also cause issues: they can delay the latest information from reaching the user-space application, even if its packet has actually arrived, just because some earlier packet has not. There can also be issues with implementations waiting for enough data to be available before sending a packet.
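That last behaviour (waiting for enough data before sending) is Nagle’s algorithm, if I remember right; when you’re stuck with TCP for latency-sensitive traffic, turning it off per socket at least removes that particular delay, though it does nothing about resends or head-of-line blocking:

```python
import socket

# Disable Nagle's algorithm so small writes are sent immediately
# instead of being coalesced while waiting for outstanding ACKs.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```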
If your connection is stable, the latency will more or less be the same, but TCP will consume more bandwidth because of acknowledgement packets, making it harder to keep your connection stable.
On an unstable connection, TCP latency will skyrocket as it resends packets, while UDP will just drop those packets unless the game engine has its own way of resending them. Most engines have that, but they only do it for data that is marked as “important”. For example using an item is important, but the position of your character probably isn’t, because it’ll be updated on the next tick anyway.
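To make the “important vs. droppable” split concrete, here’s a toy sketch of how an engine-style channel might handle it over a single UDP socket; all the names and the JSON framing are made up for illustration, and real engines are far more involved:

```python
import json
import socket
import time

class GameChannel:
    """Toy reliable/unreliable split over one UDP socket."""

    def __init__(self, remote):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.setblocking(False)
        self.remote = remote      # (host, port) of the peer
        self.seq = 0
        self.unacked = {}         # seq -> (encoded message, last send time)

    def send_important(self, payload):
        """Reliable path, e.g. using an item: resent until acked."""
        self.seq += 1
        msg = json.dumps({"seq": self.seq, "reliable": True, "data": payload}).encode()
        self.unacked[self.seq] = (msg, time.monotonic())
        self.sock.sendto(msg, self.remote)

    def send_state(self, payload):
        """Unreliable path, e.g. a position update: safe to drop,
        because the next tick supersedes it anyway."""
        msg = json.dumps({"reliable": False, "data": payload}).encode()
        self.sock.sendto(msg, self.remote)

    def on_ack(self, seq):
        """Peer acknowledged a reliable message; stop resending it."""
        self.unacked.pop(seq, None)

    def resend_overdue(self, timeout=0.1):
        """Called once per tick: resend only the overdue 'important' messages."""
        now = time.monotonic()
        for seq, (msg, sent_at) in list(self.unacked.items()):
            if now - sent_at > timeout:
                self.sock.sendto(msg, self.remote)
                self.unacked[seq] = (msg, now)
```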
This isn’t only an app issue, it’s the implementation in Mutter.
On KDE for example, I’ve set 150% fractional scaling, and all apps look sharp.
I was really hyped when the recent update introduced “proper” fractional scaling, and was bummed when I noticed it didn’t work in many of my apps, especially Electron ones.
Undoubtedly Wayland is the way forward, and I think it’s a good thing. However, I wouldn’t piss all over X, because it served us well for many years. My LMDE 6 still runs X and probably will for the next 2 years at least, because neither the Mint team nor the Debian team rushes into things. They are taking it slow, testing Wayland to make sure no one’s system breaks when they switch.
This is the best approach. Eventually it will all be Wayland but I never understood why this is such an issue. Like any tech it’s progress, no need for heated debates. It’s just a windowing system after all.
I love Wayland until I don’t. I honestly don’t think about it, it gets out of my way and my system is stable, until I go to use something like scrcpy that just doesn’t work at all. Luckily, the amount of things that straight up don’t work is shrinking.
It’s super impressive to see Wayland having its big breakthrough moment. I remember reading about Wayland 10 years ago and worrying it was going to end up as a dead project.
Wayland on an Intel iGPU runs flawlessly and has for several years. However, that’s a matter of drivers: AMD is at the forefront of dGPU support, while NVIDIA is playing catch-up.