This weekend I'm getting Foundry VTT up behind a reverse proxy with certs for voice/video chat: spinning up a new VM in Proxmox and getting HAProxy configured for it (it already fronts the rest of my services).
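For anyone attempting the same, a minimal sketch of the HAProxy side; the hostname, backend address, and cert path below are placeholders for my setup, not anything Foundry prescribes. The main gotcha is raising the tunnel timeout so the WebSocket connections survive:

```
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/foundry.pem
    acl is_foundry hdr(host) -i foundry.example.com
    use_backend foundry if is_foundry

backend foundry
    # Foundry's WebSocket connections are long-lived
    timeout tunnel 1h
    server foundry1 192.168.1.50:30000 check
```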
I’m very excited for what’s to come. Good to know things are going well with the testing and bugs don’t seem too bad.
As for features I'd like to see: I guess we still can't configure different language layouts for different connected keyboards at the same time? I'd really like to have my USB keyboard in English and the native laptop keyboard in Portuguese; it's annoying to constantly switch the global layout back and forth. At least multiple connected mice seem to work fine.
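On X11 there is at least a per-device workaround with setxkbmap (the device ids below are examples from my machine; it doesn't survive replugging the keyboard, and it won't help on Wayland):

```
# list keyboards and note their XInput ids
xinput list
# then assign a layout per device id
setxkbmap -device 12 us   # USB keyboard
setxkbmap -device 14 pt   # internal laptop keyboard
```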
That's awesome! I wish more OSes would follow, especially Debian. Having support for an OS that can cover the whole perceived lifecycle of the hardware is something that was once (in the 2000s) the standard. It's crucial for businesses, but it's also great for home users.
HIP is amazing. For everyone saying "nah, it can't be the same, CUDA rulez": just try it. It works on NVidia GPUs too (the HIP headers are basically macros that remap everything to CUDA API calls), so if you code for HIP you're targeting at least two GPU vendors. ROCm is the only framework that lets me do GPGPU programming in CUDA style on a thin laptop with an AMD APU while still enjoying 6 to 8 hours of battery life when I'm not doing GPU stuff. With CUDA, in terms of mobility, your only choices are a beefy, expensive gaming laptop with pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an NVidia GPU.
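To give an idea of what that looks like, a minimal sketch of a HIP kernel; the exact same file builds with hipcc for AMD and, through the CUDA back end, for NVidia (buffers are left uninitialized here, this only demonstrates the build and launch path):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    hipMalloc((void**)&x, n * sizeof(float));
    hipMalloc((void**)&y, n * sizeof(float));
    // same triple-chevron launch syntax as CUDA
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    hipDeviceSynchronize();
    hipFree(x);
    hipFree(y);
    printf("done\n");
    return 0;
}
```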
The problem with ROCm is that it's very unstable and a ton of applications break on it. Darktable only renders half an image on my Radeon 680M laptop, and HIP in Blender is much slower than OptiX. We're still waiting on HIP-RT.
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you build good intuition about the hardware differences (warps, for instance, are of size 32 on NVidia but can be 32 or 64 on AMD, and that matters if your code uses warp intrinsics). If, however, you just run AMD's CUDA-to-HIP porting tool, then yeah, chances are things won't work on the first run and you'll need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
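A small sketch of that warp point: query warpSize at run time instead of hard-coding 32, and the same reduction is correct on both vendors (assuming blockDim.x is a multiple of the warp size):

```cpp
#include <hip/hip_runtime.h>

// per-warp tree reduction; warpSize is 32 on NVidia, 32 or 64 on AMD
__global__ void warp_sums(const float* in, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = in[i];
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        v += __shfl_down(v, offset);
    // lane 0 of each warp now holds that warp's sum
    if (threadIdx.x % warpSize == 0)
        out[i / warpSize] = v;
}
```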
How is the situation with ROCm on consumer GPUs for AI/DL and PyTorch? Is it usable, or should I stick with NVIDIA? I'm planning to buy a GPU in the next 2-3 months, and so far I'm thinking of either the 7900 XTX or the 4070 Ti Super, waiting to see how the reviews and AMD's pricing progress.
Works out of the box on my laptop. The export below is there to force ROCm to accept my APU, since it's not officially supported yet; the 7900XTX should have official support.
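A sketch of that override, assuming the standard HSA_OVERRIDE_GFX_VERSION route; the value is hardware-specific (10.3.0 matches RDNA2 APUs, so treat it as an example):

```
# force ROCm to accept an unsupported APU; adjust to your gfx target
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# quick check that PyTorch sees the device
python -c "import torch; print(torch.cuda.is_available())"
```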
Anything that is still broken or works better on CUDA? It is really hard to get the whole picture on how things are on ROCm as the majority of people are not using it and in the past I did some tests and it wasn’t working well.
Hard to tell, as it really depends on your use. I'm mostly writing my own kernels (so, basically as if you were doing CUDA), and doing "scientific ML" (SciML) stuff that doesn't need anything beyond backprop over matrix multiplications, elementwise nonlinearities, and some convolutions, and so far everything works. If you want specific simple examples from computer vision: ResNet18 and VGG19 work fine.
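If it helps, a minimal smoke test along those lines; on ROCm builds of PyTorch the GPU shows up under the regular "cuda" device name, so no code changes are needed:

```python
import torch
import torchvision.models as models

device = torch.device("cuda")              # ROCm maps to the cuda device
model = models.resnet18(weights=None).to(device)   # random weights
x = torch.randn(8, 3, 224, 224, device=device)
loss = model(x).sum()
loss.backward()                            # backprop works too
print(torch.cuda.get_device_name(0))
```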
I think "end-to-end" refers to the "open source" part, not the GPU acceleration. GPUs have always been black magic to get working, so you often have to use proprietary, closed-source blobs from the manufacturer.
The revolution here seems to be that all that black magic can now be implemented in open-source software.
Could be wrong though, that’s just how I interpreted the article.
Yup, it's definitely about the "open-source" part. That's in contrast with Nvidia's ecosystem: CUDA and the drivers are proprietary, and the drivers' EULA prohibits you from using your gaming GPU for datacenter purposes.
If it means I won't have to do a ritual dance under the full moon, facing towards Finland, just to get it installed correctly, I welcome my new gentleman overlords.
I never understood why AMD themselves don't work on integration into Debian and Fedora. That way Ubuntu and RHEL would automatically inherit it; at worst it would end up in Universe/EPEL.
The appeal of it, to me, is the same as why Docker containers are really good. You write your definition, save it to git for example, and if you ever need to set up your computer from scratch, restoring that config will set it up exactly as it was before. But even besides that, being able to roll back if something goes wrong is a big plus.
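For a concrete picture, a minimal sketch of what such a definition looks like; the package choices and the user name are just placeholders:

```nix
# /etc/nixos/configuration.nix
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ git firefox ];
  services.openssh.enable = true;
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

`nixos-rebuild switch` applies it, and `nixos-rebuild switch --rollback` brings the previous generation back if something breaks.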
That's what I keep reading and why I'd like to give it a try. For now I'm still confused about how this is easier or more efficient than sharing a list of packages, restoring a backup, or using downgrade on Arch. I'm really interested, because I like to try new stuff, especially when it brings something of interest.
I really have a hard time seeing the difference for now, after my first setup in a VM, especially since imaging my full Arch system onto a new machine two years ago only took me an hour and fewer than ten commands.
Again, I’m genuinely trying to understand what I’m missing. From my reading NixOS seems to be the only distro I could switch to.
My thoughts, which may have inaccuracies: in NixOS a package declares the exact versions of the dependencies it needs. Updating takes up quite a bit of space, because one app may link against one version of a library while another app uses a different one, both get stored on disk, and your old install is still there to roll back to. On other distros a package just lists its dependencies, and during updates a single dependency may get a bug-fix point release and move forward, so the behaviour of an app can change as its subparts change. Installing a package today versus in six months can give you something that acts differently, because the dependencies updated in the meantime. Nix prevents this, since you get a repeatable install.
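A rough illustration of that; the store hashes below are made up:

```
# every package lives under a hash of all its build inputs, so two
# versions of the same library coexist:
/nix/store/9p8fq2...-openssl-3.0.13
/nix/store/k2vdl7...-openssl-1.1.1w

# and old system generations stay bootable:
sudo nixos-rebuild switch --rollback
```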
Thanks for taking the time to share this detailed thought. That's an interesting point I'd forgotten, because I never experienced any related issues over 15 years with Arch, but it's still a nice approach. I can certainly see why this is a big plus for NixOS.
I haven't had issues with my openSUSE Leap install in 7 years either: there's careful curation, automated QA testing, and rollback snapshotting if you break something while messing about. But I also have a NixOS machine. It provides a nice way of configuring a repeatable system, which is probably a huge benefit for folks making or deploying Linux devices that are 100% repeatable.
Right, I totally agree. If I had to deploy my config on several machines, or create dedicated configs from a common base, I would be convinced. I'm still not convinced from a simple single-user point of view, but I believe in this distro and like its approach, so I'll continue experimenting with it and we'll see where my journey leads me.
At least for now I'm glad to have a new toy I can mess around with. With my Arch system I was getting this weird feeling of being happy to have an efficient and stable machine while at the same time being bored with having nothing to test, tweak, or destroy and rebuild. I love to learn and discover new things, so I experiment with a bunch of applications and settings I'll never need anyway, but it gets harder and harder to find something that keeps me entertained for more than a day.
I hear you. My openSUSE Leap has been so stable that I got bored with nothing to tweak. Their MicroOS has an immutable system with config-file setup capability, and somebody built a tool to make config-file creation simple (opensuse.github.io/fuel-ignition/edit), so that was fun for a while. But NixOS was a nice distraction too.
Because your Nix config also configures your software, not just installs it. Admittedly, with base NixOS that’s more true with server software than desktop. But with the addition of home manager you can also configure many desktop apps in your Nix config.
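For example, a small sketch of the home-manager style, assuming the stock programs.git module; the values are placeholders:

```nix
{ pkgs, ... }:
{
  programs.git = {
    enable = true;
    userName = "alice";
    userEmail = "alice@example.com";
  };
  programs.alacritty.enable = true;
}
```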
Thank you for this addition. I very much appreciate the fediverse community for helping people understand things, sharing knowledge, and acting nicely (excluding the rare people who are clearly not used to living in a sane community). I've seen home manager, but it raises one more question for me: what's the added value compared to, for example, stow? Thanks again for sharing your thoughts.
I've never used stow, so I can't speak to it specifically. Home manager is nice for two reasons: if you're already using NixOS you get one unified config for your whole system, and because Nix is a programming language generating these configs, you can do things you otherwise couldn't. It also comes with some nice defaults you might not get otherwise.
Due to NixOS's still-early development, Home Manager overlaps in some ways with nix-env, and flakes are still highly experimental. The configuration parameters also change quite significantly as the distro develops. I'm sure this will all settle down as the distro matures, but to be honest that's also what attracts me. I like chaos ^^ Seriously, it shows me the potential for great achievements. I'll continue testing NixOS, but for now I haven't found THE reason to leave Arch yet. If I had to deploy my config on several machines, or create dedicated configs from a common base, I would be convinced. We'll see where my journey leads me.
Sure, but not everything can be defined in the Nix config: firewalls have issues, and some options for packages aren't implemented yet. For example, systemd-networkd doesn't have all of its features exposed.
NixOS in its current form does have its limitations, but it's ever improving. I personally have never had issues doing what I needed firewall-wise, though I haven't done anything complex: mostly just opening ports and a little port redirecting.
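For reference, that kind of thing is a one-liner in the config; the port numbers here are just examples:

```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
networking.firewall.allowedUDPPorts = [ 51820 ];
```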