Ooh, a CRT monitor. And that’s an odd resolution for it to be suggesting. You could try driving it at 1280x1024 at 60Hz. If that doesn’t work, try 800x600 at 60Hz, the traditional lowest SVGA resolution (the picture may be slightly distorted if it really is a 5:4 monitor). If that doesn’t work, try the traditional VGA resolution of 640x480 just to get something going. I’d recommend using X, as Wayland has probably not been tested much on hardware this old. And the monitor almost certainly has no clue how to deal with a widescreen resolution, or with anything wider than its recommended mode.
(I was still using a 17" CRT with X at 1280x960 up to about five years ago. I had no issues ever.)
X/X11 is a client-server protocol from the age of 10Mbps networks, intended for a bunch of “dumb terminals” connected to a mainframe that runs the apps, with several “optimizations” that over time have become useless cruft.
Wayland is a local-machine display system, intended for computers capable of running apps on the same machine as the display (i.e., pretty much everything from the past 30 years).
Nowadays, it makes more sense to have a Wayland system (with some RDP app if needed) than an X11 system with a bunch of hacks and cruft that only make everything slower and harder to maintain. An X11 server acting as a “dumb terminal” (XWayland) can still be run on a Wayland system to display X11 client apps if needed.
I think if you have no answer, it could be that NixOS doesn’t solve any problem for you. In effect, it’s not better. Don’t buy into social media hype. It’s just a tool like any other.
You’re spot on, and that’s what this discussion helped me figure out: I have no problem. I knew that, but I also thought that NixOS would bring something new to improve my Linux usage. So far I only see such improvements for servers or for deployment across several machines, not for a single user with standard needs (and this statement may be wrong and due to my limited experience with NixOS).
But NixOS’s approach is quite different from others, and I feel like I may discover something of interest to me once I learn more about it. Also, just for the sake of learning and discovering, I will continue experimenting with it for a while.
In short, Nix reduces the setup time, both for your system and for your projects. If you find yourself spending a while setting stuff up (for example, after a reinstall; or maybe you want to run your project on another PC and need to install the right dependencies), Nix will help. Otherwise, if your desktop is vanilla Fedora or whatever and you don’t do much programming (or you don’t have any dependency management problems), Nix probably isn’t for you.
HIP is amazing. For everyone saying “nah, it can’t be the same, CUDA rulez”, just try it: it works on NVidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls), so if you code for HIP you’re basically targeting at least two GPU vendors. ROCm is the only framework that allows me to do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I don’t do GPU stuff. With CUDA, in terms of mobility, the only choices you get are a beefy and expensive gaming laptop with pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an NVidia GPU.
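If you’ve never looked at it, HIP code is almost literally CUDA with the prefixes swapped, and the same source builds for both vendors. A minimal sketch (names and sizes here are just illustrative, and error checking is omitted):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Classic element-wise kernel; identical to its CUDA twin.
__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> h(n, 1.0f);
    float* d = nullptr;
    // On the NVidia platform these hip* calls are thin wrappers that
    // remap to cudaMalloc/cudaMemcpy/cudaFree.
    hipMalloc(&d, n * sizeof(float));
    hipMemcpy(d, h.data(), n * sizeof(float), hipMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    hipMemcpy(h.data(), d, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(d);
    std::printf("%f\n", h[0]); // 2.0 on either vendor's GPU
}
```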
The problem with ROCm is that it’s very unstable and a ton of applications break on it. Darktable only renders half an image on my Radeon 680M laptop. HIP in Blender is also much slower than OptiX. We’re still waiting on HIP-RT.
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD and that makes a difference if your code makes use of warp intrinsics). If however you just use AMD’s CUDA-to-HIP porting tool, then yeah chances are things won’t work on the first run and you need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
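Concretely, a warp reduction that stays portable has to key off the warpSize built-in instead of hardcoding 32. A device-side sketch (drop it into a kernel like the one above):

```cpp
#include <hip/hip_runtime.h>

// Sum a value across the warp with shuffle intrinsics. warpSize is a
// device built-in: 32 on NVidia, 32 or 64 on AMD depending on the chip.
// Hardcoding 32 here would silently drop half the lanes on a 64-wide
// AMD wavefront.
__device__ float warp_sum(float v) {
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        v += __shfl_down(v, offset);
    return v; // lane 0 ends up holding the warp's total
}
```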
How is the situation with ROCm using consumer GPUs for AI/DL and PyTorch? Is it usable, or should I stick to NVIDIA? I am planning to buy a GPU in the next 2-3 months, and so far I am thinking of getting either the 7900XTX or the 4070 Ti Super and waiting to see how the reviews and AMD’s pricing progress.
Works out of the box on my laptop (the export below is to force ROCm to accept my APU since it’s not officially supported yet, but the 7900XTX should have official support):
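(That override is the usual `HSA_OVERRIDE_GFX_VERSION` trick, e.g. `export HSA_OVERRIDE_GFX_VERSION=10.3.0` to make ROCm treat an RDNA 2 APU as a supported gfx1030 card; the exact value depends on the APU generation.)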
Anything that is still broken or works better on CUDA? It is really hard to get the whole picture on how things are on ROCm as the majority of people are not using it and in the past I did some tests and it wasn’t working well.
Hard to tell as it’s really dependent on your use. I’m mostly writing my own kernels (so, as if you’re doing CUDA basically), and doing “scientific ML” (SciML) stuff that doesn’t need anything beyond doing backprop on stuff with matrix multiplications and elementwise nonlinearities and some convolutions, and so far everything works. If you want some specific simple examples from computer vision: ResNet18 and VGG19 work fine.
Unity employee here; idk anything specific about the departments that handle this, I wouldn’t even know what their name is. With that caveat, I will say that with all the layoffs last year going into this year, the change of CEOs, and the competition between big-company bureaucracy and the dying breath of small-company culture, a lot of departments are behaving erratically. I wouldn’t be surprised if nobody internally has a clear answer why this was banned but others aren’t. Some workers may legit be trying to help but have their hands tied for corporate or maybe even legal reasons; it could also be people trying to keep their heads down and close tickets quickly to keep metrics up, in the hopes they’re less likely to be fired. I think you all know this already, but please don’t be too hard on the workers; we’re doing what we can, but it’s a corporate mess right now.
Yeah, it’s a bit of a shit show for sure. Unfortunately I do not have anything else lined up right now; I know that’s an unsafe decision. My life has been a mess lately, I can only handle so much at once, and finding different work is exhausting.
If you’re a software engineer and you’re in the Unity Austin area, lmk. Assuming you’d be open to writing B2B software, the company I work for is huge and still hiring devs.
A friend of mine worked in a position I would have assumed was considered vital to one of Unity’s products, in fact to my knowledge they were the only one keeping that part running. Apparently the higher-ups were able to lay them off without much hesitation this time around. The company seems to be leaking hard.
You don’t understand how development works, at all. The developers themselves don’t make these kinds of decisions at these companies; they just do what they are told by their higher-ups. The higher-ups happen to be corporate businesspeople who don’t really know much about tech and only care about profits.
The blame for Unity’s failures belongs to the executives and businesspeople, not the developers.
Look, it’s a low level employee of a faceless corporation!
GET 'IM!
Jokes aside, thanks for the transparency, and salute to you and your coworkers for trying to weather the storm caused by “shifting paradigms”… that’s what they call it, right? I know the execs can shift my paradigm, that’s for sure.
Tech in general, but especially the game industry, desperately needs to unionize. If the last couple of years don’t convince tech bros they’re just as expendable as all the other working class out there, idk what will. Got to do something to insulate us from “restructures”, “rightsizing”, and “company resets”.
I don’t use gnome; can someone who does plz tell me what style that is? The color scheme is Everforest, but what’s the rest of the style called? It’d look good on rofi & polybar.
Edit: I guess the theme is also Everforest Dark? I think it’s this one.
Edit 2: Someone has already done most of the work for polybar, rofi, and some other tiling WM tools; dotfiles here. I haven’t tested it myself yet, but it looks pretty good.
X (not formerly twitter) is decades old and is built around deprecated ways of doing things as well as a lot of legacy functions.
Wayland is a relatively new project with the aim of replacing X as a more “modern” display server.
Wayland had some stability issues, but they’ve since improved.
I’m sure Wayland is good and all, but I can’t be arsed replacing X yet. I don’t really have any skin in the game, I just don’t replace functioning components just because they’re old (FYI, bash turns 35 this year). While X does what I need it to do, I’ll keep using it. I’ll probably move over when my distro does.
I’ll leave the technical explanation to someone else.
Yeah. Wayland works great on my laptop. I can’t even log in on my desktop because it’s Nvidia. As a light Linux user the difference doesn’t really mean much to me aside from the fact it doesn’t work on my desktop.
That’s right. To add a few things: X11 isn’t bad. It’s just a big and complex piece of software that has grown for multiple decades. And nobody wants to do big changes or add new things anymore.
Wayland is the modern and “fresh” new approach. I’ve had some issues with my NVidia graphics card, but that wasn’t Wayland’s fault, it was the NVidia drivers’. I have a laptop with just Intel graphics, and both X11 and Wayland run excellently on that machine.
With Linux we often get many choices, and have several alternatives available to do the same / a similar job. That is a bit complicated for someone new. But we should embrace it, be glad that we can pick whatever suits our individual needs. Wayland still has some issues on a few specific setups, but eventually it will replace X11 as the default.
The X font server was deprecated like 10 years ago. I doubt you’ll find it as an option in a modern distribution. Nowadays fonts are rendered by the client (the application) with something like the Cairo library (if I’m not mistaken).
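Something like this, using Cairo’s toy text API directly (a minimal sketch; real apps usually go through Pango or their toolkit, which sit on top of Cairo/FreeType):

```cpp
#include <cairo.h>
// Build: c++ demo.cpp $(pkg-config --cflags --libs cairo)

int main() {
    // The client rasterizes the glyphs itself and hands the display
    // server a finished image; no font server is involved anywhere.
    cairo_surface_t* surf =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 240, 40);
    cairo_t* cr = cairo_create(surf);
    cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL,
                           CAIRO_FONT_WEIGHT_NORMAL);
    cairo_set_font_size(cr, 18);
    cairo_move_to(cr, 5, 26);
    cairo_show_text(cr, "rendered client-side");
    cairo_surface_write_to_png(surf, "text.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surf);
}
```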
There isn’t really a whole thing. Wayland is where Linux is going: the people who developed X11 say it is insecure and it’s too hard to fix the issues, so they went and started Wayland. They should have called it X12 or something; then there would probably be less complaining.
Haha, that’s my thoughts exactly. Anyone who complains should go try to maintain X11 so they can understand why development is moving to Wayland. I’m not a dev at all and I don’t understand it all, but I love reading the blog posts from the devs.
TBF the error can become that big if you do a bunch of unstable operations (i.e. operations that continue to increase the relative error), though that’s probably not what is happening here.
To get to 0.01 error, you’d need to add up trillions of trillions of floating point errors. It will not happen solely because of floating point unless you’re doing such crazy math that you shouldn’t be using primitives in the first place.
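For scale: one double-precision rounding error sits around the 16th significant digit. Even the classic “broken” sum is only off by about 1e-16 (C++ here just for illustration):

```cpp
#include <cstdio>

int main() {
    double s = 0.0;
    for (int i = 0; i < 10; ++i) s += 0.1; // 0.1 has no exact binary form
    std::printf("%.17g\n", s);       // 0.99999999999999989
    std::printf("%.17g\n", 1.0 - s); // ~1.1e-16: ten adds, tiny error
}
```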
As the answer in the link explains, it’s the adjustment of your scaling factor to the nearest whole pixel, plus a loss of precision when rounding between single- and double-precision floating point values.
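You can reproduce the exact value that way. The geometry below is my guess (a 1920-pixel-wide output at a nominal 1.75 scale, snapped to a hypothetical whole-pixel logical width of 1096), but the mechanism is the point:

```cpp
#include <cstdio>

int main() {
    // Snap the nominal 1.75 factor so the logical width is a whole
    // number of pixels, then round-trip the result through single
    // precision, as some layer of the stack does.
    double snapped = 1920.0 / 1096.0;        // ~1.75182481752
    float  stored  = static_cast<float>(snapped);
    std::printf("%.17g\n", static_cast<double>(stored));
    // prints 1.7518248558044434, the value in question
}
```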
Gnome is coded with JavaScript (lmao 🤣), so yeah, I think you are right.
EDIT: Actually, even though JavaScript and other languages have this issue, the value 1.7518248558044434 doesn’t come from it. There is another reply that explains it and makes total sense. But it’s still pretty lame to know the desktop runs on JavaScript. (Yeah, I hate Gnome.)
It’s not a “language” issue, it’s a “computer” issue. This math is being done on the CPU.
IEEE 754
Some languages do provide arbitrary-precision math (Java’s BigDecimal, for example), but it’s slower to do that. Not what you want if you’re multiplying a 4K matrix every millisecond.
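For illustration, the C++ version of that trade-off, with Boost.Multiprecision standing in for BigDecimal (the 50-digit type is just an example):

```cpp
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <cstdio>
#include <iostream>

int main() {
    // IEEE 754 binary64: the inputs aren't exactly representable.
    std::printf("%.17g\n", 0.1 + 0.2); // 0.30000000000000004

    // Decimal type with 50 digits: exact here, but every operation is
    // many times more expensive than a hardware float op.
    using dec = boost::multiprecision::cpp_dec_float_50;
    dec x("0.1"), y("0.2");
    dec z = x + y;
    std::cout << z << "\n"; // 0.3
}
```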
And Gnome is far from the only desktop that uses JS, KDE Plasma, for example, also uses a lot of JavaScript.
It’s weird when people bash Gnome for using JS, when practically everybody else uses it a lot too. Shows that they’re just regurgitating “Gnome = bad!!!” nonsense.
We get it, you think disliking Gnome is a quirky, edgy personality trait.
Mostly C, but you need to type more C code to do the same thing as in JavaScript, so I suppose most of the logic is in JavaScript. The Plasma desktop is 2% JavaScript (invent.kde.org/plasma/plasma-desktop); it’s not comparable. 🙂
There’s a lot more to your UX than just the Plasma desktop. And you’re also trying to pass off Gnome’s shell as being Gnome desktop. Pretty disingenuous.
Using JavaScript isn’t inherently a bad thing. JavaScript can be very useful when used for scripting. Obviously anything with a need for performance will be done in C.
JavaScript isn’t the best language to make a desktop interface in, in my opinion. It can be very efficient, but you could see in bug reports (at least in the past) how bad its performance was, and they needed to refactor parts to C or improve the JavaScript. I’m just laughing and making fun of it using JavaScript, not saying it is slow; Gnome is pretty fast nowadays.
There is less than 4% more C code than JavaScript. That’s quite a lot: many features of the GNOME desktop use JavaScript too, like gestures and mouse events.
Well, I started this thread saying it runs on JavaScript, and I mean that they need JS for most of the interactions with the desktop, like gestures or mouse events. 😞 Even if most of the code is C, we all know you need to write many more lines of C to do the same thing as in JS, so most of the logic in GNOME is computed by JS. We need some Rust here. 🦀 🦀 🦀 🦀
You don’t get to decide what counts as too much JS in a project unless you actually work on it and have in-depth knowledge of it. I don’t like JS, but it has its uses.
Many people are conflating modern Electron bloatware with “JS bad”, but things are not that simple.
If I understand correctly, in your case: let’s say you search for a term, e.g. vanillaOS. You will be able to select from a list of videos, with a preview image and other data (date, views, channel, etc.). If at a later date you wish to search again for this term, by selecting the j option you can easily repeat the same search, with any new videos included.
Ideally I’d like it as a playlist, sort of like youtube-tui’s library, but this seems like it might be a workable solution. Also, even if it turns out not to be, very nice script. I’ve always sucked terribly at bash scripting.