This is one of the hardest walls for people to jump over mentally: going from scavenging the internet for binaries to using a package manager.
I think ideally one should understand what they're doing; if you did, you'd realise it's not hard, just different from what you're used to. Usually you install things using a graphical package manager, of which there are a lot. Since I don't know which one you're using, and I haven't used any of them in a long while, I'll use the terminal as an example (the same reason the site uses terminal commands), but all of this is almost assuredly possible via GUI.
To install things you usually run sudo apt install <package-name>. This is a huge advantage on Linux: it works similarly to your phone in that everything gets updated together, but it also installs dependencies separately, which means that instead of having 10 copies of the same library for 10 programs that use it (like on Windows) you get a single one, which is part of the reason binaries are smaller on Linux.
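For example, installing a program that's in the repos is a single command (vlc here is just an example package):

sudo apt install vlc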
The problem with this approach is that some programs are NOT listed there; the only programs there are the ones the maintainers of your distro (Ubuntu in this case) can review and approve. So there are a lot of different solutions for this:
The first and most obvious for Windows users is to download the .deb from the website and just run it like you would a binary on Windows, i.e. by double-clicking it, or from the terminal with sudo dpkg -i <file.deb>. This works, but you lose the advantages of a package installed via your package manager, i.e. you get the same experience as on Windows, so it's not ideal.
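For example (the filename is hypothetical, use whatever you downloaded):

sudo dpkg -i ~/Downloads/MullvadVPN_amd64.deb
sudo apt install -f    # pulls in any dependencies the .deb needed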
The second way is the one they're describing: essentially you're adding a new repository to the package manager, one maintained by the people who wrote the program (instead of the Ubuntu folks). This is a two-step process. The first command, sudo curl -fsSLo /usr/share/keyrings/mullvad-keyring.asc https://repository.mullvad.net/deb/mullvad-keyring.asc, downloads the file https://repository.mullvad.net/deb/mullvad-keyring.asc and puts it in /usr/share/keyrings/mullvad-keyring.asc. This is needed because repositories are not trusted by default, which would otherwise be a security nightmare. You can do this via GUI if your problem is with the terminal: just download the file and copy it to that location; it's simply harder to explain than giving you a command.

The second step adds the repository to the repository list: echo "deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=$(dpkg --print-architecture)] https://repository.mullvad.net/deb/stable $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/mullvad.list. That command has a lot to unpack, but in essence it's writing a line like deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=amd64] https://repository.mullvad.net/deb/stable focal main to the file /etc/apt/sources.list.d/mullvad.list. Because the person who wrote this doesn't know your architecture (e.g. amd64) or your release (e.g. focal), they wrote a command that gets that information from your system; you can instead write the file yourself if you know those. Then install via the package manager as normal (see the sketch below).
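For reference, if you wrote it yourself on a 64-bit Ubuntu 20.04 "focal" system (those two values are the assumption here), the finished file would contain that single line, and then a normal install follows (I believe the package is called mullvad-vpn):

# /etc/apt/sources.list.d/mullvad.list
deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=amd64] https://repository.mullvad.net/deb/stable focal main

sudo apt update
sudo apt install mullvad-vpn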
There's a third, more recent way, which is installing via snap/flatpak. It's similar to installing via the package manager, except you don't add new repos.
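For example (the package IDs are just illustrations; check what's actually published for the app you want):

flatpak install flathub org.mozilla.firefox
sudo snap install firefox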
There's a fourth way, which is manual installation; usually when you compile stuff yourself, you install it manually.
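The classic dance for a source tarball looks something like this (the project name is made up, and the exact steps vary per project, so read its README):

tar xf someprogram-1.0.tar.gz
cd someprogram-1.0
./configure       # some projects use cmake or meson instead
make
sudo make install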
I know it’s a lot to take in, but I’m of the opinion that if you understand what’s happening it makes things easier.
Just as an addendum to your answer: in the command writing to mullvad.list, the | sudo tee /etc/apt/sources.list.d/mullvad.list part is using two helpful Linux utilities. The first is the |, called a pipe, which connects the text output of one program to the text input of another. Here the pipe connects the output of echo (which simply prints a string, in this case composed of the outputs of several other commands) to the program tee. tee, which is given admin privileges by sudo, takes an input stream and splits it between two destinations; in this case those are mullvad.list and, since no other was provided, stdout, the output of the terminal running the command.
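A harmless way to see that in action (writes to a throwaway file and to your screen at the same time; no sudo needed for /tmp):

echo "hello" | tee /tmp/tee-demo.txt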
EDIT:
In the interest of further completeness: another utility used in those commands is the command substitution operator of sh. When the shell is interpreting text, $(some command) gets substituted with the text output by the command in the parentheses. It is another common way of connecting commands on the shell, allowing for more flexible and powerful commands.
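A quick demo you can run safely (the output shown is just an example; yours will reflect your own system):

echo "arch: $(dpkg --print-architecture), release: $(lsb_release -cs)"
# prints something like: arch: amd64, release: focal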
The problem is that for most users, once their setup is complete they won't need to play with it for a while, so after that, any time they need to install something new through the terminal means losing time finding instructions again.
Nothing is learned; to the eyes of a casual user it's just meaningless entries getting copy-pasted, and the same information getting repeated again and again, just with slightly different entries for each program. Meanwhile "how to install a program on Windows" would basically require one page on the whole internet to cover 99% of situations: "Download the install file, double-click it, follow the on-screen instructions to automatically install the program".
The problem is that for most users, once their setup is complete they won't need to play with it for a while, so after that, any time they need to install something new through the terminal means losing time finding instructions again.
Which is why it's better to understand what you're doing than to blindly copy-paste. You won't need to remember these, since whatever you want to install, if it's not in the repos, you'll have to google it, same as if you were on Windows.
Also on Windows the steps are: download the install file, double-click it, follow the on-screen instructions to automatically install the program, then every week or so go back to the website, check if a new version was released, if so download it and install it again. If the configuration would be destroyed by doing this, make a backup first; if the new version is not backwards compatible with your configuration, move the existing configuration aside so that you get the new defaults after install and can reapply your changes afterwards.
That's closer to the truth, and you need to do that weekly for every one of the dozens or hundreds of programs a person has; no wonder people don't update their programs on Windows and become susceptible to lots of exploits over time.
Also, read my option 1, which is what most websites offer you first, i.e. download a .deb and run it, which is the equivalent of Windows, with all of its downsides. For example, if you go to the website that OP posted and click on downloads, you can select Windows, Mac or Linux, download an installer that way, and be done with it. But only Linux has a better option that takes a couple more steps and saves you lots of time in the future.
then every week or so go back to the website, check if a new version was released, if so download it and install it again
Don’t know what kind of program you’re running but… No.
Also if you want adoption you need to make your product easy to use and not ask them to become experts at how things work. Do you think all bike riders know how to adjust their derailleur or even care to know? No, because people have other things they care about. Same goes for computers: if Linux requires users to understand how to do things manually in the terminal then the "year of the Linux computer" will never happen.
the “year of the Linux computer” will never happen.
It won't, and that's fine. People who don't want to learn anything about computers use iOS and Android now. And that's fine. I never want Linux distros to become like that.
Don’t know what kind of program you’re running but… No.
My Linux has updates every week, which means that if I were on Windows, to keep everything just as up to date I would need to check every website to see which app had a release this week: maybe this week Firefox had a new release, maybe next week it's Mullvad VPN, and the week after the NVIDIA driver. If I hadn't checked all of them, I would not know which ones had a new release.
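That's the contrast: on Linux, one command refreshes every repo and updates everything at once, Firefox, the VPN, drivers, the lot:

sudo apt update && sudo apt upgrade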
Also if you want adoption you need to make your product easy to use and not ask them to become experts at how things work
Agreed, but you should also have options so that power users can take advantage of them.
Do you think all bike riders know how to adjust their derailleur or even care to know?
Do you think that bikes should not have gears since most people don't know how they work? No, because even if you don't understand the mechanics you can understand the general idea, and even if you don't understand gears you can just not use them, same as a package manager. Options are always better.
if Linux requires users to understand how to do things manually in the terminal then the “year of the Linux computer” will never happen.
It doesn't require it; you're ignoring the fact that OP could have just clicked download and downloaded an installer, same as they would on Windows. But if you can use the terminal and understand package managers, you can use Linux in a way that is impossible on Windows; if you can't, you can still use Linux the same way you would use Windows, with all of the same downsides.
Yes, as much as I appreciate memory safety and Rust in particular, I'm very worried by this pivot away from copyleft and the GPL. Especially the rewriting-in-Rust phenomenon for fundamental stuff. It's safer, yes, but the rewrites are pretty much all non-GPL, and that seems very risky to me. Make no mistake, the industry is riding this wave to move away from copyleft to permissive licenses.
I wish that people understood the importance of FSF and GNU
You make it sound like a conspiracy. Just accept that some things are organically more popular, like MIT, which is very easy to understand and use for normies. It's not perfect, but that's how it is.
MIT is a terrible license that only got popular because of the popularity of the anti-open source movement in the last decade.
One could write books about what's wrong with the MIT license.
It could even theoretically be argued that MIT has in some ways allowed big tech companies to proliferate, by effectively allowing them to take open-source code, modify it, and then close it off in their proprietary software. What does this mean? It means that the work of countless dedicated open-source developers can be co-opted by companies that have done almost none of the work, reaping several billions of dollars, while the developers who actually did the work make no money. It’s like opening your doors wide only to have someone come in, take your stuff, and sell it back to you.
In contrast, in licenses like the GPL, there’s a requirement that if you use GPL-licensed code and modify it, your new code also has to be open-source under the GPL.
I'm sure there's some community pull as well, because most of the Rust ecosystem seems to have converged on MIT. But what makes me despair is the wilful sidelining of the GPL and everything GNU by some open-source community members/corporate people. So yeah, you're probably right.
But what is wrong with C and C++ apart from the ISO fuck-up (ahem, slow updates)? There's a lot of technical debt, so wouldn't it be better to create an alt-language compiler that adds improvements over C, so that migration is possible in multiple stages?
I should have framed my words better, I guess. Rust is a radically different language, and honestly, none of the features it offers fix the main issue, which is technical debt. I mean yes, there's inline C or FFI, but that's still going to be a radical migration.
What I'm trying to propose is an alternative project, independent from the ISO. Maybe it could be a C-to-Rust or a C-to-Vale migration project. It could be any modern language, I don't really care. But that particular compiler/transpiler/migrationpiler/<something>-piler should have the ability to do step-by-step migration.
I'd say no. Programming safely requires non-trivial transformations in code and a radical change in style, which afaik cannot easily be automated.
Do you think that there’s any chance to convert from this to this? It requires understanding of the algorithm and a thorough rewrite. Automated tools can only generate the former one because it must not change C’s crooked semantics.
I was planning to learn C23 for quite some time. It's a pity, since I've been planning to learn RISC-V with it, sigh. I guess I'm gonna move over to Rust or Zig, whichever makes sense. But I'd probably switch when Vale becomes a legit language.
I think there's no need to stick with one particular language. It's beneficial to learn more languages and bring the "good parts" of their design into your code, whatever you are writing it in.
Btw, it happens that I've learned a bit of RISC-V, with Rust.
C and C++ can’t be fixed retroactively because old code must remain compatible.
If you’re going to implement your own C dialect, you may as well just write a new language.
For C++ that’s Rust, for C that’s probably Zig (Zig will just let you import existing C files, which helps with porting). Carbon and experimental languages like Jakt may also work, it all depends on what your priorities are.
I know the evangelists can be somewhat overwhelming, but its popularity is not unwarranted. It's fairly easy to pick up and has an incredibly enthusiastic and welcoming community. People like it because it's incredibly performant and memory-safe. In terms of DX it's really a joy to work with. It just has a LOT going for it, and the main drawback you'll hear about (difficulty) is really overblown; most devs can pick it up in a matter of months.
The main difficulty I have with Rust (what prevents me from using it), is that the maintainers insist on statically compiling everything. This is fine for small programs, and even large monolithic applications that are not expected to change very often.
But for the machine learning projects I work on, I might want to include a single algorithm from a fairly large library of algorithms. The amount of memory used is not trivial, I am talking about the difference between loading a single algorithm in 50 MB of compiled code for a dynamically loadable library, versus loading the entire 1.5 GB library of algorithms of statically linked code just to use that one algorithm. Then when distributing this code to a few dozen compute nodes, that 50 MB versus 1.5 GB is suddenly a very noticeable difference.
There are other problems with statically linking everything as well, for example, if you want your application to be written in a high-level language like Python, TypeScript, or Lisp, you might want to have a library of Rust code that you can dynamically load into the Python interpreter and establish foreign function bindings to the Rust APIs. But this is not possible with statically linked code.
And as I understand it, it is a difficult technical problem to solve. Apparently, in order for Rust to optimize a program and guarantee type safety and performance, it needs the type information in the source code. This type information is not normally stored in dynamically loadable libraries (the .so or .dll files), so if you dynamically load a library into a Rust program, its type safety and performance guarantees go out the window. So the Rust compiler developers have chosen to make everything as statically compiled as possible.
This is why I don’t see Rust replacing C any time soon. A language like Zig might have a better chance than Rust because it can produce dynamically loadable libraries that are fully ABI compatible with the libraries compiled by C compilers.
So you’re working on your machine learning projects in Zig?
No, Python and C++, which were the languages chosen by both Google and Facebook for their AI frameworks.
I just think that if a systems programming language like Rust does not provide a good way to facilitate dynamic linking the way C and C++ do, such languages will start running into issues as the size of compiled binaries becomes ever larger. I think we might all be a little too comfortable with the huge amount of memory, CPU cycles, and network bandwidth that we have nowadays, and it leads to problems when you want to scale up a deployment. So I think Zig might make a more viable successor to C or C++ as a systems programming language than Rust does.
That said, I definitely think Rust and Haskell’s type systems are much better than that of Zig.
You can load Rust into Python just fine. In fact, several packages have started requiring a Rust compiler on platforms that don't get prebuilt binaries. It's why I installed Rust on my phone.
The build files for Rust are bigger than you may expect, but they're not unreasonably big. Languages like Python and Java like to put their dependencies in system folders and cache folders outside of the project so you don't notice them as often, but I find the difference not that problematic. The binaries Rust generates are often huge, but if you build in release mode rather than debug mode and strip the debug symbols, you can quickly remove hundreds of megabytes of "executable" data.
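For example, a plausible way to shrink a Rust binary (the binary name here is hypothetical):

cargo build --release
strip target/release/myapp    # drop debug symbols from the release build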
Rust can be told to export things in the C FFI, which is how Python bindings are generally accomplished (although you rarely deal with those because of all the helper crates).
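As a rough sketch of that path (the library and function are made up; this assumes a crate built as a cdylib that exports a #[no_mangle] extern "C" add function):

python3 -c "import ctypes; lib = ctypes.CDLL('./libmylib.so'); print(lib.add(2, 3))"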
Statically compiled code will also load into processes fine; it just takes up more RAM than you may like. The OS normally deduplicates dynamically loaded libraries across running processes, but with statically compiled programs you only get the one blob (which itself then gets deduplicated, usually).
Rust can also load and access standard DLLs. The safety assertions do break, because these files are accessed through the C FFI which is marked unsafe automatically, but that doesn’t need to be a problem.
There are downsides and upsides to static compilation, but it doesn’t really affect glue languages like Python or Typescript. Early versions of Rust lacked the C FFI and there are still issues with Rust programs dynamically loading other Rust programs without going through the C FFI, but I don’t think that’s a common issue at all.
I don't see Rust replacing all of C either, because I think Rust is a better replacement for C++ than for C. The C parts it does replace (parsers, drivers, GUIs, complex command line tools) weren't really things I would write in C in the first place. There are still cases where Rust just fails (it can't deal with running out of memory, for one), so languages like Zig will always have their place.
Is it not possible for Rust to optimize out unused functions as with C?
No, Rust can do dead code elimination. And I just checked: Rust can indeed do FFI bindings from other languages when you ask the compiler to produce dynamically linked libraries, but I am guessing it has the same problems as Haskell when it produces .so or .dll files. In Haskell, things like "monad transformers" depend pretty heavily on function inlining in order to achieve good performance.
So I am talking more about how Rust makes use of the type system to make decisions about when to inline functions, which is pretty important when it comes to performance. You usually can't inline across module boundaries unless the modules are all statically linked. So as I understand it, if you enable dynamic linking in your Rust program, you might see performance suffer a lot compared to static linking, and this is why most Rust people (as I understand it) just make everything statically linked by default.
I am not sure that is quite right. I don't think Rust supports just enabling dynamic linking of its dependencies. It can talk to dynamically linked libraries, which is how FFI works, and you can compile Rust crates to be dynamically linked. But when you go down this route you are talking over the C ABI. This requires some effort from the code author to make their APIs exportable as C types, and it means you lose all safety when talking over the C ABI.
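A minimal sketch of that route, assuming a hypothetical mylib.rs whose exports are #[no_mangle] pub extern "C" items:

rustc --crate-type=cdylib -O mylib.rs    # produces libmylib.so on Linux
nm -D --defined-only libmylib.so         # only the C-ABI exports show up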
I also don't think that Rust inlines across a crate boundary unless the function is marked as inline or LTO is enabled; inlining across crate boundaries is expensive and so is only done when explicitly asked for. It is more that you lose features like generics and traits and other things that are not supported over the C API.
Do you need inlining if you just use fixed monad transformers?
I am not sure what you mean by “fixed” monad transformers, if you mean writing your own newtype where the functor variable is the only type variable, essentially what you are doing is hand-inlining the monad transformer, and so no, if you inline by hand, then the compiler doesn’t need to do it.
Haskell inlines all newtype definitions automatically, so if your monad transformer has all of the type variables bound (except for the functor variable, because that is a special case the Haskell compiler is specifically designed to handle) the compiler will usually reduce those to ordinary lambda expressions automatically, and lambda expressions usually optimize to the most efficient machine code.
The only time the compiler cannot reduce a newtype to an efficient lambda is if the non-functor variables, e.g. the state type variable or the exception type variable, are unbound. Those values could become anything at all at its call site, limited only by the constraints set by the type context. So the type context information, a lookup table of type class instances, must be associated with that lambda expression, and in order to do that, the compiler must create a closure around those values. Creating closures allocates values on the heap, and this is much, much slower than efficient lambda expressions, and no faster than allocating a data constructor as with Free Monads.
Alexis King did a presentation on it where she explains all of this extremely well, if you are interested: youtu.be/0jI-AlWEwYI
It is a bit long, but at 17:40 or so she starts talking about strategies for how monads and effects can be implemented in the GHC intermediate code, and compares Free Monads and effects to monad transformers. At 21:15 or so she begins to explain how newtype types can be optimized away completely, newtype constructors don’t exist at all in the low-level code, they are a “zero-cost abstraction.” On the other hand, data constructors (used for Free monads and effects) always allocate something on the heap which is an order of magnitude slower.
Then at around 27:45 she begins to show how newtypes with type variables cannot be inlined across module boundaries for the reason I explained above (type context tables associated with closures), and so monad transformers cannot be optimized across module boundaries.
From my personal experience I can tell you 2 reasons. The first is that this is the first general-purpose language that can be used for all projects: you can use it in the web browser with WebAssembly, it is good for backend, and it is also low-level enough to use for OS development and embedded. Other languages are good only for some things and really bad for others. The second reason is that it is designed around catching errors at compile time: the error handling and strict typing force the developer to handle errors. I have to spend more time creating the program but considerably less time finding and fixing bugs.
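For instance, the browser part is mostly a matter of adding a target (a sketch; real projects typically layer wasm-bindgen or a framework on top):

rustup target add wasm32-unknown-unknown
cargo build --target wasm32-unknown-unknown --release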
My point wasn't that C++ is good. My point was that C++ can be and is used everywhere (desktop applications, web applications, OSs, …) and is older than Rust. So I feel that "this is the first general purpose language that can be used for all projects" is false. Probably "this is the first general purpose language that I (and many others) like to use for all projects" is true, but that is a different claim.
TLDR: You said Rust was the first language capable of systems, app and web development; it isn't.
It depends on what "can be used" means. I really like C# and it "can be used" for that full stack. C#, for example, can write out native machine code, can manually and precisely lay out memory, and can directly link to assembly language routines. You can write an OS in C#. Even as a fan, though, I would certainly argue that it is the wrong tool for that job.
In the same vein, while I know C++ “can” be used for web dev, I would argue that anybody that tries to do so for any significant project is insane.
I am not sure I would use Rust for "everything", but I do think the claim that Rust is one of the first languages where it is reasonable or practical to choose it for any of these uses is valid. Rust code can be very high level and often looks not much different from a scripting language. At the same time, it can go as low-level as you want. This article is about an OS in Rust (and there are a few). Web dev in Rust is totally reasonable and there are a few popular frameworks available. Rust has one of the best WASM stories around.
Before using Rust I was using C++ for most projects and while it is a really powerful language there were some big problems:
no standard build system: most projects use cmake or meson and vendor dependencies with the project. These build systems were really hard to learn (especially cmake; meson is easier). There are package managers these days, such as conan and vcpkg, but there is not really one standard way to build programs like in Rust (see the sketch after this list)
error messages were really hard to understand, especially when the project uses templates
it felt like 3 languages in one: projects written before C++11 differ greatly from C++11 and up
some of the new language features have really weird syntax, for example lambdas
some people say that Rust is hard, but modern C++ is considerably harder to learn; just look at the list of modern C++ features: github.com/AnthonyCalandra/modern-cpp-features. You have to know the different pointer types (unique_ptr, shared_ptr, etc.), templates, rvalue references and move semantics, exceptions, constexpr, and the list goes on
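To make the build-system point concrete, here's a typical out-of-source CMake invocation next to the single standard Cargo workflow (assuming each project's usual layout):

cmake -S . -B build && cmake --build build
cargo build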
That sounds pretty great. I get sick of having to switch gears for every layer. As a hobbyist, it is tough to remember five or six languages well enough when you only code something a few times a year.
Since I do embedded, scripting, and web front and back end, this is sure tempting.
I have been hesitant to try to learn yet another language (this would make…ummm… idk I lost count ages ago). But with all the hype I may break down and give it a whirl.
Sounds like Python may be a better fit if it's supported on the embedded devices you use, as it will cover scripting and backend too. Rust has quite a learning curve and can be rather verbose.
I realize that even $2 systems are running full Linux distros these days but Python does not map to what I think of as “embedded”. If you have a full Python interpreter, it is already a pretty rich environment.
That said, this is what computing is starting to look like. There is less and less “bare metal”. I work with people that claim to be “firmware” engineers and then, when you look, you find out they have a full Ubuntu distro running and they may as well be running on a laptop.
Sorry but I don’t see the reasoning backing the enthusiasm for python. Sure, it is great for scripting (this includes machine learning), but why for anything else?
I do use Python quite a bit for scripting, backend, and apps, and I've used MicroPython a little bit, preferring C/C++ for embedded. It's pretty great for what I need.
I might mess around with Rust out of curiosity anyway, though the downsides you mention make it less compelling for me, personally. I’m not a big fan of verbose languages (e.g., Java, though I have used it for some apps).
after more than 25 years using Linux I could not care less about those dramas; when my distro drops Xorg I'll switch and that's it. I've got way too much stuff to implement myself already, there is no time for that. I mean, I've even embraced systemd…
sorry, my rhetorical question was obviously intended as why I should bother. I don’t see any value in stopping you doing whatever you think is better for you, in fact it is exactly what annoys me the most :)
Well Xorg is pretty much unmaintained and is on its death bed. Modern hardware and software are slowly favoring Wayland due to it being much simpler by design.
To provide features that Xorg can't.
If you don't need features like fractional scaling, VRR, touchscreen gestures, etc. you won't notice a difference.
People who do use those, will. Because for them, those features would be missing or not complete on Xorg.
mmmh, I bet I will not notice any difference either if I don't do shit and keep whatever is working until the day I'll have to switch because my distro drops the packages 🤷🏼
I wish that was my experience, but Nvidia drivers on KDE Wayland have had a lot of oddities and issues that have caused me to go back to Xorg every time I’ve tried (12 times and counting). Wayland is a good move in the right direction, and I look forward to it, but it’s still being implemented.
Oh absolutely, this isn’t to say “Wayland bad”, it’s just to say that a large number of people may not have a smooth transition, so it’s hard to say “just do it”
It was a birthday gift from my wife, and let's not alienate people who don't know computer hardware very well and pick up something from Best Buy. I agree that Nvidia sucks, and many of the issues are indeed their fault, but we also can't neglect the fact that they own the vast majority of the market.
I’ve been a Linux user since the 90s, and nvidia has been a problem as long as I can remember. The wayland issues are just a new chapter in a long saga. ATI used to be the same, but they came around after having been bought by AMD.
If you're already planning to use Linux on something, a quick search will directly tell you that nvidia is a problem. If you got the hardware before planning to use Linux, that sucks, but again, it's nvidia's fault.
I think we absolutely should neglect nvidia's market share and just fully drop support for nvidia cards: either they'll get pressured by angry users to no longer behave like dicks, or they keep doing it and people will only make the mistake of buying nvidia once (or not use Linux). Either way, we'll have gotten rid of a massive headache.
Running AMD/AMD right now for cpu/gfx, and I’m happy with my gaming laptop (and it’s price point).
Linux support and general support of open source was a major factor in my decision. Intel is also really good on the CPU front, but I want to support AMD for its open source and speedy graphics offerings.
It's also quite important to make sure we don't have just a single strong x86 vendor, even though currently, looking at price/performance, you'd almost always go for AMD.
The time before Ryzen was horrible: a 4-core CPU was considered high end, and if you needed something more you had to go for ridiculously overpriced Xeons. Similar for servers: you could get slightly higher core counts there, but when going for more than 8 cores it also got expensive very quickly.
Now we're talking about 16 cores in high-end notebooks, and 64 cores in still reasonably priced pro workstations.
Eventually people will have to get new hardware. That’s the moment to avoid nVidia, that’s how simple this can be.
Also, the problem is nVidia providing shitty Wayland support, not Wayland providing no nVidia support. It's nVidia who has to write the drivers, since they themselves opted to keep their implementation details a secret. There's nothing the Wayland people can do except plead, beg and shame. If nVidia then decides not to care, then I say fuck them.
Not supporting Nvidia cards will make Wayland unusable for at least half the Linux desktop users, probably more. Stats I recall range from 50-75%.
“Just buy non-Nvidia” is not, I repeat, a simple option. Lots of people stick with old GPU models because the price/performance ratio has gone out the window and they cannot afford to drop hundreds or thousands on one. Many others have Nvidia in their laptops.
There’s nothing preventing Wayland from working with Nvidia except the political insistence that it be open sourced. Which Nvidia is not interested in, never was, and never will be. And it’s a red herring to begin with.
TLDR either Wayland bends their stance on open source or their adoption will be severely limited.
Nvidia will feel enough pressure (likely from the ML / HPC space?) to provide open kernelspace support that they’ll actually make that happen.
Which… Has already happened.
Nvidia took a lot of the kernelspace logic that used to be in their proprietary driver and re-architected their GPUs to move that logic into a firmware blob (GSP).
And last year they released a completely Free driver that interfaces with GSP.
This allowed Nouveau developers to finally access critical features like power management (which were basically behind a wall of DRM, as Nvidia used legal and technical measures to lock Nouveau out of their firmware).
Now Nouveau has a new shader compiler, Vulkan support is growing rapidly, and people like me will soon prefer the Mesa stack for Nvidia over Nvidia's own drivers.
And you can bet that Nouveau will work great with all of the Wayland compositors.
This is really the exact wrong point in history to be making the argument you’re trying to make 🤣.
Wow you got that backwards. They don’t do any of that for the sake of Nouveau or Vulkan or Wayland or whatever. They don’t care what people use their open scraps for.
They open up the minimum they can get away with because it’s ultimately meaningless — their proprietary stuff is still hidden away and it’s not like you can use the parts they open with anything else.
This, btw, applies to AMD and Intel too. The only choice you get with proprietary hardware that you have to use (like GPUs) is whose dick you want to suck. They're not your friend and they won't let community pressure them into decisions.
I’m not sure if this is exactly the same issue I had, but mine ended up being resolved by disabling fastboot on the Windows side. Near as I can figure when I “shutdown” from windows, fastboot prevented releasing control of the network adapter to Linux. Wifi would only work if I restarted from windows, or when fastboot got disabled.
Son of a bitch. Instead of “turning shit the fuck off”, is windows putting the wifi card into some sort of eternal WoL mode when it shuts down? And the wifi card isn’t resetting at boot time (or honoring a reset command) to give the linux drivers a known starting state?
Change your distribution to MX Linux and use the MX Package Installer there. Select Mullvad VPN from the "Popular Apps" tab and MXPI will do all these steps for you.
I had moved to a new house but my setup is the same as it was prior. I run updates regularly but I’m pretty sure the issue started after moving and before updating
Yeah, I tried this, and it works from my session, but I still got the same error when trying to run the program. I figured it was because it is called outside the bash session, so the run commands have not been run, but is that perhaps not true?
Thanks! I was trying to implement this, and was trying to figure out how to pass all the arguments! This worked for me! I got some other errors, but they don’t seem related to this, so now to find out what they are all about 😅
Moderately. Same rough idea, with a few other things in the blend. For example, I found it via GNU Shepherd, which it uses, while Nix sounds like it just uses systemd. The Guix package manager will also compile things from source with custom options if it needs to.
Guix is almost like nix but with scheme, right? Any other differences?
I do like Scheme. Nix is quite impressive, but my unpopular opinion is that I am not convinced its philosophy is necessary. Nix feels like a workaround for legacy baggage in POSIX to allow for all its features of full reproducibility of packages and the overall system. Although Gentoo is not exactly reproducible, I feel like the level of control is sufficient to give me the benefits I want.
Nix works for maybe 95% of cases, but the 5% where its workarounds do not work are annoying to deal with. Gentoo, on the other hand, doesn't break so much from the traditional Unix way of doing things, but still grants the user a great deal of freedom and choice.
The biggest difference between Nix and Guix is that Guix doesn’t support non-foss software, meaning you can’t use it as a package manager on other operating systems. I originally wanted to use Guix but use a Mac for work, so that became a deal breaker.
Nix is pretty awesome as a package manager, I’ve been happy with it after the truly unnecessary learning curve brutality. I do not imagine I would ever use the full OS though.
Based on what I've heard so far: GNU Shepherd instead of systemd, a package manager that compiles things from source and allows user-defined compiler options, a totally different way of arranging system files, and Guile Scheme used for everything; it sounds like there's no other kind of configuration anywhere.
It also uses Linux-libre by default, although you can go back to plain Linux, and they’re working on Hurd.
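If you just want a taste of the package manager side, the day-to-day verbs are simple (assuming a working Guix install; emacs is just an example package):

guix search emacs
guix install emacs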
On that last point, there is the nonguix project, with packages for Guix that are not officially supported given they are not free software. I recommend taking a look at the last post at least, since it comes from someone who used Arch and made the move to Guix, not just opinions from people like me, who haven't ever used Guix.
That said, Guix is on my TODO list. The thing is that I want to learn a bit more than minimal Guile, so I can write packages myself (there are always missing packages; even on Arch/Artix + AUR I always end up needing to either tweak something at some point or create a package that's not in there), and also deal with my own services to run with Shepherd. So I don't want to blindly try things out…
It shares with Nix the reproducible build of everything, but the language it uses is Guile, which has some history; Nix has its own language. To me that's a plus for Guix. But the most important part is that the official repos are all free software, and then in the nonguix project one can look for non-free software pieces, which to me is also a plus. I guess most might differ.
But again, if you want to try it, even if it's just out of curiosity, why not do it? I hope those prior posts from someone who migrated there might be helpful.
Yes, that’s a great review! Having one language for everything also sounds pretty great. A hard line on nonfree software is pretty tough, but I’m glad to hear you can “downgrade” back to the Linux kernel if you need to deal with a GPU or something.
Do you have any evidence that writing that line actually works to keep AI from using your comment? If some of the biggest authors alive can’t keep their words out of the algorithm, I’m not convinced that a Lemmy comment stands a chance.
So usually people do install Linux software from trusted software repositories. Linux practically invented the idea of the app store a full ten years before the first iPhone came out and popularized the term “app.”
The problem with the Mullvad VPN is that their app is not in the trusted software repositories of most Linux distributions. So you are required to go through a few extra steps to first trust the Mullvad software repositories, and then install their VPN app the usual way using apt install or from the software center.
You could just download the “.deb” file and double click on it, but you will have to download and install all software security updates by hand. By going through the extra steps to add Mullvad to your trusted software repository list, you will get software security updates automatically whenever you install all other software updates on your computer.
Most Linux distros don’t bother to make it easy for you to add other trusted software repositories because it can be a major security risk if you trust the wrong people. So I suppose it is for the best that the easiest way to install third-party software is to follow the steps you saw on the website.
From your other responses, this is a system issue not a problem with the website.
Lemmy.world's code has this font list for sans-serif: system-ui, -apple-system, "Segoe UI", Roboto, "Helvetica Neue", "Noto Sans", "Liberation Sans", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"
I’d use the dev tools to check which font is being rendered. I’m on Windows so I get Segoe UI, which I find entirely acceptable.
It was a mistake to come down from the trees if you ask me. These days there’s even people saying we should of stayed in the water were life was simpler.
Of course there’s the total extremists who think life was better as a single celled microbe. Those people are always hard to talk to.
We should HAVE stayed in the water. The real fringe radicals are those who defend the idea that crystals are alive. I think they’re lesser lifeforms who don’t deserve social security