I got pissed off at Google Photos, which led me down a self-hosting rabbit hole. Ended up installing Linux everywhere, even my “gaming” PC eventually (I do development, and WSL was a resource hog).
The hardest thing to degoogle has been email; I’ve used the same address for years. But I use Thunderbird, so at least I don’t have to see ads in my fucking inbox.
I DO have an ad blocker. I still ended up seeing an ad, that one time. It might’ve been on mobile, or I might have had my ad blocker turned off. Either way, I’ve gone out of my way to make sure I never see an ad in my email again: used mobile devtools to uninstall the Gmail client, Firefox with the uBlock add-on, etc…
May I recommend Proton Mail? I used Gmail for years, then decided to finally switch, and it was easier than I thought. Tho tbf I also don’t have a lot of people who need to email me; it was mostly services, where it’s easy enough to just point them at the new address.
I’ll have to check it out! Same here, mostly services on my end - but I’ve accumulated enough of them to make it tiresome. Still, I’ll have to do it at some point
I tested kitty and Alacritty when I first found out about advanced terminal emulators. I liked kitty more, but I don’t remember why. I use the kittens all the time; it’s super convenient to play a video or display an image in the terminal. Kitty works on most distros. I wish it worked on Windows, too, so I could use it at work.
X (not formerly twitter) is decades old and is built around deprecated ways of doing things as well as a lot of legacy functions.
Wayland is a relatively new project with the aim of replacing X as a more “modern” display server.
Wayland had some stability issues, but they’ve since improved.
I’m sure Wayland is good and all, but I can’t be arsed replacing X yet. I don’t really have any skin in the game, I just don’t replace functioning components just because they’re old (FYI, bash turns 35 this year). While X does what I need it to do, I’ll keep using it. I’ll probably move over when my distro does.
I’ll leave the technical explanation to someone else.
Yeah. Wayland works great on my laptop. I can’t even log in on my desktop because it’s Nvidia. As a light Linux user the difference doesn’t really mean much to me aside from the fact it doesn’t work on my desktop.
That’s right. To add a few things: X11 isn’t bad. It’s just a big and complex piece of software that has grown for multiple decades. And nobody wants to do big changes or add new things anymore.
Wayland is the modern and “fresh” new approach. I’ve had some issues with my NVidia graphics card, but that wasn’t Wayland’s fault, it was the NVidia drivers. I have a laptop with just Intel graphics, and both X11 and Wayland run excellently on that machine.
With Linux we often get many choices, and have several alternatives available to do the same / a similar job. That is a bit complicated for someone new. But we should embrace it, be glad that we can pick whatever suits our individual needs. Wayland still has some issues on a few specific setups, but eventually it will replace X11 as the default.
The X font server was deprecated like 10 years ago. I doubt you’ll find it as an option in a modern distribution. Nowadays fonts are rendered by the client (application) with something like the Cairo library (if I’m not mistaken).
They have the .deb at the top of their download page, no need to install the PPA repository if you don’t want. You can’t get any closer to “just download and double-click an exe” (well, a .deb) than that on Ubuntu.
I will admit though, I wish there were an easier way to install PPAs.
You got that right. So many contradictory comments for such a simple question.
That said, Linux for home use is a hobby and hobbyists expect a certain level of interest and basic commitment to learning. Also, the Linux community is a bit anti-Windows. So, coming on a Linux forum and complaining that a simple Linux task is too hard, basically because it isn’t Windows and you didn’t bother to read any documentation, pushes ALL the Linux nerd buttons, LOL.
Imagine going on a boardgame forum to complain that some super popular game is dumb because it isn’t like a video game, and too complicated even though you didn’t bother to read the game rules.
As a board game hobbyist, that happens all the time. Our community generally makes an effort to direct them to games with a lower weight and easier rules and encourages them to keep playing to grow the hobby.
Yes, as much as I appreciate memory safety, and Rust in particular, I’m very worried by this pivot away from copyleft and the GPL. Especially the rewrite-it-in-Rust phenomenon for fundamental stuff. It’s safer, yes, but the rewrites are pretty much all non-GPL, and that seems very risky to me. Make no mistake, the industry is riding this wave to move away from copyleft to permissive licenses.
I wish that people understood the importance of FSF and GNU
You make it sound like a conspiracy. Just accept that some things are organically more popular, like MIT which is very easy to understand and use for normies. It’s not perfect, but that’s how it is
MIT is a terrible license that only got popular because of the popularity of the anti-open source movement in the last decade.
One could write books about what’s wrong with the MIT license.
It could even be argued that MIT has in some ways allowed big tech companies to proliferate, by effectively allowing them to take open-source code, modify it, and then close it off in their proprietary software. What does this mean? It means that the work of countless dedicated open-source developers can be co-opted by companies that have done almost none of the work, reaping billions of dollars, while the developers who actually did the work make no money. It’s like opening your doors wide only to have someone come in, take your stuff, and sell it back to you.
In contrast, in licenses like the GPL, there’s a requirement that if you use GPL-licensed code and modify it, your new code also has to be open-source under the GPL.
I’m sure there’s some community pull as well, because most of the Rust ecosystem seems to have converged on MIT. But what dismays me is the wilful sidelining of the GPL and everything GNU by some open source community members/corporate people. So yeah, you’re probably right.
But what is wrong with C and C++ apart from the ISO fuck-up (ahem, slow updates)? There’s a lot of technical debt, so wouldn’t it be better to create an alt-language compiler that adds improvements over C, so that migration is possible in multiple stages?
I should have framed my words better, I guess. Rust is a radically different language, and honestly, none of the features it offers fix the main issue, which is technical debt. I mean, yes, there’s inline C or FFI, but that’s still going to be a radical migration.
What I’m trying to propose is an alternative project, independent from ISO. Maybe it could be a C-to-Rust or a C-to-Vale migration project. It could be any of the modern languages, I don’t really care. But that particular compiler/transpiler/migrationpiler/<something>-piler should have the ability to do step-by-step migration.
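For what it’s worth, one staged path already exists today via Rust’s C FFI: keep the legacy C compiled as-is and port one function at a time behind a safe wrapper. A rough, untested sketch (legacy_checksum is a made-up symbol, and you’d still have to link the existing C objects in):

```rust
use std::os::raw::{c_uchar, c_ulong};

extern "C" {
    // Declared in the untouched legacy C code base, linked in unchanged.
    fn legacy_checksum(buf: *const c_uchar, len: c_ulong) -> c_ulong;
}

// The unsafe boundary stays confined to this one wrapper; everything
// above it can be written in safe Rust while the C side shrinks.
fn checksum(data: &[u8]) -> u64 {
    unsafe { legacy_checksum(data.as_ptr(), data.len() as c_ulong) as u64 }
}

fn main() {
    println!("{}", checksum(b"hello"));
}
```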
I’d say no. Programming safely requires non-trivial transformations in code and a radical change in style, which afaik cannot easily be automated.
Do you think there’s any chance to convert from this to this? It requires understanding the algorithm and a thorough rewrite. Automated tools can only generate the former, because they must not change C’s crooked semantics.
I’d been planning to learn C23 for quite some time. It’s a pity, since I’d been planning to learn RISC-V with it, sigh. I guess I’m gonna move over to Rust or Zig, whichever makes sense. But I’d probably switch when Vale becomes a legit language.
I think there’s no need to stick with one particular language. It pays to learn more languages and bring the “good parts” of their design into your code, whatever you’re writing it in.
Btw, it happens that I’ve learned a bit of RISC-V, with Rust.
C and C++ can’t be fixed retroactively because old code must remain compatible.
If you’re going to implement your own C dialect, you may as well just write a new language.
For C++ that’s Rust, for C that’s probably Zig (Zig will just let you import existing C files, which helps with porting). Carbon and experimental languages like Jakt may also work, it all depends on what your priorities are.
I know the evangelists can be somewhat overwhelming, but its popularity is not unwarranted. It’s fairly easy to pick up and has an incredibly enthusiastic and welcoming community. People like it because it’s incredibly performant and it’s memory safe. In terms of DX it’s really a joy to work with. It just has a LOT going for it, and the main drawback you’ll hear about (difficulty) is really overblown; most devs can pick it up in a matter of months.
The main difficulty I have with Rust (what prevents me from using it) is that the maintainers insist on statically compiling everything. This is fine for small programs, and even for large monolithic applications that are not expected to change very often.
But for the machine learning projects I work on, I might want to include a single algorithm from a fairly large library of algorithms. The amount of memory used is not trivial: I am talking about the difference between loading a single algorithm in 50 MB of compiled code from a dynamically loadable library, versus loading the entire 1.5 GB library of statically linked algorithms just to use that one. Then, when distributing this code to a few dozen compute nodes, that 50 MB versus 1.5 GB is suddenly a very noticeable difference.
There are other problems with statically linking everything as well. For example, if you want your application to be written in a high-level language like Python, TypeScript, or Lisp, you might want a library of Rust code that you can dynamically load into the Python interpreter and establish foreign function bindings to the Rust APIs. But this is not possible with statically linked code.
And as I understand it, this is a difficult technical problem to solve. Apparently, in order for Rust to optimize a program and guarantee type safety and performance, it needs the type information in the source code. This type information is not normally stored in dynamically loadable libraries (the .so or .dll files), so if you dynamically load a library into a Rust program, its type safety and performance guarantees go out the window. So the Rust compiler developers have chosen to make everything as statically compiled as possible.
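To illustrate why source-level type information matters (my own sketch, not anything from the Rust docs): generics are monomorphized, meaning a separate machine-code copy is specialized for every concrete type a caller uses, which only works when the compiler can see the generic source at build time:

```rust
// There is no single compiled symbol for this function; the compiler
// stamps out one specialized copy per concrete T used by callers.
fn double_all<T: std::ops::Add<Output = T> + Copy>(xs: &mut [T]) {
    for x in xs.iter_mut() {
        *x = *x + *x;
    }
}

fn main() {
    let mut ints = [1, 2, 3];
    let mut floats = [1.0f64, 2.5];
    double_all(&mut ints);   // instantiates double_all::<i32>
    double_all(&mut floats); // instantiates double_all::<f64>
    println!("{ints:?} {floats:?}");
}
```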
This is why I don’t see Rust replacing C any time soon. A language like Zig might have a better chance than Rust because it can produce dynamically loadable libraries that are fully ABI compatible with the libraries compiled by C compilers.
So you’re working on your machine learning projects in Zig?
No, Python and C++, which were the languages chosen by both Google and Facebook for their AI frameworks.
I just think that if a systems programming language like Rust does not provide a good way to facilitate dynamic linking the way C and C++ do, these languages will start running into issues as compiled binaries grow ever larger. I think we might all be a little too comfortable with the huge amount of memory, CPU cycles, and network bandwidth that we have nowadays, and it leads to problems when you want to scale up a deployment. So I think Zig might make a more viable successor to C or C++ as a systems programming language than Rust does.
That said, I definitely think Rust and Haskell’s type systems are much better than that of Zig.
You can load Rust into Python just fine. In fact, several packages have started requiring a Rust compiler on platforms that don’t get prebuilt binaries. It’s why I installed Rust on my phone.
The build files for Rust are bigger than you may expect, but they’re not unreasonably big. Languages like Python and Java like to put their dependencies in system folders and cache folders outside of the project so you don’t notice them as often, but I find the difference not that problematic. The binaries Rust generates are often huge, but if you build in release mode rather than debug mode and strip the debug symbols, you can quickly shave off hundreds of megabytes of “executable” data.
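For reference, a sketch of the release settings I mean (standard Cargo profile keys, in Cargo.toml):

```toml
[profile.release]
strip = true       # drop debug symbols from the final binary
lto = true         # link-time optimization, also prunes dead code
codegen-units = 1  # slower compile, smaller and often faster binary
```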
Rust can be told to export things over the C FFI, which is how Python bindings are generally accomplished (although you rarely deal with those directly because of all the helper crates).
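A minimal sketch of what that looks like (function name made up; build with crate-type = ["cdylib"] in Cargo.toml to get a .so/.dll):

```rust
// Exported with an unmangled name and the C calling convention, so any
// FFI-capable language (Python's ctypes, for instance) can call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

In practice, helper crates like PyO3 generate this glue for you, so you write ordinary Rust functions and get a Python module out.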
Statically compiled code will also load into processes fine; it just takes up more RAM than you may like. The OS normally deduplicates dynamically loaded libraries across running processes, but with a statically compiled program you only get the one blob (which itself then gets deduplicated, usually).
Rust can also load and access standard DLLs. The safety assertions do break, because these files are accessed through the C FFI which is marked unsafe automatically, but that doesn’t need to be a problem.
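For example, with the libloading crate (an untested sketch; libm.so.6 is the C math library on Linux):

```rust
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Everything here is unsafe by design: the compiler cannot verify
    // at compile time that the symbol really has this signature.
    unsafe {
        let lib = Library::new("libm.so.6")?;
        let cos: Symbol<unsafe extern "C" fn(f64) -> f64> = lib.get(b"cos")?;
        println!("{}", cos(0.0)); // prints 1
    }
    Ok(())
}
```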
There are downsides and upsides to static compilation, but it doesn’t really affect glue languages like Python or TypeScript. Early versions of Rust lacked the C FFI, and there are still issues with Rust programs dynamically loading other Rust programs without going through the C FFI, but I don’t think that’s a common issue at all.
I don’t see Rust replacing all of C either, because I think Rust is a better replacement for C++ than for C. The C parts it does replace (parsers, drivers, GUIs, complex command line tools) weren’t really things I would write in C in the first place. There are still cases where Rust just fails (it can’t deal with running out of memory, for one), so languages like Zig will always have their place.
Is it not possible for Rust to optimize out unused functions as with C?
No, Rust can do dead code elimination. And I just checked: Rust can indeed do FFI bindings from other languages when you ask the compiler to produce dynamically linked libraries, but I am guessing it has the same problems as Haskell when it produces .so or .dll files. In Haskell, things like “monad transformers” depend pretty heavily on function inlining in order to achieve good performance.
So I am talking more about how Rust makes use of the type system to make decisions about when to inline functions, which is pretty important when it comes to performance. You usually can’t inline across module boundaries unless the modules are all statically linked. So as I understand it, if you enable dynamic linking in your Rust program, you might see performance suffer a lot compared to static linking, and this is why most Rust people (as I understand it) just make everything statically linked by default.
I am not sure that is quite right. I don’t think Rust supports just enabling dynamic linking of its dependencies. It can talk to dynamically linked libraries, which is how FFI works, and you can compile Rust crates to be dynamically linked. But when you go down this route you are talking over the C ABI. This requires some effort from the code author to make their APIs exportable as C types, and it means you lose all safety when talking over the C ABI.
I also don’t think that Rust inlines across a crate boundary unless the function is marked as inline or LTO is enabled; inlining across crate boundaries is expensive, so it is only done when explicitly asked for. It is more that you lose features like generics and traits and other things that are not supported over the C API.
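Concretely, here’s what “losing generics” over the C ABI means (names are mine, for illustration): a generic function cannot be exported directly, so you export a concrete, monomorphized wrapper instead.

```rust
// Fine inside Rust, but it has no single C-compatible symbol.
fn max_of<T: PartialOrd>(a: T, b: T) -> T {
    if a > b { a } else { b }
}

// Only a concrete instantiation like this can cross the boundary.
#[no_mangle]
pub extern "C" fn max_of_i32(a: i32, b: i32) -> i32 {
    max_of(a, b)
}
```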
Do you need inlining if you just use fixed monad transformers?
I am not sure what you mean by “fixed” monad transformers. If you mean writing your own newtype where the functor variable is the only type variable, essentially what you are doing is hand-inlining the monad transformer, so no: if you inline by hand, then the compiler doesn’t need to do it.
Haskell inlines all newtype definitions automatically, so if your monad transformer has all of its type variables bound (except for the functor variable, because that is a special case the Haskell compiler is specifically designed to handle), the compiler will usually reduce those to ordinary lambda expressions automatically, and lambda expressions usually optimize to the most efficient machine code.
The only time the compiler cannot reduce a newtype to an efficient lambda is if the non-functor variables, e.g. the state type variable or the exception type variable, are unbound. Those values could become anything at all at its call site, limited only by the constraints set by the type context. So the type context information, a lookup table of type class instances, must be associated with that lambda expression, and in order to do that, the compiler must create a closure around those values. Creating closures allocates values on the heap, and this is much, much slower than efficient lambda expressions, and no faster than allocating a data constructor as with Free Monads.
Alexis King did a presentation on it where she explains all of this extremely well, if you are interested: youtu.be/0jI-AlWEwYI
It is a bit long, but at 17:40 or so she starts talking about strategies for how monads and effects can be implemented in the GHC intermediate code, and compares Free Monads and effects to monad transformers. At 21:15 or so she begins to explain how newtype types can be optimized away completely, newtype constructors don’t exist at all in the low-level code, they are a “zero-cost abstraction.” On the other hand, data constructors (used for Free monads and effects) always allocate something on the heap which is an order of magnitude slower.
Then at around 27:45 she begins to show how newtypes with type variables cannot be inlined across module boundaries for the reason I explained above (type context tables associated with closures), and so monad transformers cannot be optimized across module boundaries.
From my personal experience I can tell you two reasons. The first is that this is the first general-purpose language that can be used for all projects. You can use it in the web browser with WebAssembly, it is good for backends, and it is also low-level enough to use for OS development and embedded. Other languages are good for some things and really bad for others. The second reason is that it is designed around catching errors at compile time. The error handling and strict typing force the developer to handle errors. I have to spend more time creating the program but considerably less time finding and fixing bugs.
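A toy example of the “forced to handle errors” point: any fallible call returns a Result, and the code won’t compile until both cases are dealt with (the file name here is arbitrary):

```rust
use std::fs;

fn main() {
    // read_to_string returns Result<String, io::Error>; the match makes
    // the compiler verify that both outcomes are handled.
    match fs::read_to_string("config.toml") {
        Ok(text) => println!("{} bytes of config", text.len()),
        Err(e) => eprintln!("couldn't read config: {e}"),
    }
}
```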
My point wasn’t that C++ is good. My point was that C++ can and is used everywhere (desktop applications, web applications, OSs,…) and is older than Rust. So I feel that “this is the first general purpose language that can be used for all projects” is false. Probably “this is the first general purpose language that I (and many others) like to use for all projects” is true, but is a different claim.
TLDR: You said Rust was the first language capable of system, app, and web development; it isn’t.
It depends on what “can be used” means. I really like C# and it “can be used” for that full stack. C#, for example, can write out native machine code, can manually and precisely lay out memory, and can directly link to assembly language routines. You can write an OS in C#. Even as a fan though, I would certainly argue that it is the wrong tool for that job.
In the same vein, while I know C++ “can” be used for web dev, I would argue that anybody that tries to do so for any significant project is insane.
I am not sure I would use Rust for “everything”, but I do think the claim that Rust is one of the first languages where it is reasonable or practical to choose it for any of these uses is valid. Rust code can be very high level and often doesn’t look much different from a scripting language. At the same time, it can go as low-level as you want. This article is about an OS in Rust (and there are a few). Web dev in Rust is totally reasonable and there are a few popular frameworks available. Rust has one of the best WASM stories around.
Before using Rust I was using C++ for most projects, and while it is a really powerful language there were some big problems:
no standard build system: most projects use cmake or meson and vendor dependencies with the project. These build systems were really hard to learn (especially cmake; meson is easier). There are package managers these days, such as conan and vcpkg, but there is no single standard way to build programs like in Rust (see the sketch after this list)
error messages were really hard to understand, especially when the project uses templates
it felt like 3 languages in one; projects written before C++11 differ greatly from C++11 and up
some of the new language features have really weird syntax, for example lambdas
some people say that Rust is hard, but modern C++ is considerably harder to learn. Just look at the list of modern C++ features: github.com/AnthonyCalandra/modern-cpp-features. You have to know the different pointer types (unique_ptr, shared_ptr, etc.), templates, rvalue references and move semantics, exceptions, constexpr, and the list goes on
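To make the build-system contrast concrete, here’s a minimal sketch (crate name and dependency are made up for illustration). The whole build configuration is one declarative file, and plain `cargo build` fetches and compiles everything:

```toml
[package]
name = "example"
version = "0.1.0"
edition = "2021"

[dependencies]
serde = "1"  # fetched from crates.io and built by `cargo build`, no vendoring
```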
That sounds pretty great. I get sick of having to switch gears for every layer. As a hobbyist it is tough to remember five or six languages well enough when only coding something a few times a year.
Since I do embedded, scripting, and web front and back end, this is sure tempting.
I have been hesitant to try to learn yet another language (this would make…ummm… idk I lost count ages ago). But with all the hype I may break down and give it a whirl.
Sounds like Python may be a better fit if it’s supported on the embedded devices you use, as it will cover scripting and backend too. Rust has quite a learning curve and can be rather verbose.
I realize that even $2 systems are running full Linux distros these days, but Python does not map to what I think of as “embedded”. If you have a full Python interpreter, it is already a pretty rich environment.
That said, this is what computing is starting to look like. There is less and less “bare metal”. I work with people that claim to be “firmware” engineers and then, when you look, you find out they have a full Ubuntu distro running and they may as well be running on a laptop.
Sorry but I don’t see the reasoning backing the enthusiasm for python. Sure, it is great for scripting (this includes machine learning), but why for anything else?
I do use Python quite a bit for scripting, backend, and apps, and I’ve used MicroPython a little bit, preferring C/C++ for embedded. It’s pretty great for what I need.
I might mess around with Rust out of curiosity anyway, though the downsides you mention make it less compelling for me, personally. I’m not a big fan of verbose languages (e.g., Java, though I have used it for some apps).
Everyone fully missing the point here. This is the banner image for !linux (that’s not where we are right now for the record), and it has a normal JPEG size of 7.7MB. When it’s served as WebP it’s 3.8MB. OP is correct that this is very stupid and wasteful for a web content image. It’s a triple-monitor 1440p wallpaper that’s used verbatim, and it should instead be compressed down to be bandwidth-friendly. I was able to get it to 1.4MB at JPEG quality 80, and when swapping it out in dev tools and performing A/B testing I can’t tell the difference. This should be brought to the attention of a mod on that community so it can stop sucking people’s data for no reason.
It could be resized too. 5120x1440 is way too big for a website banner. There’s no reason to go more than double the size it will actually be displayed at. That would bring it down to a couple hundred KB.
I’ve been using Rust Desk for a few months now and it’s awesome. Works on Linux, Windows, Mac, iOS, and Android.
It’s open source and super easy to setup and configure. For limited use cases, you can just use their freely provided servers, but you can also host your own server if you want.
I’ve been really impressed with the connection speeds and features; it has everything I need, including end-to-end encryption for all network traffic, so your remote connection is at far less risk of getting snooped.
I was looking into Rust Desk. It looked like the perfect solution for remotely helping my family members. The problem is I have them running Silverblue, and I don’t think Rust Desk supports Wayland. There is an experimental way I was going to try, but I’m not sure how to install it since it’s not in the Fedora repos or Flatpak.
Just a data point: OP is looking for a desktop solution, and Rust Desk may be fine for that; I was pretty impressed with it. However, I’d caution against using it to share out on Android. I traced random crashes and reboots into safe mode down to Rust Desk running on a Pixel. It took me a while to figure out which app was causing it; it seemed to have no correlation to use, time, or anything else I could discern. The problems only went away after I completely uninstalled Rust Desk (which is why it took so long; I couldn’t correlate them to running Rust Desk, so I didn’t suspect it).
The reboots into safe mode turned me off to it on mobile; I had no issues at all running the desktop client on Linux. Android aside, it’s a really nice bit of work, and I fancy it even nicer than VNC, which for me is saying a lot.
That said, on a fast network I still prefer a good old X client over ssh to VNC, if for no other reason than easier per-app windows; I like the L&F and performance of X on a fast pipe.
Good to know, since I’m using a Pixel as well. For remoting into my own devices, which are all running Wayland now, I plan to try out waypipe soon. I read it’s like a replacement for running X over ssh.
For helping family members I was looking for something more user friendly, which is why I was looking into Rust Desk.
Well, don’t let me put you off of it; Rust Desk is pretty nice, and user friendly. Just… keep an eye on it if you run it on your phone. Maybe you won’t have any problems, but if you start noticing reboots, you’ll have an idea of why.
Nyfure is right. Wayland support is experimental, but it was added to Rust Desk earlier this year.
I haven’t tested it on Wayland myself, and remote login isn’t yet supported according to their GitHub documentation, but if you just need a reliable way to provide remote support to your fam, it’s a really solid option.
Wayland is quickly becoming the standard, and Rust Desk seems to be on that train which is good. I wouldn’t be surprised to see full Wayland support or close to it by the end of next year.
Also, I just use the .appimage file and it works fine for me. Just make sure to set it as executable with chmod +x or in the file permissions tab in your GUI.
Oh! I didn’t see that an AppImage was an option. I’m definitely going to give it a try now. For my use case I don’t need remote login, so that works out fine. Thanks.
This. You can also directly connect via IP address when enabled on the target, very handy.
It’s probably also one of the few that have started developing Wayland support on the host side.
Currently only clipboard and video sharing work; mouse and keyboard are close…
Wayland is very restrictive, and things like full keyboard/mouse reading/writing need special handling (and afaik a mouse-write method wasn’t really available with user permissions so far).
I just started them on Linux machines from the get-go. The reason I got good at 3.1/95/98 was the same: to set up games and file sharing, and to get hardware working for better games. Even with Steam, there’s always some work to handle oddities. The kids are rapidly becoming reasonable basic admins the same way I did. Whether they decide to go further and learn more will be up to them.
All too much of OS config, IT work, and troubleshooting is a combination of reading docs, trying things, and plenty of online searches. The big missing piece is motivation. That’s why I learned as a kid. It was all about building systems to play games.
For your kids, a combination of showing the basics, teaching them how to find out how to fix things, giving them agency to modify the OS (assume you’ll need to reinstall sometime), and a purpose could get them going. Not everyone finds the motivation and interest, but kids are often more able to invest and explore than we give them credit for. I found that my son (at age 13) had installed the proprietary NVidia driver for his laptop without my knowing. He just followed tutorials until it worked. Proud dad moment, time for ice cream, and then he went back to playing games with his buddies.
That’s a good start. Also, include him in your own PC activities (some of them, make some up if you don’t have anything that he can be involved in at the time), like “I need to find a cool new background, I was thinking this and this might be cool, could you help me find something online?”. It gives kids a sense of being useful and wanted, plus a pat on the back, high 5 or something like that when the task is done. And it might inspire him to look for his own background, something he identifies with 😉.
I have a much smaller kid, he’s 4, but this is just something off the top of my head… or how I would play it.
It was hard for me at first, grasping how to bring him up and educate him… it didn’t come naturally to me. But my mom was a lot of help; she gave me a lot of pointers and I just started building on that 😉.
And that’s exactly what happened in your case, David. Which is why I’m so happy (also because I fixed the tools from an author I like and already had the books at home :-P):
Really detailed and cool response from the kernel developer. I also found the use of the recent BPF feature to provide a workaround until a proper kernel fix lands really interesting.
So despite the desire for one, Vega won’t be an Android-killer, won’t bring an influx of big name apps to benefit regular Linux distros, nor see Amazon do something crazy cool like create its own Linux tablet UI.
You know how much overhead Electron apps have? Well, here’s React Native! Enjoy all the annoyances of mobile development with the ugliness that is React!
It actually works pretty great; it genuinely does compile to native code pretty well. The JS code just drives things: everything visual or I/O is native, so it’s faster than you’d think.
Idk if I’m the only person who thinks this, but I feel like React has gotten worse over the last couple of major versions. Not only does the code look a lot messier when you use their new syntax, but the end result seems unreliable. Facebook is barely even usable now. Their history management is laughable, and it’ll drop you out of the site randomly when using the back buttons. I used to think React was really neat, but I’m not a big fan anymore. There’s too much re-engineering for problems that were solved decades ago.
If you like it, then use it. There’s no point in jumping every time some new framework comes out. Most of them don’t last. I have used React off and on since it came out, and I personally don’t like how the syntax has changed. My personal website is React and doesn’t have any browser history issues. Idk what’s up with Facebook history management. I guess they just don’t care very much because they’re too busy trying to gobble up data.
React is having the same problems Angular had, and jQuery had. New ECMAScript features make formerly complex things easier, and JS frameworks adapt.
Lots of solutions. But as more edge cases start to show up, they continue to add more and more little things that shape the language into more different variants.
Many of the changes are pretty good. But new devs will go, “Why are there 7 ways to do this React thing?” And that adds to the noise.
Again, that’s not a React problem. It’s just coding in general. PHP also had a “damn you ugly” phase. But unlike PHP, I don’t think React (and most JS frameworks of today) will continue to be as popular as some hot new JS framework in 2027-2030 sweeps the landscape.
And PHP will still be chugging along, lol. It’s weird that React syntax went from being fairly pretty and structured to looking like a plate of spaghetti. Usually languages and frameworks go the other direction.
I love how PHP 7 looks, and PHP 8 only continues to improve.
Totally agree. React is going backwards. Vue is so attractive. Heck, I’m even starting to rebuild react apps in Web components because react is getting weird.
I think you’ll be waiting a pretty long time for high end RISC-V CPUs, unfortunately. I don’t particularly trust Qualcomm, but I’m really hoping to see some good arm laptops for Linux.
I kind of agree, in that ARM is even more locked down than x86, but if I could get an ARM machine with UEFI where all the computational power is available to the Linux kernel, then I wouldn’t mind trying one out for a while.
But yes, I can’t wait for RISC-V systems to become mainstream for consumers.
Generally speaking, that is. I’m not talking about your Raspberry Pis, though even there we find some limitations in getting a system up and booting, and it’s not for lack of transistors.
But say you take a consumer-facing ARM device: almost always the bootloader is locked and part of some read-only ROM that, if you touch it without permission, voids your warranty.
Compare that with an x86 system, where the bootloader is installed on an independent partition and has to be “declared” to the firmware, which means you can have several systems on the same machine.
Note how I’m talking about consumer devices and not servers for data centres or embedded systems.