I don’t see the hype for immutable distros; they seem to be for an incredibly niche sub-section of the Linux desktop, which is already incredibly niche (I’ll probably be answering my own question). Good for the devs for trying new things, I guess, but these seem like hell to use if you’re a tinkerer, and GNOME is whack and won’t convince any new users to use Linux. It seems like an operating system I’d install for my tech-illiterate parents so I won’t have to troubleshoot anything, if it had a desktop like Cinnamon or KDE, because they’d have some familiarity with a Windows-like UI.
Why would “people who don’t want to have to troubleshoot anything” be a niche? I love it exactly because it’s mostly the same as what I used to have, except that I don’t have to worry about updates breaking as much.
I’d like to take this opportunity to remind you that Spotify sucks; they hate artists but love Joe Rogan. If you can’t buy albums via Bandcamp, Tidal offers quality and royalties far superior to Spotify’s. You can transfer your playlists in a few clicks and the price is almost identical (6 accounts for like $15/m).
One Swedish company for another. Joking aside, isn’t the whole problem with royalties in the music scene still that the record labels take 90% of the profits?
The issue I have with Spotify is only one: it’s pretty good at predicting new songs I may like with the radio, and I usually use the radio feature, as I don’t like to repeat my own playlists over and over.
Tidal’s algorithm is excellent for suggestions and the radio feature works well. I wasn’t sure at first, but after a few months of listening to my stuff, Tidal started to get really good at suggestions. My only issue left is how picky the search engine is: any spelling mistake will get you no results, but I can live with that. I work in a studio environment, so getting access to uncompressed master files is huge for me.
I gave it a short try just to see if my fav artists are there. Yes. Didn’t expect this. It also feels much more serious than Spotify. I will see if the algorithm does its job.
It’s weird how at first it only displayed music I would never listen to, or that is nowhere near the artists I selected at the beginning. I guess I need to listen to and favorite them. And wait?..
Edit: It got a bit better over time. But there are a few songs still missing on Tidal 💀
Yeah, MQA did indeed feel a bit weird for a lossy codec. FLAC is a real lossless format that’s been around for a long time; I’m glad they now use it. I like the fact that Tidal can be set to different quality on Wi-Fi vs phone data. Anyway, Tidal is still a business with only profit as a goal, but they give 3 times more to artists. The best way to support artists will always be going to shows and buying albums and merch, but most people want a streaming service, so IMO Tidal is the best right now. One day maybe Funkwhale or another decentralized option will offer a real revenue model for artists.
It should be OK, because nothing will run on your system without at least a permission prompt. So it should ring some bells if the system is asking for your password when you didn’t try to install anything.
But best practice would be to log in as a regular user and use sudo for any admin tasks.
Honestly friend, I don’t give a rat’s ass about up- or downvotes. I’m just here to read, learn and converse. Some things I’ll get right, some I’ll get wrong. That’s life.
I could stop using this tomorrow and it would make zero difference to my life, know what I mean? It’s just some site. My real life is something altogether different.
A process running as root does not need a prompt or any user interaction to do whatever the hell it wants on most (nearing ALL, but I'd be wary of absolutes with Linux) systems. I'm unaware of any means by which a desktop environment could restrict a process running with root permissions by requiring an interactive prompt for anything. If your DE is running as root, all of its children are also running as root (unless you've rigged things up to run explicitly as other users), which means just about anything you are doing could be letting malicious actors run rampant on your system, and nothing would seem amiss until it made itself evident.
Now, it does seem unlikely that anyone has written any malicious code that would run in a browser expecting to be root on a Linux system, so that's likely the saving grace here, but that's only security through obscurity and that's not much to hang your hopes on for any system you care about.
The machines I regularly remote into in order to manage are usually logged into KDE Plasma as root. Usually they just have several command-line windows and a file manager open (I personally just find it more convenient to use the command line from a remote desktop instead of SSH-ing into the system directly).
I’m not going to judge you (too much), it’s your system, but that’s an unnecessarily risky setup. You should never need to log on to a root desktop like that, even for convenience reasons.
I hope this is done over VPN and that you have 2FA configured on the VPN endpoint? Please don’t tell me it’s just a port forward directly to a VNC server running on the machines, or something similar, because then you have bigger problems than just a random ‘oops’.
I do also remember using the browser on my main server to figure out how to set up the PiHole.
To be honest, you’re most probably OK - malicious ad campaigns are normally not running 24/7 globally. The chances of you randomly stumbling into a malicious drive-by exploit are quite small (normally they redirect you to install fake addons/updates etc.), but of course it’s hard to tell because you don’t remember what sites you visited. Since most of this has gone through PiHole filters, I’d say there’s an even smaller chance you got insta-pwned.
But have a look at the browser history on the affected root accounts; the sites, along with timestamps, should be there. You can also examine your system logs and correlate events with your browser history; look for weird login events or anything that doesn’t look like “normal usage”. If you’re really paranoid, you can set up some network monitoring (like Security Onion) on your router’s SPAN port and try to see if there are any anomalous connections when you’re not using the system. You could also consider setting up ClamAV and doing a scan.
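For the log-correlation step, here’s a minimal Python sketch of what that could look like. The log path and message formats are assumptions (a Debian-style /var/log/auth.log with OpenSSH messages); the sample lines are hypothetical stand-ins for a real log file:

```python
import re

# Sketch: extract SSH login events from syslog-style auth-log text, so they
# can be lined up against browser-history timestamps. The line format is an
# assumption (Debian-style /var/log/auth.log); journald systems differ.
LOGIN_RE = re.compile(
    r"(\w{3}\s+\d+\s[\d:]+).*sshd.*(Accepted|Failed) (\S+) for (\S+) from (\S+)"
)

def login_events(log_text):
    """Return (timestamp, result, method, user, source_ip) tuples."""
    return [m.groups() for m in map(LOGIN_RE.search, log_text.splitlines()) if m]

# Hypothetical sample lines standing in for the real log file:
sample = (
    "Jan 10 03:14:07 host sshd[811]: Accepted password for root from 10.0.0.5 port 2222 ssh2\n"
    "Jan 10 03:14:09 host CRON[812]: pam_unix(cron:session): session opened for user root\n"
    "Jan 10 03:15:01 host sshd[813]: Failed password for admin from 203.0.113.7 port 999 ssh2\n"
)
for ts, result, method, user, ip in login_events(sample):
    print(ts, result, user, ip)
```

On a journald-based system you’d feed it something like the output of `journalctl -u ssh` instead of reading the file directly, and tune the regex to whatever your distro actually logs.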
You’re probably OK and that’s just paranoia.
But… having mentioned paranoia… now you’ll always have that nagging lack of trust in your system that won’t go away. I can’t speak to how you deal with that, because it’s all about your own risk appetite and threat model.
Since these are home systems the potential monetary damage from downtime and re-install isn’t huge, so personally I’d just take the hit and wipe/reinstall. I’d learn from my mistakes and build it all up again with better routines and hygiene. But that’s what I’d do. You might choose to do something else and that might be OK too.
I hope this is done over VPN and that you have 2FA configured on the VPN endpoint? Please don’t tell me it’s just a port forward directly to a VNC server running on the machines, or something similar, because then you have bigger problems than just a random ‘oops’.
I have never accessed any of my servers from the internet and haven’t even adjusted my router firewall settings to allow this. I kept wanting to but never got around to it.
Since these are home systems the potential monetary damage from downtime and re-install isn’t huge, so personally I’d just take the hit and wipe/reinstall. I’d learn from my mistakes and build it all up again with better routines and hygiene. But that’s what I’d do.
Yeah this and other comments have convinced me to reinstall and start from scratch. Will be super annoying to set everything back up but I am indeed paranoid.
I have never accessed any of my servers from the internet and haven’t even adjusted my router firewall settings to allow this. I kept wanting to but never got around to it.
Does that mean you realistically don’t even know your network (router) setup? Because it’s entirely possible your machine is completely open to the internet - say, thanks to IPv6 autoconfiguration - and you wouldn’t even know about it.
It’s pretty unlikely, but it could potentially happen with some ISPs. Please always set up a firewall, especially for a server-type machine. It’s really simple to block incoming outside traffic.
Huh. I never even thought of that. I use my ISP’s router in bridge mode and have my own router running on mostly default settings, IIRC the only thing I explicitly changed was to have it forward DNS requests to my Pihole. I should inspect the settings more closely or as you said just configure the server to block the relevant ports from outside the LAN. Thank you.
Oh, if you have your own router, then put a firewall (primarily) there, and simply block every incoming forwarded connection except the ones you actually want (probably forwarded to your server). Similarly, even for the router’s input rules, you likely need only ICMP and not much else.
Backwards compatibility forever sounds great, but the technical debt eventually becomes a giant fucking limitation on improvement. They chose not to stay backwards compatible for a reason.
I agree that at some point you have to be able to ditch technical debt, but you still should be able to do more or less the same things with the new system as with the old system and that’s currently still not the case.
The problem is that the architecture of Wayland and the organization around it themselves impose limitations that have a chilling effect on development for it. One issue is that Wayland has been deliberately left very slim, leaving a lot of complexity and implementation details up to the compositor. A compositor can be seen as something that approaches the size and complexity of an entire X display server. This means that if someone wants to create a window manager, they have to implement a whole compositor first. So instead of writing window manager code, which is what the developer is probably the most interested in, they are spending most of their time implementing the compositor.
Naturally this also leads to a lot of duplication of effort. For example: GNOME, KDE and the window managers that have implemented a wayland version each have their own compositor that by and large does the same thing.
Another issue is the standardization of the protocols and interfaces that the different compositors use, or lack thereof. There is a steering group containing the major stakeholders that votes on proposed extensions, but good proposals often get shot down because the major stakeholders can’t agree on it and sometimes ego or principles gets in the way. And then you have cases where one compositor just goes their own way and implements something regardless of what the others do.
For example, as a result of this there’s still no standard screen capture API, so if you want to do things like screenshots, remote desktop, desktop streaming, … whether or not you can do that, and with which tool, depends on the compositor you use. Another example: they’re currently still bickering over whether or not an application should be allowed to place windows with absolute coordinates, and how that should be implemented. We’re currently 15 years after initial release of Wayland…
In my opinion, this is all completely backwards. Both in an organizational and technical sense way too much has been left up to the individual compositors that should have been a core part of Wayland itself.
Unfortunately, it’s all too late to fix this. We’re 15 years into Wayland development, and the flawed architecture has been set in stone. Wayland isn’t going to go away soon either, too many parties are invested in it. So for me the reasonable thing to do is to wait and stick with X11 until the dust settles and something emerges on the other side that is better than what I currently have.
This means that if someone wants to create a window manager, they have to implement a whole compositor first. So instead of writing window manager code, which is what the developer is probably the most interested in, they are spending most of their time implementing the compositor.
wlroots has existed for almost 7 years and this misconception is still repeated.
I know wlroots exists. It’s a library that helps you implement a compositor (i.e. does some of the heavy lifting), but at the end of the day the window manager developer is still implementing a compositor and is responsible for maintaining his compositor.
The mere fact that wlroots, and other efforts like louvre, are necessary at all actually proves my point that it was an idiotic design to push everything off into “compositors”.
Yes. And it’s a bad analogy. Nobody is expecting you to be able to take a barge on railways. But existing linux applications are being expected to run on Wayland. As I said - railways didn’t replace canals - they’re different types of things.
Within the last 10 years and the next 5 years, software using old hacks instead of GUI toolkits is expected to switch, yes.
People can choose to continue to use X11 until KDE Plasma 6 hits Debian stable.
I don’t see a problem. Nobody forces Wayland onto anyone yet, except for bleeding edge distributions like Fedora. And unless you’ve been severely misled, you should know what you signed up for when you installed Fedora.
I didn’t say there was a problem. I’m saying it’s pretty disingenuous to act like Wayland isn’t intended as a replacement for X11. All of which you seem to agree with. As you say “nobody forces Wayland onto anyone yet” (emphasis mine).
Also - I just love how your comment is written like a politician would have written it. “Sure you can use the dirty old X11 if you really want to, or you can use the nice new God-fearing Wayland”.
If you bring the two parts of your comment together and dial back the assumptions of bad faith, you’ll get a consistent picture:
Wayland is a blank-slate replacement for how window management is done on Linux. At some point it’ll become the standard for software that’s new or maintained. Unmaintained software that doesn’t talk to the internet, and is therefore safe to run even with security holes, will continue to be supported via XWayland. X11’s giant scope and API surface are part of the reason it’s deprecated. Maintainers are expected to target the new way of doing things going forward, because there are people able and willing to maintain that support (many of them former X11 maintainers who are looking forward to no longer having to deal with that legacy behemoth).
That’s the state of things I wanted to express. Not my opinion, no agenda, just how I understand the situation.
Yeah, sorry. But look at the events around building European railways, more so in the UK, since we had a huge canal system built in a few decades, but also in most of Europe’s denser areas.
Railways were very much a replacement for the too-slow canal system.
Canals built a huge industry, allowing manufacturers to ship goods to cities while shipping resources from the mines and farms etc.
But industries like meat, fish, milk and, strangely enough, market gardening (fresh flowers) were very limited to local areas before the railways. They took off hugely when the railways intentionally set up in direct competition to the canals.
Canals survived for a while moving the slower stuff, but started needing redesigns to support bigger boats, more of them, and faster, before finally closing down.
The UK and most of Europe rebuilt/renovated them as a leisure activity from the 1950s on. But most of the late-1800s to early-1900s railway companies actually worked to replace canal companies and put them out of business.
Passenger rail really was not a big thing until about the 1920s.
We’re getting well away from the topic now. It depends on what you mean by “replace”. Railways and canals exist side-by-side as different solutions to similar problems - sure. And some railways have replaced some canals. But the panama canal will not be replaced by a railroad for example. It couldn’t do the same job. The pros/cons of each option depends on many factors.
The analogy is poor for comparing software. Linux distros will likely replace X11 with Wayland over time. To do the same thing that X11 was doing. It will be replaced “in place”. The very same thing you were using with X11 will now need to work on Wayland. This would be like running your barges on the railroad? Maybe? Depending on how you squint?
I wouldn’t expect my barge to work on the railroad. I do expect that Firefox will run on Wayland after having used it on X11 for 20 years.
But as a user of a barge, if you wanted to use railways because they are faster, it would be the barge maker or a new train maker you would look to, not the railway.
Just like canals, X11 still exists and is still being developed. It has its limitations, but some applications are choosing not to port because, like barge makers, they simply do not see the need or merit.
If the makers of railways had insisted that everything currently in use had to work on them without adaptation, many of the advantages would no longer be there.
Just as, if Wayland did not expect Firefox et al. to adapt to its methods, the security and other advantages it seeks would not be realised.
Wayland is a replacement, not an upgrade.
(PS: yeah, living in the UK, replace “canal” with “inland waterways navigation”; that tends to be how we think of it, as they are such a huge part of our industrial history. I forget the US never really went through that part of European industrial development. Your example is fairly unique and modern by comparison, and it does not link to any network, whereas the inland waterways across the UK and parts of Europe were a linked infrastructure like our railways. When the railways in Europe were built, they were very much seen as a replacement for the existing canal system, by both the corporations set up to build the infrastructure and the media of the time. It is literally a part of our industrial history taught in schools here, as so much of our culture and industrial revolution is built around those events.)
I suspect that it’s not Linux that is on the rise, but the overall PC market that is shrinking. It’s been a trend for quite a while for non-Linux people to dump the PC entirely in favor of just using their phone.
I wonder at the various nuances of that. My wife and I have 4 phones and 3 tablets between us, across home and work. It would seem any multi-person household is likely to have more mobile devices than PCs due to the variety of the former. So that chart seems to say there are more mobile devices per person, but perhaps no reduction in PCs.
In fact, PC sales rocketed up in Q3’20 for very obvious reasons, and have largely not come back down to pre-COVID levels.
It’s been a trend for quite a while for non-Linux people to dump the PC entirely in favor of just using their phone.
Can’t do that if you play games.
Also, that’s half of the reason Windows hasn’t lost the war on home desktop PCs yet. The other half is office applications.
Actually, these are thirds.
Another reason making me say so is that no major user-friendly distribution wants to be just that; they all have a particular madness with no good reason for it.
So I don’t know what to recommend, there should be something off the top of my head, but that’d be “just install Debian, it’s fine”.
So, any single reason of these going away would accelerate Linux adoption notably. Any two would make it a trend visible to housewives. And all three would resemble the flight of ICQ users to Skype.
What’s Ubuntu’s “particular madness”? They used to be a little FOSS-only, but they’ve chilled out on that.
I agree on the other points, though, with one caveat on both.
No matter how many games run on linux, it won’t be enough because there aren’t ever going to be linux exclusives. Without linux exclusives, there will always be more games that run in Windows than Linux, even if the majority of them run in linux AND run better than in Windows.
Office sounds like a big deal, but Apple managed to prove you don’t need it. The real problem Linux has with office is that it has no well-marketed office suite. There’s nothing wrong with Libre- or Open- except the complete lack of advertising and of the passive training in their nuances that we get from MS and Apple office products.
It’s not that linux can’t win on games or office. It’s that the game is rigged against it on both. It took me a few years back in the early 00’s, but I quickly realized that there will never be a “year of the linux desktop” regardless of how good Linux gets at games, office, user-friendliness, or anything.
And that’s ok because MY life is easier when I use linux.
I remember that it does too much, but without specifics. It’s been 4+ years since I touched Ubuntu.
They used to be a little FOSS-only
I vaguely remember that “Amazon lens” for Unity, I don’t think they ever were that much FOSS-only.
No matter how many games run on linux, it won’t be enough because there aren’t ever going to be linux exclusives.
It’s fine. That’d still be goal fulfilled.
Office sounds like a big deal, but Apple managed to prove you don’t need it.
How so?
There’s nothing wrong with Libre- or Open- except the complete lack of advertising and passive training to its nuances that we get from MS and Apple office products.
I recently had a problem with LO while editing a document with lots of math formulae: from time to time, while adding a formula, about half of the others (in the whole document) would just become empty.
Not sure something like that would happen under Apple suite’s analog of Word, whatever it’s called.
It’s not that linux can’t win on games or office. It’s that the game is rigged against it on both.
With that I agree, somewhere in 2012 I somehow realized that it’s already much better than the alternatives, and yes, for a housewife’s desktop just as well, if one’s honest and thinks of their own needs.
And if one’s comparing it to advertising of the competing commercial products, then it’s hopeless.
I’ve recently been arguing with some dude about some PUBG mechanics. It took me quite some time to realize that he was playing PUBG Mobile and had never played the PC version, or even knew it existed, for that matter. For him, PUBG simply meant PUBG Mobile. Those people don’t even consider using a PC for gaming. They might consider a console, but a PC to them is more or less just a typewriter for school/office tasks.
I’ve been thinking for some time what to answer and concluded that the normie world is a world of pain.
We - as in FOSS OS users and FOSS paradigm users - desperately need open hardware, so that the rest of the industry could eat all the rubber dicks they want without affecting us significantly.
And I mean not only hardware design, but fabs.
It may seem an impossible future, what with the semiconductor shortage etc., and Taiwan being that important.
And with starting a fab being so expensive.
Still, the only way a conclusive FOSS victory resulting in an even balance happens is if there is a public fab producing general-purpose hardware with a public design.
Because right now lots of resources are being wasted on catching up in inherently disadvantageous areas, like supporting proprietary hardware which is always harder for FOSS developers than for MS or Apple.
Without full-chain FOSS hardware production it’ll always be bare survival.
And yet here I am looking to expand my devices with a replacement server (Linux) and a NUC (Linux).
Finally ditched Windows on the desktop forever, about 7 months ago.
I agree with you on mobile. In my country many ppl ditched laptops and desktops for their phones.
Although I have a hard time understanding how they can actually get work done on a phone, if they do any work from home that requires a computer. Well, those ppl probably have an old laptop lying around.
I remember looking at PC sales data, and they have been shrinking over the last decade, with the curve flattening until the pandemic, when sales grew substantially, almost to 2000s levels. Now it’s shrinking back slowly. I’m not sure people are abandoning desktops in favor of phones as much as we think. Desktops are durable and we tend to have only one, while mobile devices are gaining different forms and people are getting more of them. Perhaps the desktop market has not much more room to grow while mobile devices are still booming.
But that’s just one possible explanation, I might be wrong. I was going to post the data, but statista requires login to see it.
I don’t know if we know it’s shrinking back for sure. With the exception of Q1’23, there seems to be a balance around 19M sales per quarter. There’s a way to read it as shrinking, but there’s also a way to read it as stabilizing. There’s just not enough samples to be certain.
What we have to remember is that we’re finally reaching a turning point in GPU pricing. Laptops that were in the $2000+ range a year or two ago are closer to the $1000 commodity price. There had been a “value stall” that just broke, where a new computer used to not be a significant upgrade on an old one, and so people might hold onto their current computers a year or two longer.
I mean, I’m sure I pulled a few discounts out of my ass, but I just landed an i9 laptop with a 4090 for just over $2k as a replacement for a computer that died. Two years ago almost to the day I bought a middle-of-the-road gaming machine with a 3070 in it for about the same price.
I had a security update downloaded (but not yet installed) yesterday. Logged off without installing. Turned on my device today and couldn’t log in. Checked my password 3 times before seeing “authentication service not working”, iirc.
After reboot it installed and logging in worked.
Is this related or not, and is it expected? Not being able to log in without a mandatory patch first, so to speak?
I have nothing against the people who are working on AI and appreciate the work they do. However, every time I see an article about a company using AI like this, I just get the vibe that it’s a bunch of middle-aged men trying desperately to make things like the “future” they saw when they were kids. I’ve seen amazing implementations of AI in a lot of different ways, but I’m so sick of dumb ideas like this, where some guy that used to watch Star Trek as a kid wants to feel like they live in the future while piggybacking on someone else’s work. It’s like the painted tunnel in cartoons: it looks like a real tunnel, but in reality it’s just a very convincing lie. And that’s all it is. Complexity does not mean sophistication when it comes to AI, and it never has; to treat it as such is just a forceful way to make your ideas come true without putting in the real effort.
Sorry, I had to get that out. Also I have nothing against Star Trek and I used to watch it as a kid because my parents watched it all the time.
some guy that used to watch Star Trek as a kid wants to feel like they live in the future while piggybacking on someone else’s work.
I don't think they care about their own nostalgia. I think they want to use other people's dreams to make a lot of money. I'm also sure some of them genuinely just want to push the technological envelope just because they can, ethics be damned. But ultimately, it's just money.
I would love nothing more than the utopian future Trek promised but greed is killing it.
Complexity does not mean sophistication when it comes to AI, and it never has; to treat it as such is just a forceful way to make your ideas come true without putting in the real effort.
It’s a bit off-topic, but what I really want is a language model that assigns semantic values to the tokens, and handles those values instead of directly working with the tokens themselves. That would be probably far less complex than current state-of-art LLMs, but way more sophisticated, and require far less data for “training”.
I’m not sure I understand. Do you mean hearing codewords and triggering actions, as opposed to trying to understand the user’s intent through language? Or are there a few more layers to this whole thing than my moderate nerd cred will allow me to understand?
Not quite. I’m focusing on chatbots like Bard, ChatGPT and the likes, and their technology (LLM, or large language model).
At the core those LLMs work like this: they pick words, split them into “tokens”, and then perform a few operations on those tokens, across multiple layers. But at the end of the day they still work with the words themselves, not with the meaning being encoded by those words.
What I want is an LLM that assigns multiple meanings for those words, and performs the operations above on the meaning itself. In other words the LLM would actually understand you, not just chain words.
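To make that distinction concrete, here’s a toy Python sketch (the vocabulary and IDs are invented, not from any real model): the model’s first step turns text into opaque integer IDs, and nothing in the ID itself encodes which sense of a word is meant.

```python
# Toy illustration (made-up vocabulary): the model sees integer token IDs,
# not meanings. "bank" gets ID 7 whether it means a riverbank or a money
# bank; any disambiguation has to be learned implicitly from co-occurrence.
vocab = {"the": 1, "river": 2, "is": 3, "near": 4, "bank": 7, "money": 9}

def tokenize(text):
    # Naive whitespace tokenizer; real models use subword schemes like BPE.
    return [vocab[w] for w in text.lower().split()]

print(tokenize("the bank is near the river"))  # [1, 7, 3, 4, 1, 2]
print(tokenize("the money is near the bank"))  # same ID 7, different sense
```

Real tokenizers are subword-based rather than word-based, but the point stands: the IDs are arbitrary labels, and whatever “meaning” the model has lives only in the learned weights downstream.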
Semantic embeddings are a thing. LLMs “work with tokens” but they associate them with semantic models internally. You can externalize it via semantic embeddings so that the same semantic models can be shared between LLMs.
The source that I’ve linked mentions semantic embedding; so does further literature on the internet. However, the operations are still being performed with the vectors resulting from the tokens themselves, with said embedding playing a secondary role.
This is evident for example through excerpts like
The token embeddings map a token ID to a fixed-size vector with some semantic meaning of the tokens. These brings some interesting properties: similar tokens will have a similar embedding (in other words, calculating the cosine similarity between two embeddings will give us a good idea of how similar the tokens are).
Emphasis mine. A similar conclusion (that the LLM is still handling the tokens, not their meaning) can be reached by analysing the hallucinations that your typical LLM bot outputs, and asking why that hallucination is there.
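For what it’s worth, the cosine-similarity property the quoted passage describes is easy to sketch in a few lines; the three vectors below are made up purely for illustration, not taken from any real embedding model:

```python
import math

def cosine(a, b):
    # cos(theta) = (a · b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 4-d embeddings: "cat" and "kitten" point in similar directions,
# "carburetor" does not.
cat = [0.9, 0.1, 0.3, 0.0]
kitten = [0.85, 0.15, 0.25, 0.05]
carburetor = [0.0, 0.9, 0.0, 0.8]

print(cosine(cat, kitten))      # close to 1.0
print(cosine(cat, carburetor))  # much smaller
```

So “similar embedding” really just means “small angle between vectors”; the question in this thread is whether that geometric similarity amounts to handling meaning, or is just a learned by-product of token co-occurrence.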
What I’m proposing is deeper than that. It’s to use the input tokens (i.e. morphemes) only to retrieve the sememes (units of meaning; further info here) that they’re conveying, then discard the tokens themselves and perform the operations solely on the sememes. Then for the output you translate the sememes obtained by the transformer back into morphemes (tokens).
I believe that this would have two big benefits:
The amount of data necessary to “train” the LLM will decrease. Perhaps by orders of magnitude.
A major type of hallucination will go away: self-contradiction (for example: states that A exists, then that A doesn’t exist).
And it might be an additional layer, but the whole approach is considerably simpler than what’s being done currently - pretending that the tokens themselves have some intrinsic value, then playing whack-a-mole with situations where the token and the contextually assigned value (by the human using the LLM) differ.
[This could even go deeper, handling a pragmatic layer beyond the tokens/morphemes and the units of meaning/sememes. It would be closer to what @njordomir understood from my other comment, as it would then deal with the intent of the utterance.]
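As a purely hypothetical sketch of that morpheme → sememe → morpheme pipeline (the sememe inventory and every mapping below are invented for illustration; a real system would need an enormous lexicon, and its transformer would operate on sememe representations rather than a toy set operation):

```python
# Hypothetical morpheme -> sememe pipeline. All inventories are invented.
MORPHEME_TO_SEMEMES = {
    "kitten": {"FELINE", "YOUNG"},
    "cat":    {"FELINE"},
    "puppy":  {"CANINE", "YOUNG"},
    "dog":    {"CANINE"},
}
SEMEMES_TO_MORPHEME = {frozenset(v): k for k, v in MORPHEME_TO_SEMEMES.items()}

def encode(words):
    # Discard the surface tokens; keep only the units of meaning.
    return [MORPHEME_TO_SEMEMES[w] for w in words]

def transform(sememe_sets):
    # Stand-in for the model's actual operations: here, just drop YOUNG,
    # i.e. map "kitten" -> "cat" and "puppy" -> "dog" at the meaning level.
    return [s - {"YOUNG"} for s in sememe_sets]

def decode(sememe_sets):
    # Translate the resulting sememes back into morphemes/tokens.
    return [SEMEMES_TO_MORPHEME[frozenset(s)] for s in sememe_sets]

print(decode(transform(encode(["kitten", "puppy"]))))  # ['cat', 'dog']
```

The point of the sketch is only the shape of the pipeline: the middle stage never sees the surface tokens, so a contradiction like “A exists / A doesn’t exist” would have to survive at the meaning level to be emitted at all.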
This is pretty sick. Not just Flatpaks: you can easily install any application, using the apt or dnf package managers, or deb or rpm files, in a container with a simple syntax. Wow. Wrap a GUI around it and this may be a winning formula for an easy and stable Linux desktop.
That’s not mentioned in this specific blog post, but that’s always been one of Vanilla OS’s defining features: its “apx” package manager, which installs those various types of packages.
It’s actually even using Distrobox, but the point is to make it simpler to install packages into those containers, with the user not worrying as much about managing the individual containers and not having to memorize the specific commands for each distro’s package manager.
Basically, like the rest of Vanilla OS, the point isn’t that you can’t do this stuff elsewhere, it’s that it’s trying to make it easier to do it