lemmyvore

@lemmyvore@feddit.nl


lemmyvore,

You forgot the part where they don’t need Wayland and its reduced features, because everything works fine in Xorg.

Stop pushing people towards Wayland, let it happen naturally when it will be ready and better, and they’ll come. Trying to force adoption will just make people resent it.

lemmyvore,

I’m glad Wayland solves problems for you, but it creates them for others.

Imagine being forced to go the other way. Could you be coerced into going back to Xorg? What would you do if a distro attempted to do that to you?

lemmyvore,

“Linux” is not an entity with well-defined goals; it’s a community that mostly does whatever it wants. That has the fortunate side effect of producing labors of love in software that prove really useful in the real world. But it also ignores things like user experience, which affects things like the desktop the most.

On Linux the user is a second-class citizen, because worth in the community is determined by how much a person contributes (in code, testing, artwork, documentation etc.).

The Linux mindset is best expressed by a quote from Simon Travaglia (which I paraphrase because I don’t remember it verbatim): “We’re tasked with the well-being of the servers, not the users. They’re lucky we even let them log in since users technically upset the smooth operation of the servers.”

lemmyvore,

You forgot the part where this is what is happening.

All I see is a rift in the community over one side pushing software that’s beta-quality at best, and acting very arrogant and dismissive towards real adoption impediments.

Which is par for the course for Linux, naturally, but “it’s happening” is wishful thinking at this stage. At this rate and with this attitude it will take at least another 5 years.

Wayland’s worst enemy is its own fans.

lemmyvore,

They aren’t facts, again, they’re wishful thinking. I’m a long-time contributor and developer and I can assure you that with projects as complex as X and Wayland, things would move slowly even if everybody were of the same mind, let alone in the “herding cats” style of FOSS.

Wayland has been in development for 15 years and it’s still not ready – please, it’s not, and stomping our feet and claiming otherwise won’t make it so. Another 5 years will probably see it reach a more stable state.

What do I mean by ready? Well the desktop stack [on Linux and *NIX] is extremely complex. Whenever you’re dealing with something extremely complex in software, over the years, you amass a large amount of solutions that solve real world problems. That’s what I call “ready”. Most of those solutions will be dealing with quirks and use cases which do not affect everybody equally, but they’re each crucial in their own way to a varying slice of the userbase.

Whenever you rewrite something from scratch you throw away the bulk of those quirks. It’s a common fallacy for developers to look at the shiny new thing and think that it’s better. In reality it’s worthless without the quirks, and accumulating those quirks all over again takes a long time. X has been accumulating them for 40 years. Wayland is barely scratching the surface.

The fact that the protocol splits the burden across the various DE and WM teams will NOT help. We will need libraries that solve the same problems once instead of over and over, and most DEs/WMs will come to depend on those libraries. The end result will be eerily similar to X. Ironically, by the time Wayland is done it will have spent a comparable amount of time in development to X, and will have accumulated the same amount of baggage that people dislike about X.

What percentage of the Linux Desktop universe are you expecting will still be using X at the end of 2025?

More or less the same that’s using X right now. GNOME, KDE and the various distros will get a bloody nose trying to force Wayland through but if that’s the only way they learn, so be it.

The Steam Deck is actually one of the few use cases where Wayland makes sense: it’s a turnkey, highly controlled stack (both software and hardware) where users don’t have any reason to care about what’s under the hood. I expect them to switch ASAP.

Another place where Wayland can be used straightaway is the desktop graphical login screen (which is the original reason it was created anyway). It’s a single application with reduced requirements and simple interactions.

lemmyvore,

but why should AMD, Intel and NVIDIA care about Linux desktop

They care because it’s free testing for their more lucrative Linux-based products. We’re their lab rats.

deleted_by_author

  • lemmyvore,

    I think you bit off quite a mouthful if you’re just starting out on the NAS game. I would suggest breaking things down into smaller pieces:

    1. Prepare a standalone container only with the VPN.
    2. Try to set up a torrent client container on its own.
    3. Learn how to set up docker networks for the 1st and 2nd containers so that the torrent client will always use the VPN (a minimal sketch follows this list).
    4. Try to set up a Jellyfin container on its own.
    5. Move on to the *arr stack.
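
    For step 3, one common approach is to run the torrent client inside the VPN container’s network namespace (a rough sketch; the image names and the port are placeholders for whatever containers you end up picking):

        # The VPN container owns the network; publish the torrent web UI port here,
        # because the torrent container will not have a network stack of its own.
        docker run -d --name vpn \
          --cap-add=NET_ADMIN \
          -p 8080:8080 \
          your-vpn-image

        # The torrent client reuses the VPN container's network namespace,
        # so if the VPN is down it has no connectivity at all.
        docker run -d --name torrent \
          --network container:vpn \
          your-torrent-image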

    NixOS also has a bit of a learning curve and it would’ve probably been easier if you’d started with something else. Up to you if you want to stick with it. IMO it’s mostly overkill for an OS that will simply serve as the base for a docker setup.

    lemmyvore,

    What I do is use Claws Mail with POP3; it has an option to only delete a message from the server after a configurable period of time. So if you set it to 10 days, for example, the message will exist both locally on your PC and on the server for 10 days, after which it will only exist on the PC.

    It works pretty well in general. The only account giving me some trouble is Yahoo, which I suspect has some quirks that occasionally cause messages to be downloaded again and duplicated. Thankfully it’s easily fixed because Claws also has a feature to delete duplicates.

    This approach is different from IMAP, which would maintain a local offline cache of the live inbox, but you wouldn’t be able to keep messages only locally — any change on one side would be reflected on both.

    However, Claws allows you to do both. You can have both a POP3 and an IMAP account connected to the same live mailbox: use the POP3 one for offline archival, and the IMAP one for when you want to put something back on the server, or if you need to look at other folders on the server besides the inbox (POP3 can only see the inbox, not trash, sent etc.).

    Normally I only do folders locally on the PC, on the mailbox connected with POP3, so none of the organization is reflected on the live mailbox, which is inbox only. Every once in a while I connect via IMAP to recover emails from the sent folder, which I’ve sent with webmail or from mobile (using IMAP on mobile too).

    If this doesn’t fit your workflow, there are lots of IMAP syncing tools, like you’ve noticed. IMAPsync is pretty good.

    The last step for my workflow would be to self-host an IMAP server that indexes the POP3 mailbox and exposes it read-only (without SMTP) through a webmail app, for archival and search only. I may have to look at Piler. The quirk here is that the Claws mailbox format is slightly different from what IMAP servers expect; it’s very similar to mbox but not identical, so I’ll have to see if any IMAP server will accept it.

    Thunderbird is a no-go unfortunately; its mailbox format keeps all messages in one big file instead of individual files, which complicates things a lot.

    lemmyvore,

    I think (and hope) that the logical conclusion of the DNT lawsuit v LinkedIn will be that DNT will be deemed necessary and sufficient, and that this setting will replace all the cookie banners. But even if that comes to pass it will be years before all the banners are gone.

    lemmyvore,

    if I copy my /home (someone said /etc too) over to my laptop, and back it up as well, I’m golden?

    /home yes, but ideally only files and dirs starting with a dot (so-called “dotfiles”) under your home dir. tar cvfa homedots.tar.gz /home/username/.??* should take care of it.

    Please note it will include some large stuff that’s probably not needed, like .cache, or some individual caches for other apps that don’t use .cache, like the browsers.
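
    If you want to leave those out, GNU tar can exclude them by name (a sketch; adjust the patterns to whatever caches you find on your system):

        # exclude anything named .cache or .thumbnails anywhere under the home dir
        tar caf homedots.tar.gz \
          --exclude='.cache' \
          --exclude='.thumbnails' \
          /home/username/.??*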

    Don’t copy /etc, it’s usually machine-specific.

    would different hostnames and usernames make a problem?

    Hostname no (if you don’t bring /etc). Username technically yes; you may want to rename the home dir. The user id and group id are important too, but usually if it’s the first user on the same distro it will receive the same ids (typically 1000 nowadays). If not, you can change that manually and recursively chown 1000:1000 -R /home/username.
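
    If the ids don’t match, something along these lines should do it (a sketch, assuming the user and its primary group are both called “username” and you want ids of 1000):

        id username                              # check the current uid/gid
        sudo usermod -u 1000 username            # change the uid
        sudo groupmod -g 1000 username           # change the primary group's gid
        sudo chown -R 1000:1000 /home/username   # fix ownership of the copied files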

    lemmyvore,

    You’re welcome.

    To clarify, /etc can have things that are relevant for the machine so you may want to back it up, but it’s not usually transferrable directly to another machine because it probably doesn’t play the exact same role. It has things like service configs, network configs etc.

    Even if you’re trying to migrate a machine to new hardware and the machine will play the same role it’s best to pick and choose files from /etc/ on a case by case basis. What I do is grab a tarball of /etc and set it aside, then if I need to redo something the same way it was on the old machine I can dig through the tarball and only use the relevant files.
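
    The “set it aside” part is a one-liner (a sketch; name the tarball however you like):

        sudo tar caf etc-$(hostname)-$(date +%F).tar.gz /etc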

    Like I said it’s extremely specific. For example if I want to reconfigure the SSH daemon that’s usually a couple of lines which I know by heart (turn root login and password logins off) which I can do by hand; if I want to reconfigure CUPS printing it’s best to use the CUPS admin interface to autodetect the printer, you don’t usually want to mess with its config files; for some things like /etc/fstab or NFS or RAID I may want to copy some stuff but edit the disk UUIDs; for some things like Samba I could in theory copy the config straight over. It varies.

    The list of installed packages may also be relevant when you migrate to a new machine. Different distros have different commands for obtaining a list of installed packages, and different ways of using that on the new machine to restore the same package selection. This is useful and typically can get you started much faster on the new machine.
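
    For example (a sketch; the exact commands depend on the distro):

        # Debian/Ubuntu: save the manually installed packages, reinstall them on the new machine
        apt-mark showmanual > packages.txt
        xargs -a packages.txt sudo apt-get install -y

        # Arch: same idea with pacman
        pacman -Qqe > packages.txt
        sudo pacman -S --needed - < packages.txt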

    lemmyvore,

    What on Earth for? I don’t think I’ve used it more than a couple of times over the last 5 years, and that was for arcane stuff like enabling rc.local (which is something most users probably shouldn’t even need to know about…)

    lemmyvore,

    Why doesn’t it start automatically anyway?

    lemmyvore, (edited )

    Out of curiosity, what’s the point of installing Bluetooth but keeping it disabled?

    I imagine the opposite would be the default most people wanted (enable it by default and let power users with a bizarre use case disable it manually).

    lemmyvore,

    So, like, you have to manually enable every service you install?

    lemmyvore,

    Because if I install bluetooth it’s because I have some bluetooth devices I want to use?..

    lemmyvore, (edited )

    I’ve been using Linux for over 20 years and I don’t get it either. I don’t know why a vocal minority get so fixated on it. It’s not like it’s the only manufacturer with proprietary drivers. As long as the drivers work and are easy to install I don’t see a problem.

    I’ve used ATI/AMD cards equally over the years and I’ve always ended up having more problems overall with them than with Nvidia cards & drivers. If I were inclined to generalize I could say that open source drivers are apparently lower quality, right? 🙂

    But that would be just as silly as the other way around. I don’t think that open or closed drivers, in itself, automatically says anything about quality.

    If closed source drivers really were a problem then Nvidia wouldn’t be used by 80% of Linux gamers.

    lemmyvore,

    What are the dependencies? I always have trouble figuring those out. It could be any obscure combination of stuff to install, with specific versions. Yeah, Bottles makes it easy to install them, but what’s the use if you don’t know which ones.

    lemmyvore, (edited )

    Counterpoint: I don’t think any Linux DE will ever see mainstream adoption.

    It has nothing to do with how good they are. It’s not related to software support either. They could support every piece of software ever made; Linux supports 90% of games for Windows and emulators for dozens of other platforms and it still hasn’t attracted more than like 2% of gamers.

    It’s related to what OP said: to gain mass adoption you need to put up with a lot of bullshit. It takes a company with some financial gain to do that, and paid developers. Volunteer contributors will eventually say “screw this” or go mental like Torvalds.

    There’s no company that can do this. They tried and failed, because Microsoft. Apple and Google had to create their own platforms from scratch to get away from it.

    lemmyvore,

    With other init systems you don’t have to write any custom config files. You just have to start docker; it already has container maintenance built-in.

    I’ll never understand why they had to complicate it and require every container to also have its own unit for explicit management.

    lemmyvore,

    It is, it’s what restart: always does. It will restart a container on failure and start it on boot; if you want a container you explicitly stopped to stay stopped across reboots, use restart: unless-stopped instead.
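
    For example (a sketch; the container and image names are placeholders):

        # comes back after crashes and reboots, but stays down if you stop it yourself
        docker run -d --name media --restart unless-stopped some-image

        # or change the policy on a container that already exists
        docker update --restart unless-stopped media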

    lemmyvore,

    Screen recording/screen sharing and keyboard/window automation are the big ones missing for me.

    lemmyvore,

    I’m using Claws Mail. It has a plugin that can do notifications in many ways, including a tray icon. You can configure it to start hidden in the tray, configure how often it checks email and on which accounts, to which folders the notification should react etc.

    lemmyvore,

    You can probably pick up a decent desktop machine for $50 from your local ads and put the rest into upgrades and still have some money left over.
