TBH Amazon has a whole zoo of devices. Even if they put a small team of two or three people in charge of porting this to each device, they might end up with a few hundred people.
They’ve been completely dropping the ball for years. I used to donate regularly but have completely given up on this project. It’s a farce at this point.
Thankfully I only have simple needs, so Krita suffices and I don’t have to deal with the never-improving UX nightmare and the changes that never get released.
Yeah, I’m salty. It’s just that GIMP was a shining star of FOSS and it’s just been slowly rotting from inaction.
Liberapay shows the number of donors has almost doubled in the last few months (look at “view income history”), so I hope it’s an indication that they made good changes to the project management and the future will be better.
That’s good to hear, and I really would love for things to get sorted out. GIMP 3.x has many improvements for sure, but there’s a long way to go, and actually releasing these improvements is necessary…
If GIMP can become another Blender, that would be incredible.
In X11 it’s server side, and in GNOME Wayland it’s of course client side, but they look exactly the same as the SSD ones. I doubt they’ll change that between the current beta and the 3.x release.
The GTK3 port is done, and now they need to finalize the new extension API and improve their color space support (particularly CMYK). It would be nice if Wayland had a color management protocol extension standardized by then, but I don’t think it’s a blocker.
I made the switch a year or two ago. It is much better, I find. I leave it running in a tmux session on my server, with btop on one pane, and switch to another with a split view to do work. It allows me to take a quick glance at any time while not taking the focus from what I was working on.
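For anyone curious, a minimal sketch of that setup (the session name “mon” and the pane layout are just examples; adjust to taste):
# start a detached session named "mon" with btop running in the first pane
tmux new-session -d -s mon btop
# add a second pane beside it for actual work
tmux split-window -h -t mon
# attach whenever you want to glance at it
tmux attach -t mon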
I saw him at “-1”, so actually two people, not just one, have misclicked according to your theory. Hmm, I don’t know, but I hope it’s true; better than the alternative.
As others have mentioned, downloading the .deb and running it will also work, but I feel nobody gave you a tl;dr of why you may want to follow those instructions instead, so here it is:
Those instructions configure your package manager (apt) with a new repository for this application.
The upside is that any time you check for updates, this app will also get updated.
It’s a bit more work up front, but it can pay off when you have dozens of apps updating as part of normal system operations.
Imagine a world where Windows Update also updated all your software; that’s what this is.
Also, no, this is not an ideal way to do this. Ideally every package you want is in your distro’s repos so you’d just need to do “apt install [package]”.
The reason this one isn’t is that Mullvad wants to make sure you use their tested, secure, and updated version, and they don’t want to maintain that for every distro. So they have you configure your package manager to use their repos.
This is relatively uncommon to come across in Debian. You’ll normally only find it in security applications or very niche ones. The Debian repos aren’t the most comprehensive, but they’ll contain the vast majority of common software.
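For reference, vendor instructions like Mullvad’s usually boil down to something like the sketch below. The URLs, key path, and package name here are placeholders, not the real Mullvad ones; follow the vendor’s own page for those.
# 1. fetch the vendor's signing key so apt can verify their packages
sudo curl -fsSL https://example.com/example-keyring.asc -o /usr/share/keyrings/example-keyring.asc
# 2. register the vendor's repository, tied to that key
echo "deb [signed-by=/usr/share/keyrings/example-keyring.asc] https://example.com/deb stable main" | sudo tee /etc/apt/sources.list.d/example.list
# 3. from then on, a normal update/upgrade also covers this app
sudo apt update && sudo apt install example-app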
I daily drive Fedora, but I’ve used Arch, OpenSUSE, Debian, and more. Once you get used to how Linux works, the distro doesn’t really matter that much, aside from edge-case distros that operate totally differently, like Nix. I chose Fedora because I like the dnf package manager.
The only distro I don’t like is Ubuntu. I had to set up a Linux VM at work, so I figured Ubuntu would be a good choice for that. Firefox was painfully slow to open because of Snap, so I uninstalled it and ran “apt install firefox”, which Ubuntu overrode and installed the Snap again.
Fuck. That. Deleted the VM and installed Debian instead.
Yeah, over the years they’ve all become largely the same except for package management and the locations of some config files and system binaries (/bin,/sbin,/usr/local/sbin, etc…). Some attempt to be a one size fits all model and contain everything that you’d want, while others give you the bare minimum.
Exactly. I tried using Linux and I just don’t understand how to use it, and I consider myself fairly tech savvy. It would bring my productivity to a grinding halt if I had to switch to Linux.
There are many, many outdated patterns for how to do things in Windows that are cemented in public knowledge. Running random executable installers from the web and giving them superuser permissions is, I think, the most popular one.
How to share all user settings between system installations? How to change the logo in the desktop bar? How to add a directory to an applications bar? How to change a built-in system keyboard shortcut? How to reinstall just the system while keeping the programs? How to make a file run on a shortcut? Those are things I do daily that are impossible or need hacky programs to work on anything other than Linux; I would die if I had to switch back now.
It depends, honestly, but for most people it will work fine if you use something like Pop OS, Nobara, or other distros that set it up for you (or if you know how to set it up yourself, but that’s unlikely to be the case).
It is far too confusing to know what to use - even as someone who uses Linux on various servers, a media centre, and WSL, and who used to run a Gentoo laptop, I still don’t know which distro to use, let alone which of KDE/Gnome, X11/Wayland, init/systemd, etc.
Use whatever is popular and has a cool logo. A distro is basically a software library, preinstalled programs, and default settings. You can transform any distro to behave like another one.
KDE, Gnome, XFCE…? Whichever looks better to you, or whichever was the default. Init system? Whichever was the default. X11/Wayland? Wayland. Go with X11 only if Wayland is having problems with your graphics card.
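If you’re not sure which one your current session is running, a quick check (assuming a reasonably standard desktop; on systemd systems the loginctl line also works if XDG_SESSION_ID is set in your environment):
# usually prints "wayland" or "x11" for the current desktop session
echo "$XDG_SESSION_TYPE"
# alternative on systemd systems
loginctl show-session "$XDG_SESSION_ID" -p Type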
Marketing is monopolized by Google and Facebook. Manufacturers and Microsoft won’t make one-click installs happen. Tech support would be a chicken-and-egg problem. Ugh…
I'd also bet that a huge portion of those offices rely on at least some kind of proprietary software that doesn't play nice/officially support Linux. MS Office, for example, or Autodesk's stuff. When I saw what a headache it would be to get these working on Linux, I just shrugged and decided I'd keep my dual boot available for when I inevitably have need.
You're turning up the cost dial for every additional workaround or adjustment you ask of people. Just to save what is fundamentally seen as a $50-200 up-front cost on a system for a new Windows 11 Pro license.
The article and the post title itself allude to the fact that Windows 11 won’t support millions of machines, so a W11 license is useless. And if you meant you can buy a PC that supports W11 and is worth using for $50, I need to consult you for the world’s best shopping tips.
None of the main Adobe suite works on Linux either, so let’s not pretend my use case is so narrow. Literally none of the programs I use for work (Cubase, Audition, After Effects, Illustrator, Premiere; yes, I can install a virtual Windows machine, but that completely defeats the purpose) works on Linux. And from what I gathered last time I researched this, hardly any audio interfaces are Linux compatible. Most of the games I want to play are also not Linux-compatible.
Fact of the matter is, despite the large dedicated userbase (which I appreciate), it still has a giant gap where many prosumers and casual users cannot utilise it. It’s no good saying “ahhh well YOU’RE not compatible with US! No u!”. I’d love to switch and tbh am strongly considering a setup for live PA that’s Linux based, in the hope that it brings greater stability. But it’s going to be a large investment of time, and I’ll have to buy a different audio interface if I have a hope of making it work.
And this is a huge barrier for a lot of users, a massive roadblock. But the article talks about hundreds of millions of computers; my point was just that even if millions like you cannot switch, these statistics still include millions who can, especially non-professionals who don’t make audio or video but are going to throw away a working machine.
I feel like you might have taken my comment as personally directed, given your response with “YOU’re not compatible”. Maybe it was bad wording, sorry. What I meant was that it can be frustrating to see “Linux doesn’t support …” when it actually has everything needed to support this software and the burden of making it available is on the software developer. Like saying that USB-C doesn’t support the iPhone 13. The lack of support still hurts the Linux side anyway, but I just don’t want misconceptions about which side should make a port happen.
Yep I definitely took it wrong, one of the problems with text only communication… No body language or audio cues! No worries.
The devs of my audio interface have definitely been asked a fair bit about Linux compatibility… But considering they’ve not even bothered bringing their new DAW to PC, it seems they’re strongly focused on the Mac ecosystem only for the foreseeable future.
Personally I think compatibility should be a two-way street (pun not intended)! But unfortunately companies tend to follow our wallets, so until Linux becomes even more established I doubt they will dedicate much, if any, resources to making their devices work on it. Shame.
I bought a new audio interface for live work a few months back, went for an audient id24 partly because it’s Linux compatible (although no native drivers). So I will get stuck in at some point. I started using PCs back when floppy disks were actually floppy so I’m not afraid of command line stuff!
For me, it’s not the DAW (Reaper works fine), but this is not the case for every DAW, and it must be recognized that switching DAWs is non-trivial (nor should it be expected). In my case, it’s the HW. I can likely get my interface to run (unsupported), but my Maschine is a non-starter. Yes - I know there are a few drivers for similar HW around, written by clever folk who’ve done reverse engineering, but they only cover a few minor use cases and are, at best, a science experiment and not something one should ever depend on even if they did work.
SW is a problem too - yes, most plugins can be coaxed into working, but certainly not all. On top of that, the underlying tech is usually Wine, and it’s a perpetual game of whack-a-mole to maybe get the stuff you paid for to run.
The folks writing these bridging tools are not to blame - it’s brilliant, wonderful work. Fundamentally, though, it’s an act of goodwill that one can’t really afford to fully depend on, even when it does work. I love FOSS, but it’s not everything - I certainly don’t expect a free ride, but I do want the option to pay to run what I want.
The issue is the HW and SW manufacturers - they need a critical mass of potential users to be bothered to commit to developing for Linux. My hope is that as user bases grow (in places like India) the cost/benefit analysis shifts.
At most, you might be able to get MIDI mode to work (if you scrounge the internet for experimental, old reverse-engineered scripts), but almost certainly not the core Maschine functionality (i.e. the main reason for buying a Maschine in the first place).
Even if you can get it to work none of it will be supported and you’re always at risk of an update rendering things inoperable.
It’s worth noting that only the old Native Access installer runs in Wine (with coaxing). The newer one does not, and from what I’ve read, the break points are features that will never be supported in Wine.
Wine is clever, but it’s always an incomplete game of whack-a-mole. A workaround at best.
Well, this doesn’t sound appealing! And this just speaks to what I was trying to explain to the person at the start of this thread… Linux may be growing rapidly, but there are still giant holes in the driver set etc. for many tasks.
I think prob the best solution will be to perform a hard reset / clean on the laptop, remove any bloatware, keep it offline once I’ve installed necessary updates / plugins, and only have live PA software installed.
The Adobe case is a big one. For me, it’s Lightroom that has no real Linux counterpart. The app itself isn’t where the magic is - darktable exists. The magic is in the inter-app interoperability - bi-directional syncs and edits on any platform. FOSS is very unlikely to create something like this (I would love to be wrong), as it’s less of a tech challenge than an enterprise architecture challenge, with component systems falling in line. This sort of thing requires money to be executed effectively, unfortunately.
I really hope the overall Linux user base can grow enough to catch the attention of SW/HW manufacturers, but I have been hoping this for many, many years…
Maybe there should be centralised “Windows to Linux” documentation on GitXXX, with everything from choosing a distro to troubleshooting and links to the appropriate wikis. There are so many guides/blogs, each saying something different.
True, but I’m sure there could be something like “awesome-xxx” that’s just… one main one. Maybe I should just try doing that myself with my limited knowledge. I can’t really code, but I’ve always wanted to contribute somehow.
The solution is to donate them. Don’t send them to a landfill. Give poor students a free laptop with Linux installed, etc. There are probably thousands of uses for an old computer that are better than sending it to a landfill.
One of the things I most commonly downvote is comments including stuff like "Edit: why all the downvotes?" in topics that aren't about the voting system (I instinctively downvoted this topic, but un-downvoted it). But I also downvote things that are spammy, *phobic, defending genocides, etc.
Most of the time, it feels like people are just saying "y'all are just mad 'cause I'm right" in different words, because it's often obvious why: an unpopular opinion, or one believed to be objectively false. These comments already have plenty of replies explaining why the comment is bad in some way. The only cases where there should be genuine confusion are if you're posting in a community that gets the same comments all the time, so it's spam and you don't know it, or if you said something that is being misinterpreted but for whatever reason you can't tell why and you haven't gotten any replies yet (but for some reason are paying close attention to your internet points).
It’s the Internet. People who participate in good faith discussions probably aren’t downvoting willy nilly. Everyone else isn’t going to be swayed or give meaningful feedback anyway.
Downvotes get abused a lot where they exist. People dog-pile pretty quickly. It seems like an innate human characteristic. It’s just a fickle mob. Smaller communities, where members know each other by handle, are usually the best for actual discussions.
That’s probably the reason why instances like lemmy.blahaj.zone, pricefield.org, and reddthat.com chose to disable them. They aren’t constructive and more importantly they lead to people using them instead of reporting, which is really bad when it comes to enforcing rule violations.
Don't worry about it. If you were really wrong, someone would be chomping at the bit to reply about how wrong you are. If they're not, you either have an unpopular opinion, or one popular enough to be spam.
If people are downvoting and not commenting there is probably an obvious reason why.
Usually you just said some type of heresy in that community, like going to a NASA forum and saying it’s idiotic to still be trying for manned space missions to the moon or elsewhere.
It’s so anathema to the community they don’t even want to engage in a discussion about it, they just want to say “you’re wrong/I don’t like this” and move on.
Far more civil than how religions used to deal with heretics, imo
Almost a decade ago there was a discussion about how to draw into display buffers for Wayland. Everybody agreed on using Mesa’s GBM; NVIDIA wasn’t really interested, but said they’d do EGLStreams.
As NVIDIA wasn’t interested, and is generally a dick to everybody anyway, Wayland development just progressed ignoring NVIDIA, and now they have to catch up to where all the other graphics drivers already were years ago. All while ignoring most of the things those others learned, because they want to keep their own tiny proprietary island.
Just avoid supporting NVIDIA’s dickish behaviour by not giving them money, and eventually they might learn and change.
For all those worrying about it, I’d like to say: you can re-add the driver code and compile your own kernel, and everything will keep working fine. And last time I read the wiki, there’s SLTS support for Linux 6.1, which means your GPUs will be officially supported until 2033.
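In case "compile your own kernel" sounds scarier than it is, the rough outline is something like this. Exact steps differ per distro and per driver; the source directory name below is just an example.
# kernel source tree containing the driver code you want kept/re-added
cd linux-6.1.y
make olddefconfig              # start from your current running config
make menuconfig                # re-enable the driver option in question
make -j"$(nproc)"              # build
sudo make modules_install install   # install modules and the kernel image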
AMD and nVidia on Windows: So your GPU is still very capable and useful for almost everything including most gaming tasks, but it’s a couple years old and not making us money any more? Sucks to be you, have fun hunting for unmaintained legacy drivers with likely security holes from questionable sources.
Linux: Your video card is from a long bygone era of computing, before the term “GPU” was a thing, and basically a museum piece by now? We’ll maintain a long-term support version for you for the next ten years.
For a moment I thought this post was about the LTT host. And was like they could replace him with any of his doppelgangers in the group and no one will notice.
It was a mistake to come down from the trees if you ask me. These days there’s even people saying we should of stayed in the water where life was simpler.
Of course there’s the total extremists who think life was better as a single celled microbe. Those people are always hard to talk to.
We should HAVE stayed in the water. The real fringe radicals are those who defend the idea that crystals are alive. I think they’re lesser lifeforms who don’t deserve social security
Want to exchange information in JSON? Plaintext? Binary data? Sockets can do it.
This is exactly why you need something like dbus. If you just have a socket, you know nothing about how the data is structured, what the communication protocol is, etc. dbus defines all this.
In either case you still need to read the documentation of whatever daemon you’re trying to interface with to understand how it actually works. Dbus just adds the extra overhead of also needing to understand how dbus itself works. Meanwhile sockets can be explained in sixteen words: “It’s like a TCP server, but listening on a path instead of an ip and port”.
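To make that concrete, here's a minimal sketch with socat (the socket path is just an example; any daemon that exposes a Unix socket works the same way):
# a throwaway echo "daemon" listening on a path instead of an ip:port
socat UNIX-LISTEN:/tmp/demo.sock,fork EXEC:cat &
# talk to it exactly like you would a TCP service
printf 'hello over a socket\n' | socat - UNIX-CONNECT:/tmp/demo.sock
# real daemons look the same, e.g.: curl --unix-socket /var/run/docker.sock http://localhost/version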
It’s much easier to understand how dbus works once than to understand how every daemon you connect to works every time you interface with a new daemon.
Yeah, that’s the case with programming… well, anything. This at least gives you a way to automatically receive all of that data from any app without excessive prior knowledge. With a small amount of info you can filter for specific events and create all kinds of robust functionality. That’s the power of a set protocol: it makes things widely compatible with one another by depending only on the dbus protocol and the app name. Otherwise you may need to depend on some shared objects, which makes deployment and maintenance a total clusterfuck.
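As a rough example of that kind of filtering from the shell (the desktop notification interface is just a common case; the exact bus names depend on what you’re running):
# watch only notification traffic on the session bus
dbus-monitor "interface='org.freedesktop.Notifications'"
# discover what an app exposes without reading its source
busctl --user introspect org.freedesktop.Notifications /org/freedesktop/Notifications
# call a method over the same interface (here: pop a test notification)
gdbus call --session --dest org.freedesktop.Notifications --object-path /org/freedesktop/Notifications --method org.freedesktop.Notifications.Notify "demo" 0 "" "Hello" "sent over dbus" "[]" "{}" 5000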
You got downvoted for speaking the truth. You can’t talk to a dbus app without understanding how it communicates. You can’t talk to a sockets app without understanding how it communicates.