Because Toolbox is a Red Hat/Fedora project and Distrobox isn’t. Also, Distrobox is a much more recent project (2021) compared to Toolbox, which was developed back in 2018. When Silverblue came out, there was a need to make it easier to install apps, and thus Toolbox was born.
Since Toolbox is a Red Hat/Fedora project, it means that it’s officially supported, whereas Distrobox isn’t. Not that it means much from a community support/home use case of course, but that might matter if you’re an enterprise and you want support from Red Hat or official Fedora communities.
Both use Podman behind the scenes, so internally they aren’t that different; you can think of Distrobox as a more distro-agnostic and user-friendly version of Toolbox. If you’re a home user, stick with Distrobox.
What they are talking about is that some of the Wayland compositors rely on things like libinput and libdrm which are Linux specific.
This is not “Wayland” really but, from the point of view of a regular user, it may as well be. As the OP points out, there is no /usr/bin/Wayland
It is not really a great criticism, although it must be frustrating for the BSD folks and others. Of course, the answer, as always, is to contribute. Nothing is stopping anybody from taking wlroots (or whatever) and adding abstractions that make it more portable.
Non-Linux operating systems have already added Wayland support ( like Haiku ). If I had the time, I would add it to SerenityOS myself.
Actually, if I had the time, I might write a WaylandServer for X. First, it would be funny. Second, the people that do not want to move could stay on X forever even when everything stops supporting it. I would have to make sure that my WaylandServer could run XWayland of course.
Yeah, I was going to ask if the Wayland protocol included some Linux-kernel-specific data structures or something that would make it somehow more awkward to implement on non-Linux kernels.
Like if I created a protocol that included sending data encoded using the Python serialization framework called “pickle”, one could say that was a Python-specific protocol in that while it would be possible to use that protocol from other languages, it would be very weird and awkward to do so at best.
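To make that hypothetical concrete, here is a minimal sketch (the message shape and filename are invented for illustration): the payload takes one line of Python to produce, but any non-Python client would effectively need to reimplement pickle just to read it. This is only the analogy from the comment made runnable, not a claim about how Wayland actually works.

```shell
# Hypothetical "pickle protocol" payload: trivial from Python, opaque elsewhere.
python3 -c 'import pickle, sys; sys.stdout.buffer.write(pickle.dumps({"op": "hello", "version": 1}))' > msg.bin
# The bytes start with pickle's own framing opcodes, meaningless outside Python:
od -A x -t x1z msg.bin | head
```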
Not really knowing much about the specifics of Wayland, I wouldn’t know whether there is actually anything Linux-specific in it. But as far as I know, it’s entirely possible theshatterstone54 knows something I don’t.
It looks like it’s in a binary format and is printing control characters because it isn’t a format intended to be viewed/edited. You could try opening it in a hex editor and see if you can make any sense from that.
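For instance, from a terminal (the filename and bytes below are made up; point the command at the actual file):

```shell
# Create a stand-in binary file, then view it as offsets + hex + printable chars.
printf 'BIN\x00\x01\x7f' > mystery.bin
od -A x -t x1z -v mystery.bin   # "xxd mystery.bin" does much the same, if installed
```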
This screenshot is the only metric where btrfs is incredibly slow.
Bcachefs random and sequential writes and reads are much slower than other filesystems in this benchmark.
I have no idea how the actual real world performance will be. Bcachefs still misses a lot of features so I’ll continue to follow the development, hopefully including performance improvements.
Bcachefs sequential write performance in this out-of-the-box comparison was coming in at around half the speed of Btrfs while XFS, F2FS, and EXT4 were the fastest.
Wait, so the benchmarks mostly contradict claims that bcachefs is almost as fast as ext4 except in application startup time? What kind of test was performed for that application startup time benchmark?
Note that in this benchmark, bcachefs had a debug option turned on that allegedly severely hampered performance. The bcachefs developers have released an update that disables it, but Phoronix hasn’t redone the benchmarks yet. I wouldn’t put much stock in any bcachefs-related comparisons from this current run.
normal application tray and buttons for close, maximise, and minimise
Dolphin! (But any capable file manager with a space-saving UI, extensions, an editable location bar, drag-and-drop dialogs, selection mode, previews, pinned favourites, KFind integration, … would do)
are all simply better than their GNOME counterparts. Also, things like the click targets of window decorations actually reaching into the top corner are so obvious it’s crazy that GNOME simply ignores them and makes you point directly at the “x”.
Didn’t the windows have a regular menu with all that before? I think it’s an okay concept, though I can imagine something like workspaces making sense too.
As the first paragraph says: “The GNOME Foundation is thrilled to announce the GNOME project is receiving €1M from the Sovereign Tech Fund to modernize the platform, improve tooling and accessibility, and support features that are in the public interest.”
I’m also on KDE at the moment, but I appreciate the money going into the FOSS desktop experience, most importantly for keeping things viable for the future. Also, KDE and GNOME both, one presumes, learn from each other’s successes.
I’d hope so. It wouldn’t be good if they were blind.
Anyway, I didn’t question the accuracy of what he said, just that his manner of delivery is obnoxious and portrays an attitude of self-importance. It feels like he’s yelling at the viewer with hostility.
Aussies tend to be quite direct. It’s basically our natural state. I get how it can be perceived as hostile, but I don’t actually think Brodie is very abrasive. He seems like a pretty relaxed guy.
Also, GitHub PRs, at least to me, feel like they encourage reviewing changes by the total diff of the entire PR, not commit by commit. I don’t want a slog of commits that don’t add any value - it just makes things like reverts more annoying. Tools like Gerrit and Phabricator enforce reviews by making you review individual commits / changes / whatever you want to call them, not branch diffs.
GitHub has an option when merging a PR to “squash and merge”. This option squashes all of the commits on the PR branch into a single commit and cherry-picks it on top of the base branch. We use this by default in our open-source projects at work. Most people are not gonna go through the effort of making a well-defined patch series the way it would be required for a Linux kernel contribution. Most changes aren’t that big, though, so it doesn’t really matter. Send as many commits as you want in the PR; I’ll just review the diff as a whole and squash it when I’m done. Workflows should adapt to user preference, not the other way around, and this is a good example of that.
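For anyone unfamiliar with the button, a rough local equivalent looks like this, demoed in a throwaway repo (branch and file names are just examples, not anything from this thread; GitHub also rewrites the commit message and metadata its own way, so this is only the gist):

```shell
# Demo of what "squash and merge" does, in a throwaway repo.
cd "$(mktemp -d)" && git init -q . && git config user.email demo@example.com && git config user.name demo
git checkout -qb main && echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb feature
echo change1 >> file.txt && git commit -qam "wip"
echo change2 >> file.txt && git commit -qam "fix typo"
# The squash-merge: apply feature's combined diff as ONE commit on main.
git checkout -q main
git merge --squash -q feature   # stages the combined changes without committing
git commit -qm "Add feature as one commit"
git log --oneline               # main now has just two commits: base + squashed feature
```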
How much of that is what GitHub encourages and how much is what users prefer? Plenty of users seem to enjoy Phabricator/Gerrit for code review in practice precisely because of their workflows.
Well squash and merge isn’t default or pushed in any way. It’s an option, and we chose to enable it ourselves because that’s what works best for us. It’s what works well for many other projects too, which is why many choose to enable it instead of the default merge commit.
Yeah, but Phabricator and Gerrit are entirely separate workflows from GitHub, and a lot of people prefer them because they encourage better histories and reviews. They help you get rid of the “fixed typos” type of commits while still letting you make larger PRs.
GitHub obviously does let you keep a clean git history, but the code review workflow in GH just doesn’t encourage reviewing commits.
I think the idea here is that reviewing individual commits is irrelevant if the plan is just to squash it all down. Each PR corresponds to a single change on the main branch in the end; the fact that there was a main commit followed by a handful of “fixed typos” and “fixed bug” commits doesn’t actually matter, since it will all be blown away in the end. The process results in the same clean history with good individual commits on the main branch, just as if the user had squashed those commits locally before pushing to the code review platform.
Right, but squashed commits don’t scale for large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And in the case where you do have a large PR, a commit by commit review makes a lot of sense to keep your history clean.
Large features that are relatively isolated from the rest of the codebase make perfect sense to do in a separate branch before merging in - you don’t merge in half-broken code. Squashing a large feature into one commit gets rid of any useful history that branch may have had.
I agree, and GitHub allows choosing how to merge each PR individually if you need to do something different for a specific PR. Large PRs like that are at most 1% of our total PRs, and we review those more per-commit and use a merge commit instead of a squash. By default we optimize for the other 99%.
“Although we’ll be hosting the repository on GitHub, our contribution workflow will remain unchanged and we will not be accepting Pull Requests at this time”
We can all read it right there, plain as day.
If they weren’t planning to/considering it, then why specify “at this time”?
I’m only a native English speaker, so I guess I could be interpreting it wrong.
Do tell, oh wise one: what alternative meaning could it possibly have?
I’m only a native English speaker, so I guess I could be interpreting it wrong.
You should try being a native English reader.
What it means is “they will not be accepting pull requests at this time.” Whether or not they are open to changing this in the future is not specified. They have not specifically stated that this is off the table, nor have they stated this is their intent.
So they are considering it.
Thanks for confirming my point.
If they weren’t, they wouldn’t have specified, they’d just say “we will not be accepting Pull Requests”.
How does the opinion of your supposed internal contact at Mozilla affect the basic English interpretation of the public announcement?
We’d see whose interpretation is right? Duh.
Padenot (a contributor with direct ties to Mozilla internals) agrees with me on GitHub PRs being terrible. 🤣
Waiting for other responses.
Note: most of them are sleeping rn, so it’s going to take a bit of time.
I asked Glob (the literal author of the announcement) directly as well. Waiting for him to wake up and see it, he was up at 3am last night, lol.
You’re quite the lunatic. I’m obviously not defending GitHub PRs, or saying Mozilla should or should not use them. I said “we are not open to PRs at this time” is not the same as “we will be open to PRs in the future.” The truth of that statement has absolutely nothing to do with whether or not Mozilla is, in fact, open to using PRs in the future. But there’s no point in telling you that, because you’re clearly unhinged. Have a good life.
That’s not what I said they meant to begin with. I said planning/considering, which is wildly different.
Second:
Who’s unhinged? Looks like I was right all along: they were indeed considering it but have since decided against it because of the same concerns I had mentioned previously. Is this definitive enough for you?
Nobody here needs “a long history of submitting patches to Firefox” to have an opinion on the tools used to manage the project. I assume most people sharing their opinion here don’t, and yet you need not scroll far. You merely need some knowledge of and experience with the tools, be it in personal, corporate, FOSS, etc. projects. Besides, I don’t spend my free time helping FOSS projects just so I can claim “my opinion is better”; that’s literally just an appeal to authority. But if you must know, I have helped here and there throughout the years under various aliases/accounts. (Why “various aliases”? Because I enjoy helping, not some meaningless credit; it’s just how I am.)
So what you are saying is that as someone who has never worked on the Firefox codebase, you still somehow know more about managing contributions to one of the largest FOSS projects in the world that has been running pretty successfully for the last 25 years?
Idk, maybe try a bit of humility - like if it looks like they are making a weird decision, maybe it’s not because they are dumb and you are very smart, maybe it’s because they know stuff that you don’t?
First off, not what I said.
Second off, I never called them dumb. I actually happen to have a good relationship with them, so I take offense at what you’re implying. I merely stated that I don’t like GitHub and gave some legitimate reasons. Maybe the one who should learn humility is you.
I have an a770. The only issue that I’ve had with what little gaming I do, is that CS2 ran pretty terribly, although I tried again last night and it seemed much better.
Appreciate it. It sounds like, with the new announcement, they’re putting quite a bit of support behind it, so I’m optimistic improvements will be made quickly.
Intel have just released a driver update to combat this. It’s something to do with a translation layer implementation that has been massively improved, giving a 500%+ performance boost.
I use btop, iotop, jnettop, and radeontop. I rarely need any individual piece of information from any of them, but they make for an incredible spread of blinkenlights.
hollywood is an installable app which, when run, takes over your machine with a fullscreen terminal and multiple panels of dynamic data, to look like a hacking scene from a Hollywood film. :)
If you avoid Nvidia, it has been ready for many years. And to be honest, I’m not sure X11 was really stable with Nvidia either. My main issue is that X doesn’t have multi-DPI support… and for that I really cannot blame Wayland. Also, Skype doesn’t have screen sharing; well, they actually had it for a while, but then removed it… still, hard to blame that on Wayland.
But as a general rule, if you have Nvidia, then you are not allowed to complain about anything… that was your choice, and with Nvidia under Linux, all bets are off. I thought that was clear a long time ago, especially after Linus’s not-so-subtle outburst.
My personal experience doesn’t agree with that. I could never use Wayland on KDE on either of my laptops with Intel graphics due to numerous glitches and incompatibilities, so Nvidia is not even the scapegoat I wish it was.
I’m looking forward to Plasma 6 next month, but at least on KDE, Wayland has not really been usable so far.
Machine learning pays my bills, and I never had a choice of graphics card brand. To be sure, I wanted an AMD for the open-source drivers, but CUDA remains essential to me. ROCm support from AMD is a joke and isn’t anywhere close to an alternative. Researchers release code that only runs on CUDA for a good reason. To say that I don’t get to complain is going too far.
Thanks to nouveau, I can still use GNOME even after dropping X11 🥳 I have a GeForce 6800M GT, I think, which would need a proprietary Nvidia driver that has not been supported (but is patched by the community) since kernel 5, I believe. The only thing to consider is that one has to boot via legacy BIOS and not EFI, even on a Mac laptop which normally uses EFI to boot into macOS, and the graphics card still works. It would be nice if the nouveau team could get the card running on EFI as well.
But as a general rule, if you have Nvidia, then you are not allowed to complain about anything… that was your choice, and with Nvidia under Linux, all bets are off. I thought that was clear a long time ago, especially after Linus’s not-so-subtle outburst.
See, this attitude is exactly why Linux will never become mainstream. On Windows you don’t need to research if your machine will be able to run your operating system of choice, it just works.
If you’re a user and you can install Linux without seeing a single warning that your hardware is going to cause issues, your distro is at fault. The moment you boot the installer, it knows damn well that you’re using Nvidia hardware and what the implications are. Distros either ignore the predictable instability or believe there is no problem, and either way the end user isn’t to blame for taking that at face value.
The truth is, Linux on Nvidia works fine, except for some very specific laptops with stupid mux chips, and even that is something Linux should fix, not the end user. Luckily, Linux installers don’t even boot on those machines, so the end user can just ignore Linux and continue using Windows.
You just can’t use Wayland if you want your Linux system to be stable, but X11 works fine and it will continue to do so for many years. Part of the Wayland issues still come from intermediate code refusing to work around Nvidia’s bullshit, ignoring known bugs and technically-spec-compliant-but-different stuff because it’s easier to blame Nvidia for everything. Wayland also makes some weird assumptions that I disagree with (“if the Wayland socket dies, your application must crash, there is no recovery”) which make minor stability problems a lot worse in practice.
Nvidia may be to blame for their shitty drivers when it comes to the core problem of the bad experiences Nvidia owners will have, not the end users buying the wrong hardware. You can’t seriously expect people who try it out for the first time to read up on the drama and controversy Linus Torvalds has caused over the years.
And even with all that, many serious Linux users who know full well the pain they’re about to subject themselves to still need Nvidia. ROCm is great, but it’s nowhere near as efficient and well-supported as CUDA. Whatever Intel has doesn’t come close, and whatever macOS offers doesn’t work, because even Nvidia has perfect Linux support compared to Apple.
I got my Nvidia GPU before I even considered moving to Linux. I am honestly getting pretty tired of reading these gatekeeping comments telling me “I’m not allowed to complain about anything” or that I’m a trash person for buying an Nvidia card in the first place. Nvidia is the largest GPU manufacturer; people are going to own Nvidia cards, and you need to live with it. Be constructive and nice to other people.
X11 is rock solid with Nvidia, never had a single problem.
I had a lot of issues with Wayland on KDE, lots of flickering all the time. I moved to Hyprland and things are mostly fine. IntelliJ has occasional problems, but they are working on a Wayland version anyway.
Ha, your first sentence is just plain wrong. It was quite broken under “normal” usecases with per-DE bugs.
For example, on KDE, about 1.5 years ago the bug finally got fixed where your Wayland session would completely crash if your monitor lost signal (monitor sleep or shutting the monitor off). If you ask me, that is a very standard use case; there is no world in which said action crashing the entire session would be considered ready for general use.
I think we are there now, just some visual glitches nowadays, also some recent glitches with monitor sleep, but Wayland very rarely crashes anymore.
I thoroughly backed up my slow NVMe before installing a new, faster one. I actually didn’t even want to reuse the installation, just the files in /home.
So I mounted it at /mnt/backupnvme0n1, 2, etc. and rsynced.
The first few dry runs showed a lot of data was redundant, so, genius that I am, I thought “wow, I should delete some of these”. And that’s when I did a classic sudo rm -rf in the /mnt root folder instead of /mnt/dirthathadthoseredundantfiles
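For the record, the pattern I should have followed: preview with a dry run, then delete with the full, double-checked subdirectory path. The sketch below uses throwaway temp dirs as stand-ins for my actual mount points:

```shell
# Demo with throwaway dirs; substitute your real backup mount for "$backup".
backup=$(mktemp -d)                       # stand-in for the real /mnt/backupnvme...
mkdir -p "$backup/home" "$backup/redundant-files"
echo data > "$backup/home/notes.txt"
dest=$(mktemp -d)                         # stand-in for the real /home
# 1. Dry-run first: see what rsync WOULD copy, touching nothing.
rsync -a --dry-run --itemize-changes "$backup/home/" "$dest/"
# 2. Delete only the intended subdirectory, with the full path spelled out.
rm -rf -- "$backup/redundant-files"       # never "rm -rf" from the mount root itself
```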
If you’re a tinkerer, it’s kind of addictive. I thought I’d give it a try just to see what it was like, ended up staying up all night customizing it, and now, about a month later, I don’t really want to go back to KDE (after using KDE for almost 20 years).
If you use the GNOME DE, you go to the Online Accounts dialog, click Google, and set it up with your credentials; it adds Google Drive to Nautilus and integrates Gmail and Calendar into the Evolution client.
It shows up in the Mounts section of Nautilus; for apps that don’t recognize that, you may have to go to /run/media/username/mount if it doesn’t show up in the Other section of file pickers.
I’m not aware of what is available for KDE. I didn’t see it when I tried KDE, but maybe somebody has successfully used the packages to set up something similar.
Do you work for them, to know? They have slowly matched Google’s offerings and offer Linux integration. User suggestions/pressure can direct their efforts. Many of us have dumped Google for Proton. They announced a desktop app for Windows and macOS.
No, I’ve just been a customer for several years. Development is slow and things like this are simply not a priority. They’re not even a little close to matching Google.
Dev is slow because they release a good user experience rather than buggy junk. Linux seems to be third on their list, but it comes eventually. Per the link, you can use Windows or Mac sync now. Don’t forget Google had a long head start and almost unlimited devs.
Dev is slow because they release a good user experience rather than buggy junk
The reason is irrelevant. It wasn’t a criticism, just an observation.
Linux seems to be third on their list, but it comes eventually.
No, they have almost no Linux support. Most things have to be done in the browser. When there is Linux support, it is extremely basic.
Per the link, you can use Windows or Mac sync now.
Cool. Doesn’t help Linux users.
Don’t forget Google had a long head start and almost unlimited devs.
See point 1.
There was a long podcast interview with the CEO where he basically said Linux is, and will continue to be, overlooked due to increased development costs and very low adoption.
Actually, their pages say it is hard to find Linux desktop devs, and that is why it is slow. And there is already a Proton Drive API you can use with rclone on Linux.
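A sketch of that rclone route, assuming rclone’s protondrive backend (experimental, last I checked); the remote name “proton” is just an example, and the interactive config asks for your own credentials:

```shell
# Set up a Proton Drive remote interactively, then use it like any rclone remote.
rclone config                        # choose "New remote", backend "protondrive"
rclone ls proton:                    # list files once the remote is configured
rclone mount proton: ~/ProtonDrive   # expose it as a local directory (needs FUSE)
```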
And as far as criticism goes, you said specifically it’s not as good as Google, so I provided a reason why. You can’t then change tack and say it wasn’t criticism when you make that comparison. It will come; things take time. You seem to keep moving the goalposts here, so have a good rest of your week.
Yep, and some of the Linux community will most likely pick up development if Proton doesn’t turn it into a full desktop Linux app like the Windows or Mac versions.
Actually, their pages say it is hard to find Linux desktop devs, and that is why it is slow.
Again, the reason is irrelevant. The point is, it ain’t happening.
And as far as criticism goes, you said specifically it’s not as good as Google, so I provided a reason why. You can’t then change tack and say it wasn’t criticism
That’s not “changing tack”. It’s not as good as Google from a user perspective; that doesn’t mean it doesn’t have its own merits. I pay for a Proton subscription rather than use a free, much more fully-featured Google one, so I obviously understand the value proposition. I also understand its shortcomings.
You seem to keep moving the goalposts here, so have a good rest of your week.
I don’t suppose you want to elaborate on what goal posts I’ve supposedly moved?
Your initial comment was “Not gonna happen since Proton is all encrypted.” When I pointed out that that makes no difference (and we have Windows and Mac versions that access this encrypted data), you switched to another reason. It won’t end, so I have to say goodbye, knowing that my Proton VPN install on Linux works, the e-mail bridge works, and somebody will integrate the Proton Drive API with Linux, because that’s what the community does even if Proton doesn’t release it.
When I pointed out that that makes no difference…then you switched to another reason.
It’s not another reason. It’s the same reason.
If it wasn’t encrypted it would be trivial to spin up a local integration like Google or MS already have.
Since it is encrypted, it makes it significantly more complicated to develop. While this development may make sense on MS or Mac, it doesn’t on Linux, because it requires more resources and serves a much much smaller number of users.
I’ve already explained all of this in the previous comments.
my Proton VPN install on Linux works
“Works” is right. Like I said, it’s extremely basic compared to its MS and Mac counterparts.
the e-mail bridge works
Notice how MS and Mac get fully-featured desktop clients and all Linux gets is a “bridge” to connect to an inbox client developed by someone else.