I don’t feel like they’re inherently bad, but they’ve become so popular that they all feel like they’re blending together. I think it’s kind of stale at this point.
I don’t run Linux (though I’m admittedly more interested in it than I used to be), but the Reddit API stuff definitely made me learn more about FOSS and value it more.
I do value FOSS software and like Linux for being FOSS (there are many other reasons too, though). I do think understanding the importance of free software matters more than admiring one of the (most important) free software projects. I can see you using Linux sooner or later, along with other free programs.
I think maybe I’m misunderstanding—are you saying that valuing free software is more important than valuing FOSS? FOSS is inherently free, no? Free Open Source Software. I would understand if I was talking about open source in general, but FOSS does include being free. Maybe that’s not what you meant.
Interesting to know that Steam, GOG, and Epic (specifically) all work well for you. I’ve heard mixed results with Epic; some say it doesn’t work. Maybe I’ve gotten wrong info.
I have an older laptop, and as soon as I can upgrade to something better, I’m going to use it for Linux practice.
I am using Heroic Launcher to play Blazing Sails on Epic right now. I am on Arch, which I believe is a plus since the Steam Deck is Arch-based (I heard).
The Escapist 2 I have not gotten to work properly, though. It runs, but at like 1 fps. Apparently this is because of Epic’s implementation, and it runs smoothly through Steam. Definitely test things on a game-by-game basis.
and yet they are still losing money by running ChatGPT 3.5 for free. I guess that in the future they’ll switch to a small local model on hardware that’s capable enough.
I think it’s like anything on the modern web, they’ll lose money until they reach a critical mass of users who get accustomed to using ChatGPT in their day-to-day life, and then they’ll kill the free tier.
Except their free tier is still around for everything that they started as free: Outlook, Bing, Visual Studio Code; even Office is free for students and teachers.
They’ll always keep the low tier free to get people hooked and charge businesses whatever.
Microsoft has free tier Office tools because they’re data brokers now. TMK they didn’t always have free Outlook, it was bundled in Office, which cost money. I don’t see ChatGPT remaining free forever, it costs too much to run. I could be wrong though, depending on how much valuable data they can scrape from it.
Yeah, they didn’t used to give away Office, Outlook, or Visual Studio for free. Now they do, and there’s no sign of them stopping. Bing is expensive and they aren’t stopping that either.
ChatGPT is MS’s first real chance at dethroning Google Search. They’re going to keep a free tier forever.
Every change will bring its fair share of complainers; not much we can do about that. LILO to GRUB, SysV to systemd, and now X11 to Wayland. No one is forcing your hand (unless you use a pre-packaged distro like Ubuntu/Fedora, in which case you go with whatever the distro provides). Keep using X11 if you want stability; if you wanna dip your toes into bleeding-edge software and grow its userbase to show hardware manufacturers that their drivers need updating (I’m looking at you, NVIDIA), then feel free to mess around.
Eventually the day will come when Wayland apps will simply not launch on X11 and you’ll migrate too.
In the case of GNOME it was addressed, just by different people. GNOME 2 continues to live on as MATE, so anyone who doesn't like GNOME 3 can use it instead.
I don’t understand why anyone ever expects a different outcome. They fork something whose original version has a lot of investment behind it. How do they expect to keep up?
There was news about Ubuntu doing it too some time ago; maybe they realized it’s not feasible yet. I don’t follow their development as I don’t use those distros.
Go tell Fedora that then lol. They want it gone to the point where Nate is telling users who want X to stay away on that post. Xwayland I believe will still be around though.
They’ll recant after their usage drops to a fraction. This move makes zero sense no matter how you look at it. As a generalist distro it’s too early to drop X.
If they want to become a niche distro whose only claim to fame is “we only pack Plasma 6”, big whoop, like there’s any shortage of that. What kind of distro defines itself by what it does not offer? And is that the kind of distro that Fedora aims to be?
lmfao Wayland is already ready for over 90% of use cases. Hell, GNOME has been wayland-default since twenty-fucking-sixteen if I remember my dates right. You’re overestimating the value X.Org provides.
It was not SysV to systemd, and it was forced (by making udev not work without it).
Other than NVIDIA, Wayland is still missing some protocols (example: choosing which virtual desktop you want your window to be on). But those protocols are (still) being worked on. And you will always be able to run X11 programs on Wayland.
The advantages of Wayland are a more direct path to the hardware, and throwing away lots of code.
I’d say that’s already becoming the case in a few places. Hyprland isn’t just “Wayland good”, it’s “You should use Wayland good”.
Yes, I know the devs behind it act like pissants. That’s bad and I’m sorry for liking their software. I use Emacs too and RMS was kind of an asshole. Hell, I use Lemmy even though one of the devs has slighted me on more than one occasion.
… has gotten some help and is now a pretty well-adjusted human being, who still tells right wing trolls to go suck it, and still tells paid professionals that they should have known better when they should have known better, but in language that isn’t abusive.
I think you’re like 5 years behind on this. It’s true, just read up on it. Linus took time off after criticism for his language got too much. And he improved by a lot. You’ll find no more name calling directed at contributors after a certain date.
I daily drive Hyprland too, there are some shortcomings with how the mouse behaves with XWayland but I don’t think it’s a Hyprland issue and Gamescope remedies that problem so overall, it’s a great experience.
What if, sometime after Win 10 loses support a virus takes advantage of the lack of patches and propagates across all the machines with a simple message “This operating system is no longer supported, please click here to upgrade.” The button then runs a script to download and install a user friendly Linux distro. The world is then saved.
Double and triple buffering are techniques in GPU rendering (also used in GPU compute, though only up to double buffering there, since triple buffering is pointless when headless).
Without them, if you want to do some number crunching on your GPU and have your data on the host (“CPU”) memory, then you’d basically transfer a chunk of that data from the host to a buffer on the device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer, your GPU is idle because you’re waiting for the copy to finish, so you’re wasting precious GPU compute.
So GPU programmers came up with a trick to try to reduce or even hide that latency: double buffering. As the name suggests, the idea is to have not just one but two buffers of the same size allocated on your GPU. Let’s call them buffer_0 and buffer_1.

The idea is that if your algorithm is iterative, and you have a bunch of chunks in host memory on which you want to apply the same GPU code, then at the first iteration you take a chunk from host memory and send it to buffer_0, then run your GPU code asynchronously on that buffer. While it’s running, your CPU has control back and can do something else, so you immediately prepare for the next iteration: you pick another chunk and send it asynchronously to buffer_1. When the previous asynchronous kernel run is finished, you rerun the same kernel, but this time on buffer_1, again asynchronously. Then you copy, asynchronously again, another chunk from the host into buffer_0, and you keep swapping the buffers like this for the rest of your loop.
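Here’s a rough sketch of that ping-pong pattern as OpenCL host code (my own choice of API for illustration, not necessarily what the commenter uses). The kernel, the two in-order queues, and the chunk layout are all assumed to exist already, and error checking is omitted:

```c
#include <CL/cl.h>

/* Hypothetical setup assumed to exist: a context, one in-order queue for
 * copies (q_copy), one for kernels (q_comp), a built kernel taking a single
 * buffer argument, the host data, and the chunk geometry. */
void double_buffered_run(cl_context ctx,
                         cl_command_queue q_copy, cl_command_queue q_comp,
                         cl_kernel kern, const char *host_data,
                         size_t nchunks, size_t chunk_bytes, size_t gws)
{
    cl_mem buf[2];
    cl_event kernel_done[2] = { NULL, NULL };

    for (int b = 0; b < 2; ++b)
        buf[b] = clCreateBuffer(ctx, CL_MEM_READ_WRITE, chunk_bytes, NULL, NULL);

    for (size_t i = 0; i < nchunks; ++i) {
        int cur = i & 1;                        /* ping-pong: buffer_0 / buffer_1 */
        cl_event copy_done, this_kernel;
        cl_event prev_user = kernel_done[cur];  /* kernel that last read buf[cur] */

        /* Asynchronous host->device copy of chunk i into the buffer the currently
         * running kernel is NOT using. It still has to wait for the kernel from
         * two iterations ago, which read this same buffer. */
        clEnqueueWriteBuffer(q_copy, buf[cur], CL_FALSE, 0, chunk_bytes,
                             host_data + i * chunk_bytes,
                             prev_user ? 1 : 0,
                             prev_user ? &prev_user : NULL, &copy_done);
        if (prev_user) clReleaseEvent(prev_user);

        /* Launch the kernel on this chunk as soon as its copy completes.
         * Kernel args are captured at enqueue time, so re-pointing the kernel
         * at the other buffer each iteration is fine. */
        clSetKernelArg(kern, 0, sizeof(cl_mem), &buf[cur]);
        clEnqueueNDRangeKernel(q_comp, kern, 1, NULL, &gws, NULL,
                               1, &copy_done, &this_kernel);
        clReleaseEvent(copy_done);

        kernel_done[cur] = this_kernel;
        /* While q_comp crunches buf[cur], the next loop iteration is already
         * feeding the other buffer through q_copy: that's the latency hiding. */
    }

    clFinish(q_comp);
    clFinish(q_copy);
    for (int b = 0; b < 2; ++b) {
        if (kernel_done[b]) clReleaseEvent(kernel_done[b]);
        clReleaseMemObject(buf[b]);
    }
}
```

Whether the copy actually overlaps the kernel also depends on the driver and on things like pinned host memory, but that’s the general shape of the trick.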
Now some GPU programmers don’t want to just compute stuff; they also might want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel will keep overwriting buffers before they’re done being shown on screen, which causes tearing.
So those programmers pushed the double buffering idea a bit further: just add an additional buffer to hide the latency from sending stuff to the screen, and that gives us triple buffering. You can guess how this one will work because it’s exactly the same principle.
Lol, why own up to adding animations the system can’t handle when you can blame app and web devs? Gnome users always know where the blame should be laid, and it’s never Gnome.
If the system can’t keep up with the animation of e.g. GNOME’s overview, the fps halves for a moment because of double-buffered vsync. This is perceived as stutter.
With triple-buffered vsync the fps only drops a little (e.g. 60 fps -> 55 fps), which isn’t as big a drop, so the stutter isn’t as noticeable (if it’s noticeable at all).
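To put rough numbers on that (assuming a 60 Hz display, so a vblank every ~16.7 ms): if an animation frame takes ~17 ms to render, double-buffered vsync misses the deadline and has to wait for the next vblank, so each frame effectively costs 2 × 16.7 ms ≈ 33 ms and you drop to 30 fps. With a third buffer the GPU can start on the next frame right away, so only the occasional frame is shown twice and you land around 55 fps instead.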
Biased opinion here as I haven’t used GNOME since they made the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with high vRAM speed to hide that and thus they need to borrow techniques from game/GPU programming to make GNOME more fluid for users with less beefy cards.
Not only slow, it drops frames constantly. Doesn’t matter how good your hardware is.
There’s always the Android route, why fix the animations when you can just add high framerate screens to all the hardware to hide the jank. Ah, who am I kidding, Gnome wouldn’t know how to properly support high framerates across multiple monitors either. How many years did fractional scaling take?
Default? Top left. It should be visually appealing to most people, and it would honestly just be odd to have the default wallpaper be cartoon styled. And the bottom left looks too much like W11. But I think they should all be included as options.
I use linux to run my law office, so it can be done. Most of what I use is web-based these days, so headaches are minor. That being typed, I’ve been using linux off and on since the 1990s, and there was a fair amount of learning involved. A few notes:
-Libreoffice is good enough for document drafting, unless you’re extremely reliant on templates generated in Word. Even then, that’s a few hours of clerical work that you can farm out with, presumably, no confidentiality issues to flag. Also bear in mind that if you end up using different Linux distributions on more than one computer, then you may run into minor formatting differences between different versions of your word processing software. Microsoft Office won’t be a reliable option unless you run Windows as a virtual machine. There are workarounds, but they aren’t business ready.
-Some aspects of PDF authoring can be tricky if you’re doing discovery prep, redaction, and related tasks in-house. This is very workflow-specific, so if you’re not a litigator or your jurisdiction doesn’t have a lot of specific requirements for pdf submissions, it might not be something that you need to worry about. If it becomes a problem, then a Windows virtual machine might be a solution.
-Video support depends greatly on the linux distribution, so you may want to do a bit of research and avoid distributions like Fedora, where certain mainstream AV formats are not supported by default for philosophical/licensing reasons.
-Compatibility with co-counsel and clients will be hit or miss. I don’t let anything leave my office that hasn’t been converted to PDF and I only do collaboration when there is a special request to do so. I can fall back on a computer that I have which runs Office. It sounds like you have more than one computer, so you can have a backup plan.
-Hardware support is critical. If you need to videoconference and it turns out that your webcam doesn’t have a linux driver, then you may be hosed. Research and test on the front-end so that you don’t find yourself in an embarrassing situation of your own making.
-Learning curves cost money. If you’re using an entirely new set of user software AND you’re hopping between different distributions to find the version of linux that works for you, you’ll waste a LOT of time that you could be using to generate billable work.
Paper printing is no big deal if you stick carefully to your first thought about linux-compatible hardware.
I use Brother laser printers whenever I need a hard copy. That brand tends to work well with linux, but research the model number in conjunction with the distribution that you’re using before you purchase.
Your point about locked in software is very important. Even in my own industry, some of my earlier jobs relied on custom Windows software for billing, dictation, document creation, and more. A lot of former nonstarters have been pushed to the cloud, but there are still challenges.
Indian here. There are a lot of Indians that love tech experimentation and “jugaad”, and just the mere act of dailying Linux allows us to step up our IT industry game for zero cost.
I guess there’s that beginner period when that should be allowed. I kind of wish it would happen to me again, instead of daily driving boring Arch systems with no incentive to ever change.
Yeah, when you’re a beginner or when you get back into Linux you have like a grace period to reproduce a productive environment; then you’re worried about changing too much in case it all breaks and goes wrong.
Wait for Arch to slowly grind away at your sanity. One day you will realise that stability is pretty damn important, and the hopping will start once again.
This is pretty cool. We really have moved over from Reddit, since we already have some of the niche communities. There are plenty of Linux users already, so it shouldn’t take long for people to start posting there.
Bossmang, I know that we’re paying more for RHEL licences than for the entire IT department, but if we switch to Arch we’ll cut down the costs significantly.
I am so happy that my parents didn’t buy me a better laptop a decade ago, so I was forced to use a shitty thinkpad laptop. After reading online, I figured out that Linux makes it faster…
Over on Nate's other blog entry he indicates this:
The fundamental X11 development model was to have a heavyweight window server–called Xorg–which would handle everything, and everyone would use it. Well, in theory there could be others, and at various points in time there were, but in practice writing a new one that isn’t a fork of an old one is nearly impossible
And I think this is something people tend to forget. X11 as a protocol is complex, and writing an implementation of it is difficult to say the least. Because of this, we've all kind of relied on Xorg's implementation of it, and things like KDE and GNOME piggyback on top of that. However, nothing (outside of the pure complexity) prevented KWin (just as an example) from implementing its own X server. KWin having its own X server would have let it handle the things KWin specifically needs in its own way.
A good parallel is how insanely complex the HTML5 spec has become, and how now pretty much only Google can write a browser for that spec (with Firefox thankfully also keeping up), while everyone else is just cloning that browser and putting their specific spin on it. But if a deep enough core change happens, it's likely to find its way into all of the spins. And that was some of the issue with X.

Good example here: because of the specific way X works, an "OK" button (as an example) is actually implemented by your toolkit as a child window. Menus? Those are windows too. In fact, pretty much no toolkit uses X's drawing primitives anymore. It's all windows with lots and lots of attributes, and your toolkit (Qt, Gtk, WINGs, EFL, etc.) handles all those attributes so that events like "clicking a mouse button" work as if you had clicked a button and not a window that's drawn to look like a button.
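To make that concrete, here's roughly what a toolkit ends up doing under X11, written as a stand-alone hypothetical Xlib example (not actual Qt/Gtk code; build with -lX11). The "button" really is just a child window that gets drawn on and listened to:

```c
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int scr = DefaultScreen(dpy);

    /* A plain top-level window standing in for the "application". */
    Window top = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 300, 200,
                                     0, BlackPixel(dpy, scr), WhitePixel(dpy, scr));

    /* The "button": just another window, parented into the first one. */
    Window button = XCreateSimpleWindow(dpy, top, 20, 20, 100, 30,
                                        1, BlackPixel(dpy, scr), WhitePixel(dpy, scr));

    XSelectInput(dpy, button, ExposureMask | ButtonPressMask);
    XMapWindow(dpy, top);
    XMapWindow(dpy, button);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose && ev.xexpose.window == button) {
            /* A toolkit would draw the button face and its label here. */
        } else if (ev.type == ButtonPress && ev.xbutton.window == button) {
            /* A toolkit would fire a "clicked" callback here. */
            break;
        }
    }

    XCloseDisplay(dpy);
    return 0;
}
```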
That's all because these toolkits want to do things that X won't explicitly allow them to do. Now, the various DEs could just write an X server that has their own concept of what a button should do, how it should look, and so on. And that would work, except that, say, you fire up GIMP, which uses Gtk, and Gtk has its own idea of how that widget should look and work, and boom, things break on the KDE X server. That's because of the way X11 is defined: there's this middle man that always sits there dictating how things work. In X, clients draw to that middle man, not to the screen. And that's fundamentally how X and Wayland are different.
I think people think of Wayland in the same way as X11: that there's this Xorg that exists and we'll all be using it and configuring it. And that's not wholly true. In X we have the X server, and in that department we had Xorg/XFree86 (and some other minor bit players). The analog for that in Wayland (roughly, because Wayland ≠ X) is the compositor, of which we have Mutter, Clayland, KWin, Weston, Enlightenment, and so on. That's more than the single implementation we're used to, and it's possible because the Wayland protocol is simple enough to allow multiple implementations.
The skinny is that a Compositor needs to at the very least provide these:
wl_display - This is the protocol itself.
wl_registry - A place to register objects that come into the compositor.
wl_surface - A place for things to draw.
wl_buffer - When those things draw there should be one of these for them to pack the data into.
wl_output - Where rubber hits the road pretty much, wl_surface should display wl_buffer onto this thing.
wl_keyboard/wl_touch/etc - The things that will interact with the other things.
wl_seat - The bringing together of the above into something a human being is interacting with.
And that's about it. The specifics of how to interface with the hardware and whatnot are mostly left to the kernel. In fact, compositors are pretty much just doing everything in EGL; that is, KWin's wl_buffer (just a random example here) is an eglCreatePbufferSurface with other stuff specific to what KWin needs, and that's it. I would assume Mutter is pretty much the same case. This gets a ton of the formality that X11 required out of the way and allows compositors more direct access to the underlying hardware. Which was already pretty much the case for all of the window managers since 2010ish: all of them basically window-manage in OpenGL, because OpenGL allowed them to skip a lot of X. Of course there is GLX (that one bit where X and OpenGL cross), but that's still so much better than dealing with Xlib and everything it requires, which would routinely call for "creative" workarounds.
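For a sense of how lean the client side of those interfaces is, here's a minimal sketch using libwayland-client (assumes a running compositor; build with -lwayland-client). It connects, listens to the registry, binds wl_compositor, and creates a bare wl_surface; giving the surface a role and attaching a wl_buffer are left out to keep it short:

```c
#include <stdio.h>
#include <string.h>
#include <wayland-client.h>

static struct wl_compositor *compositor = NULL;

/* The compositor announces every global it offers (wl_compositor, wl_shm,
 * wl_seat, wl_output, ...) through the registry. */
static void registry_global(void *data, struct wl_registry *registry,
                            uint32_t name, const char *interface, uint32_t version)
{
    (void)data;
    printf("global %u: %s (version %u)\n", name, interface, version);
    if (strcmp(interface, wl_compositor_interface.name) == 0)
        compositor = wl_registry_bind(registry, name, &wl_compositor_interface, 1);
}

static void registry_global_remove(void *data, struct wl_registry *registry, uint32_t name)
{
    (void)data; (void)registry; (void)name;
}

static const struct wl_registry_listener registry_listener = {
    .global = registry_global,
    .global_remove = registry_global_remove,
};

int main(void)
{
    /* wl_display: the connection to whatever compositor $WAYLAND_DISPLAY points at. */
    struct wl_display *display = wl_display_connect(NULL);
    if (!display) {
        fprintf(stderr, "no Wayland compositor found\n");
        return 1;
    }

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);
    wl_display_roundtrip(display); /* block until all globals have been announced */

    if (compositor) {
        /* wl_surface: the "place for things to draw" from the list above. */
        struct wl_surface *surface = wl_compositor_create_surface(compositor);
        /* A real client would now give it a role (e.g. xdg_toplevel),
         * attach a wl_buffer full of pixels, and commit. */
        wl_surface_destroy(surface);
    }

    wl_registry_destroy(registry);
    wl_display_disconnect(display);
    return 0;
}
```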
This is what's great about Wayland, it allows KWin to focus on what KWin needs, mutter to focus on what mutter needs, but provides enough generic interface that Qt applications will show up on mutter just fine. Wayland goes out of its way to get out of the way. BUT that means things we've enjoyed previously aren't there, like clipboards, screen recording, etc. Because X dictated those things and for Wayland, that's outside of scope.
That’s my problem with this. It tries to be a desktop display server protocol without unifying all desktop requirements. Sure, X11 is old and has unnecessary things that aren’t relevant anymore; however, as someone who builds their own DE (e.g. tiling window managers), I see it as the end of this masterrace, unless everybody moves to wlroots. Flameshot, for example, is already dealing with this, having at least 5 implementations just for Linux, and only wlroots and X11 are standards.
Also, imo, having windows in windows is useful when you want to use your favourite terminal in your favourite IDE. But as you said, DEs can implement it simply. Let’s say wlroots implements this but others decide otherwise; for those, the app won’t run.
Another example, one that affects my app personally, is the ability to query which monitor the pointer is on. Wayland doesn’t care about having these, so I don’t care about supporting Wayland. And I’m sad about this, because X is slowly fading away, so new apps will not run on my desktop.
Moreover, with X11 I could write my own hotkey daemon in my language of choice; now I would have to fork the compositor.
Also, imo, having windows in windows is useful when you want to use your favourite terminal in your favourite IDE.
The wayland way to do that is to have the application be a compositor, they made sure that nesting introduces only minimal overhead. And that ties in with the base protocol being so simple: If you only need to deal with talking to the compositor you’re running on, and to the client that you want to embed, a wayland compositor is indeed very small and lean. Much of the codebase in the big compositors deals with kms, multiple monitor support, complex windowing logic that you don’t need, etc.
Oh, and just for the record, that doesn’t mean you can’t undock the terminal: just ask the compositor you’re running on for a second window and compose it there. You can in principle even reparent (the client disconnecting from one compositor and connecting to the other), but I think that’s only standardised for the crash case; there’s no standard protocol to ask a client to connect to another compositor. You’d just need to standardise the negotiation protocol, not the mechanism.
I know this isn’t the answer you were looking for, but they’re all the same. Arch, Debian, Ubuntu, Fedora, I’ve tried them all, and there isn’t a discernable difference.
Well, I’m currently using VMware on Ubuntu to run Win 10 and Kali Linux. I don’t know what exactly caused the problem, it was either Ubuntu’s updates or VMware’s updates, but now Win 10 is unusable because it crashes (same with Kali Linux)
Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.
I would second Debian for stability, it’s what I use for all my VM servers. I have always preferred KVM however, as I had a lot of trouble with VMware hogging my cpu years ago. KVM has the virtual machine manager available for GUI monitoring but I’m not sure how far it goes for creating new VMs as I’ve always handled the setup directly from command line.
Since you’ve been on Ubuntu, I would suggest Debian. The commands are pretty much the same across the board, and it’s one of the most stable distros in the wild.
Well there’s your mistake: using VMware on a Linux host.
QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host which leads to fewer problems.
Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.
Maybe, but it’s still Linux. There’s always an escape hatch if the Ubuntu packages don’t cut it. But I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they also run Ubuntu and work just fine.
It’ll definitely run Kali well. Windows will be left without hardware acceleration for 2D/3D, so it’ll be a little laggy, but it’s usable.
VMware has its own driver that converts enough DirectX for Windows to run smoother and not fall back to the basic VGA path.
But VMware being proprietary software, changing distro won’t make it better, so either you deal with the VMware bugs or you deal with a stable but slow software-rendered Windows.
That said on the QEMU side, it’s possible to attach one of your host’s GPUs to the VM, where it will get full 3D acceleration. Many people are straight up gaming in competitive online games, in a VM with QEMU. If you have more than one GPU, even if it’s an integrated GPU + a dedicated one like is common with most Intel consumer non-F CPUs, you can make that happen and it’s really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Be sure your CPU and motherboard support it properly before investing though, it can be finicky, but so awesome when it works.
On Vista and up, there’s only the Display Only Driver (DOD), which gets resolutions and auto-resizing to work, but it has no graphical acceleration in itself.
I use the virt-manager GUI to control KVM easily, but you can control everything with virsh on the command line too. I dislike VMware and VirtualBox; neither is needed. Also, with the terminal client virsh you can do much more configuration than with virt-manager alone.
Remember that Desktop and Server editions are very different in terms of stability. Ubuntu has got to be one of the most widely used Linux distros for servers, if not the most; that’s where the money really is for them, so it’s more deeply tested before release to the public at large. But in my experience, over the last decade or so, Ubuntu has been painfully lacking on too many fronts in its desktop versions.
My only issue with QEMU is that folder sharing is not a great experience with Windows guests. Other than that I’ve had a great experience, especially using it with AQEMU.
So, what, people are only allowed to like your content? Can’t possibly be shit posts or anything like that, clearly it’s just all the downvoters who are wrong.
OR a downvote is as meaningful as an upvote, and it’s pretty childish to complain about them. (Especially considering that many instances don’t even count or display them)
See? I didn’t consider your post harmful, but I did consider it worthy of a downvote, simply due to how I felt it contributed to the discussion.
And people who don’t feel like I’m contributing meaningfully can downvote my posts. Almost as if that was the point of the button, to give an indicator of how much readers liked or disliked the content.
Negative opinions are every bit as valid as positive ones. Even more so in a culture where criticism is considered “rude” and socially suppressed.
It is criticism, and certainly from a subjective standpoint it’s very valid criticism
But I’m free to downvote criticism I don’t like or agree with 😁 just like you’re free to downvote a comment you felt was rude, in addition to pointing that out. It would also mean something different if you didn’t downvote but still commented that I was being rude.
Almost like the downvote was providing useful information
ZFS is a crazy beast that’s best for high end server systems with tiered storage and lots of RAM.
ext4 is really just a basic file system. It’s superior to NTFS and FAT as it does have extra features to try to prevent corruption, but it doesn’t have a large feature set.
Btrfs is kind of the new kid on the block. It has strong protection against corruption and better real-world performance than ext4. It also has more advanced features like subvolumes and snapshots; subvolumes are basically virtual drives.
Another few older options include things like XFS but I won’t go into those.
Yes, but most filesystems are already optimized for flash storage. The Arch wiki says F2FS is prone to corruption on power loss. Based on that and the lack of information on its anti-corruption measures, I’m inclined to think it doesn’t have any, and that’s why it’s faster. I wouldn’t use it in a device that isn’t battery-powered.
Catastrophic battery failure isn’t really any less likely than catastrophic power supply failure (conceptually. If you use a brandless grey power supply, results may vary).
That link is for kernel 5.14, so I’d say those results are pretty much invalid for most users (unless you’re actually on it, or the 5.15 LTS kernel). There have been a ton of improvements in every filesystem since then, with pretty much every single kernel release.
A more relevant test would be this one - although it talks about bcachefs, other filesystems are also included in it. As you can see, F2FS is no longer the fastest - bcachefs and XFS beat it in several tests, and even btrfs beats it in some tests. F2FS only wins in the Dbench and CockroachDB benchmarks.
Not quite. Bcachefs can be used on any drive, but it shines the best when you have a fast + slow drive in your PC (eg NVMe + HDD), so the faster drive can be used as a cache drive to store frequently accessed data.