Well, on the other side you have operating systems you can't really screw up too badly because they're locked down harder, so perhaps it's fear of the unknown?
Or in the office, the hardware-software relationship between the laptop and Windows (and in some respects Linux) is strained at best: drivers, power management, and so on get crappy. E.g. after a year or two of updates it gets out of control and nice things like hibernation stop working. It's usually a driver for some small thing you don't care about, whose vendor forgot to read the Windows specification change, and now it can't handle power management properly. Oops, the computer refuses to sleep, your bag is burning, and the battery is at 1% when you pick the computer up again.
I completely understand that with Windows, especially with hibernation. Like, what the fuck is "Windows Modern Standby"?
But with Linux, it depends on the distro you use.
If you're using something like Pop!_OS, I can pretty much guarantee you're never going to run into a power management issue, or even a driver issue for that matter, since it's based on Ubuntu and is very well supported.
That's a lot of money, but same sentiment in the opposite direction. I would avoid any dev job requiring me to use Windows. Chances are they're also using some crap tech stack.
I've seen plenty of shit stacks on macOS tbf. Windows has better window management, which saves a lot of time when you're juggling separate windows.
I'm not sure; many developers use a Mac to get working Unix tools alongside the "enterprise" tools the company mandates for everyone, like Teams and other crap. Sadly, many of these tools work like crap on Linux, and at best the web version is workable.
You're confusing developers with power users here. At my company, the developers can do one thing well, but are far, far from power users with any technology. The number of times I've seen them get stuck at a simple error message, throwing their hands up thinking they don't have permissions or that something is actually broken, without doing the least bit of troubleshooting, is both baffling and frustrating.
I will always choose Windows over Mac if I have to. Using macOS is infuriating on so many levels; I'd rather give myself the bullet (which doesn't mean much tbh). At least I can fix up Windows for my VMs (for apps that don't work under Wine), which makes it OK to use.
I know that this is irrational and I try to not let it influence my perception of people, but my brain is usually wired to “Mac user detected: technical opinion discarded”
ik, and I do notice it (I'm currently doing an apprenticeship in software development). Maybe if you noticed that it's mostly the frontend ("JS bro") crowd, you'd be aware that JS and a few markup "languages" require little to no technical knowledge.
EDIT: lmao just noticed the username “mac”. Apple fans are truly a special breed
Linux > Windows 10 > Windows 11 > macOS is my experience. I just can't stand Apple and their walled gardens. I hate that they try to force me to use their shitty cloud and prevent me from installing third-party apps. Windows 11 hurts my eyes. And as a W10 refugee that's gotten used to Linux, I think it's tolerable.
In my experience, win 8.1 beats win 10: less CPU and RAM hogging, less telemetry, and overall less frustration. Although, yeah, you'd have to replace the Metro crap with something less tablet-oriented.
Well, duh. Kinda funny how Windows Server is a better desktop Windows than regular Windows. Basically, you get less Candy-Crush-like crap, plus only security updates, as far as I remember. But yeah, there are different unnecessary features (unless you're in a corporate environment, ofc).
Although, I’m not really sure nowadays since I haven’t used windows for a few years 😅
Honestly, you can use a Mac perfectly well without ever signing into iCloud, plus you don't really ever get prompted to sign in unless you click on a feature that requires iCloud, like the iCloud tab in Settings.
You can even use apps like Mail without ever signing into iCloud.
I’m an Android fan, but I do like the walled garden for iPhones. There are so many people who just do not understand how to protect their privacy online, and phones are a treasure trove of personal information. I’ve no doubt that the tight controls on iOS have saved many people from identity theft due to their own negligence. That, combined with the ease of use and the superior accessibility features over Android makes iPhone the better choice for older generations who don’t understand technology as well.
I would agree that a reasonably locked down device helps certain audiences stay secure, but to me that always sounds like a convenient excuse. Surely they could at least implement some way to regain control, even if that meant unlocking the bootloader and flashing the device, which is not something your average person would/could do.
A package typically includes the program and its data inside the package itself; it's not just an install script. Imagine if Chrome's MSI installer were simply a wrapper that also downloaded the browser, and imagine there were a vulnerability in this so that it downloaded and installed something else. Since the package didn't include the program files, it couldn't tell whether they were genuine. All it fetched was the MSI, a download that initially passed the expected checksum (if there even is one).
Additionally, file lists help ensure that programs and packages don't conflict with one another. What if you want Chromium and Chrome at the same time? Can you do that? Simply wrapping an MSI doesn't guarantee it. Perhaps there are conditionals in an installer that include a vendored library under some circumstances, which would make the two conflict.
What about package removals? Some programs' uninstallers leave a bunch of junk behind. Since packages very often contain their own file lists, they simply delete their files when they're upgraded or removed. If a package manager puts full trust in an MSI to always be exactly correct, then it completely loses control over managing file removals.
I could go on and on with more examples, but "run this binary installer" is the Wild West of putting software on your system. It's mostly the status quo on Windows, but it's a very poor standard. Other operating systems solved this problem with proper packaging decades ago.
When building a package from sources, it makes sense to wrap installers, but then you produce a package that is typically distributed by a mirror. That package is what you download, and it contains the source of truth: it's trusted to be what it claims to be and to do what it's supposed to do, with no doubts about consistency or security.
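To make the file-list point concrete, here's a minimal sketch of the idea (a toy manifest format I made up, not any real package manager): tracking exactly which files a package owns is what makes clean removals and conflict detection possible.

```python
import json
from pathlib import Path

DB = Path("/var/lib/toypkg")  # hypothetical per-package file-list database

def install(name: str, files: dict[str, bytes]) -> None:
    """Refuse to install if any file is already owned by another package."""
    owned = {
        f
        for p in DB.glob("*.json")
        if p.stem != name  # ignore our own old manifest on reinstall
        for f in json.loads(p.read_text())
    }
    conflicts = files.keys() & owned
    if conflicts:
        raise RuntimeError(f"{name} conflicts with installed files: {conflicts}")
    for path, data in files.items():
        Path(path).parent.mkdir(parents=True, exist_ok=True)
        Path(path).write_bytes(data)
    DB.mkdir(parents=True, exist_ok=True)
    (DB / f"{name}.json").write_text(json.dumps(sorted(files)))

def remove(name: str) -> None:
    """Delete exactly the files the package installed, leaving no junk behind."""
    manifest = DB / f"{name}.json"
    for path in json.loads(manifest.read_text()):
        Path(path).unlink(missing_ok=True)
    manifest.unlink()
```

A wrapped MSI can't give you either guarantee, because the package manager never knows which files the installer actually touched.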
Either the community on GitHub, or someone inside Microsoft.
You can find their repository here (I think most people here are not interested in it tho lol)
I packaged some software for winget back when I was still using Windows, and yes, it runs the MSI (or exe) silently under the hood. Installation steps that are usually done through the GUI are automated, much like Homebrew does it.
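For the curious, the "silent under the hood" part for MSIs boils down to a command like this (path hypothetical; /i installs the package, /qn suppresses the GUI):

```python
import subprocess

# msiexec is Windows' built-in MSI installer; /qn ("quiet, no UI") is what
# lets winget automate an otherwise interactive setup.
subprocess.run(["msiexec", "/i", r"C:\temp\example-app.msi", "/qn"], check=True)
```

exe installers are messier, since every vendor picks their own silent-install flags.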
Yeah, but it's mid at best. Many apps open a GUI installer even with winget. Also, updates for many apps don't work (if the app doesn't record its version properly in the registry).
I know a lot of people like macOS, and I'm sure they get a lot done with it. For me, however, it's easily my least favorite popular OS. That's even considering the terminal running zsh by default, which is miles ahead of Windows.
A quirk that recently bit us at work is that Safari's maximum allowed version is capped by your OS version. If it were just me as a user, I'd download a third-party browser. As a developer, however, I have to build solutions that work in every "reasonable" browser. This means I can't use features that every modern browser has, including current Safari, because the Safari from four years ago didn't have them.
This used to be the case with IE. You're always going to have to support at least one legacy browser; getting rid of that is one of the few real benefits of everyone moving to Chromium-based browsers.
Yeah, thankfully I never had to develop with IE in mind. Though I have heard a lot of people dislike it for that reason.
You're totally right about that being a benefit of everyone moving to Chromium. Thankfully, Firefox has kept pretty up to date with new features/standards too.
I've been a software engineer for many years, so trust me when I say this has nothing to do with how hard or easy it is to install. I ran Gentoo at some point, so I'm not exactly CLI-averse. The problem isn't the installation, it's the maintenance. Shit just keeps breaking for no reason, and I'm tired of figuring out how to fix it.
Linux is simply an enormous timesink. It constantly needs handholding and babysitting in order to work, and it doesn't even reward you with a superior user experience, just a steady stream of problems to fix. Windows might not be perfect, but at least it works. Meanwhile, Linux is like an insecure girlfriend: it constantly needs reassurance that you still love it.
Linux needs constant babysitting? Hmm, I wonder why the majority of internet servers run Linux, not Windows, even in critical infrastructure where stability is valued over cost.
However, you can't choose a bad distro (bad for your needs, that is) and expect a flawless experience. When I read your first sentence I expected you to be a video editor, or in some field where the industry-standard software is Windows-only. But if you're a developer, it's 100% your fault. I've been running Linux for over a decade with zero problems. The only time I had a problem, I was running Arch (btw) and blindly updating the system daily.
You aren't dynamically changing configs, libraries, and programs on a production server like you are on a user-facing system. That's the killer. Linux servers are only stable when you leave them alone.
Updates to servers are generally beta-tested on identical hardware in a lab, and only when you have a functioning image do you push it to production. Expecting that kind of treatment on a user-facing system just to, say, update the web browser would be beyond unacceptable.
As long as GNU/Linux systems continue to have ABI compatibility problems and general bugginess between updates, they will never be considered decent user-facing systems.
Also, the code for CLI programs is far more road-tested than GUI-related code, since there are major corporate efforts to make Linux servers more stable. GUIs aren't needed on servers, so they don't get the same level of attention; what attention they do get comes from the KDE and GNOME foundations, which don't have nearly the same kind of money.
There's a reason people are celebrating Valve contributing to KDE and related GUI projects: there's finally some real money being thrown at the problem, with real results.
So I have had zero problems with Linux, therefore I lack knowledge and am overpaid? You have problems, therefore you are paid fairly? Hmm, sounds very logical. Any critical infrastructure project would be lucky to have you.
Furthermore, you told another commenter in this same thread that they reek of incompetence because their Windows install took 7 hours, yet I am overpaid because I don't have any problems with Linux? So a competent developer should breeze through Windows but struggle with Linux? Is that it? Kinda contradictory, don't you think?
I was running Linux Mint until the other day when I found out Linux Mint Debian Edition existed so I installed that. I’m a recent Linux convert and I can safely say that Lemmy might have partially been the reason. I’ve been loving it so far.
Thinking about it, it’s weird that there hasn’t been any real change in operating systems for about 50 years. Unix and its derivatives seem to be almost the only game in town, apart from desktops running Windows.
I think the last one to make any real headway was BeOS and they’ve been dying a thousand deaths ever since Apple bought NeXT instead of them. Though admittedly that perspective is coming from a person who used BeOS once in the 90s and has never touched Haiku.
It’s because you don’t want to reinvent the wheel all the time. It sucks doing it. Lots of effort. It’s much better to build on existing stuff and maybe improve it for your needs.
But that's the thing: is there only one wheel? Maybe wheels are a bad metaphor here, but isn't it weird that there aren't any fundamentally new concepts? Unix was developed basically during the preschool years of computing, and we all just kind of stuck with its concepts.
Depends on the level of abstraction you're looking at. Operating systems today are vastly more capable of organizing different processes, distributing work among multiple CPU cores, managing CPU caches, etc. I guess the von Neumann architecture has just proven really successful in practice, and von Neumann machines require a certain set of capabilities from their OSes.
Maybe look at embedded systems, where we find a bit more variety. Things like DSPs or microcontrollers.
There are other engine designs (e.g. the rotary engine), but the four-stroke has over a century of testing, improvements, and refinements behind it. A new design can adopt some of those refinements, but it would still have to catch up on decades of innovation and testing.
On the Unix side, there's the evolution of the POSIX standard (which was based on Unix).
I would point out, by comparison, that piston engines are effectively obsolete for certain applications. Most aircraft operate on some type of jet engine, which involves the same core concepts of thermodynamics and aeronautics but is still fundamentally different. The two also optimize for different criteria, which is why neither jet engines nor piston engines hold a monopoly across every class of vehicle.
This is really stretching the metaphor, but my point is that there will be room for rethinking paradigms as our applications of computers grow to include things that weren't originally planned for. In a mature technology, though, there's a lot of established precedent, and that's not easily overcome. It takes something that can transform the field, the way jet engines made new aircraft possible.
LMDE 6 came out within the last couple of months. It's based on Debian 12, which, at the time of writing, is less than six months old.
Upgrading is still wise every couple of years because the base Debian release also reaches EOL, but yes, rolling updates occur constantly in the meantime. Provided the system owner allows them to, anyway.
Thank you, Callyral. I didn’t know either. But now I’m trying to learn Linux again after 30 years of not touching it, so this is helpful.
If I may ask an additional, possibly stupid question (coming from Windows/Mac): as an init system in Linux, after you get past BIOS and POST at power-up, is systemd also responsible for the initial OS boot process (the "bootstrap" or Boot Manager in DOS/Windows), or is that another process altogether?
Or, asked another way, does systemd load the Linux kernel, and if not, what does?
Just so you know, I have no real skin in this game yet; I’m just trying to figure out where systemd starts and stops so that I can follow the [endless] debate, lol.
Or, asked another way, does systemd load the Linux kernel, and if not, what does?
Immediately after the BIOS/POST, the first thing that starts is the boot loader, usually a piece of software called GRUB. There's a small piece of GRUB in the Master Boot Record on the drive that then loads the rest of GRUB from /boot. /boot has to be a basic partition so that the early GRUB code can read it, so if you use something a bit fancier (like LVM), you'll usually have a separate small ext2 or FAT partition just for /boot.
GRUB shows a list of available kernels, and other operating systems (if any are installed), based on a config in /boot.
Once you select a kernel to boot (or wait a few seconds for it to automatically pick the default), GRUB starts loading the kernel. Alongside it in /boot there's a small disk image called the "initial ramdisk", usually named something like initrd or initramfs. This ramdisk contains all the drivers needed to mount your root partition: storage drivers (NVMe, SATA, etc.), file system drivers (ext4, ZFS, XFS, etc.), plus LVM and RAID drivers if needed, and so on. If the root disk is on an NFS network share (not as common any more, but still doable), it also needs to contain drivers for your network card. It also contains a few basic utilities, usually provided by BusyBox.
Some Linux distros (such as Debian) build a custom initramfs, whereas others (like Fedora) have a generic one containing all possible drivers.
GRUB loads the kernel together with the initial ramdisk and hands control to the kernel, which runs the init inside the ramdisk. That mounts the real root partition and then hands control over to the first real process of the OS: the init process, which these days is usually systemd but can be a different one like sysvinit or runit.
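If you ever want to check which init you ended up with, PID 1 is that first process. A quick look (Linux-only, obviously):

```python
from pathlib import Path

# PID 1 is whatever the kernel handed control to after the initramfs stage;
# on most modern distros this prints "systemd".
print(Path("/proc/1/comm").read_text().strip())
```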
Okay, yeah. This makes much more sense now. I really appreciate it. I’ve been seeing the GRUB menu in LiveUSB boots but didn’t understand that it was part of the initial boot process for general Linux systems (for whatever reason I had it stuck in my head that it was just for USB booting). And you’ve placed systemd exactly where it makes sense to me as the init process for that OS.
That is extremely helpful. Thank you so much for taking the time to write the entire boot order, because it just got crystal clear for me. Much appreciated!
Oversimplified: it's the service that handles starting and stopping other services, including starting them in the right order after boot. Many people hate it out of astrology and superstition; allegedly it's "bloated". But it has still become the standard on many (most?) distros, effectively replacing init.
I like init. It's simple. I like systemd as well. It's convenient. Beyond that, I don't have very strong feelings on the matter.
I think the arguments about "bloat" aren't aimed at systemd as an init system, but at the fact that systemd does so many things besides being an init system. I also don't mind systemd, but I absolutely hate systemd-resolved. I do not want my init system to proxy DNS queries by pointing my resolv.conf at 127.0.0.53. Just type systemd- and press Tab; that's "the bloat". I'm not saying the systemd devs shouldn't develop new tools, but why put them all inside one software package? systemd-homed is cool, but useless for 99% of users. Same with enrolling FIDO2 tokens in a LUKS2 volume with systemd-cryptenroll: far from useless or "bad", but still bloat for an init system.
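For anyone who hasn't run into it: here's a quick way to see whether systemd-resolved has taken over, assuming a normal glibc-style /etc/resolv.conf:

```python
from pathlib import Path

# When systemd-resolved manages DNS, /etc/resolv.conf typically points every
# query at its local stub listener instead of your actual upstream servers.
conf = Path("/etc/resolv.conf").read_text()
if "nameserver 127.0.0.53" in conf:
    print("systemd-resolved's stub resolver is handling DNS")
else:
    print("resolv.conf points at real nameservers")
```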
Now that you mention it, I find systemd messing with my DNS settings incredibly annoying as well, so I can't help but agree on that point. On a production system at work, when troubleshooting, I often need to switch DNS between local, local (in a chroot), some other server in the same cluster, and a public one, across several service restarts and the occasional reboot. Not being able to trust that resolv.conf stays as I left it is frustrating.
On the newest version of our production image, systemd-resolved is disabled.