Nothing would make me happier. I really wish it weren’t such a pain to deal with the telephony. If you check devices on postmarketOS, while some devices can boot, it’s usually the actual phone part that isn’t working, which is kind of an important part. The open hardware phones work fine, but their specs are ancient while being as expensive as flagships. I still have hope tho, as device needs have started to plateau.
Aliyaut’s logo? It is clean, but it’s hardly even identifiable as a gecko. It blends in too much with all the modern corporate logos we have today IMHO. It’s not a bad choice if they decide to go with it, but they could do better.
The author is excited but I’m not. I am not a big fan of corporations taking the free work of FOSS developers and turning it into a proprietary dystopia.
I think that having a strong public domain is good for everyone. For instance, properties like Sherlock Holmes really took off once they entered the public domain and people could write spin-offs and whatnot without worrying that a copyright lawyer would come along and sue them.
Linux is the same thing: Amazon using the kernel and stuff to build an OS on doesn’t take anything away from anyone else who uses Linux as a desktop or server environment, and in fact can lead to some good pass-back, even if it is just that the devices are easier to root. Take a look at the OpenWrt project, where Linksys built their router on top of a Linux kernel and it led to a whole ecosystem of open routers. People went out of their way to buy a WRT54G just with the intent of rooting it, and Linksys got their money either way.
It’s pretty annoying you replied to someone’s nice, well thought out comment with your own bullshit. Then speculated about something you could have googled in 7 seconds max.
Amazon using Linux isn’t the concern. What OP was referring to are things like their use of Elasticsearch. It’s basically Amazon’s version of embrace, extend, extinguish. It got so bad, that the devs of Elasticsearch changed their licensing as a way to fight against Amazon’s tactics.
Open source is great. But when other companies take the open source code as their own to the detriment of the original open source devs, that’s not sustainable. That behaviour will kill open source.
AFAIK, if you buy any computer from within the last 20 years, there’s a good chance you can get a 6.x kernel running on it. 32-bit support is fading out, though. If you buy a 64-bit computer, you’ll be able (with sufficient RAM and hard disk space) to install any modern distro on it.
Then why have I had such a terrible experience with my newer Dell XPS 13 9310? User error or proprietary b.s.? Because I have been told that the new Dells are going the more proprietary route.
I’d say that single core performance and the amount of RAM you have are the biggest issues with running anything on old hardware. Apparently, in theory, you could run even a modern kernel with just 4MB of RAM (or even less; good luck finding a 32-bit system with less than 4MB). I don’t think you could fit any kind of graphical environment on top of that, but for an SSH terminal or something else lightweight it would be enough.
However, a modern browser will easily consume a couple of gigabytes of RAM, and even a ‘lightweight’ desktop environment like XFCE will consume a couple hundred MB without much going on. So it depends heavily on what you consider to be ‘old’.
The computer at the garage (which I’m writing this with) is a ThinkStation S20 from 2011 that I got for free from the office years ago. 12GB of RAM, a 4-core Xeon CPU and an aftermarket SSD on the SATA bus, and this thing can easily do everything I need from it in this use case: browsing the web for how to fix whatever I’m working on at the garage, listening to music on Spotify, the occasional YouTube video, Signal and things like that. Granted, this was on the higher end when it was new, but maybe it gives some perspective on things.
I’m running Arch on a very early 2000s computer. Dual-core Athlon with two gigabytes of RAM, with a KDE desktop on a period-correct display. Works great as long as you are not trying to push it hard with modern tasks. It browses the internet just fine and can even watch videos of a size more appropriate for that era. But yeah, once you get into 1080p displays and high-resolution videos, or modern bloated websites, it’s definitely going to chug.
Oh, right, the screen resolution is something I didn’t even consider that much. My system has a 1600x1200 display and the GPU is a Quadro FX 570. This thing would absolutely struggle with anything higher than 1080p, but as all the parts were free (minus the SSD; 128GB drives are something like 30€ or less), this thing is easily good enough for what I use it for. It wouldn’t even be that big of a stretch to run it as a daily driver: just add a bigger SSD and maybe a bit more modern GPU with a 2K display and you’d be good to go.
And 1600x1200 isn’t that much anyway; if memory serves, I used to have that resolution on a CRT back in the day. At least moving things around is much easier today.
As old as my system is, anything much more modern than what’s already in it would be bottlenecked by the system bus. It’s PCIe 1.0, not PCIe 2, 3 or 4, lol. And SATA, early SATA at that. It still has two IDE headers. But I used to run Blender on a lot less back in the day. I have it pushing a good old 1024x768 4:3 display.
Reflecting on my first year running solely Linux (as opposed to dual-booting), I think that this culture comes from the fact that, on Linux, problems can more often than not be solved. If not solved, then at least understood. When you want to change something on Windows, or something breaks, you have far less room to maneuver.
When I was a Windows user, I’d barely ever submitted a bug report for anything, in spite of being very tech-literate. It felt hopeless, as my entire experience with the OS was that if a fix would come, it’d have to be done by someone else.
Linux treating its users like adults produces users who are more confident and more willing to contribute.
Is it even possible to report bugs to Microsoft without paid support? I always come across that Windows community forum where every solution to a problem is to update drivers, run sfc /scannow, etc. I doubt anybody on that forum can relay problems to Microsoft staff.
The Feedback Hub was introduced to fix this gap in user reports for Windows. Microsoft does actively monitor this. They respond when necessary, merge topics, deny or approve bugs/suggestions, etc. For their software, such as Terminal or VS Code, you can use GitHub issues.
Keep in mind, like most companies, Microsoft has guidelines on what employees can say when responding to any user feedback. This is why we typically see a lot of copy and paste. When it is more than that, wording is selective and you may not get more than one or two responses in total.
You can do the exact same thing in Windows; just think of the Sysinternals Suite and its power. It’s just that people on Linux expect problems, while the overwhelming majority of people on Windows/macOS are using their device expecting it to work, and if it doesn’t, they go do something else or buy another device.
Also this completely untrue notion that you cannot fix Windows or play around with its internals is very prevalent, to the point that it’s a meme, so people don’t even try.
But I have to fight the stupid OS to give me useful information. I have to install 3^(rd) party stuff. By default you only get this useless error reporting tool. Even if you report an error, you’re likely to never hear from anyone, and the chance of the error being fixed is virtually nonexistent.
On Linux the necessary information is usually readily available. The worst offender in my experience is Steam itself. You can get logs from games fairly easily, but if Steam misbehaves things can get more complicated.
I found bugs in Windows server products all the time, and there was no way of reporting them. If you opened a ticket (by paying, of course), they would never admit it was a bug. Half the time I got the impression I was the only person in the world that ever encountered said issue, and that what I was doing was a complete edge case. Which was bullshit; I would investigate and find dozens of references (which never got resolved), because it was pretty much the only way to use X product feature.
Microsoft QA and support is utter trash. You can get better support in Linux on damn near anything by some rando on IRC or the specific product forum, or, gods forbid, Reddit. There is an almost 100% chance you can fix anything on Linux if you look hard enough, even if you have to go dig through the code. Nothing like that happens in the Windows ecosystem.
Also, the types of information you find are very different. On Windows, you’ll find various forum posts about your problem, and some proposed solutions. Usually, nobody seems to know exactly what’s causing the problem, and that’s why the solutions are a bit random. The same goes for iOS-related problems too.
On Linux, you might not need forum posts, because sometimes the error message tells you what’s wrong and how to fix it. If that’s not the case, you’ll find posts about your problem, and usually there’s someone who explains what’s broken and what commands fix it.
There’s none of that guesswork about trying 7 unrelated things to see if any of them magically solve your problem. It’s straight to the point. Your problem is caused by that setting over there, and here’s how to change it.
When it comes to closed-source software developed opaquely by for-profit corporations, particularly the huge, monolithic ones like Microsoft, I generally have the attitude that, if I do discover a problem:
They won’t take my detailed report
If they do take my report, it goes straight into a shredder bin (or a massive queue where low priority problems go to die, which may as well be the same thing)
If they do read my report, then it’s likely something they already are aware of
If they don’t know about it somehow, the issue is probably so low-priority and niche that it wouldn’t escape the backlog anyway
It’s probably not nearly as bleak as I make it out to be. But when you can’t see the process, how can you tell?
With open source projects, these things can all still happen, but at least the process is more transparent. You can see exactly where your issue is, and what’s been done to it so far, if anything. Other users can discover and vouch for your problem. And if the dev team takes pull requests, and you are willing, able, and permitted to contribute, you can make the fix yourself.
Also, with open source projects, I actually want to help the developer improve their project, whereas with Windows I simply do not care and won’t donate a second of my time to a large corporation for free.
Interesting aside for anyone interested: you can subscribe to her PeerTube account with your Lemmy account by searching !veronicaexplains in your instance’s search bar (or clicking that link). Then any video she uploads in the future will show up in your Lemmy feed, and any comments you leave on Lemmy should show up on the PeerTube video! :D
The link you posted seems to have the full url embedded so it doesn’t work in my client. I think this will work, pasted as plain text: !veronicaexplains
I’m not sure if the lemmy page will fill out with her previous uploads, I can only see the one about SSH on my feed too. She seems to upload fairly regularly, and this latest video about Linux Mint was uploaded 20 hours ago. I suspect if you’re the first person to subscribe on your instance, only future videos will show up on it, but I’m not entirely sure.
I believe she made a post/video a bit ago saying that she was taking a break from the videos for a while after quitting her job. She said she was going to make her channel(s) her main focus, to do something she enjoys.
I see a lot of her YouTube stuff posted a month ago, plus a couple of new ones; my guess is that her break is over and there will be more stuff coming.
and any comments you leave on lemmy should show up on the peertube video!
This is cool to see. Unfortunately, we on Lemmy can’t see comments posted by PeerTube or Mastodon users. AFAIK, federation in Lemmy still needs improvement to interact with Mastodon posts/comments.
Because it isn’t modding, it’s just aesthetic changes. That is why it is called “ricing”: in the car community, just changing the looks is considered trash tuning.
Is the concern the connection to “rice racers”, Japanese import cars? Or the term for when you rice potatoes or cauliflower through a ricing device, making them into tiny parts?
To clarify for those who come after: It’s quite blatantly the first one. You’re tricking your desktop out as is stereotypical of the cars you mentioned.
It’s possible that the majority of people weren’t aware of the first one when they started using it, but they don’t have an excuse if they continue to use it now.
Lived through the 90s when the import car scene was huge. The term ricing back then was used as a pejorative when referring to Asians who modified their cars.
It really bummed me out to see it creep into the Linux community. Tried voicing displeasure back when I used Reddit and got blasted with downvotes and really distasteful comments, felt like I was alone in this feeling. Thanks, from some random Asian Linux user.
It’s actually an acronym for Race Inspired Cosmetic Enhancement. The fact that some don’t know and use it to be racist says more about them as an individual than the term itself.
Not knowing what the acronym means and using it for stock Honda Accords because “Asian car”, for instance: that is racist.
Tbh I don’t really even get the hate on Race Inspired Cosmetic Enhancement; I see it as a different facet of “car enthusiast,” like the dudes with donks and lowriders. Still, though, it isn’t racism, just elitism or regular old gatekeeping from the racing people.
?? The arch wiki is one of the greatest Linux resources out there. Sure there may be situations where it doesn’t have the answer for something, but for a new user? It has all bases covered.
On one hand, the archlinux bbs had the only exact reference to the issue I was having. On the other hand, no one could replicate it enough to figure anything out. :/
It’s actually really great… if you know how to interpret the information on it, apply it to your situation, and adapt as needed. A good new-user experience it does not make, however.
Aaaaah. I really, really wanted to complain about the excessive amount of keys.
(My comment above is partially a joke - don’t take it too seriously. Even if a new key was added it would be a bit more clutter, but not that big of a deal.)
Lol it was “fun” two days ago when all the outlets originally wrote about it. Now it just feels like the headline might as well be, “heh, remember how we all laughed at that thing two days ago?”
Doesn’t seem necessary to bring it up again for 6.6.7 any more than it will for 6.7.6 or 7.6.6. I’m not really familiar with the outlet though, maybe they make a headline for every minor patch release.
I’m with you. Not even necessarily with your original comment, but I have also had the experience of making a mild complaint and being dogpiled by people who are somehow super butthurt about it. It’s weird. They could have read your comment and moved on like they are demanding you do.
It seems like what you’re asking about are more what I’d think of as components of a Linux “system” or “install.”
First off, it’s definitely worth saying that there aren’t a lot of rules that would apply to “all” Linux systems. Linux is huge in embedded systems, for instance, and it’s not terribly uncommon to find embedded Linux systems with no shells, no DE/WM, and no package manager. (I’m not 100% sure a filesystem is technically necessary. If it is, you can probably get away with something that’s… kinda sorta a filesystem. But I’ll get to that.)
Also, it’s very common to find “headless” systems without any graphical system whatsoever. Just text-mode. These are usually either servers that are intended to be interacted with over a network or embedded systems without screens. But there are a lot of them in the wild.
There’s also Linux From Scratch. You can decide for yourself whether it qualifies as a “distribution”, but it’s a way of running Linux on (typically) a PC (including things like DE’s) without a package manager.
All that I’d say is truly necessary for all Linux systems is 1) a bootloader, 2) a Linux kernel, 3) a PID 1 process, which may or may not be an init system. (The “PID 1 process” is just the first process that is run by the Linux kernel after the kernel starts.)
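If you’re curious what’s sitting at PID 1 on your own box, the kernel exposes it directly through /proc, so you can peek without any special tools:

```shell
# PID 1 is the first userspace process the kernel starts.
# Its name is readable straight from the /proc filesystem:
cat /proc/1/comm          # e.g. "systemd" on most desktop distros

# Its full command line is NUL-separated, so translate to spaces:
tr '\0' ' ' < /proc/1/cmdline; echo
```

On a typical desktop distro this prints systemd; in a container it may just be whatever entry process the container was started with.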
The “bunch of default applications and daemons” feels like three or four different items to me:
Systemd is an example of an “init system.” There are several available: OpenRC, runit, etc. Its main job is to manage/supervise the daemons and ensure they’re running when they’re supposed to be. (I’ll mention quickly here that systemd has a lot more functionality built in than just managing daemons, and gets a bad rap for it: network configuration, cron-like timers, dbus for communication between processes, etc. But it still probably qualifies as “an init system.” Just not just an init system.)
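For a concrete picture of what “managing a daemon” means, here’s a sketch of a systemd unit file for a hypothetical daemon called `mydaemon` (the name and path are made up for illustration):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical example)
[Unit]
Description=Example background daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With that in place, `systemctl enable --now mydaemon` starts it and keeps it starting at boot, and `Restart=on-failure` is the “supervise” part: systemd relaunches it if it crashes. OpenRC and runit express the same idea with different file formats.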
Daemons are programs that kind of run in the background and handle various things.
Coreutils are probably something I’d list separately from user applications. Coreutils are mostly for interacting with low-ish level things: formatting filesystems, basic shell commands, things like that.
User applications are the programs that you run on demand and interact with. Terminal emulators, browsers, compilers, things like that. (I’ll admit the line between coreutils and user applications might be a little fuzzy.)
As for your question about graphical systems, X11 and Wayland work a little differently. X11 is a graphical system that technically can be run without a desktop environment or window manager, but it’s pretty limited without one. The DE/WM runs as one or more separate processes communicating with X11 to add functionality like a taskbar, window decorations, the ability to have two or more separate windows and move them around and switch between them, etc. A Wayland “compositor” is generally the same process handling everything X11 would handle plus everything the DE/WM would handle. (Except for the Weston compositor that uses different “shells” for DE/WM kind of functionality.)
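If you want to know which of the two your own session is running on, most distros expose it in an environment variable (this sketch falls back to “unknown” if it’s unset, e.g. on a headless box or over SSH):

```shell
# Print the session type: usually "x11", "wayland", or "tty".
echo "${XDG_SESSION_TYPE:-unknown}"
```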
As far as things that might be missing from your list, I’ll mention the initrd/initramfs. Typically, the way things are done, when the Linux kernel is first loaded by the bootloader, an “initial ramdisk” is also loaded. Basically, the kernel creates a filesystem that lives only in RAM and populates it from an archive file called an “initramfs”. (“initrd” is the older way to do the same thing.) Sometimes the initramfs is bundled into the same file as the kernel itself. But that initial ramdisk provides an initial set of tools necessary to load the “main” root filesystem. The initramfs can also do some cool things like handling full disk encryption.
So, the whole list of typical components for a PC-installed Linux system to be interacted with directly as I’d personally construct it would be something like:
Bootloader
Linux Kernel
Initramfs
Filesystem(s)
Shell(s)
Init system
Daemons
Coreutils
Graphical system (X11 or Wayland potentially with a DE/WM.)
User applications
Package manager
But technically, you could have a functional, working “Linux system” with just:
Bootloader
Linux Kernel
Either a nonvolatile filesystem or initrd/initramfs (and I’m not 100% sure this one is even strictly necessary)
A PID 1 process
Hopefully this all helps and answers your questions! Never stop learning. :D
You would need some non-volatile storage to hold your bootloader, be that on the network or local. Also, any shell more complicated than a tty will need to store config files to run.