If the computer is modern enough that you’d consider buying it to use, I can almost guarantee that you’ll be fine to run the latest distros. I just threw Arch + KDE on a 14ish year old laptop I found, and it runs so well that I may daily drive it for a while just for the hell of it.
At worst, you may need a lighter-weight desktop environment (DE) than some of the pretty ones you see in screenshots. And those are simple to install and try out.
So then there’s really nothing special to look out for? Why have I had such issues with Linux on my Dell XPS 13 9310, then? User error or proprietary b.s.?
Proprietary BS, Dell has become kinda notorious for that. A lot of their stuff has weird hacky workarounds to get Linux running properly. Unfortunately there isn’t a great way to know that in advance, other than poking through wikis or asking around.
For most computers, it really isn’t much different than installing Windows. Most things will just work, maybe a few drivers to install, and you’re good to go.
Both, but consumer is generally worse. For reference, check here for issues related to yours. The instructions are geared toward Arch, but the problems affect most distros.
It requires 2-step verification, which requires a phone number, security key, or Google prompt for Google accounts. Hard pass :D Edit: I stand corrected, this seems to have been changed.
When you make a project with git, what you’re doing is essentially making a database to control a sequence of changes (or history) that build up your codebase. You can send this database to someone else (or in other words they can clone it), and they can make their own changes on top. If they want to send you changes back, they can send you “patches” to apply on your own database (or rather, your own history).
Note: everything here is decentralized. Everyone has the entire history, and they send history they want others to have. Now, this can be a hassle with many developers involved. You can imagine sending everyone patches, and them putting it into their own tree, and vice versa. It’s a pain for coordination. So in practice what ends up happening is we have a few (or often, one) repo that works as a source of truth. Everyone sends patches to that repo - and pulls down patches from that repo. That’s where code forges like GitHub come in. Their job is to control this source of truth repo, and essentially coordinate what patches are “officially” in the code.
In practice, even things like the Linux kernel have sources of truth. Linus’s tree is the “true” Linux, all the maintainers have their own tree that works as the source of truth for their own version of Linux (which they send changes back to Linus when ready), and so on. Your company might have their own repo for their internal project to send to the maintainers as well.
In practice that means everyone has a copy of the entire repo, but we designate one repo as the real one for the project at hand. This entire (somewhat convoluted) mess is just a way to decide “where do I get my changes from?”. Sending your changes to everyone doesn’t scale, so in practice we just choose who everyone coordinates with.
Git is completely decentralized (it’s just a database - and everyone has their own copy), but project development isn’t. Code forges like GitHub just represent that.
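The patch-passing workflow described above can be tried out entirely on one machine. This is a toy sketch with made-up paths and repo names (“upstream” as the source of truth, “dev” as a contributor’s clone), not anyone’s real setup:

```shell
set -e
# Toy two-repo demo of the patch workflow (all paths are made up):
rm -rf /tmp/patch-demo && mkdir /tmp/patch-demo && cd /tmp/patch-demo

# "upstream" plays the source-of-truth repo.
git init -q upstream && cd upstream
git config user.email you@example.com && git config user.name You
echo "hello" > file.txt
git add file.txt && git commit -qm "initial commit"
cd ..

# A contributor clones the whole history...
git clone -q upstream dev && cd dev
git config user.email dev@example.com && git config user.name Dev
echo "a change" >> file.txt
git commit -qam "add a change"

# ...and exports their new commit as a mailable patch file.
git format-patch -1 -o ../patches

# Upstream applies that patch onto its own history.
cd ../upstream
git am ../patches/0001-add-a-change.patch
git log --oneline    # both commits are now in upstream's history
```

This is literally how kernel development over mailing lists works; GitHub’s pull requests are just a nicer interface over the same “send me your commits, I’ll put them in the source-of-truth repo” idea.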
Well, the bug tracker and additional features are not inside the git repository, so they’d get lost. But each ‘git clone’ is a complete clone of the (source code) repository, including all of the history of changes, the commit messages, dates and individual changes. That’s stored on every single computer that cloned the repository, and you have a copy of everything locally. It might be out of date if you didn’t pull the latest changes, but apart from that it’s the same data that GitHub stores. You could just make it available somewhere else and continue.
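You can see for yourself that a clone is a complete, standalone copy that can be re-homed anywhere. A minimal sketch, with all paths invented for the example:

```shell
set -e
# Minimal demo that a clone is a complete copy (paths are made up):
rm -rf /tmp/clone-demo && mkdir /tmp/clone-demo && cd /tmp/clone-demo
git init -q original && cd original
git config user.email a@example.com && git config user.name A
echo one > f && git add f && git commit -qm "first"
echo two >> f && git commit -qam "second"
cd ..

git clone -q original copy && cd copy
# The clone already holds every commit, message, author and date:
git log --oneline
echo "commits in clone: $(git rev-list --count HEAD)"

# "Making it available somewhere else" is just re-pointing origin:
git init -q --bare /tmp/clone-demo/new-home
git remote set-url origin /tmp/clone-demo/new-home
git push -q --all origin
```

Swap the local `/tmp/...` path for any other hosting URL and the same two commands move the whole history off GitHub.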
Nearly all hardware support is kept in the kernel until and unless it bitrots to the point of unusability. I’ve had no issues with a 5.10-series kernel on my 2008 laptop, and I don’t expect any issues when I finally get around to upgrading it to 6.x (well, except the usual tedium of compiling a kernel on a machine that weak).
The difference isn’t all that noticeable, to be honest, or at least I’ve never found it so. If you’re using older hardware, you’re going to get an older “experience” anyway. The most user-visible kernel improvements tend to be improvements in hardware support, which is irrelevant if your hardware is already fully supported. However, I don’t do anything fancy with my machines: no full-disk encryption or the like. I usually don’t even need an initramfs to boot the system. So maybe you would notice something if your machines were more complicated.
(Note that the laptop I mentioned above started out with, um, a 3.x kernel? It gets a new one every year or so. The only kernel changes affecting it that were significant enough to draw my attention since 2008 were a fix in the support for the Broadcom wireless card it carries, and some changes to how hibernation works, which didn’t matter in the end because I basically never did try all that hard to get hibernation working on that machine.)
See, I fear this: being stuck on kernels only up to a certain version. Because don’t the older ones lose support and stuff like that? How the heck do you maintain your system if the distro isn’t pushing any more updates and such?
You’re unlikely to have issues unless an entire architecture loses support from your distro, and if you’re running x86_64, that isn’t going to happen for a long, long time. I’ve never been in a position where I couldn’t compile a new workable kernel for an existing system out of Gentoo’s repositories. The only time I’ve ever needed to put an upgrade aside for a few months involved a machine’s video card losing driver support from nvidia; I needed a few spare hours to make sure there were no issues while switching over to nouveau before I could install a new kernel.
Note that you can run an up-to-date userland on an older kernel, too, provided you make sensible software choices. Changes to the kernel are not supposed to break userspace—that’s meant to keep older software running on newer kernels, but it also works the other way around quite a bit of the time.
AFAIK if you buy any computer from within the last 20 years, there’s a good chance you can get a 6.X Kernel running on it. 32-bit support is fading out, though. If you buy a 64-bit computer, you’ll be able (with sufficient RAM and hard disk space) to install any modern distro on it.
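If you’re unsure whether an old x86 machine is 64-bit capable, the CPU flags expose it directly. Quick sketch (note the `lm` “long mode” flag is x86-specific, so this check doesn’t apply to ARM boards):

```shell
# "lm" (long mode) in the CPU flags means the processor can run
# a 64-bit kernel; its absence on x86 means 32-bit only.
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit capable"
else
    echo "32-bit only"
fi
```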
Then why have I had such a terrible experience with my newer Dell XPS 13 9310? User error or proprietary b.s.? Because I’ve been told that the newer Dells are going the more proprietary route.
I’d say that single-core performance and the amount of RAM you have are the biggest issues with running anything on old hardware. Apparently, in theory, you could run even a modern kernel with just 4MB of RAM (or even less; good luck finding a 32-bit system with less than 4MB). I don’t think you could fit any kind of graphical environment on top of that, but for an SSH terminal or something else lightweight it would be enough.
However, a modern browser will easily consume a couple of gigabytes of RAM, and even a ‘lightweight’ desktop environment like XFCE will consume a couple hundred MB without much going on. So it depends heavily on what you consider ‘old’.
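A quick way to size up a machine before picking a desktop environment. The thresholds here are my own rough rules of thumb, not anything official:

```shell
# Read total RAM and core count from standard Linux interfaces.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
cores=$(nproc)
echo "RAM: ${mem_mb} MB, CPU cores: ${cores}"

# Rule-of-thumb buckets (my own assumptions, tune to taste):
if [ "$mem_mb" -lt 2048 ]; then
    echo "Very tight: consider a bare window manager instead of a full DE"
elif [ "$mem_mb" -lt 4096 ]; then
    echo "Workable: XFCE/LXQt-class desktops should run comfortably"
else
    echo "Plenty: KDE/GNOME-class desktops are realistic"
fi
```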
The computer at the garage (which I’m writing this on) is a ThinkStation S20 from 2011 that I got for free from the office years ago. 12GB of RAM, a 4-core Xeon CPU, and an aftermarket SSD on the SATA bus, and this thing can easily do everything I need in this use case: browsing the web for how to fix whatever I’m working on at the garage, listening to music from Spotify, the occasional YouTube video, Signal, and things like that. Granted, this was on the higher end when it was new, but maybe it gives some perspective on things.
I’m running Arch on a very early 2000s computer: a dual-core Athlon with two gigabytes of RAM, with the KDE desktop on a period-correct display. Works great as long as you’re not trying to push it hard with modern tasks. It browses the internet just fine and can even watch videos of a size more appropriate for that era. But yeah, once you get into 1080p displays and high-resolution videos, or modern bloated websites, it’s definitely going to chug.
Oh, right, screen resolution is something I didn’t even consider that much. My system has a 1600x1200 display and the GPU is a Quadro FX570. This thing would absolutely struggle with anything higher than 1080p, but as all the parts were free (minus the SSD; 128GB drives are something like 30€ or less), it’s easily good enough for what I use it for. It wouldn’t be that big of a stretch to run this thing as a daily driver, either: just add a bigger SSD and maybe a bit more modern GPU with a 2K display and you’d be good to go.
And 1600x1200 isn’t that much anyways, if memory serves I used to have that resolution on a CRT back in the day. At least moving things around is much easier today.
As old as my system is, anything much more modern than what’s already in it would be bottlenecked by the system bus. It’s first-gen PCIe, not PCIe 2, 3, or 4 lol. And SATA, early SATA at that. It still has two IDE headers. But I used to run Blender on a lot less back in the day. I have it pushing a good old 1024x768 4:3 display.
I oughtta browse eBay and see if anybody’s selling some System76 stuff. I gotta see what to do with my Dell XPS 13 9310 that’s stuck in manufacturing mode first. Probably sell it for parts, or idk?
You basically already know the drill; buy it from a Linux-first vendor that offers devices that you can afford. A list of vendors can be found here. Personally, I’m quite fond of NovaCustom and Star Labs. Fortunately, both have ‘cheaper’ offerings with their NJ50 Series and StarLite respectively.
Thanks! But when it comes to Linux hardware vendors like those, for me at least, it’s hard to know which ones are good and which ones are bad or unknowns. Also, I did look into the lower-grade Star Labs machines and there was something about the processors they used… I did a little reading and they got poor marks for being uber slow or something. I could have misinterpreted things, though.
but when it comes to linux hardware vendors like those, for me at least, it’s hard to know which ones are good and which ones are bad or unknowns.
You hit the nail on the head with that remark. Because, quite frankly, it’s hard for all of us; I would love to read reviews done by Notebookcheck (or similarly high-profile reviewers), unfortunately that’s simply not the case. In this case, you would have to scrape whatever knowledge you can find about these specific devices (and their vendors) before judging for yourself if it’s worth taking the risk.
The reason I’m personally fond of NovaCustom and Star Labs is that they’re known to contribute back significantly to the open-source community; the same applies to System76, Purism and Tuxedo. I didn’t name any of these in my previous post because none of them seemed sufficiently affordable.
i did look into the lower grade star labs and there was something about the processors they used… i did a little reading and they got poor marks for being uber slow or something. i could have misinterpreted things though.
If it’s about the processor being slow, then I’m not surprised. It’s from Intel’s N-series, which is somewhat of a spiritual successor to Intel’s Celeron and Pentium lines, neither of which is known for being powerful. For that price you shouldn’t expect a lot more, but I agree that an i3 (or something else with similar processing power) should have been possible in that price range.
Aha! So I’m not so stupid after all lol, I was pretty much right. So how do you figure out which manufacturers or even models are more open source and less proprietary?
The icons you hate are icon-set specific. I haven’t tried tinkering with them (I don’t actually use them; most of those plugins that come by default on most distros get removed on my installs), but I think you can change icon sets… or maybe themes (some themes also include icon sets).
So, basically, you should install new icon sets and/or themes to get new icons, pick one that you like, and uninstall the rest. Your default repo should hold the most popular themes and icon sets for xfce.
PS: Some things may be inaccurate; I’m not much of a graphical person. I usually use xfce with default settings and maybe Greybird Dark as a theme. I leave everything else at the defaults, whatever they may be.
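Untested sketch for switching these from the terminal: xfce stores its settings in xfconf, and `xfconf-query` can set the theme and icon-set properties directly. The theme names below (“Greybird-dark”, “Papirus-Dark”) are just examples; use whatever your repo actually installed:

```shell
# Set the widget theme and the icon set for the current xfce session
# (needs a running xfce session; theme names are examples only):
xfconf-query -c xsettings -p /Net/ThemeName -s "Greybird-dark"
xfconf-query -c xsettings -p /Net/IconThemeName -s "Papirus-Dark"

# Show what's currently set:
xfconf-query -c xsettings -p /Net/ThemeName
xfconf-query -c xsettings -p /Net/IconThemeName
```

Note this only changes what the settings dialogs change; it won’t help with tray icons that are hard-coded by a plugin rather than pulled from the icon set.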
Hey there, I have done that already, changing both themes and icons, and it changes everything except those few icons, which never change. It’s very weird.
Hm, that is weird… they should change with the theme…
I don’t know if there is an xfce comm here on Lemmy, but if there is, it’s best to ask there, since this is an xfce specific thing (KDE or other DEs may implement this differently).
EDIT: There is, !xfce, but the last post there is from 4 months ago 😔.
Xfce is cool, I use it on all my installs. But then again, I have never tinkered with themes that much, or tray icons 🤷.
Try Void if you’re not too afraid of the terminal 😁. The repo is pretty good and stuff mostly works out of the box. If they don’t, you just need to configure them correctly.
I may give Linux Lite a try, which is of course xfce-based. Void I hear is very good, but after researching it a bit, I feel it’s more complicated or advanced than it appears; more for advanced users who really know how to work Linux. I’m more intermediate.
The funny thing is, though, I wasn’t that advanced when I jumped ship, but I never felt lost in it either. With Ubuntu and similar distros, things are fairly simple, but once you start getting nitty-gritty with the system, start tinkering and whatnot, things just start not working. Like I was banging my head over why this particular app just couldn’t access the internet, when all along it was ufw that was blocking it 😒.
What really pissed me off was the sheer number of apps that got installed alongside the main system. Like LibreOffice; maybe I didn’t want that installed on my system. And systemd seemed way too sluggish and buggy for my taste. I really wanted something simpler and very easy to configure and run, so Void fit in there perfectly. Just xfce with some basic apps and plugins, that’s it.
Also, one of the main reasons why I bailed ship regarding conventional distros was dependency hell. You try and compile from source and there is always some dependency that’s outdated and just doesn’t compile 😒. This really really pissed me off, cuz I wanted to use the rig for, let’s say encoding, but the x265 lib in the repos was outdated. I wanted the latest, cuz I also wanted to test the progress of x265… things like this really grind my gears and I decided that conventional distros are probably not for me.
I mean… Depends what you mean by 100% free firmware… If you mean only the boot firmware, that’s the case for PCs like the ThinkPads T400, T500, R500, W500, X200, as well as the Dell Latitude E6400. Note Libreboot even recommends the latter for new full libre buys, as it can be software-flashed without disassembly.
But if you mean 100% free including EC firmware, wireless firmware, and disk firmware, then this will probably never happen, or at least not for a very, very long time.
What I’m trying to say is that it’s an uphill battle, arguably pointless too.
Before going with the current 30 series, I was using X200 and X60. They’re both good machines, don’t get me wrong. However, their age shows when trying to do modern tasks, even something as simple as web browsing.
The X60 doesn’t even have the hardware acceleration capability for my usual KDE setup. By the way, you’d be stuck with DDR2.
The X200 is much more capable than the X60, but try to browse most modern sites and you’ll feel the machine getting hot. You could turn off JavaScript, but then you’ll be missing quite a bit of functionality. I definitely wouldn’t run VSCodium on it for work. I’m currently using this one as a testbed for distrohopping.
To me, the 30 series is the sweet spot. Ivy Bridge is not too old for the demanding computations of modern days. If you opt for the highest-tier i7, you can beat a lot of the average processors from the following generations. And if you don’t get the processor you want, you can always replace it since it’s socketed, at least on my W530, which should also apply to the T430 & T530 (not the X230).
You might want to ask yourself: what are you trying to achieve, and more importantly, how can you measure what you’ve actually achieved? No, blindly following online articles is not a good measurement.
I found out later on that I had no way of actually verifying anything with Libreboot; the build system is a pain in the neck to follow through. I then tried doing it with coreboot upstream, and my experience building with it was much better. Even then, I wouldn’t have the chance to look through every line of code; I’d still need to just “trust” somebody.
You can definitely play around, but if that’s all you do, you’d be asking yourself why you did all that when you get bored.
Yes, the Linux kernel will work! I’d say it’s even more likely that wifi, sound card, etc. will work without any problems than if you’d bought a bleeding-edge laptop (although those mostly also just work nowadays). The oldest machine I’ve got is a laptop from 12 years ago which easily runs modern Linux, but even much older machines shouldn’t have a problem with that, at least not with the kernel.
I like Kinoite, and have been happy with it for a year or so (how time flies). Pretty bulletproof: automatic updates and rollbacks, lotsa good stuff. One minor but relevant gotcha is that it doesn’t like Docker particularly much. I found the path of least resistance was to move to podman (which is more secure, can easily be turned into (--user) systemd units, and has a cool auto-update feature). podman-compose is your friend…
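The podman-to-systemd workflow mentioned above looks roughly like this. A hedged sketch: the container name (`web`) and image are placeholders, and this assumes a host with podman and a user systemd session:

```shell
# Run a container with the label podman's auto-updater looks for:
podman run -d --name web \
    --label io.containers.autoupdate=registry \
    -p 8080:80 docker.io/library/nginx:latest

# Generate a --user systemd unit that recreates the container,
# and install it into the user unit directory:
podman generate systemd --new --name web --files
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/

systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# With the autoupdate label set, this pulls newer images and
# restarts the unit when the registry tag moves:
podman auto-update
```

On newer podman versions the Quadlet format is the recommended replacement for `podman generate systemd`, but the generated units still work fine.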