I just installed Linux on a six-year-old budget laptop this morning. My first time using Linux. What was a uselessly slow machine is now just humming along.
Nice. That is what started me into Linux. My wife’s 2011 laptop became useless with the W10 upgrade; now it runs Linux and she has fast browsing, Zoom calls, etc., and it is peppy like a new computer.
I am but a simple man. All my music is FLAC. It is arranged neatly in folders. I just want to select an album to play. I do not need album covers, playlists, search, streaming, tags, lyrics, analyzers, or scrobbling.
but no one seems to be discussing how risky it actually is.
That is because people stopped doing it ages ago.
But shouldn’t Firefox be sandboxing every website and not allowing anything to access the base system?
Security is always a matter of layers. Any given layer can fail some of the time but you want to set up your security so situations where all the layers fail together are rare.
My favorite part of this thread is everyone just saying copy and paste the commands so it will work. Like we should totally get users into the habit of running random commands off the net as root.
“I have no idea what I’m doing here” <- Happens in the beginning. How about you start by trying to know what exactly you are doing? Let me give you a fast track…
The first command you get in the instructions is curl. It is generally used to download stuff from a networked server.
1.1. To understand the -fsSLo in the command, I strongly advise you to check out the manual of curl using man curl in a terminal.
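In the meantime, here is a hedged summary of what those flags do (paraphrased from the curl manual), demonstrated against a local file:// URL so nothing is actually fetched from the internet; the /tmp file names are made up for illustration:

```shell
# -f        fail on server errors instead of saving the error page
# -s        silent mode: no progress meter
# -S        still print an error message even in silent mode
# -L        follow redirects
# -o FILE   write the download to FILE instead of stdout
printf 'hello from curl\n' > /tmp/curl-demo-src
curl -fsSLo /tmp/curl-demo-out file:///tmp/curl-demo-src
cat /tmp/curl-demo-out
```

The last command should print "hello from curl", showing that -o put the downloaded content where we asked.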
The second command in the instructions is echo “something” | sudo tee some/file
2.1 Here you see 3 commands: echo, sudo and tee.
2.1.1 Again, you can use man command-name to check the manual pages for these commands.
2.2 There is a | symbol over here. It is called the “pipe symbol”, which is what you can use to search for it. It is usually difficult to search for the symbol itself and I haven’t found a man page for it, but open man bash and look for “Pipelines” and you’ll know what it is about. Use Link, Link and Link to help yourself understand this.
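To see the pipe and tee in action safely, try them on a throwaway file in /tmp instead of a system path, so no sudo is needed; the repository line here is made up purely for illustration:

```shell
# echo prints its argument to stdout; the pipe (|) feeds that output
# into tee, which writes it to the named file AND passes it through.
echo "deb https://example.org/repo stable main" | tee /tmp/demo.list

# tee wrote the same line into the file:
cat /tmp/demo.list
```

The real instructions use sudo tee because the destination under /etc is only writable by root; the mechanics are identical.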
e.g. If you have a program called Xorg from 5 years ago, and a program called mesa from 5 years ago and Xorg depends upon mesa to work. Here, if you replace your mesa with a new, recent mesa yourself, there is a good chance Xorg will not work. The Package Manager prevents that from happening.
The gist of what the instructions are making you do is, telling the Package Manager that there is another place from where you want it to look for packages.
To understand man pages better, check out this link.
Don’t think too badly of people dissing you in the comments. They are tired of and fed up with help vampires. Hopefully, you can try not to become one.
Try and build your own process of understanding the commands you see on the internet before entering them into the terminal.
The comments telling you to just follow the instructions are coming from the perspective that you don’t have the patience and determination to understand them yourself, which a lot of people don’t. I will leave it up to you to decide which one you want to be. It is, however, a bad idea to follow instructions on any website just because it “seems legit”. You can’t really say you “trust” the site until you have the ability to find out for yourself whether you want to trust it.
In the grub menu, choose advanced options and then choose an older kernel to boot into.
If that boots fine, remove and reinstall the newest installed kernel and run sudo update-grub.
That should be the easiest way to rule out the most likely causes of this error.
Edit: Now would be a very good time to back up all your data to an external drive. This might be a sign of your hard drive failing.
I wouldn’t assume a failing disk either.
But every time there’s an error you can’t pin on something you just did, a full backup should be the first thing you do as a matter of principle.
ok, I’ll back up all my data first. How can I remove the old kernel without entering the grub menu (since boot usually works fine) and select the old one as default? Thanks
By default, your grub menu should show up every time you boot.
If it doesn’t, boot your PC and do: sudo nano /etc/default/grub
You need these lines: GRUB_TIMEOUT=10 GRUB_TIMEOUT_STYLE=menu
Every line starting with: GRUB_HIDDEN_TIMEOUT
should be commented out like so: #GRUB_HIDDEN_TIMEOUT
Then run sudo update-grub and reboot.
What this does:
sets a countdown of 10 seconds before grub boots the kernel
tells grub to show the boot menu during that countdown
doesn’t use a hidden countdown that waits for a button press to show the menu
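Putting those settings together, the relevant part of /etc/default/grub would look something like this (a sketch; leave any other lines in the file as they are):

```
GRUB_TIMEOUT=10
GRUB_TIMEOUT_STYLE=menu
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
```

The exact commented-out GRUB_HIDDEN_TIMEOUT lines depend on what’s already in your file; comment out whichever ones you find.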
In the grub menu, select advanced options and there you should be able to select an older kernel to boot.
ok, thank you very much for the detailed explanation. Yes, I remember that I removed the timeout from grub in the past; I will follow your procedure and select the previous kernel. Another question: once I’ve selected the older kernel, do you think that removing (is it fine to use apt?) and reinstalling the newest kernel will fix the issue, or do I have to keep the older kernel? If I have to keep the older kernel, how can I avoid it being overwritten once I update the OS?
I think the newer kernel should work after reinstallation.
If it doesn’t and you want to stay with the older one:
apt list --installed 'linux-image*'
There should be a package with a specific version number in its name. For example, the standard kernel for Debian 11 is: linux-image-5.10.0-26-amd64
Uninstall the linux-image-… package you don’t want to keep.
Also uninstall linux-image-amd64 which is the meta-package that pulls in the newest kernel version. Without it, you won’t get new kernel versions in upgrades.
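One safety check worth doing before any of that: never remove the kernel you are currently booted into. A hedged sketch (the version number below is made up; substitute the one apt list showed you):

```shell
# The running kernel's package name can be derived from uname -r:
running="linux-image-$(uname -r)"
echo "Currently booted package: $running"

# Then, as root, remove the unwanted kernel and the meta-package, e.g.:
#   sudo apt remove linux-image-5.10.0-27-amd64 linux-image-amd64
# and regenerate the boot menu afterwards:
#   sudo update-grub
```

If the package you were about to remove matches $running, reboot into the kernel you want to keep first.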
I second the advice to switch to a different/previous/known-good kernel. That has been the cause of most boot problems for me. I just had it happen on a VM a couple of weeks ago, so I switched to the old kernel, then removed the new one. I’ll wait for another kernel before upgrading.
It’s probably worth scanning your disk just in case as well.
After he got a handle on it, Torvalds returned to the kernel. He’s been much more mild-tempered since then. As he mentioned in Tokyo, he won’t be “giving some company the finger. I learned my lesson.”
This is probably a good thing.
Looking ahead, Hohndel said, we must talk about “artificial intelligence large language models (LLM). I typically say artificial intelligence is autocorrect on steroids. Because all a large language model does is it predicts what’s the most likely next word that you’re going to use, and then it extrapolates from there, so not really very intelligent, but obviously, the impact that it has on our lives and the reality we live in is significant. Do you think we will see LLM written code that is submitted to you?”
Torvalds replied, “I’m convinced it’s gonna happen. And it may well be happening already, maybe on a smaller scale where people use it more to help write code.” But, unlike many people, Torvalds isn’t too worried about AI. “It’s clearly something where automation has always helped people write code. This is not anything new at all.”
Indeed, Torvalds hopes that AI might really help by being able “to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don’t need any kind of higher intelligence to find them. But having tools that warn more subtle cases where, for example, it may just say ‘this pattern does not look like the regular pattern. Are you sure this is what you need?’ And the answer may be ‘No, that was not at all what I meant. You found an obvious bug. Thank you very much.’ We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do.”
But, “What about hallucinations?,” asked Hohndel. Torvalds, who will never stop being a little snarky, said, “I see the bugs that happen without AI every day. So that’s why I’m not so worried. I think we’re doing just fine at making mistakes on our own.”
There were no questions about whether maintainers would start using LLMs themselves; the questions focused on how maintainers would respond to LLM-generated (or -assisted) patches being submitted to them. That focus seems perfectly reasonable to me, but asking whether maintainers will adopt LLMs in their own work might have drawn a more interesting answer from Torvalds.
I’m a newbie programmer (and have been for quite a few years), but I’ve recently started trying to build useful programs. They’re small ones (under 1000 lines of code), but they accomplish the general task well enough. I’m also really busy, so as much as I like learning this stuff, I don’t have a lot of time to dedicate to it. The first program, which was 300 lines of code, took me about a week to build. I did it all myself in Python. It was a really good learning experience. I learned everything from how to read technical specifications to how to package the program for others to easily install.
The second program I built was about 500 lines of code, a little smaller in scope, and prototyped entirely in ChatGPT. I needed to get this done in a weekend, and so I got it done in 6 hours. It used SQLite and a lot of database queries that I didn’t know much about before starting the project, which surely would have taken hours to research. I spent about 4 hours fixing the things ChatGPT screwed up myself. I think I still learned a lot from the project, though I obviously would have learned more if I had to do it myself. One thing I asked it to do was to generate a man page, because I don’t know Groff. I was able to improve it afterward by glancing at the Groff docs, and I’m pretty happy with it. I still have yet to write a man page for the first program, despite wanting to do it over a year ago.
I was not particularly concerned about my programs being used as training data because they used a free license anyway. LLMs seem great for doing the work you don’t want to do, or don’t want to do right now. In a completely unrelated example, I sometimes ask ChatGPT to generate names for countries/continents because I really don’t care that much about that stuff in my story. The ones it comes up with are a lot better than any half-assed stuff I could have thought of, which probably says more about me than anything else.
On the other hand, I really don’t like how LLMs seem to be mainly controlled by large corporations. Most don’t even meet the open source definition, but even if they did, they’re not something a much smaller business can run. I almost want to reject LLMs for that reason on principle. I think we’re also likely to see a dramatic increase in pricing and enshittification in the next few years, once the excitement dies down. I want to avoid becoming dependent on this stuff, so I don’t use it much.
I think LLMs would be great for automating a lot of the junk work away, as you say. The problem I see is they aren’t reliable, and reliability is a crucial aspect of automation. You never really know what you’re going to get out of an LLM. Despite that, they’ll probably save you time anyway.
I’m no expert, but neither is most of the workforce (although kernel work is, again, much more in the expert realm).
I think experts are the ones who would benefit from LLMs the most, despite LLMs consistently producing average work in my experience. They know enough to tell when it’s wrong, and they’re not so close to the code that they miss the obvious. For years, translators have been using machine translation tools to speed up their work, basically relegating them to being translation checkers. Of course, you’d probably see a lot of this with companies that contract translators at pitiful rates per word who need to work really hard to get decent pay. Which means the company now expects everyone to perform at that level, which means everyone needs to use machine translation tools to keep up, which means efficiency is prioritized over quality.
This is a very different scenario to kernel work. Translation has kind of been like that for a while from what I know, so LLMs are just the latest thing to exacerbate the issues.
I’m still pretty undecided on where I fall on the issue of LLMs. Ugh, nothing in life can ever be simple. Sorry for jumping all over the place, lol. That’s why I would have been interested in Linus Torvalds’ opinion :)
As pointed out, in Windows’ defence, it’s actually faster where it matters. And none of it is going to matter for adoption until everything is supported 1-to-1.
The only reason we’re behind Windows on adoption at this point is that people who write software for Windows don’t do it for GNU/Linux, or even publish specs in the case of drivers.
It’s not the OS’s problem. It hasn’t been for a long time. It’s stubborn developers (mainly corporations like Broadcom, Nvidia and Epic). We shouldn’t need to write compatibility layers for completely foreign software to run, or write drivers for a megacorporation’s hardware, and those are both monumental tasks, but the community continues to achieve them anyway.
A lot has been done and continues to be done by the community, and that’s great, but the real problem is the corporations who refuse to invest a little bit of their time in GNU/Linux support (and those who have an irrational vendetta against it).
Causes are a part of the reality. When people go online and complain about how “lInUX SuXxx” because their proprietary Nvidia drivers didn’t work, and blame the OS instead of the company that is meant to be providing proper support for their devices, or at least documentation for other developers to use, it plants the idea in people’s minds that the OS itself is simply inferior, with all the connotations of being a bad system, instead of “it will work perfectly when drivers are actually released by the manufacturer”. It tarnishes its reputation even after that particular device gains support, and that is another reason why adoption is low.
Hell, Nvidia was actively working against having a working open-source driver reverse engineered by Nouveau. Linux is a thorn in their side, and the only reason they somewhat support it today is that GPU compute works so much better on Linux.
What is it with Microsoft Word that makes you prefer it to others?
LibreOffice and OnlyOffice are pretty much the only free-software office suites that really hold a candle to Microsoft Office’s functionality. LibreOffice defaults to the Toolbar interface, but changing it to Tabbed will make it look like Microsoft Office. It takes some getting used to and isn’t as smooth, but after a few weeks of use you’ll be comfortable with it.
WPS Office is a Microsoft Office clone that works fine on Linux. It’s a pretty common Microsoft Office substitute and is nearly identical in most aspects of its interface. It’s made by Kingsoft, a Chinese company. The software is closed-source and there is a free version that contains advertisements.
Microsoft Office Online is available through your browser free of charge at portal.office.com. It contains Word, PowerPoint, and Excel but only has basic functionalities. Collaborative editing is still supported on it which you might care about.
Microsoft Office can be installed using WINE but in my experience, it is usually not stable enough for daily use. I would not bother with it. You should not install things manually using WINE. It’s highly recommended that you use some wrapper software like Bottles, PlayOnLinux, or Lutris (common for games).
For me, I use the office suite at work, and one of the simplest things that makes me wish I could use it at home is that damn search bar at the top.
After that, I appreciate that LibreOffice introduced the ribbon UI. I grew up with Word 2003, so I know what it was like before, but after they introduced the ribbon UI, it immediately felt easier to use. Especially the style picker.
@JustEnoughDucks @NateNate60 I’m sure about the first two features: yes. I don’t know about a picture of your manual signature, unless you mean simply embedding it in a document: that’s certainly possible.
Where does WPS Office source its ads? I mean, if you run it in a (more or less) sandbox (well, you might want access to the files you’re editing) and without internet access, how does the ad interface behave?
I actually don’t remember seeing adverts on the Linux version when I tried it out a few years ago. Maybe that’s changed, or maybe they just don’t run adverts on the Linux version.
Have you tried LanguageTool? There are integrations for LibreOffice, Obsidian, MS Word and others. It offers spell checking and rephrasing, and in my experience it is superior to the built-in checker. You could compare it to DeepL versus Azure Translate.
As someone with dyslexia, the superior spell and grammar check is what I miss most in LibreOffice. I usually have to use an external tool like Grammarly for spell checking.