It’s copying and pasting 5 lines into the terminal and hitting enter. It’s not that hard. If it’s not worth the 15 seconds of ‘work’, you probably don’t need the software that badly.
And it’s not the default. Usually you shouldn’t add random software sources and download software from random websites. Your Linux package manager should be the source for software (Software Manager / Store / Synaptic, whatever Ubuntu calls it). It installs software with one or two clicks of the mouse, the software there is tested and tied into the rest of the system, and tens of thousands of packages are available. No malware, practically guaranteed, and updates are handled automatically.
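For reference, the ‘5 lines’ in question usually look something like this. A rough sketch of the common apt pattern, with illustrative URLs and package names rather than Mullvad’s actual instructions (always copy the real ones from the vendor’s own page):

```sh
# Fetch the vendor's signing key (URL is illustrative)
sudo curl -fsSLo /usr/share/keyrings/vendor-keyring.asc https://repo.example.com/vendor-keyring.asc
# Register the repository, pinned to that key
echo "deb [signed-by=/usr/share/keyrings/vendor-keyring.asc] https://repo.example.com/stable main" \
  | sudo tee /etc/apt/sources.list.d/vendor.list
# Refresh the index and install; updates then arrive with normal system upgrades
sudo apt update
sudo apt install vendor-app
```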
And with other operating systems it’s also ridiculous: you need to find the website of some software, avoid the malware and copycats advertising similar software in the ads, click download, and confirm ‘yes, I accept’ for a download with a potentially harmful extension. Then you need to open the file manager and double-click the file. Then a window opens and you need to click ‘next’, accept the terms, give permission to install, and maybe untick a few boxes and choose a location. I’d say it’s about the same amount of work, and the downside is it doesn’t necessarily handle updates and security fixes.
I think Ubuntu doesn’t have Mullvad available in its own repository. I took another approach and imported their settings/profile into the VPN/network manager that is available by default on many Linux distributions. No install required at all. But importing the settings isn’t any easier, so YMMV here. And I think you have to create a profile for each and every country/endpoint, which is a bit cumbersome depending on what you’re trying to do with the VPN.
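If you go the import route, NetworkManager can also do it from the terminal. A sketch assuming a WireGuard profile from Mullvad’s config generator (the filename and resulting connection name are hypothetical):

```sh
# Import the downloaded WireGuard profile into NetworkManager
nmcli connection import type wireguard file ./mullvad-se-sto.conf
# Then toggle the tunnel like any other connection
nmcli connection up mullvad-se-sto
nmcli connection down mullvad-se-sto
```

The same import is available in the network settings GUI, which is what I actually used.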
It’s copying and pasting 5 lines into the terminal and hitting enter. It’s not that hard. If it’s not worth the 15 seconds of ‘work’, you probably don’t need the software that badly.
Telling people to just run random code they found on the internet and don’t understand is really bad advice.
But how do you think you install software on Windows? You download a random installer from the internet and double-click it. The installer is an executable file that runs some code on your computer to set up the software. I’d argue it’s exactly the same.
In the one instance you copy and paste code and run it. In the other you execute an installer that also contains random code. And there you can’t even have a look at what happens.
The real issue is: you have to trust the vendor. If you don’t trust Mullvad, don’t run their 5 lines of code. But then you also shouldn’t install their software or run their Windows installer. I don’t see a way around this ‘trust’ issue.
The proper way, of course, would be a standardised process that also confines the software into containers with minimal permissions. Something like Android apps. In theory you could add a default update process, so the vendor just needs to define an update server in the (apk) installer file. Google didn’t do this, but then they want people to use their Play Store. And I don’t think we have a permission system that is actively used on any of the major desktop operating systems anyway.
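Flatpak is arguably the closest the Linux desktop currently gets to that model. A sketch of what inspecting and tightening an app’s sandbox looks like (the app ID is illustrative):

```sh
# Show the sandbox permissions an app ships with
flatpak info --show-permissions com.example.SomeApp
# Tighten them for your user, e.g. revoke home directory access
flatpak override --user --nofilesystem=home com.example.SomeApp
```

But as far as I can tell, few users ever look at those permissions, let alone change them.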
I think it depends on what you are looking for. While Linux Mint is a safe option, it does have some drawbacks… well, they’re really Ubuntu’s drawbacks, but as Linux Mint is based on it, it’s also impacted. Primarily the fact that Ubuntu packages are terribly out of date. Thankfully Mint makes adding PPAs painless, but for apps that don’t have a PPA, like MangoHud, it’s a pain to install them from scratch. It’s not impossible, but there is an expected level of Linux knowledge required before going in.
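For what it’s worth, the PPA workflow on Mint/Ubuntu is just a few commands (the PPA name here is made up for illustration):

```sh
sudo add-apt-repository ppa:someteam/someapp
sudo apt update
sudo apt install someapp
```

The pain starts with software that offers no PPA or package at all, where you’re left building from source.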
Another option is Manjaro. You will hear a litany of endless criticism about it from the community, and some of it is valid. But for the most part, while it’s not as nice as Linux Mint, I think the OS will get you to the point where you can start using your machine faster. Mostly thanks to Arch’s rolling release, as well as the AUR filling the gap between official packages and Flatpaks.
I was using Manjaro for the longest time, but switched to Mint due to a freak bootloader accident. I prefer Manjaro in terms of how well it handled games and Windows software, due to its association with Arch. But I like how well Mint manages my laptop’s battery and performance, or lack thereof due to its pitiful cooler.
Keep in mind they have zero Linux experience, so I doubt they’ll be needing packages that are too obscure for Mint, and I wouldn’t recommend trying to run Windows software in Linux to a novice.
While I agree, the issue is that they’ll want to. Thankfully Valve is handling games well enough right now that it’s a non-problem, regardless of which distro you are on.
It’s not advised to install SteamOS on non-Steam Deck devices. Pop!_OS, Nobara, and Garuda, I believe, come configured for gaming out of the box and are pretty up to date.
Opening the connections is one thing, but resends and stream ordering can also cause issues, since they might delay the latest information from reaching the user-space application even when its packet has actually arrived, just because some earlier packet hasn’t (head-of-line blocking). There can also be issues with implementations waiting for enough data to accumulate before sending a packet (Nagle’s algorithm).
If your connection is stable, the latency will be more or less the same, but TCP will consume more bandwidth because of acknowledgement packets, making it harder to keep the connection stable.
On an unstable connection, TCP latency will skyrocket as it resends packets, while UDP will just drop them unless the game engine has its own way of resending. Most engines do, but only for data marked as “important”. For example, using an item is important, but the position of your character probably isn’t, because it’ll be updated on the next tick anyway.
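If you want to see the effect yourself, one way is to simulate an unstable link with netem from iproute2 and watch TCP latency climb while UDP just drops packets. A sketch, assuming an eth0 interface and root access:

```sh
# Add 50 ms of delay and 10% packet loss to all outgoing traffic on eth0
sudo tc qdisc add dev eth0 root netem delay 50ms loss 10%
# ... run a ping test or a game against this impaired link ...
# Remove the simulated impairment again
sudo tc qdisc del dev eth0 root
```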
Please please please avoid an LTS (long term support) distro for desktop use, especially if gaming. This includes Linux Mint, which is based on Ubuntu LTS. The packages are painfully old and cause problems playing the latest games, particularly where they don’t work out of the box with Lutris/Proton/Wine etc. LTS is great for servers and workstations, but not for end users.
Experienced users can make any distro work, including LTS, but it’s extra overhead for new folks.
It’s not about laptop vs workstation. It’s about how new the hardware is compared to the Linux kernel shipping with the LTS distribution. If your hardware is older than the kernel, you will most likely not have any problems. For example, say you use Ubuntu 22.04 LTS, which ships with kernel 5.15, released in late 2021: if your computer is made of parts released in 2020, no problem.
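If you’re unsure how your kernel and hardware line up, you can check what you’re running, and on Ubuntu LTS the HWE (hardware enablement) stack is the usual way to get a newer kernel for newer hardware. A sketch for 22.04:

```sh
# Show the kernel version currently running
uname -r
# Opt into the newer HWE kernel while staying on the LTS release
sudo apt install --install-recommends linux-generic-hwe-22.04
```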
OP, this is terrible advice. Do not follow! Unless you run into a problem with Ubuntu LTS (or a distro based on it) that you and the community cannot solve, and it’s due to the LTS, stick with LTS. The vast majority of users are on LTS, which is why there are tested solutions for most common problems you might run into. LTS releases last for many years, so once you solve a problem, it’s likely you won’t have to solve it again for a long time, unless you decide to make your life more interesting by upgrading or changing the OS. Non-LTS releases last for 9 months or so, then you’re thrust onto a new set of changes and bugs that may or may not hit you, with far fewer comrades to test them and find solutions for. As a new user, if you’re going with Ubuntu or an Ubuntu-based OS, stick to LTS. You’ll have enough hurdles to cross getting acquainted with the OS itself.
My LUG tries to help people trying to run Lutris on old LTS versions. For one example, we ended up having to tell them to use some .deb for Lutris, since Ubuntu shipped a broken Lutris version for a year or whatever. Users should always be able to depend on their package manager alone instead of sideloading content. We’ve even had instances of their version of wget or curl being incompatible with Winetricks and GitLab’s and GitHub’s APIs.
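For anyone stuck doing the same: apt can install a local .deb directly and still resolve its dependencies, unlike plain dpkg -i (the filename is illustrative):

```sh
# The ./ prefix tells apt this is a local file, not a repo package name
sudo apt install ./lutris.deb
```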
Being able to always rely on the package manager alone, in other words on the built-in repos alone, has never been achievable on a stable system. You have to throw stability out the window to allow for that to happen. There are huge downsides to that, especially for new users who have no clue how to isolate and work around defects. That’s why sideloading content via third-party repos or individual debs has always been part of the reality of Debian-based OSes. As a result, most open source communities and proprietary vendors provide one or the other.
Don’t you have any other requirements for a POS (point-of-sale) system, like connecting a card reader, special software, etc.? Those will probably be your main problem, not the OS.
Not sure if it’s still the same as it was back in my day, but KDE’s “release candidate” nomenclature was always a bit of a misnomer. You’d never see RC1 actually released as final. What it really means is that the alpha (“feature refinement”) and beta (“bug fixing”) phases are over, and it’s the final testing phase for showstoppers. However, the definition of “showstopper” always seemed to be very wide. Thus, a lot of bugs still get reported and fixed during this phase, and RC really means “beta, but towards the end of the pipeline”.
Which is in contrast to the Linux kernel, where an RC can be declared ship-ready and simply gets renamed.
Admittedly there’s a fairly large difference in impact between kernel-level bugs and, say, a bug in Okular…
The nomenclature is actually correct here, and a lot of other software uses it, at least from everything I’ve seen. Release candidate means it’s stable and (usually) feature complete, but could have bugs and needs testing before they launch it.
It’s still a misuse of the word. If your software needs testing, it’s not a candidate you would release, unless you’re a multi-billion-dollar gaming company or Cisco.
Wiktionary: (software engineering) A version of a program that is nearly ready for release but may still have a few bugs; the status between beta version and release version.
Oxford: a version of a product, especially computer software, that is fully developed and nearly ready to be made available to the public. It comes after the beta version.
I couldn’t find more definitions from “big” dictionaries, but literally no definition I’ve seen agrees with you. I wonder why that is.
KDE has a predefined schedule for “release candidates”, which includes RC2 later this month. So “RC1” is clearly not going to be the final version. See: community.kde.org/…/February_2024_MegaRelease
This is at least somewhat common. In fact, it’s the same way the Linux kernel development cycle works: there are usually 7 release candidates, released on a weekly basis between the merge window and the final release. See: www.kernel.org/category/releases.html
In the world of proprietary corporate software, I more often see release candidates presented as potentially final; i.e. literal candidates for release. The idea of scheduling multiple RCs in advance doesn’t make sense in that context, since each one is intended to be the last (with fingers crossed).
It’s kind of splitting hairs, honestly, and I suspect this distinction has more to do with the transparency of open-source projects than anything else. Apple, for example, may indeed have a schedule for multiple macOS RCs right from the start and simply choose not to share that information. They present every “release candidate” as potentially the final version (and indeed, the final version will be the same build as the final RC), but in practice there’s always more than one. Also, Apple is hardly an ideal example to follow, since they’ve apparently never even heard of semantic versioning. Major compatibility-breaking changes are often introduced in minor point releases. It’s infuriating. But I digress.