A crash in the window manager takes down all running applications: yes, because on Wayland the compositor IS the display server, the window manager, and the compositor all at the same time.
FreeBSD’s Wayland support is through a Linux compatibility library. The major Wayland implementations are Linux only and there’s no way around it other than implementing Linux libraries like FreeBSD did.
That's something entirely different from the protocol being biased towards Linux. It's like complaining that TCP/IP is biased towards Linux because the Linux kernel's networking module can't be used in BSD kernels.
One or two years ago Mozilla hired the main developer of K-9 Mail. The dev previously raised enough donations to work on K-9 Mail for a while and modernize it after not having an official release for years.
Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always (and even being extremely specialized, I would still enable a firewall. :-P)
Operating systems nowadays are extremely complex, with a lot of moving parts. There are security-relevant bugs in your network stack and in all the applications you are running. There might be open ports on your computer you did not even think about, and unless you are monitoring your local open ports 24/7, you don't know what is open.
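If you want a quick reality check, here is a minimal sketch of how to list what is actually listening, assuming the ss tool from iproute2 is installed (it is on most modern distros):

sudo ss -tulnp    # list listening TCP/UDP sockets, with numeric ports and the owning processes

It is common to find a service or two in that list that you had forgotten was even running.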
First of all, you can never trust other devices on a network. There is no way to know if they are compromised. You can also never trust the software running on your own computer: just look at the CVEs; even without malicious intent, your software is not secure and never will be.
As soon as you are part of a network your computer is exposed, no matter whether it is a desktop or a laptop, and there are a lot of drive-by attacks targeting Linux happening 24/7.
Your need for a firewall mostly depends on your threat model, but simply not accepting incoming requests is trivial and increases your security by a great margin. Further, setting a rate limit on failed connection attempts for open ports like SSH, if you use such services, is another big improvement for security. (… and of course disabling password authentication, YADA YADA)
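As a minimal sketch of what that can look like in practice, assuming ufw (Ubuntu's default firewall frontend; other distros may ship firewalld or plain nftables instead):

sudo ufw default deny incoming    # drop all unsolicited incoming connections
sudo ufw default allow outgoing   # still allow your own outgoing traffic
sudo ufw limit ssh                # allow SSH, but rate-limit repeated connection attempts from the same address
sudo ufw enable                   # activate the ruleset

The "limit" rule is ufw's built-in way of slowing down brute-force attempts without blocking legitimate logins.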
That said, obviously security has to be seen in context; the only snake oil that I know of is virus scanners, but that's another story.
People who claim you don't need a firewall make at least one of the following wrong assumptions:
Your software is secure - demonstrably wrong, as proven by CVEs
You know exactly what is running/reachable on your computer - this might be correct for very small, specialized embedded systems, but even for those one must still assume security-relevant bugs in software/hardware/drivers
Security is a game, and no usable system can be absolutely secure. With firewalls you can (hopefully) raise the cost of a successful attack, and that is important.
You may also want to check up on regulations and laws of your country.
In Belgium, for instance, I am responsible for any and all attacks originating from my PC. If you were hacked and said hackers used your computer to stage an attack, the burden of proof is upon you. So instead of hiring very expensive people to trace the real source of an attack originating from your own PC, enabling a firewall just makes sense, besides making it harder on hackers…
That’s a strange law. That’s like saying one should be held responsible for a thief stealing their car and then running over someone with it (well, perhaps an argument could be made for that, but I would disagree with it).
Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always
In what capacity, though? I see potential issues with both server firewalls and client firewalls. Unless one wants their devices to be offline, there will always be at least one open port (for example, inbound on a server, and outbound on a client) which can be used as an attack vector.
Weird, the video announcement states the software is FOSS. I think that's regarding his YouTube content and not this stuff. The video also states the GitHub repo is a work in progress.
I stopped caring. When my GF bought a laptop I just installed Linux there and she has no issues using it. Linux is where I always wanted it to be. Now when I see someone using Windows I just think “you poor soul” to myself and move on.
This is one of the hardest walls for people to jump over mentally, from scavenging the internet for binaries to using a package manager.
I think ideally one should understand what they're doing; if you did, you would realise it's not hard, just different from what you're used to. Usually you install things using a graphical package manager, of which there are a lot. Since I don't know which one you are using, nor have I used any of them in a long while, I'll use the terminal as an example (the same reason the site uses terminal commands), but all of this is almost assuredly possible via GUI.
To install things you usually run sudo apt install <package>. This is a huge advantage on Linux: it works similarly to your phone in that everything gets updated together, but it also installs dependencies separately, which means that instead of having 10 copies of the same library for 10 programs that use it (like on Windows), you get a single one, which is part of the reason binaries are smaller on Linux.
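A minimal sketch of the usual cycle (the package name here is just an example; anything in the Ubuntu repos works the same way):

sudo apt update         # refresh the list of available packages from all configured repos
sudo apt install vlc    # example package: installs it plus any libraries it depends on
sudo apt upgrade        # later, upgrade every installed package in one go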
The problem with this approach is that some programs are NOT listed there; the only programs there are the ones the maintainers of your distro (Ubuntu in this case) can review and approve. There are a lot of different solutions for this:
The first and most obvious for Windows users is to download the .deb from the website and just run it like you would a binary on Windows, i.e. by double-clicking it, or from the terminal with sudo dpkg -i <file.deb>. This works, but you lose the advantages of a package installed via your package manager, i.e. you would get the same experience as on Windows, so it's not ideal.
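For example (the filename here is hypothetical; use whatever .deb you actually downloaded):

sudo dpkg -i ./some-program.deb    # install the downloaded package directly
sudo apt -f install                # pull in any missing dependencies dpkg could not resolve on its own

On newer versions of apt you can also just run sudo apt install ./some-program.deb, which handles the dependencies in one step.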
The second way is the one they're describing: essentially you're adding a new repository to the package manager, one that the people who wrote the program maintain (instead of the Ubuntu folks). This is a two-step process.

First, sudo curl -fsSLo /usr/share/keyrings/mullvad-keyring.asc https://repository.mullvad.net/deb/mullvad-keyring.asc downloads the file https://repository.mullvad.net/deb/mullvad-keyring.asc and puts it in /usr/share/keyrings/mullvad-keyring.asc. This is needed because repositories are not trusted by default; anything else would be a security nightmare. You can do this via GUI if your problem is with the terminal: just download the file and copy it to that location. It's simply harder to explain than giving you a command.

Then you add the repository to the repository list. The command is echo "deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=$( dpkg --print-architecture )] https://repository.mullvad.net/deb/stable $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/mullvad.list. That command has a lot to unpack; in essence it writes a line like deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=amd64] https://repository.mullvad.net/deb/stable focal main into the file /etc/apt/sources.list.d/mullvad.list. Because the person who wrote this doesn't know your architecture (e.g. amd64) or your release (e.g. focal), they wrote a command that gets that information from your system; you can instead write the file yourself if you know those. A sketch of the end result is below. Then install via the package manager as normal.
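Put together, and assuming an amd64 machine on Ubuntu 20.04 "focal" (substitute your own architecture and release, or let the command substitutions fill them in), the resulting file and the follow-up install look roughly like this:

# contents of /etc/apt/sources.list.d/mullvad.list after the echo | tee command:
deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=amd64] https://repository.mullvad.net/deb/stable focal main

# then refresh the package lists and install from the new repo
# (mullvad-vpn is what the repo is expected to provide; check Mullvad's own instructions if unsure):
sudo apt update
sudo apt install mullvad-vpn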
There's a third, more recent way: installing via snap/flatpak, which is similar to installing via the package manager, except you don't add a new repo for every program.
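For instance, with flatpak (the Firefox app ID is just an illustration; not every program, Mullvad included, is necessarily available this way):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo   # one-time setup of the Flathub store
flatpak install flathub org.mozilla.firefox                                               # install an app by its ID
flatpak update                                                                            # update all flatpak apps at once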
There's a fourth way, which is doing it manually; usually when you compile things yourself, you install them manually.
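The classic sequence for an autotools-based source tarball looks roughly like this (the tarball name is hypothetical, and projects using CMake, Meson, etc. differ, so always read the project's own README):

tar xf some-program-1.0.tar.gz    # unpack the source
cd some-program-1.0
./configure                       # check for dependencies and generate the Makefile
make                              # compile
sudo make install                 # copy the built files into the system, outside the package manager's view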
I know it’s a lot to take in, but I’m of the opinion that if you understand what’s happening it makes things easier.
Just as an addendum to your answer: in the command writing to mullvad.list, the | sudo tee /etc/apt/sources.list.d/mullvad.list part uses two helpful Linux utilities. The first is the |, which is called a pipe and connects the text output of one program to the text input of another. Here the pipe connects the output of echo, which simply prints a string (in this case composed of the outputs of several other commands), to the program tee. tee, which is given admin privileges by sudo, takes an input stream and writes it to a file while also passing it through to its own output. In this case the file is mullvad.list, and since no other destination was provided, the copy goes to stdout, the output of the terminal running the command.
EDIT:
In the interest of further completeness: another utility used in those commands is the command substitution operator of the shell. When the shell is interpreting the text, $(some command) gets substituted with the text output by the command in the parentheses. It is another common way of connecting commands on the shell to allow for more flexible and powerful ones.
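A harmless way to see all three pieces (pipe, tee, command substitution) at work, writing a throwaway file in /tmp rather than anything under /etc:

# $(dpkg --print-architecture) is replaced by e.g. "amd64" before echo runs;
# the pipe then feeds echo's output into tee, which writes it to the file AND echoes it back to the terminal
echo "this machine is $(dpkg --print-architecture)" | tee /tmp/arch-demo.txt
# in the real command, the sudo in front of tee is what allows it to write into /etc/apt/sources.list.d/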
The problem is that for most users, once their setup is completed they won't need to play with it for a while, so after that, any time they need to install something new through the terminal means losing time finding the instructions again.
Nothing is learned; to the eyes of a casual user it's just meaningless entries getting copy/pasted, and it's information getting repeated again and again, just with slightly different entries for each program. Meanwhile, "how to install a program on Windows" would basically require one page on the whole internet to cover 99% of situations: "Download the install file, double-click it, follow the on-screen instructions to automatically install the program".
The problem is that for most users, once their setup is completed they won't need to play with it for a while, so after that, any time they need to install something new through the terminal means losing time finding the instructions again.
Which is why it's better to understand what you're doing than to blindly copy-paste. You won't need to remember these commands: whatever you want to install, if it's not in the repos you'll have to google it, the same as if you were on Windows.
Also, on Windows the steps are: download the install file, double-click it, follow the on-screen instructions to automatically install the program, then every week or so go back to the website, check if a new version was released, if so download it and install it again. If the configuration would be destroyed by doing this, make a backup first; if the new version is not backwards compatible for configuration, move the existing configuration aside so that you get the new defaults after the install and can apply your changes afterwards.
That's closer to the truth, and you need to do that weekly for every one of the dozens or hundreds of programs a person has; no wonder people don't update their programs on Windows and become susceptible to lots of exploits over time.
Also, read my option 1, which is what most websites offer you first, i.e. download a .deb and run it, which is the equivalent of what you do on Windows, with all of its downsides. For example, if you go to the website that OP posted and click on downloads, you can select Windows, Mac or Linux, download an installer that way, and be done with it. But only Linux has a better option that takes a couple more steps now and saves you lots of time in the future.
then every week or so go back to the website, check if a new version was released, if so download it and install it again
Don’t know what kind of program you’re running but… No.
Also if you want adoption you need to make your product easy to use and not ask them to become experts at how things work. Do you think all bike riders know how to adjust their derailer or even care to know? No, because people have other things they care about. The same goes for computers: if Linux requires users to understand how to do things manually in the terminal then the “year of the Linux computer” will never happen.
the “year of the Linux computer” will never happen.
It won't, and that's fine. People who don't want to learn anything about computers use iOS and Android now. And that's fine. I never want Linux distros to become like that.
Don’t know what kind of program you’re running but… No.
My Linux gets updates every week, which means that if I were on Windows, to keep everything equally up to date I would need to check every website to see which app released an update this week. Maybe this week Firefox had a new release, maybe next week it's the Mullvad VPN client, and the week after it's the NVIDIA driver, but if I hadn't checked all of them I wouldn't know which ones have a new release.
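On Linux that whole weekly ritual collapses into one step, assuming everything was installed through apt (Flatpak and Snap apps have their own equivalent single commands):

sudo apt update && sudo apt upgrade    # fetch the latest package lists and upgrade Firefox, the VPN client, drivers, everything in one go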
Also if you want adoption you need to make your product easy to use and not ask them to become experts at how things work
Agreed, but you should also have options so that power users can take advantage of them.
Do you think all bike riders know how to adjust their derailer or even care to know?
Do you think that bikes should not have gears since most people don't know how they work? No, because even if you don't understand the mechanics you can understand them in general terms, and even if you don't understand gears you can just not use them, same as with a package manager. Options are always better.
if Linux requires users to understand how to do things manually in the terminal then the “year of the Linux computer” will never happen.
It doesn't require it; you're ignoring the fact that OP could have just clicked download and downloaded an installer the same as he would on Windows. But if you can use the terminal and understand package managers, you can use Linux in a way that is impossible on Windows; if you can't, you can still use Linux the same way you would use Windows, with all of the same downsides.