You seriously need to stop what you’re doing. Log in with ssh only. If you need multiple terminals use multiple ssh sessions, or screen/tmux. If you need to search something do it on your desktop system.
The server should not have Firefox installed, or KDE, or anything related to desktop apps. There’s no point and nothing good can come of it.
I’m on 545 and I have no issues. But I’m also not using Ubuntu…
Maybe it’s the distro that’s the problem not the backup. I mean I’d rather have a distro with smooth updates than one that makes me need snapshots.
What’s even the point of such a distro? OK, so I restore the previous working state, then what, I can’t do updates anymore? Living in fear of official updates sounds terrible.
I think it’s just dumb not to make a backup before large updates. There are so many things happening and a lot can go wrong, especially if you have added 3rd-party repos and have customized core parts of the system, not just through config files but, say, by switching to the latest KDE Plasma instead of the one your distro ships.
And what happens if you have to restore the backup?
You can look up the solution to your problem in peace while everything is still working. If it was a server, all the services are still available; if it was your desktop, you don’t have to hunt for the solution from a live Linux USB that has none of your configs.
You make a good point. Ubuntu gives you so many ways to shoot yourself in the foot that it’s pretty much a given that it will get messed up eventually. So you have to use snapshots.
On Arch based distros the updates just work. I’ve never had to snapshot anything. But having just one single community repo (AUR) contributes to that a great deal.
Of course it can. And your PC can also fall off the desk. I’m saying a snapshot tool is a really poor solution for distro problems, it’s really a bandaid for a problem that shouldn’t exist.
Use a decent distro, take proper backups, and use snapshots for what they were intended — recovering small mistakes with personal files, not for system maintenance.
That’s the point – your claim about deb-based distros is just anecdotal.
The example here is Nvidia updates borking the system. I’ve had that happen to me numerous times on Arch-based systems.
I’ve run deb-based distros on some boxes over years of updates with no issues. On the other hand I’ve had updates cause breakages on Arch-based systems pretty much every time I’ve run them.
Which is to say anecdotes are useless, updates can break systems, and being able to immediately roll back to a working system and deal with updating later is a simple, nice thing to have with no downsides.
So you’re implying there exists a distro that is perfect and never breaks anything? Sounds like denial. Having Timeshift in place is risk management and says nothing about the distro; all of them are imperfect and may break eventually. Kinda confused how one can run Linux and be unaware of how complex and fallible ALL software is.
No, I’m implying that official updates breaking the system is insane and should not be accepted as the norm to the point you casually need to use snapshots just to keep your system working.
That’s not an accurate portrayal of anything though. You’re implying risk management means you’re accepting that the system sucks and the only way to keep it running is to have a backup system.
That’s not what anyone is doing. They are acknowledging the flawed nature of software and humanity in general and guarding against the consequences of this. It’s being smart. Name one distro that has never broken with updates. You can’t, because one obviously couldn’t exist.
A major distro breaking your system is the equivalent of a flower pot falling on your head walking down the street. Does it happen? Sure. Do I want to spend my life wearing a motorcycle helmet and looking up all the time? No.
I’m not saying distros can’t crap on you, I’m saying stop tolerating it. Raise a stink or switch distro. There are distros out there where you don’t have to live in constant fear and where nothing happens if you don’t have snapshots.
I do have backups, precisely because shit happens. But there’s a difference between a helmet and health insurance.
Okay what distros are we talking about? What’s an example of an unreliable one and what’s a reliable one?
I think it’s much more complicated than this. Honestly there’s a reason it took decades for Linux distros in general to get as stable as they are today. It’s really really hard to build an operating system.
I think a better analogy for installing updates on linux would be riding a motorcycle. Accidents happen all the time. They’re bound to, be prepared. Just because you can ride for 30 years unscathed doesn’t mean you can take that for granted.
.deb distros are doomed from the start if you need to use third-party repos (which you do, for a desktop system) because they always end up undermining the stability of the packages from the core repos in the long run.
Try an Arch-based distro, they’re super stable because their compatibility model is more robust, and there are options depending on how much hand-holding you want — ranging from vanilla Arch to Endeavour to Manjaro.
Ya know, this is super interesting that you mention Arch. The only person I’ve known IRL who uses and loves Arch champions it hardcore, but with the caveat that you have to be okay with things breaking due to the rolling release model. Due to his guidance I have avoided Arch specifically. I’ve been running Ubuntu-based distros for a couple of years and have only had updates break things maybe twice, and neither time required a wipe or anything.
Packages can break, not the distro. Packages can break at any time because there’s thousands of them and nobody can check all of them thoroughly. A rolling distro gets you both the bugs and the fixes faster.
Non-apt and non-rolling will limit your options considerably.
I might be confused. I thought that the distro itself was made up of packages and that’s what all updates did: update various packages bundled with the distro (plus any you installed yourself)
My 2 cents: openSUSE Leap is stable as hell. Lots of QA happens through their automated testing, and it keeps in lockstep with SUSE releases (now sharing the same binaries). Every distro upgrade has gone flawlessly, but when I have had the urge to tinker and do stupid things in config files, the built-in btrfs snapshots have been a godsend.
What you wrote and what I’m reading about openSUSE Leap sound excellent. My only concern is support. Just about all the apps and tools I use are well supported on Ubuntu-based systems, but I don’t recall seeing nearly as much support for RPM-based OSes, including openSUSE. What’s your experience? If you go to install just any random software you just heard about, how well does it typically go getting it installed and working?
I haven’t had issues finding packages. They often aren’t on the dev websites that only host a .deb, but the main repos contain the general tools. If you need something more “fringe”, from an independent dev or very new, then software.opensuse.org lets you turn on a search of community or experimental packages (which would be like the Arch AUR and contains a lot of RPMs), so you can install directly from the website; it adds the necessary community repos during the install. If you don’t want to pollute your repo list, they typically also offer the option to grab the binary directly. Or there is OPI, a package you install that lets you search for community packages locally. For commercial apps like TeamViewer, YubiKey, Webex etc. the RPMs were all available to support corporate SUSE users. If you still can’t find an RPM, you run the Alien tool, which converts a .deb to the RPM installation format. The only issue I had once was a community package whose dependencies were not in the user’s repo, so I had to find and install the dependencies first. That felt like the 1990s.
No problem. Happy New Year. Also, it is .org; I mistakenly put .com for the openSUSE software site (I will edit the post). I should also mention that if you don’t see an officially supported RPM on the software site, it does not mean it isn’t there; it now seems to mean it is in the main repos. I think years back they would all show, but maybe due to the shared binaries with SUSE they don’t bother listing them on software.opensuse.org. So best to check locally with a YaST or zypper search of your repos first.
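To make that concrete, this is roughly the order I go in; the package names below are just placeholders and the commands are from memory:

zypper search some-tool                  # check the official repos first (no root needed for searching)
sudo zypper install some-tool
sudo zypper install opi                  # OPI searches community/OBS builds from the terminal
opi some-other-tool                      # pick a community package interactively; it adds the repo for you
sudo zypper install alien                # last resort: convert a .deb you found somewhere
sudo alien --to-rpm some-downloaded.deb
sudo rpm -i some-downloaded-*.rpm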
Realistically it’s not super dangerous, and no you probably don’t have a virus just from browsing a few tech support sites, but you do eliminate your last line of defense when you run software as root. As you know, root can read/change/delete anything on your system whereas regular users are generally restricted to their own data. So if there is a security problem in the software, it’s made worse by the fact that you were running it as root.
You are right though that Firefox does still have its own protections - it’s probably one of the most hardened pieces of software on your computer exactly because it connects to the whole wide internet - and those protections are not negated by running as root. However if those protections fail, the attacker has the keys to the kingdom rather than just a sizable chunk of the kingdom.
To put that in perspective though, if there is a Firefox exploit and a hacker gets access to your regular user account, that’s already pretty bad in itself. Even if you run as a regular unprivileged user they would still have access to things like: your personal documents, your ssh keys, your Firefox profile with your browsing history, your session cookies and your saved passwords, your e-mail, your PayPal account, your banking information, …
As root, they could obviously do even more damage, like reading all users’ data, installing a keylogger or screengrabber, or installing a rootkit to make themselves undetectable, but for most regular users most of the damage is already done when their own account is compromised.
So when these discussions come up, I always have to think about this XKCD comic:
They might have access to all that data once, but a lot of the paths towards making that a persistent threat that survives the next reboot, and most of the paths towards installing something even deeper in the system that might even survive a reinstall, do require root.
Is it dumb that I only back up my docs and anything else I think is important? I can rebuild fairly quickly if something were to happen. I ask since I know that people back up a variety of their directories.
Timeshift with BTRFS kicks ass. I have mine set for daily snapshots, retained for a week. Only the changes between snapshots are stored, so the extra disk usage is minimal, and easily justified by the peace of mind in case of fuck-ups or broken updates.
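For anyone who prefers poking at it from the terminal, Timeshift has a CLI too; the snapshot name below is just an example, and the schedule itself I configure in the Timeshift settings:

sudo timeshift --create --comments "before big update" --tags D   # take an on-demand snapshot, tagged as daily
sudo timeshift --list                                             # show existing snapshots
sudo timeshift --restore --snapshot '2024-01-01_00-00-01'         # roll back to a specific one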
Only the changes between snapshots are stored, so the extra disk usage is minimal
If you want to use a similar approach for backups, Borgbackup is a pretty nice piece of software. I have two backups of my most important files: One on my NAS at home, and one “in the cloud” on a storage VPS (ends up way cheaper than using S3, B2 or anything like that).
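In case it helps, the Borg workflow is roughly this; the repo path, the directories and the retention numbers are just placeholders:

borg init --encryption=repokey-blake2 ssh://user@nas.example/srv/backups/desktop   # one-time repo setup
borg create --stats --compression zstd \
    ssh://user@nas.example/srv/backups/desktop::'{hostname}-{now}' \
    ~/Documents ~/Pictures ~/.config                                               # dated, deduplicated archive
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://user@nas.example/srv/backups/desktop                                     # thin out old archives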
I’ve got one with HostHatch that’s 10TB of space for $10/month. It was an offer they had during Black Friday 2020. They had a similar offer during Black Friday 2023 but I think it was around $20/month, paid yearly.
I live in the San Francisco Bay Area and my storage server is in Los Angeles, which is around 10ms round-trip ping time from my home internet connection.
Hetzner is good too. They have relatively cheap “storage boxes” that are a shared environment rather than a VPS. You don’t get proper SSH access, but they do support FTPS, SFTP, Samba, Borgbackup, Restic, rclone, rsync and WebDAV. www.hetzner.com/storage/storage-box
Borgbackup encrypts the backups, so the host won’t be able to actually view your backups.
I remember having this realisation about Mir, but only after we collectively ran it off a cliff. The main reason everyone piled on Mir was that it was thought that Canonical would be priming the Linux desktop for fragmentation with two competing standards.
But in fact, Mir was providing a solution to the fragmentation Wayland was bringing. Now we have 3, 4, 5 Mir-s, all with slight incompatibilities. Want a feature? Better hope all of them decide to implement the extension after someone proposes it. We know how well that worked in the past.
This is also ironic because the detractors of Xorg constantly talked about the issues with Xorg extensions and how many of them there were. But I never really had to look up which extensions Xorg supported, while I have had to do that with Wayland compositors.
The main reason that I piled on Canonical was that they kept on spreading FUD about Wayland to try to promote / justify Mir rather than discussing in good faith.
As a general best practice, you should never directly login as root on any server, and those servers should be configured to not allow remote connections as the root user. You should always log in as a non-root user and only run commands as root using sudo or similar features offered by your desktop environment. You should be wary of even having an interactive root shell open; usually I would only do so on a VM console, when first setting up a system or debugging it.
By doing this, you not only guard against other people compromising your system, but also against accidentally running commands as root that could damage your system. It’s always best to only run things with the minimum permissions they need, and then only grant them additional permissions on an as-needed basis.
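Concretely, the server side of that is a couple of lines in /etc/ssh/sshd_config (standard OpenSSH options; the exact policy is obviously up to you):

PermitRootLogin no          # refuse direct root logins (or "prohibit-password" if you want key-only root for emergencies)
PasswordAuthentication no   # keys only for everyone else
# then reload: sudo systemctl reload sshd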
you should never directly login as root on any server, and those servers should be configured to not allow remote connections as the root user. You should always log in as a non-root user and only run commands as root using sudo or similar features
That is commonly recommended, but I have yet to see a good solution for sudo authentication in this case that works as well as public-key-only SSH logins with a passphrase-encrypted key and ssh-agent on the client side. With sudo you constantly have to use passwords anyway, which is pretty much unworkable if you work on dozens of servers.
Of course I can store dozens of passwords but if every task that requires a single command to be run automatically on e.g. “every server with pending updates” requires entering each of those passwords that is unworkable.
Then enlighten me, what is the easy way to do tasks that do require some amount of manual oversight? Tasks that can be completely automated are easy of course but with our relatively heterogeneous servers automation a la “do it on this one test system and if it works there run it completely automatically on the 100 identical production systems” is not available to us.
Not my circus, not my monkeys. You’re doing things the hard way and now it’s somehow my responsibility to fix your mess? I’m SUPER glad I don’t work with you.
None of us can tell you the right approach for your specific system and use-case. People are simply pointing out that what you stated you’re doing is insecure and not recommended.
And nobody in any of these threads has ever pointed out why it is considered to be insecure. The most probable origin for that idea I have come upon so far is that it is a left-over from pre-SSH days when people thought using the root password with su at something other than the start of their connection would make it harder to sniff. Literally nobody lists even one good reason why sudo should be more secure than direct root login with SSH public keys and password login disabled for full root access (as in not limited to just one or two commands).
It’s not about someone sniffing your passwords, it’s about reducing your attack surface. If you use su then the entire session has root privileges and any piece of software you run could do system level damage if it has a bug. Using sudo limits the privilege escalation to just one command.
You seem to be looking at the issue in black and white. Any reduction in root access is beneficial. Using sudo with a password cache lasting an hour is still preferable to signing in as root. As many people have said, it’s about minimizing attack surface.
Running fewer commands as root is more significant risk reduction than having an extra user. I won’t be replying further since I’m simply repeating what others have already said. If you sincerely don’t understand, I suggest doing additional research on your own.
You people don’t seem to grasp that I am already not running any commands on the server as root that do not require root. This is all about administrative tasks.
The vast majority of commands when debugging actual issues on the system or performing administrative tasks do require root. Out of the others some give you incomplete results when called as a regular user and 90% of the rest shouldn’t be run on the server in the first place if you can avoid it but directly on your client computer (e.g. looking up documentation).
FreeIPA and your password is the same on every machine: yours. (Make it good)
Service accounts should have either no sudo password or use something like Ansible with vault and keep every one of them scrambled and rotate regularly (which you can do with Ansible itself)
Yes, even if you have 2 VMs and a docker container, this is worth it.
FreeIPA and your password is the same on every machine: yours.
Any network based system like that sucks when you need to fix a system that has some severe issue (network, DNS, disk,…) which is exactly when root access is the most important.
It is. That is the whole point. Why would I make extra unprivileged accounts that can run any command I need to run as root, at any time, without a password, just to avoid it? That just increases the attack surface via any other vector by giving an attacker more accounts to choose from to break into.
I am asking why it is considered to be more secure for the use case where you aren’t limiting access to a few commands because it is access meant for all kinds of admin tasks, not just one specific one (as in access for the people who need to fix unexpected problems among other things).
Realistically, there is only a trivial pure security difference between logging in directly as root and sudo set up to allow unrestricted NOPASSWD access for specific users: the attacker might not know the correct username when trying to brute force. That doesn't matter in the slightest unless you have password auth enabled with trivial passwords.
But there is a difference in the ability to audit what happened after the fact if you have any kind of service storing system logs remotely or in a tamper-proof way. If there's more than one admin user on a service, that is very very important. Knowing where the compromise happened is absolutely essential to make things safe.
If there's only ever going to be one administrative user (personal machine), logging in directly as root for manual administrative tasks is fine: you already know who the user is. If there's any chance there might be more administrative users later (small but growing business), you should consider doing it right from the start.
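For reference, the sudo setup I'm comparing against is something like the following; the group name is just an example, and you'd edit it with visudo:

# /etc/sudoers.d/admins   (check syntax with: visudo -cf /etc/sudoers.d/admins)
%admins ALL=(ALL:ALL) NOPASSWD: ALL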
I was aware of the login UID for auditd logging as a difference, but as you say, that is only really helpful if the logs are shipped somewhere else or tampering with them is otherwise prevented for admin users. It is not quite the same, but the auth.log entries sshd produces on login also contain the fingerprint of the key used to log in these days, so on a more limited scale you can at least tell who logged in when from those (or whose key, but that is no different from whose account with the sudo approach).
you should consider doing it right from the start.
Do you have any advice on how to use the sudo approach without a huge slowdown in every automated process that requires ssh user@host calls for manual password entry? I am aware of Ansible, but I am honestly very sceptical of Python tools since they tend to break easily and often in my past experience, and I would like to avoid relying on additional ones for critical tasks. Plus Ansible in particular seemed to be very late with their Python 3 transition; as I recall, I uninstalled it when it was one of the last tools left that did not work with Python 3.
Well, my recommendations for anything semi-automated would be Ansible and Fabric/Invoke. Fabric is also a Python tool (though it's only used on the controlling side, unlike Ansible), so if that's a no-go, I'm afraid I don't have much to offer.
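For what it's worth, for the "one command on many hosts with a human watching" case, an ad-hoc Ansible run is about as light as it gets; the inventory file and group name here are made up:

ansible -i inventory.ini webservers -m command -a "uptime" --become --ask-become-pass   # asks for the sudo password once
ansible -i inventory.ini webservers -m apt -a "upgrade=dist update_cache=yes" --become --ask-become-pass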
On a typical home user desktop linux setup, there’s virtually no difference between your regular user and root.
Access to your data, emails, passwords, installing software (in /home), access to LAN and so on are already possible without root permissions, so there really is not a whole lot that an attacker cannot do even without root.
And then, if you use sudo or su (or whatever) to switch to root with a password, escalating to root privileges is basically trivial for an attacker. An attacker can divert your PATH to compromised binaries. They could just replace “sudo” with their own little script that steals your password.
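You can see for yourself how little stands in the way; nothing malicious here, just checking what a planted “sudo” would rely on:

type -a sudo    # lists every "sudo" found along PATH, in order; the first hit is what actually runs
echo "$PATH"    # any user-writable directory listed before /usr/bin is a potential hijack point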
Yes. Running anything as root is potentially dangerous. And a browser is a complex and big piece of software with many security issues that can be (potentially) triggered remotely. So it’s bad because of two reasons.
Btw a desktop environment also is a complex and big piece of software with potential issues. Running the whole desktop as root is another thing you wouldn’t do for extra security.
The proper way is to just create a user account and run the desktop and browser as a user. Open a terminal and ‘su’ or ‘sudo’ to limit root rights to the operations that actually need those permissions.
Just running everything as root certainly works. But you do away with all the extra layers of security and end up with something as secure as MS-DOS or a Windows in the 90s or early 2000s.
I don’t want to step on your workflow too much since it somehow seems to work for you but your main issue stems from the fact that you clearly don’t work with your server as if it actually was a server.
You shouldn’t really have a desktop interface running there in the first place (let alone as root and then using it as a regular user). You should ask yourself what it actually solves for you and be open to trying different (and more standard) solutions to what you’re trying to achieve.
It’d probably consist of less clicking and using the CLI a bit more, but for stuff like file management you can still easily use mc (Midnight Commander): https://linux.die.net/man/1/mc
If you need terminal sessions that keep scrollback and don’t stop when you disconnect you should learn to use tmux or screen or something like that. But then again if you’re running actual software in there then you should probably use a service (daemon) for that.
As for whether it’s a security issue, yeah it most definitely is. Just like it’s a security issue to run literally any networked application as root. Security isn’t black and white and there are trade offs to be made but most people wouldn’t consider what you’re doing a reasonable tradeoff.
I had actually moved from a fully CLI server to one with a full desktop when I upgraded from a single board computer to x86. The issue is that it’s not just a NAS, but I regularly use it to offload long operations (moving, copying, or compressing files, mostly) so I don’t need to use my PC for those. To do that I just remote into it and type in the command, then I can turn my PC off or do whatever without affecting the operation. So in a way it’s a second PC that also happens to be a server for my other machines.
I use screen occasionally, and I used to use it a lot more when the server was CLI-only, but I find it really unwieldy because of how it manages multiple active terminals: you have to type in the ID of each screen to go back into it, and it refuses to scroll even when run in a terminal emulator that supports scrolling, where it just cycles between recent commands when you move the scroll wheel.
Not trying to make excuses, just trying to explain my reasoning. I know it’s bad practice and none of these are things I’d do if I was managing an actual production server, but since it’s only accessible from my LAN I tend to be a lot more lax with it.
I’m wondering if I could benefit from some kind of virtualized setup that separates the server stuff while still letting me remote into a desktop on the same machine, or if I can get away with just remoting into a non-root user. I’ve never used a hypervisor and have no idea how, though, so I’m not sure how well that would go; the well-known open source ones like Xen seem really technical and feel like something not meant to be used outside an actual data centre.
KVM is awesome. It is the core of Proxmox, which is my preferred way to manage VMs and LXC containers now. I used to run Debian+KVM+virt-manager or Cockpit, but Proxmox does all the noodling setup for me and then it just works.
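If anyone wants to try the plain Debian+KVM+virt-manager route before committing to Proxmox, it's roughly this; the package names are the Debian ones, from memory:

sudo apt install qemu-system libvirt-daemon-system virt-manager   # hypervisor, libvirt daemon, management GUI
sudo usermod -aG libvirt "$USER"                                   # log out and back in so the group membership applies
virt-manager                                                       # connects to the local libvirt by default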
Sorry, this is very much a PEBKAC issue. This is an excerpt from my tmux config:
# Start windows and panes at 1, not 0
set -g base-index 1
setw -g pane-base-index 1

# Use Alt-arrow keys without prefix key to switch panes
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D

# Shift arrow to switch windows
bind -n S-Left previous-window
bind -n S-Right next-window

# No delay for escape key press
set -sg escape-time 0

# Increase scrollback buffer size from 2000 to 50000 lines
set -g history-limit 50000

# Increase tmux messages display duration from 750ms to 4s
set -g display-time 4000

# Bind pane creation keys to reuse current directory
bind % split-window -h -c "#{pane_current_path}"
bind '"' split-window -v -c "#{pane_current_path}"
I hope the comments are self explanatory.
Scrolling works with Ctrl+b Page Up/Down. There are other shortcuts, but this is probably the most obvious. q to quit scrolling.
Ctrl+b d to detach from a session. tmux a to attach. As always, many options are available to have many named sessions running simultaneously, but that is for a later time.
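Actually, the named-session basics fit in a few lines anyway (the session name is whatever you like):

tmux new -s fileops        # start a new named session
# Ctrl+b d to detach, log out, come back later...
tmux ls                    # list running sessions
tmux attach -t fileops     # reattach to that specific one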
I see. In that case you should really try tmux; I didn’t vibe with screen either but I find tmux quite usable.
For the most part I just open several terminal windows/tabs on my local machine and remote with each one to the server, and I use tmux only when I explicitly need to keep something running. Since that’s usually just one thing I can use like two tmux commands and don’t need anything else.
Oh, and for stuff like copying and such I’d use rsync instead of primitive cp, so that if it gets interrupted I only copy what’s needed (rough sketch below).
I wouldn’t bother with virtualization and such; you’d only complicate things for yourself. Try to keep it simple but do it properly: learn some command line basics and you’ll see that in a year it’ll become second nature.
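On the rsync point above, I mean something like this; the paths are placeholders:

rsync -avh --progress --partial /mnt/data/source/ /mnt/backup/dest/
# -a keeps permissions/times, -v -h give readable output, --partial keeps partly-transferred files so a rerun can reuse them instead of starting over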
Yes, very. This is not specific to Firefox, but anything running as root gets access to everything. Only one thing has to go wrong for the whole system to get busted.
usually logged into KDE Plasma as root.
Please don’t do this! DEs are not tested to be run as root! Millions of lines of code are written expecting not to have access to anything they shouldn’t, and may well rely on failing quietly when they try to touch something that’s supposed to be off-limits. Same thing applies to Firefox, really.
Could you elaborate on this? I’m genuinely surprised because Fedora just asks you if you want to have the option to log into root from KDE during installation, so I always just assumed that it’s intended to be used that way.
I don’t know the specifics on Fedora’s installer, but normally that question is about disabling root account, not logging into a DE.
Not sure what else to elaborate here. There’s a bunch of code that is not tested to be run as root. A whole class of exploits becomes unavailable, if you stick to an unprivileged user.
Say there’s some exploit that allows some component of KDE to be used to read a file. If it’s running under an unprivileged user - it sucks. Everything in user’s homedir becomes fair game. But if it runs as root - it’s simply game over. Everything on the system is accessible. All config, all bad config, files of all applications (databases come to mind). Everything.
This is also something I’m thinking about: all the hard drives mounted on the server are accessible to the only regular user, as that is what my other computers use to access them. I’m the only one with access to the server, so everything is accessible under one user. The data on those drives is what I want to protect, so wouldn’t a vulnerability in either KDE or Firefox be just as dangerous to those files even running as the regular user?
Also, since my PC has those drives mounted through the server and accessible to the regular user I use on my PC, wouldn’t a vulnerability in a program running as that regular user also compromise those files, even if the server only hosted the files and did absolutely nothing else? Going back to the Firefox thing: if I had a sandbox breach on my PC, it would still be able to read the files on the server, right? Wouldn’t that be just as bad as if I had been running Firefox as root on the server itself? It really feels like the only way to 100% keep those files safe is to never access them from an internet-connected computer, and everything else just falls short and is just as bad as the worst-case scenario, though maybe I’m missing something. Am I just being paranoid about the non-root scenarios?
You never log in as root. On every new VM/LXC I create, I delete the root password after setting it up so that my regular user can use sudo (rough sketch below).
Run as your regular user and sudo the commands that need privileges.
Also if these are servers, run them headless. There’s no need for a GUI or a browser (use wget or curl for downloads, use your local browser for browsing)
You keep your files safe by having backups. Multiple copies. Set up the backups to get copied to another server or other system your regular user doesn’t have access to. Ideally, you follow the 3-2-1 backup standard if the files are important: that is 3 copies, on 2 different media, and 1 offsite. There are many ways of accomplishing that and it’s up to you to figure out what works best.
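On the "never log in as root" point, the setup on a fresh box is roughly this, run as root during initial setup; the username is just an example, and on RPM-based distros the admin group is usually wheel instead of sudo:

adduser admin                 # the day-to-day account
usermod -aG sudo admin        # allow it to escalate via sudo
passwd -l root                # lock root's password (or passwd -d to delete it, as above); sudo keeps working
# and in /etc/ssh/sshd_config: PermitRootLogin no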
That’s a link to the most recent release of Firefox and the security vulnerabilities that were fixed.
You’ll notice the first one listed says, “This issue could allow an attacker to perform remote code execution and sandbox escape.”
So if you visited a site that exploited that bug, it escaped the sandbox and ran whatever code it wanted to. Since you were running as root it could do anything it wants. Your device is now the property of someone else. Potentially all your data has been stolen. You probably didn’t even notice.
Now. Realistically. You probably didn’t get exploited. Your device may not be vulnerable to that particular bug. But new bugs are found, and fixed, and created every day. Can you be sure you weren’t exploited?
Let’s look at it a different way. Think of it like driving a car with no seatbelt or airbags. As long as you don’t crash, you’re fine. The car still works fine without seatbelts and you have more freedom to move your arms around.
Let’s look at it a different way. Do you ever lock the door to your home/apartment? Heck do you even close the door? Why not leave it wide open?
At the end of the day security is about layers and the trade offs for convenience. You can run KDE as root, and you can run Firefox as root. You’ll probably be fine. It’s like driving without a seatbelt or leaving your front door wide open, but you can do it. If you do drive with a seatbelt and at least close your front door, you can probably run KDE and Firefox as a regular user.