linux

arjache, in Is it actually dangerous to run Firefox as root?

As a general best practice, you should never directly log in as root on any server, and those servers should be configured not to allow remote connections as the root user. You should always log in as a non-root user and run commands as root only through sudo or similar features offered by your desktop environment. You should be wary of even having an interactive root shell open; usually I would only do so on a VM console, when first setting up a system or debugging it.

By doing this, you not only guard against other people compromising your system, but also against accidentally running commands as root that could damage your system. It’s always best to only run things with the minimum permissions they need, and then only grant them additional permissions on an as-needed basis.
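
If you want to put this into practice, here is a minimal sketch of the relevant OpenSSH server settings. The directives are standard, but check your distro's defaults and the exact service name:

# /etc/ssh/sshd_config -- disallow direct root logins and password auth
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
# apply with: sudo systemctl reload sshd  (the unit is called "ssh" on Debian/Ubuntu)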

taladar,

you should never directly log in as root on any server, and those servers should be configured not to allow remote connections as the root user. You should always log in as a non-root user and run commands as root only through sudo or similar features

That is commonly recommended, but I have yet to see a good solution for sudo authentication in this case that works as well as public-key-only SSH logins with a passphrase-encrypted key and ssh-agent on the client side. With sudo you constantly have to type passwords anyway, which is pretty much unworkable if you work on dozens of servers.
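
For context, the client-side workflow I mean is roughly this (standard OpenSSH tooling; the host and user names are placeholders):

ssh-keygen -t ed25519        # choose a strong passphrase when prompted
eval "$(ssh-agent -s)"       # start the agent for this shell
ssh-add ~/.ssh/id_ed25519    # asks for the passphrase once
ssh admin@server01 uptime    # later logins reuse the cached key, no prompts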

ElderWendigo,

Who’s letting you run dozens of servers if managing dozens of passwords is “pretty much unworkable” for you?

taladar,

Of course I can store dozens of passwords, but if every task that requires a single command to be run automatically on e.g. “every server with pending updates” requires entering each of those passwords, that is unworkable.

ElderWendigo,

Sounds like you’re doing things the hard way, making you believe that you are being forced into choosing between security and convenience.

taladar,

Then enlighten me: what is the easy way to do tasks that require some amount of manual oversight? Tasks that can be completely automated are easy, of course, but with our relatively heterogeneous servers, automation a la “do it on this one test system, and if it works there, run it completely automatically on the 100 identical production systems” is not available to us.

ElderWendigo,

Not my circus, not my monkeys. You’re doing things the hard way and now it’s somehow my responsibility to fix your mess? I’m SUPER glad I don’t work with you.

taladar,

You are the one who insists that there is a better way to do things but refuses to say what that better way is.

bluespin,

None of us can tell you the right approach for your specific system and use case. People are simply pointing out that what you stated you’re doing is insecure and not recommended.

taladar,

And nobody in any of these threads has pointed out why it is considered insecure. The most probable origin for the idea that I have come across so far is that it is a left-over from pre-SSH days, when people thought that typing the root password into su partway through a session, rather than at the start of the connection, would make it harder to sniff. Literally nobody lists even one good reason why sudo should be more secure than direct root login with SSH public keys and password login disabled, for full root access (as in not limited to just one or two commands).

MyNameIsRichard,

It’s not about someone sniffing your passwords, it’s about reducing your attack surface. If you use su then the entire session has root privileges, and any piece of software you run could do system-level damage if it has a bug. Using sudo limits the privilege escalation to just one command.

taladar,

That is only really true if you use sudo with a zero-second password caching timeout.
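
For reference, that timeout is the sudoers timestamp_timeout: zero forces a password on every invocation, while most distros default to caching it for several minutes.

# /etc/sudoers -- always edit with visudo
Defaults timestamp_timeout=0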

bluespin,

You seem to be looking at the issue in black and white. Any reduction in root access is beneficial. Using sudo with a password cache lasting an hour is still preferable to signing in as root. As many people have said, it’s about minimizing attack surface.

taladar,

Any reduction in root access is beneficial.

Such as having fewer users who are allowed to use sudo to become root and whose compromise can thus lead to a root compromise?

bluespin,

Running fewer commands as root is a more significant risk reduction than having one extra user. I won’t be replying further since I’m simply repeating what others have already said. If you sincerely don’t understand, I suggest doing additional research on your own.

taladar,

You people don’t seem to grasp that I am already not running any commands as root on the server that do not require root. This is all about administrative tasks.

MyNameIsRichard,

Not true. While you won’t always have to enter your password, not every command you run will have elevated rights.

taladar,

The vast majority of commands, when debugging actual issues on the system or performing administrative tasks, do require root. Of the others, some give you incomplete results when called as a regular user, and 90% of the rest shouldn’t be run on the server in the first place if you can avoid it, but directly on your client computer (e.g. looking up documentation).

4am,

FreeIPA, and your password is the same on every machine: yours. (Make it good.)

Service accounts should either have no sudo password or use something like Ansible with Vault, keeping every one of them scrambled and rotated regularly (which you can do with Ansible itself).

Yes, even if you have 2 VMs and a Docker container, this is worth it.
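
A rough sketch of the rotation idea as an Ansible playbook; the module and filter are standard, but the host group and account name are made up for illustration:

# rotate-service-passwords.yml -- scramble a service account's password on every run
- hosts: all
  become: true
  tasks:
    - name: Set the service account password to a fresh random value
      ansible.builtin.user:
        name: svc-backup    # hypothetical account
        password: "{{ lookup('password', '/dev/null length=32') | password_hash('sha512') }}"

Here nothing is stored anywhere, which is fine as long as nobody ever needs to log in to that account interactively.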

taladar,

FreeIPA, and your password is the same on every machine: yours.

Any network-based system like that sucks when you need to fix a system that has some severe issue (network, DNS, disk, …), which is exactly when root access is most important.

exu,

You could implement NOPASSWD for the specific commands you need for a service user. Still better than just using root.

taladar,

In what way would that be more secure? That would just allow anyone with access to the regular account to run those commands at any time.

4am,

I thought your passwordless passphrase passkey ssh connection that is superior to passwords was secure. Is it not?

taladar,

It is. That is the whole point. Why would I create extra unprivileged accounts that can run any command I need to run as root, at any time, without a password, just to avoid logging in as root? That just increases the attack surface via every other vector by giving an attacker more accounts to choose from to break into.

exu,

You can allow only specific commands and options. See my config for example.

gitea.exu.li/exu/configs/src/commit/…/wheel

You can limit this to a specific user as well.

Anyone who hacks into the account can now only run those tightly defined commands and no others. Compared to root, who can run anything.
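
As a hypothetical example of such a tight allow-list in sudoers syntax:

# /etc/sudoers.d/deploy -- edit with: visudo -f /etc/sudoers.d/deploy
# 'deploy' may restart and inspect one service, nothing else
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service, /usr/bin/journalctl -u myapp.service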

taladar,

I am well aware that sudo can limit which commands you run, but so can a forced command in authorized_keys if you really need that functionality.
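
(Strictly speaking it’s the command= option in authorized_keys; ForceCommand is the sshd_config equivalent.) A sketch with a hypothetical update script:

# ~/.ssh/authorized_keys -- this key can only ever run the pinned command
command="/usr/local/bin/run-updates",no-port-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... update-key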

bluespin,

Are you asking why it’s more secure to surface a few commands without password rather than all of them…?

taladar,

I am asking why it is considered more secure for the use case where you aren’t limiting access to a few commands, because the access is meant for all kinds of admin tasks, not just one specific one (as in, access for the people who need to fix unexpected problems, among other things).

chameleon,

Realistically, there is only a trivial pure security difference between logging in directly to root vs sudo set up to allow unrestricted NOPASS access to specific users: the attacker might not know the correct username when trying to brute force. That doesn't matter in the slightest unless you have password auth enabled with trivial passwords.

But there is a difference in the ability to audit what happened after the fact if you have any kind of service storing system logs remotely or in a tamper-proof way. If there's more than one admin user on a service, that is very very important. Knowing where the compromise happened is absolutely essential to make things safe.

If there's only ever going to be one administrative user (personal machine), logging in directly as root for manual administrative tasks is fine: you already know who the user is. If there's any chance there might be more administrative users later (small but growing business), you should consider doing it right from the start.

taladar,

I was aware of the login UID for auditd logging as a difference, but as you say, that is only really helpful if the logs are shipped somewhere else or tampering with them is otherwise prevented for admin users. It is not quite the same, but the auth.log entries sshd produces on login also contain the key fingerprint used to log in these days, so on a more limited scale you can at least tell who logged in and when (or whose key, but that is no different from whose account in the sudo approach).

you should consider doing it right from the start.

Do you have any advice on how to use the sudo approach without a huge slowdown from manual password entry in every automated process that makes ssh user@host calls? I am aware of Ansible, but I am honestly very sceptical of Python tools, since in my past experience they tend to break easily and often, and I would like to avoid relying on additional ones for critical tasks. Plus, Ansible in particular seemed to be very late with their Python 3 transition; as I recall, I uninstalled it when it was one of the last tools left that did not work with Python 3.

chameleon, (edited )

Well, my recommendations for anything semi-automated would be Ansible and Fabric/Invoke. Fabric is also a Python tool (though it's only used on the controlling side, unlike Ansible), so if that's a no-go, I'm afraid I don't have much to offer.

UnityDevice, in Thoughts on this?

I remember having this realisation about Mir, but only after we collectively ran it off a cliff. The main reason everyone piled on Mir was the belief that Canonical would be priming the Linux desktop for fragmentation with two competing standards.

But in fact, Mir was providing a solution to the fragmentation Wayland was bringing. Now we have 3, 4, 5 Mirs, all with slight incompatibilities. Want a feature? Better hope all of them decide to implement the extension after someone proposes it. We know how well that worked in the past.

This is also ironic because the detractors of Xorg constantly talked about the issues with Xorg extensions and how many of them there were. But I never really had to look up which extensions Xorg supported, while I have had to do that with Wayland compositors.

Jordan_U,

The main reason that I piled on Canonical was that they kept on spreading FUD about Wayland to try to promote / justify Mir rather than discussing in good faith.

The worst part about Mir was always Canonical.

liforra, in Friendly reminder

When you realize you don’t even use any backup software rn

TimeSquirrel, (edited )

I just copy my whole-ass home dir to an external drive every so often like a caveman.

reallyzen, (edited )

Which you should absolutely do even if you snapshot the eff out of your system. What about hardware failure, eh? Can’t snap that nvidia shit can you?

antsu, in Friendly reminder

Timeshift with BTRFS kicks ass. I have mine set for daily snapshots, retained for a week. Only the changes between snapshots are stored, so the extra disk usage is minimal and easily justified by the peace of mind in case of fuck-ups or broken updates.
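
For anyone curious, the CLI side looks roughly like this once Timeshift is configured in BTRFS mode (flags as listed by timeshift --help):

sudo timeshift --create --comments "pre-update" --tags D    # manual snapshot
sudo timeshift --list                                       # see what you can roll back to
sudo timeshift --restore                                    # interactive rollback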

dan, (edited )

Only the changes between snapshots are stored, so the extra disk usage is minimal

If you want to use a similar approach for backups, Borgbackup is a pretty nice piece of software. I have two backups of my most important files: One on my NAS at home, and one “in the cloud” on a storage VPS (ends up way cheaper than using S3, B2 or anything like that).
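
A minimal Borg sketch, with the repo location and backed-up paths as placeholders:

borg init --encryption=repokey-blake2 backup@nas:repo       # one-time setup, encrypted at rest
borg create --stats backup@nas:repo::'{hostname}-{now}' ~/Documents ~/Projects
borg prune --keep-daily=7 --keep-weekly=4 backup@nas:repo   # thin out old archives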

NanoooK,

Which storage VPS have you selected? I’m looking at Hetzner atm.

dan, (edited )

I’ve got one with HostHatch that’s 10TB of space for $10/month. It was an offer they had during Black Friday 2020. They had a similar offer during Black Friday 2023 but I think it was around $20/month, paid yearly.

I live in the San Francisco Bay Area and my storage server is in Los Angeles, which is around 10ms round-trip ping time from my home internet connection.

Hetzner is good too. They have relatively cheap “storage boxes” that are a shared environment rather than a VPS. You don’t get proper SSH access, but they do support FTPS, SFTP, Samba, Borgbackup, Restic, rclone, rsync and WebDAV. www.hetzner.com/storage/storage-box

Borgbackup encrypts the backups, so the host won’t be able to actually view your backups.

jelloeater85,

Just make sure you back up your stuff to a second disk❤️

Extrasvhx9he, in Friendly reminder

Is it dumb that I only back up my docs and anything else I think is important? I can rebuild fairly quickly if something happens. I ask since I know that people back up a variety of their directories.

CalicoJack,

If you’re comfortable, you’re fine. Anything more would just be to speed up the rebuild, so it’s less important if you don’t mind taking the time.

SpaceCadet, (edited ) in Is it actually dangerous to run Firefox as root?

Realistically it’s not super dangerous, and no you probably don’t have a virus just from browsing a few tech support sites, but you do eliminate your last line of defense when you run software as root. As you know, root can read/change/delete anything on your system whereas regular users are generally restricted to their own data. So if there is a security problem in the software, it’s made worse by the fact that you were running it as root.

You are right though that Firefox does still have its own protections - it’s probably one of the most hardened pieces of software on your computer exactly because it connects to the whole wide internet - and those protections are not negated by running as root. However if those protections fail, the attacker has the keys to the kingdom rather than just a sizable chunk of the kingdom.

To put that in perspective though, if there is a Firefox exploit and a hacker gets access to your regular user account, that’s already pretty bad in itself. Even if you run as a regular unprivileged user they would still have access to things like: your personal documents, your ssh keys, your Firefox profile with your browsing history, your session cookies and your saved passwords, your e-mail, your paypal account, your banking information, …

As root, they could obviously do even more damage, like reading all users’ data, installing a keylogger or screengrabber, or installing a rootkit to make themselves undetectable, but for most regular users most of the damage is already done when their own account is compromised.

So when these discussions come up, I always have to think about this XKCD comic:

https://imgs.xkcd.com/comics/authorization_2x.png

taladar,

They might have access to all that data once, but a lot of the paths towards making that a persistent threat that survives the next reboot, and most of the paths towards installing something even deeper in the system that might even survive a reinstall, do require root.

SpaceCadet,

That’s what I said yes.

lemmyvore, (edited ) in Friendly reminder

I’m on 545 and I have no issues. But I’m also not using Ubuntu…

Maybe it’s the distro that’s the problem, not the backup. I mean, I’d rather have a distro with smooth updates than one that makes me need snapshots.

What’s even the point with such a distro? OK, so I restore the previous working state, then what? I can’t do updates anymore? Living in fear of official updates sounds terrible.

ReversalHatchery, (edited )

I think it’s just dumb not to make a backup before large updates. There is so much happening and a lot can go wrong, especially if you have added third-party repos and customized core parts of the system, not just through config files but by, say, switching to the latest KDE Plasma from the one your distro ships.

And what happens if you have to restore the backup? You can look up the solution to your problem in peace while everything is still working. If it was a server, all the services are still available; if it was your desktop, you don’t have to use a live Linux USB without all your configs to find the solution.

lemmyvore,

You make a good point. Ubuntu gives you so many ways to shoot yourself in the foot that it’s pretty much a given that it will get messed up eventually. So you have to use snapshots.

On Arch-based distros the updates just work. I’ve never had to snapshot anything. But having just one single community repo (the AUR) contributes to that a great deal.

MiddledAgedGuy,

I don’t like Ubuntu, and I do like Arch’s philosophy. But I think Arch is the more prone to breakage of the two.

null,

Except that time a year and a bit ago when an Arch update broke GRUB for a huge number of users.

No distro is immune to breakage.

lemmyvore,

And a filesystem snapshotting tool would help you restore the bootloader how?

null,

So you agree, Arch can also break by updating.

lemmyvore, (edited )

Of course it can. And your PC can also fall off the desk. I’m saying a snapshot tool is a really poor solution for distro problems; it’s a bandaid for a problem that shouldn’t exist.

Use a decent distro, take proper backups, and use snapshots for what they were intended — recovering from small mistakes with personal files, not for system maintenance.

null,

Use a decent distro

That’s the point – your claim about deb-based distros is just anecdotal.

The example here is Nvidia updates borking the system. I’ve had that happen to me numerous times on Arch-based systems.

I’ve run deb-based distros on some boxes over years of updates with no issues. On the other hand, I’ve had updates cause breakages on Arch-based systems pretty much every time I’ve run them.

Which is to say anecdotes are useless, updates can break systems, and being able to immediately roll back to a working system and deal with updating later is a simple, nice thing to have with no downsides.

TrickDacy,

So you’re implying there exists a distro that is perfect and never breaks anything? Sounds like denial. Having Timeshift in place is risk management and says nothing about the distro; all of them are imperfect and may break eventually. I’m kind of confused how one can run Linux and be unaware of how complex and fallible ALL software is.

lemmyvore,

No, I’m implying that official updates breaking the system is insane and should not be accepted as the norm, to the point where you casually need snapshots just to keep your system working.

TrickDacy,

That’s not an accurate portrayal of anything, though. You’re implying risk management means accepting that the system sucks and that the only way to keep it running is to have a backup system.

That’s not what anyone is doing. They are acknowledging the flawed nature of software and humanity in general and guarding against the consequences. It’s being smart. Name one distro that has never broken with updates. You can’t, because one obviously couldn’t exist.

lemmyvore,

A major distro breaking your system is the equivalent of a flower pot falling on your head walking down the street. Does it happen? Sure. Do I want to spend my life wearing a motorcycle helmet and looking up all the time? No.

I’m not saying distros can’t crap on you, I’m saying stop tolerating it. Raise a stink or switch distro. There are distros out there where you don’t have to live in constant fear and where nothing happens if you don’t have snapshots.

I do have backups, precisely because shit happens. But there’s a difference between a helmet and health insurance.

TrickDacy,

Okay what distros are we talking about? What’s an example of an unreliable one and what’s a reliable one?

I think it’s much more complicated than that. Honestly, there’s a reason it took decades for Linux distros in general to get as stable as they are today. It’s really, really hard to build an operating system.

I think a better analogy for installing updates on Linux would be riding a motorcycle. Accidents happen all the time. They’re bound to; be prepared. Just because you can ride for 30 years unscathed doesn’t mean you can take that for granted.

lemmyvore,

.deb distros are doomed from the start if you need to use third-party repos (which you do, for a desktop system), because those always end up undermining the stability of the packages from the core repos in the long run.

Try an Arch-based distro, they’re super stable because their compatibility model is more robust, and there are options depending on how much hand-holding you want — ranging from vanilla Arch to Endeavour to Manjaro.

TrickDacy,

Ya know, it’s super interesting you mention Arch. The only person I’ve known IRL who uses and loves Arch champions it hardcore, but with the caveat that you have to be okay with things breaking due to the rolling release model. Due to his guidance I have avoided Arch specifically. I’ve been running Ubuntu-based distros for a couple of years and have only had updates break things maybe twice, neither of which required a wipe or anything.

lemmyvore,

Packages can break, not the distro. Packages can break at any time because there are thousands of them and nobody can check all of them thoroughly. A rolling distro gets you both the bugs and the fixes faster.

Non-apt and non-rolling will limit your options considerably.

TrickDacy,

I might be confused. I thought the distro itself was made up of packages and that’s what all updates did: update the various packages bundled with the distro (plus any you installed yourself).

BCsven,

My 2 cents: openSUSE Leap is stable as hell. Lots of QA happens with their automated testing, and they keep in lockstep with SUSE releases (now sharing the same binaries). Every distro upgrade has gone flawlessly, but when I have had the urge to tinker and do stupid things in config files, the built-in btrfs snapshots have been a godsend.

TrickDacy,

What you wrote and what I’m reading about openSUSE Leap sound excellent. My only concern is support. Just about all the apps and tools I use are well supported on Ubuntu-based systems, but I don’t recall seeing nearly as much support for RPM-based OSes, including openSUSE. What’s your experience? If you go to install some random software you’ve just heard about, how well does it typically go getting it installed and working?

BCsven, (edited )

I haven’t had issues finding packages. Often they may not be on the dev websites that host a deb package, but the main repos contain the general tools. If you need something more “fringe”, from an independent dev, or brand new, then software.opensuse.org lets you turn on a search for community or experimental packages (which is like the Arch AUR and contains a lot of RPMs), so you can install directly from the website; it will add the necessary community repos during the install. If you don’t want to pollute your repo list, they typically offer the option to grab the binary directly. There is also a package called OPI you can install that lets you search community packages locally. For commercial apps like TeamViewer, YubiKey, WebEx etc., the RPMs were all available to support corporate SUSE users. If you still can’t find an RPM, you can run the Alien tool, which converts a deb to the RPM installation format. The only issue I had once was a community package with dependencies that were not in the user’s repos, so I had to find the dependencies and install those first. That felt like the 1990s.
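
To make that concrete, the usual flow looks something like this (the package names are just examples):

zypper se keepassxc               # search the configured repos
sudo zypper in keepassxc          # install from them
sudo zypper in opi && opi vlc     # OPI searches community/OBS packages
sudo alien --to-rpm some-app.deb  # last resort: convert a .deb to an .rpm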

TrickDacy,

This is a great response and I appreciate it a lot. I’m saving it for later!

BCsven, (edited )

No problem. Happy New Year. Also, it is .org; I mistakenly put .com for the openSUSE software site. (I will edit the post.) I should also mention that if you don’t see an officially supported RPM on the software site, it does not mean it isn’t there; it now seems to mean it is in the main repos. I think years back they would all show, but maybe due to the shared binaries with SUSE they don’t bother listing them on the software.opensuse.org site. So it’s best to check locally with a YaST or zypper search of your repos first.

TrickDacy,

Thanks again! And a happy new year to you too

knexcar,

Windows?

TrickDacy,

Lol wtf. Windows has broken in catastrophic ways after updates on several occasions for me. Some of the many reasons I avoid it. But nice GoTcHa tho

NOOBMASTER,

Nvidia drivers have often done weird shit to my system, so I would probably also lean towards AMD in the future.

lemmyvore,

Yeah that’s why I use Nvidia, because AMD drivers are super stable.

orcrist, in Linux tablet?

If the main thing is PDF reading, buy an Android tablet. Of course research which one.

lemmyvore, in Is it actually dangerous to run Firefox as root?

You seriously need to stop what you’re doing. Log in with ssh only. If you need multiple terminals use multiple ssh sessions, or screen/tmux. If you need to search something do it on your desktop system.

The server should not have Firefox installed, or KDE, or anything related to desktop apps. There’s no point and nothing good can come of it.

Dirk,

This. Thread should have officially ended here.

Falcon,

Yeah, there’s a bit of scope to review what OP is doing here.

Why is there even a DE on the server if it’s headless? If it’s not headless, why not write up some Dockerfiles and manage it from a non-root account?

Are the services running as root?

Also, is it being accessed via WireGuard/OpenVPN? It would be unwise to run a server as root with an open port.

desmosthenes,

came to say this

bismuthbob, (edited ) in Linux tablet?

At that price range, be sure to carefully check compatibility for your favorite distribution and for any hardware that you intend to use.

For what it’s worth, I have an old HP Stream 7 that currently runs Debian Bookworm. I think it cost about $100 new. I can use it as a PDF reader and to sync files, but there are plenty of tradeoffs due to the 1 GB of RAM, the weak Atom processor, the small amount of built-in storage, the mediocre touchscreen, and the general poor quality of touchscreen interfaces among low-resource window managers. Neither camera works, and several distributions can’t support the built-in audio. Screen rotation is a crapshoot. Forget about low-power standby. Some of these issues are unique to my tablet, but some of them are problems people tend to run into when they try to shoehorn Linux onto a tablet that wasn’t built with Linux in mind. Something like a PineTab would be a better bet.

I saw another person suggest an aftermarket Surface. If you go this route, carefully research the exact model number to verify that the hardware supports linux and that there is a clean way of installing your preferred distribution.

Another thing worth mentioning: installing Linux can be a special kind of hell. Most distributions don’t have a touchscreen-friendly installer. For my cheap tablet, this meant cobbling together a flash drive, a powered USB hub, a USB keyboard, a USB Ethernet adapter, and a USB-OTG cable for the single micro-USB port on the tablet. Then I had to race the decade-old tablet battery to the finish line during the install process. Plus something about a 32-bit EFI bootloader combined with a 64-bit processor.

ThankYouVeryMuch, (edited ) in Is it actually dangerous to run Firefox as root?

I just wanted to add that you can run GUI applications through ssh with X11 forwarding, using the -X or -Y option (untrusted/trusted, though at least in Debian back in the day they behaved the same). So if you want a GUI file manager, you run it in the ssh session on the remote server, with sudo if you need it but NEVER logged in as root, and the window will pop up on your local DE instead of you having to run an entire desktop on each server.
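
A quick sketch; the file manager is just an example, and the server needs X11Forwarding yes in its sshd_config plus xauth installed:

ssh -X admin@server    # or -Y for trusted forwarding
pcmanfm &              # runs on the server, the window appears on your local desktop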

SuperSpruce, in Linux reaches new high 3.82%

Windows 11 has irked me on my main laptop. I still use it due to various applications (not just games) that require Windows, but the slowness of the OS and the tracking drive me away from it. I installed Linux on another drive on the laptop.

Additionally, I purchased a desktop from my friend and completely wiped Windows from it to install Linux (KDE neon). I realized there is nothing I’d want on that desktop that Linux cannot run, aside possibly from a couple of games that my more powerful laptop can handle anyway.

N0x0n, in Linux Containers From Scratch in C

Looks like a cool project! But ugh, I’m already struggling with native Docker sometimes, and having a hard time with bash scripts…

Makes me feel sad… learning something like C must make you feel like a superhero! But it’s totally out of my league!

If only I had read the books my father bought back then… rather than playing Counter-Strike 1.5 xD

thejml, in What's the best way to have a .bashrc that I can use throughout systems?

As long as you’re not going to store sensitive data in there, I’ve just been using GitHub. I’ve got a private repository set up with my configs (.bashrc as well as WM configs and other dotfiles), and I just commit/push it and have an update script pull it down elsewhere. Then it’s also version controlled.
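
The update script can be as small as this sketch (the repo path and file list are placeholders):

#!/usr/bin/env bash
# update-dotfiles.sh -- pull the repo and symlink the tracked dotfiles into $HOME
set -euo pipefail
cd ~/dotfiles && git pull
for f in .bashrc .vimrc .gitconfig; do
    ln -sf ~/dotfiles/"$f" ~/"$f"
done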

gary_host_laptop,

I think I maybe phrased it horribly. My question was more like: what do I need to learn in order to modify the .bashrc myself instead of using a programme? Does that make sense?

deanso,

Use ChatGPT. Take the first line of your .bashrc file and ask it to explain it. Then the second line, etc. It won’t always be perfect, but for a .bashrc it shouldn’t be a problem, and you can learn a lot from it.

ShaunaTheDead, (edited )

You need to learn bash scripting. Also, there are a few extra files that .bashrc can source, which is helpful for compartmentalizing the custom things you do so that they're easier to undo if you screw something up. To do that, just add this to the bottom of your .bashrc:

# pull in per-machine customizations if the file exists
if [ -f ~/.bash_custom ]; then
    . ~/.bash_custom
fi

What that will do is check whether the .bash_custom file exists in your home directory and, if so, run it and apply anything in there. You can call the file whatever you like, but bash does have some defaults that it will check for and run without editing .bashrc at all. It's kind of hard to find a list of the files it automatically checks for, but I know that .bash_aliases is one of them, and I think it checks .bash_commands as well, though I'm not entirely sure. Either way, you can force it to check your custom one by using the code above.

Then you can create the file and add any custom things in there that you like. For example, I like to update through the terminal frequently, but running sudo apt update && sudo apt upgrade && sudo apt autoremove && flatpak update was a bit tedious, and I wanted less feedback, so I made a custom alias for my personal use.

alias update='echo "Updating packages..."; sudo apt update -y &> /dev/null; echo "Packages updated.";
echo "Upgrading packages..."; sudo apt upgrade -y &> /dev/null; echo "Packages upgraded.";
echo "Cleaning up packages..."; sudo apt autoremove -y &> /dev/null; echo "Packages cleaned up.";
echo "Updating flatpaks..."; flatpak update -y &> /dev/null; echo "Flatpaks updated."'

This hides most of the text from updating and just gives me feedback on what it's currently doing, since I don't really care to know all of the details. So now I just run update in the terminal, type in my password, and it updates and upgrades everything in a human-readable way.

There's a lot that can be done with bash scripting: editing files, iterating over files and directories, setting environment variables. It's basically a full programming language, so the limits are mostly your imagination.
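
For instance, a trivial sketch of that kind of scripting:

# set an environment variable and loop over some files
export BACKUP_DIR="$HOME/backups"
mkdir -p "$BACKUP_DIR"
for f in "$HOME"/Documents/*.txt; do
    cp -- "$f" "$BACKUP_DIR/"    # copy each text file
done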

tiny, in What's the best way to have a .bashrc that I can use throughout systems?

If you are using NixOS, try Home Manager. Otherwise, Ansible is nice for plopping templates and files into your home directory.
