I’m on 545 and I have no issues. But I’m also not using Ubuntu…
Maybe it’s the distro that’s the problem, not the backup. I mean, I’d rather have a distro with smooth updates than one that makes me need snapshots.
What’s even the point of such a distro? OK, so I restore the previous working state, then what? I can’t do updates anymore? Living in fear of official updates sounds terrible.
I think it’s just dumb not to make a backup before large updates. There are so many things happening and a lot can go wrong, especially if you have added third-party repos and customized core parts of the system, not just through config files but, say, by switching to the latest KDE Plasma instead of the one your distro ships.
And what happens if you have to restore the backup?
You can look up the solution to your problem in peace while everything is still working. If it’s a server, all the services are still available; if it’s your desktop, you don’t have to hunt for the solution from a live Linux USB that has none of your configs.
You make a good point. Ubuntu gives you so many ways to shoot yourself in the foot that it’s pretty much a given that it will get messed up eventually. So you have to use snapshots.
On Arch-based distros the updates just work. I’ve never had to snapshot anything. But having just one single community repo (the AUR) contributes to that a great deal.
Of course it can. And your PC can also fall off the desk. I’m saying a snapshot tool is a really poor solution for distro problems; it’s really a band-aid for a problem that shouldn’t exist.
Use a decent distro, take proper backups, and use snapshots for what they were intended — recovering small mistakes with personal files, not for system maintenance.
That’s the point – your claim about deb-based distros is just anecdotal.
The example here is Nvidia updates borking the system. I’ve had that happen to me numerous times on Arch-based systems.
I’ve run deb-based distros on some boxes over years of updates with no issues. On the other hand I’ve had updates cause breakages on Arch-based systems pretty much every time I’ve run them.
Which is to say anecdotes are useless, updates can break systems, and being able to immediately roll back to a working system and deal with updating later is a simple, nice thing to have with no downsides.
So you’re implying there exists a distro that is perfect and never breaks anything? Sounds like denial. Having Timeshift in place is risk management and says nothing about the distro; they are all imperfect, by the way, and may break eventually. Kinda confused how one can run Linux and be unaware of how complex and fallible ALL software is.
No, I’m implying that official updates breaking the system is insane and should not be accepted as the norm to the point you casually need to use snapshots just to keep your system working.
That’s not an accurate portrayal of anything though. You’re implying risk management means you’re accepting that the system sucks and the only way to keep it running is to have a backup system.
That’s not what anyone is doing. They are acknowledging the flawed nature of software and humanity in general and guarding against the consequences of that. It’s being smart. Name one distro that has never broken with updates. You can’t, because one obviously couldn’t exist.
A major distro breaking your system is the equivalent of a flower pot falling on your head walking down the street. Does it happen? Sure. Do I want to spend my life wearing a motorcycle helmet and looking up all the time? No.
I’m not saying distros can’t crap on you, I’m saying stop tolerating it. Raise a stink or switch distro. There are distros out there where you don’t have to live in constant fear and where nothing happens if you don’t have snapshots.
I do have backups, precisely because shit happens. But there’s a difference between a helmet and health insurance.
Okay what distros are we talking about? What’s an example of an unreliable one and what’s a reliable one?
I think it’s much more complicated than this. Honestly there’s a reason it took decades for Linux distros in general to get as stable as they are today. It’s really really hard to build an operating system.
I think a better analogy for installing updates on Linux would be riding a motorcycle. Accidents happen all the time; they’re bound to, so be prepared. Just because you can ride for 30 years unscathed doesn’t mean you can take that for granted.
.deb distros are doomed from the start if you need to use third-party repos (which you do, for a desktop system) because they always end up undermining the stability of the packages from the core repos in the long run.
Try an Arch-based distro, they’re super stable because their compatibility model is more robust, and there are options depending on how much hand-holding you want — ranging from vanilla Arch to Endeavour to Manjaro.
Ya know, this is super interesting that you mention Arch. The only person I’ve known IRL who uses and loves Arch champions it hardcore, but with the caveat that you have to be okay with things breaking due to the rolling release model. Due to his guidance I have avoided Arch specifically. I’ve been running Ubuntu-based distros for a couple of years and have only had updates break things maybe twice… neither of which required a wipe or anything.
Packages can break, not the distro. Packages can break at any time because there’s thousands of them and nobody can check all of them thoroughly. A rolling distro gets you both the bugs and the fixes faster.
Non-apt and non-rolling will limit your options considerably.
I might be confused. I thought that the distro itself was made up of packages and that’s what all updates did: update various packages bundled with the distro (plus any you installed yourself)
My 2 cents: openSUSE Leap is stable as hell. Lots of QA happening with their automated testing, and it keeps in lockstep with SUSE releases (now sharing the same binaries). Every distro upgrade has gone flawlessly, but when I have had the urge to tinker and do stupid things in config files, the built-in btrfs snapshots have been a godsend.
What you wrote and what I’m reading about openSUSE Leap sound excellent. My only concern is support. Just about all the apps and tools I use are well supported on Ubuntu-based systems, but I don’t recall seeing nearly as much support for RPM-based OSes, including openSUSE. What’s your experience? If you go to install just any random software you’ve heard about, how well does it typically go getting it installed and working?
I haven’t had issues finding packages. They often aren’t on the dev websites that host a deb package, but the main repos contain the general tools. If you need something more “fringe”, from an independent dev, or brand new, then software.opensuse.org lets you turn on a search of community or experimental packages (which is a bit like the Arch AUR and contains a lot of RPMs), so you can install directly from the website and it will add the necessary community repos during the install. Or, if you don’t want to pollute your repo list, they typically offer the option to grab the binary directly. There is also the OPI package you can install that lets you search for community packages locally. For commercial apps like TeamViewer, YubiKey tools, Webex, etc., the RPMs were all available to support corporate SUSE users. If you still can’t find an RPM, you run the Alien tool, which converts a deb to the RPM installation format. The only issue I had once was a community package whose dependencies were not in my repos, so I had to find the dependencies and install those first. That felt like the 1990s.
No problem. Happy New Year. Also, it is .org; I mistakenly put .com for the openSUSE software site. (I will edit the post.) I should also mention that if you don’t see an officially supported RPM on the software site, it does not mean it isn’t there; it now seems to mean it is in the main repos. I think years back they would all show, but maybe due to the shared binaries with SUSE they don’t bother listing them on the software.opensuse.org site. So it’s best to check locally with a YaST or Zypper search of your repos first.
Nothing, honestly. Couldn’t find a music player that doesn’t look like a file manager, has good search and queue features, and doesn’t make strong assumptions about how music is organized. I tried to run Musicolet through Waydroid, but it doesn’t support Nvidia GPUs.
Logitech Media Server, followed by Strawberry, Quod Libet and Rhythmbox.
Quod Libet starts to act funny with a 50,000-file FLAC collection. Rhythmbox too. LMS is still chugging along at 100k, and I can get it in any room in the house, across 2 clients on computers, 2 on Raspberry Pis and my Android phone. If I want to listen to 24/96+, Strawberry can handle it all, although I haven’t warmed up to the interface. Volumio sucks; it’s way too slow.
I just keep my music collection in playlists and use Audacious to play them. All the entries in the playlists are saved with relative paths, so I can just copy the folders and keep the same playlists.
I ended up writing a Perl script to generate an .m3u from a root music directory that shuffles all the subdirs, so I can listen to full albums in random order instead of just random tracks.
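If anyone wants to do the same, here is a rough Python sketch of the idea (not the original Perl script; the extension list and paths are just assumptions):

```python
#!/usr/bin/env python3
# Walk a music root, shuffle the album directories, and write their tracks
# (kept in order within each album) to an .m3u playlist.
import random
import sys
from pathlib import Path

AUDIO_EXTS = {".flac", ".mp3", ".ogg", ".opus", ".m4a"}

def build_playlist(root: str, out: str = "albums-shuffled.m3u") -> None:
    albums = [d for d in Path(root).rglob("*") if d.is_dir()]
    random.shuffle(albums)  # random album order
    with open(out, "w", encoding="utf-8") as fh:
        for album in albums:
            tracks = sorted(p for p in album.iterdir()
                            if p.suffix.lower() in AUDIO_EXTS)
            for track in tracks:  # keep track order inside each album
                fh.write(f"{track}\n")

if __name__ == "__main__":
    build_playlist(sys.argv[1])
```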
I did something similar except I wrote a C# program and used AvaloniaUI to build a cross-platform GUI. It was a project to learn C#. I have to make some updates to that now that I think about it…
As a general best practice, you should never directly login as root on any server, and those servers should be configured to not allow remote connections as the root user. You should always log in as a non-root user and only run commands as root using sudo or similar features offered by your desktop environment. You should be wary of even having an interactive root shell open; usually I would only do so on a VM console, when first setting up a system or debugging it.
By doing this, you not only guard against other people compromising your system, but also against accidentally running commands as root that could damage your system. It’s always best to only run things with the minimum permissions they need, and then only grant them additional permissions on an as-needed basis.
you should never directly login as root on any server, and those servers should be configured to not allow remote connections as the root user. You should always log in as a non-root user and only run commands as root using sudo or similar features
That is commonly recommended, but I have yet to see a good solution for sudo authentication in this case that works as well as public-key-only SSH logins with a passphrase-encrypted key and ssh-agent on the client side. With sudo you constantly have to use passwords anyway, which is pretty much unworkable if you work on dozens of servers.
Of course I can store dozens of passwords, but if every task that requires a single command to be run automatically on e.g. “every server with pending updates” requires entering each of those passwords, that is unworkable.
Then enlighten me: what is the easy way to do tasks that do require some amount of manual oversight? Tasks that can be completely automated are easy, of course, but with our relatively heterogeneous servers, automation à la “do it on this one test system and, if it works there, run it completely automatically on the 100 identical production systems” is not available to us.
Not my circus, not my monkeys. You’re doing things the hard way and now it’s somehow my responsibility to fix your mess? I’m SUPER glad I don’t work with you.
None of us can tell you the right approach for your specific system and use case. People are simply pointing out that what you stated you’re doing is insecure and not recommended.
And nobody in any of these threads has ever pointed out why it is considered to be insecure. The most probable origin for that idea I have come upon so far is that it is a left-over from pre-SSH days when people thought using the root password with su at something other than the start of their connection would make it harder to sniff. Literally nobody lists even one good reason why sudo should be more secure than direct root login with SSH public keys and password login disabled for full root access (as in not limited to just one or two commands).
It’s not about someone sniffing your passwords, it’s about reducing your attack surface. If you use su then the entire session has root privileges and any piece of software you run could do system level damage if it has a bug. Using sudo limits the privilege escalation to just one command.
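To make that concrete, a sudoers rule can be scoped down to a single command (hypothetical example; the user and command are made up):

```
# /etc/sudoers.d/nginx-restart (illustrative): alice may restart one service
# as root, and nothing else, without a password.
alice ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx
```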
You seem to be looking at the issue in black and white. Any reduction in root access is beneficial. Using sudo with password cache lasting an hour is still preferable to signing in as root. As many people have said, it’s about minimizing attack surface
Running fewer commands as root is more significant risk reduction than having an extra user. I won’t be replying further since I’m simply repeating what others have already said. If you sincerely don’t understand, I suggest doing additional research on your own.
You people don’t seem to grasp that I am already not running any commands on the server as root that do not require root. This is all about administrative tasks.
The vast majority of commands when debugging actual issues on the system or performing administrative tasks do require root. Out of the others some give you incomplete results when called as a regular user and 90% of the rest shouldn’t be run on the server in the first place if you can avoid it but directly on your client computer (e.g. looking up documentation).
FreeIPA and your password is the same on every machine: yours. (Make it good)
Service accounts should either have no sudo password, or you should use something like Ansible with Vault, keep every one of them scrambled, and rotate them regularly (which you can do with Ansible itself).
Yes, even if you have 2 VMs and a docker container, this is worth it.
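For the rotation part, a minimal Ansible sketch, assuming the new password lives in a vault-encrypted variable and using a made-up service account name:

```yaml
# Hypothetical playbook: set a service account's password from a
# vault-encrypted variable (deploy_password comes from an Ansible Vault file).
- hosts: all
  become: true
  tasks:
    - name: Rotate the deploy account's password
      ansible.builtin.user:
        name: deploy
        password: "{{ deploy_password | password_hash('sha512') }}"
```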
FreeIPA and your password is the same on every machine: yours.
Any network-based system like that sucks when you need to fix a system that has some severe issue (network, DNS, disk, …), which is exactly when root access is the most important.
It is. That is the whole point. Why would I create extra unprivileged accounts that can run any command I need as root, at any time, without a password, just to avoid logging in as root directly? That just increases the attack surface via any other vector by giving an attacker more accounts to choose from when breaking in.
I am asking why it is considered to be more secure for the use case where you aren’t limiting access to a few commands because it is access meant for all kinds of admin tasks, not just one specific one (as in access for the people who need to fix unexpected problems among other things).
Realistically, there is only a trivial pure security difference between logging in directly to root vs sudo set up to allow unrestricted NOPASS access to specific users: the attacker might not know the correct username when trying to brute force. That doesn't matter in the slightest unless you have password auth enabled with trivial passwords.
But there is a difference in the ability to audit what happened after the fact if you have any kind of service storing system logs remotely or in a tamper-proof way. If there's more than one admin user on a service, that is very very important. Knowing where the compromise happened is absolutely essential to make things safe.
If there's only ever going to be one administrative user (personal machine), logging in directly as root for manual administrative tasks is fine: you already know who the user is. If there's any chance there might be more administrative users later (small but growing business), you should consider doing it right from the start.
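For reference, the "direct root, keys only" setup being compared here basically comes down to two sshd_config lines (a sketch, not a complete hardening guide):

```
# /etc/ssh/sshd_config excerpt (illustrative)
PermitRootLogin prohibit-password   # root may log in, but only with a key
PasswordAuthentication no           # no password authentication for anyone
```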
I was aware of the login UID for auditd logging as a difference but, as you say, that is only really helpful if the logs are shipped somewhere else or tampering with them is otherwise prevented for admin users. It is not quite the same, but the auth.log entries sshd produces on login also contain the fingerprint of the key used to log in these days, so on a more limited scale you can at least tell who logged in and when from those (or whose key, but that is no different from whose account in the sudo approach).
you should consider doing it right from the start.
Do you have any advice on how to use the sudo approach without a huge slowdown in every automated process that requires ssh user@host calls for manual password entry? I am aware of Ansible, but I am honestly very sceptical of Python tools, since in my past experience they tend to break easily and often, and I would like to avoid relying on additional ones for critical tasks. Plus, Ansible in particular seemed to be very late with its Python 3 transition; as I recall, I uninstalled it when it was one of the last tools left that did not work with Python 3.
Well, my recommendations for anything semi-automated would be Ansible and Fabric/Invoke. Fabric is also a Python tool (though it's only used on the controlling side, unlike Ansible), so if that's a no-go, I'm afraid I don't have much to offer.
Yeah, put me down for Strawberry too. I used Rhythmbox up until mid-2023, when I started to get into hi-res music and got a Tidal subscription, so I switched to Strawberry.
I feel this. If you could right click to interact with the text objects, then this combo would basically feel like foobar2000 for linux. I’m old enough to have missed how great foobar2000 felt after WinAmp started to get bloated (back before I got my hands on some Linux ISOs), so MPD + ncmpcpp just felt so refreshingly stripped down and a little nostalgic. I just fucking hate having to memorize a bunch of non-intuitive hotkey combos to do anything. Probably the same reason I’ve never bothered to properly learn Vim.
I'm an Emacs graybeard, so complex keybindings don't scare me. My problem with ncmpcpp is twofold:
It relies on MPD which is always a PITA to properly configure. Pulseaudio always managed to make it not work on a fresh system. Hopefully with Pipewire it'll be better.
The config format makes no sense whatsoever, especially the one for keybindings. It's so cryptic I just stopped trying to understand it. Again, I'm an Emacs graybeard, to stress that as a point of reference.
I'm aware but thank you. I've tried it before and didn't like it. Maybe I'll give it another shot, though I don't see much benefit in tying my music player to Emacs.
Definitely look for portable SSDs rather than flash drives. Different technology, usually significantly larger (physically). Easily saturates a USB 2.0 connection, so look for USB 3.0.
Back when Microsoft supported Windows To Go, they had a short list of verified drives to use. Surely outdated now but might be a good starting point.
FWIW I used to run Windows 10 off a Samsung T5. It worked fine, except that it would always shut down when I tried to suspend. Still works as far as I know, I just haven’t used it in a long time.
It indeed doesn’t, its purpose is to show the differences and clarify why/where OP might have heard you need special care for portable installs on USB sticks.
All the guides and tutorials out there are overwhelmingly written with regular USB sticks in mind and not M.2 enclosures over USB. So they’ll tell you to put as much stuff on tmpfs as possible and avoid all unnecessary reads and writes.
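For example, those guides typically tell you to add fstab entries along these lines (illustrative; sizes and mount points vary):

```
# Keep write-heavy paths in RAM so the flash isn't hammered (typical USB-stick guide advice)
tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777  0  0
tmpfs  /var/tmp  tmpfs  defaults,noatime            0  0
```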
Stability and configuration options. I already used Jellyfin, but for me it is not stable. It often crashes, and the configuration options are a mess at the moment.
Well, I got some feedback: most creative people don’t find GIMP good, so they won’t switch.
Well, I dunno if it’s because GIMP lacks good tools that ease up their workflow or because we taught them the Adobe suite.
During my art course it was: the Adobe suite and AutoCAD with 3ds Max.
But I knew Blender, GIMP and Scribus way before entering art school, because I disagree with Adobe’s licensing system and found it very expensive.
Imho, the current best creative software on Linux is Blender. There are also Darktable and RawTherapee for light and contrast work.
For Inkscape and Krita I can’t compare; I never used Adobe Illustrator or CorelDRAW.
Scribus is good, almost perfect, but it lacks a very important feature that I can’t replicate: Adobe InDesign is far easier because of the guides that tell you an item is correctly aligned and has the same size.
Kdenlive is well featured, but I find adding video effects easier in Adobe Premiere Pro. And Kdenlive had a lot of stability issues; I lost my work several times, and that’s how I learned to set up automated saves.
AutoCAD easily outmatched FreeCAD; there was a huge difference in functionality. I don’t know if that has changed in the last 10 years. It has probably improved a lot.
I know this is probably tongue-in-cheek, but if you wanted the serious answer:
GIMP:
Non-destructive Editing (it’s coming real soon!)
Vector shapes, not bitmap
Smart objects
Full CMYK support
Full PSD support (for collaboration purposes), hahaha
KILL ALL FLOATING SELECTIONS
Kdenlive:
Well, I actually do use Kdenlive. I’m fine with Lightworks too, and Resolve on macOS. But it’s lacking finer color grading controls, the interface is inefficient (being fixed in a future release), hardware-based decoding/encoding needs to either exist or be improved.
And the other big reason is collaboration with other Adobe users.
Personally I don’t want people to switch to Linux without caring about software freedom. I mean, it might be nice to run Adobe software on Linux, but I will not use it, and such software has the same problems as the Windows we are switching away from: they are proprietary programs from corporations that don’t even satisfy freedom 0.
Well yes, I agree with recommending Linux to others. But if the only reason someone isn’t switching to Linux is that some proprietary app doesn’t support it, I don’t see them caring about free software later on. Also, not everyone is like you and me; some may use Linux without caring about software freedom at all. (I have a friend who uses Google Chrome AND Edge.)
I guess part of software freedom, for me, is that I don’t care what other people choose to do, I just use and recommend Linux and other open source software wherever I can.
Absolutely wild that you’d purity test people and recommend against them using Linux just because they wouldn’t be using it for the reason you want them to…
I am not against people using Linux for some other reason, but I don’t want to promote Linux just so people can use proprietary software. They could, but I am not interested in that, and it does feel pointless if it’s not for software freedom. (That doesn’t mean I am against people using it.)
Btw, if you don’t know, software freedom is not about using whatever software you need. It’s about software that gives you the four essential freedoms.
I mean, if you had all those proprietary apps available on Linux, you probably wouldn’t be introduced to FOSS apps. You’d probably keep on using the proprietary software you used on Windows.
Let’s just hope it’s better than the MusicBrainz tagger itself. It’s been some years since I’ve tried it, I’ll admit. The mess it made the last time has made me reluctant to give it another chance, despite generally supporting what they do. I may just be a little OCD about my collection sometimes, lol. But if it can actually get the right artist information etc., let me store stuff in a particular directory structure relatively easily, and fetch cover art, it might stand a chance. I will give the AUR a check here in a bit to see if it has it.
Edit: I will give it a bit more of a try. But I haven’t found any way to configure the data that it’s pulling, etc., which is really going to limit it for my purposes. There are a lot of different things that it’s just not getting correctly. I tried only a few albums, but the data it pulled was for a different release with far fewer tracks.
For what it’s worth, I have this problem sometimes when an album has multiple releases and you can choose which release to pull tags from via the context menu in Picard. There’s also a pretty powerful scripting language that you can use to specify the directory and file re-naming structure as well. It took me a while to get my structure set up properly but once I did it’s been a life saver in keeping my files organized.
If there’s something in particular you’re trying to achieve that’s not working I’d be happy to try and help!
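For example, a minimal file naming script along these lines (illustrative, not my exact setup) sorts everything into artist/album folders with zero-padded track numbers:

```
%albumartist%/%album%/$num(%tracknumber%,2) %title%
```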
I hadn’t seen that yet. Although unfortunately, my experimenting with the tool ended abruptly last night when the LCD panel on the system went out. I may install it on a different system and see if I can figure out how to select releases that should solve the issue.
Quite the statement that Gentoo has survived for so long compiling from source but, even with ever-advancing processor speeds, they’ve finally gone “Nah… takes too long.”
I mean, I don’t blame them. Yesterday I left my machine building a PyTorch package for 4 hours on a 12 core processor.
As a long-time Gentoo user, the only packages where compile times (and RAM usage) really bother me are the myriad forks of that shitty Chrome browser engine (webkit-gtk, QtWebEngine, Chromium, …) plus LLVM and clang.
My beef tends to be with software out of the FAANGs. Big teams and huge codebases to match. Completely impenetrable for the rest of us and, I suspect, far more code than there should be.
Chrome takes so much longer than the kernel somehow. There’s also the occasional package that makes you build single-threaded because nobody has fixed some race condition in the build process.
More importantly Chrome takes so much longer than Firefox even though they essentially do the same things (or 95% the same things if you are nitpicky).
Yes, but Chromium is very easy to embed in applications. Mozilla has a history of creating and then abandoning embedding APIs every few years or so (and right now I think they have none).
It seems very hard to embed it anywhere considering everyone doing so forks the whole codebase. Besides, my point was about compile times, embedding APIs shouldn’t take significantly longer to compile.
To be fair USB sticks and SD cards seem to fail when you stare at them a bit too intensely. I think it has been at least a decade since I bought a USB stick for OS installations that lasted for more than three installs (each a few months apart at least since the need does not arise that often).
My ISP supports IPv6 but disables it in the OpenWrt config.
I found a way to get root access to the router and re-enabled it; it seems to work just fine. The configuration is kinda fucked up but kinda works (DHCPv6, no SLAAC).
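For context, the WAN IPv6 part of /etc/config/network ends up looking roughly like this (illustrative values, not my exact config):

```
config interface 'wan6'
        option device '@wan'        # alias of the wan interface ("option ifname" on older OpenWrt)
        option proto 'dhcpv6'
        option reqaddress 'try'
        option reqprefix 'auto'
```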
D-Bus attempts to provide a standardized system for inter-process communication (IPC) without the complexity of managing individual connections between each and every process that needs to share certain types of data.
But the implementation has its fair share of issues, and the attempted reimplementation even more so, because it wants to leverage everything important to systemd instead.
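As a quick illustration of the idea, any process can ask the bus itself who else is connected, without knowing anything about those peers beforehand:

```
# List every name currently registered on the session bus
dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
  --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames
```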