linux

dinckelman, in NixOS is better because...

The appeal of it, to me, is the same as why Docker containers are really good. You write your definition and save it to git, for example, and if you ever need to set up your computer from scratch, restoring that config will set it up exactly like it was before. But even besides that, being able to roll back if something goes wrong is a big plus.
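As an illustration of what such a definition looks like, here is a made-up minimal configuration.nix fragment (the package and service names are just examples, not a recommendation):

```nix
# /etc/nixos/configuration.nix (illustrative fragment, not a complete config)
{ pkgs, ... }: {
  # Everything below is declared in one file you can keep in git;
  # `nixos-rebuild switch` then realizes exactly this system.
  environment.systemPackages = with pkgs; [ git firefox ];
  services.openssh.enable = true;
}
```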

wwwgem,
@wwwgem@lemmy.ml avatar

That’s what I keep reading and why I would like to give it a try. For now I’m still confused how this is easier/more efficient than sharing your list of packages, restoring a backup, or using downgrade in Arch. I’m really interested because I like to try new stuff, especially if they bring something of interest.

I really have a hard time seeing the difference for now after my first setup in a VM, but also because imaging my full Arch system onto a new machine 2 years ago only took me an hour and fewer than 10 command lines.

Again, I’m genuinely trying to understand what I’m missing. From my reading NixOS seems to be the only distro I could switch to.

BCsven,

My thoughts, which may have inaccuracies: in NixOS, a package declares the exact versions of the dependencies it needs. When you update NixOS it takes up quite a bit of space, because one app may link against one library while another app uses something else, and both are stored on the drive; your old install is also still there to roll back to. On other distros a package lists its dependencies, but during updates a single dependency may get a bug-fix point release and up-rev, so the behaviour of an app you added may change as its subparts change. So with a non-Nix install, today or in 6 months, the state of the dependencies also determines how that package functions; if dependencies updated in the meantime, your install may act differently. Nix prevents this since you have a repeatable install.

wwwgem,
@wwwgem@lemmy.ml avatar

Thanks for taking time to share this detailed thought. That’s an interesting point I forgot because I didn’t experience any related issues over 15 years with Arch but that’s still a nice approach. I can certainly see why this is a big plus for NixOS.

BCsven,

I haven’t had issues with my openSUSE Leap install in 7 years either; there is careful curating, automated QA testing, and rollback snapshotting if you break something while messing about. But I have a NixOS machine also. It provides a nice way of configuring a repeatable system, which is probably a huge benefit for folks making/deploying Linux devices that are 100% repeatable.

wwwgem, (edited )
@wwwgem@lemmy.ml avatar

Right, I totally agree. If I had to deploy my config on several machines or create dedicated configs from a common base, then I would be convinced. I’m still not convinced from a dummy single-user point of view, but I still believe in this distro and like its approach, so I’ll continue experimenting with it and we’ll see where my journey leads me.

At least for now I’m glad to have a new toy I can mess with. With my Arch system I was getting this weird feeling where I was happy to have an efficient and stable machine while at the same time being bored with having nothing to test/tweak/destroy and rebuild. I mean, I love to learn and discover new things, so I experiment with a bunch of applications and parameters I will never need anyway, but it becomes harder and harder to find something that keeps me entertained for more than a day.

BCsven,

I hear you. My openSUSE Leap has been so stable that I got bored with nothing to tweak. Their MicroOS has an immutable system with config file setup capability, and somebody built a tool to make config file creation simple: opensuse.github.io/fuel-ignition/edit so that was fun for a while. But NixOS was a nice distraction also.

sashanoraa,

Because your Nix config also configures your software, not just installs it. Admittedly, with base NixOS that’s more true with server software than desktop. But with the addition of home manager you can also configure many desktop apps in your Nix config.
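For example, a home-manager fragment might look like this (real option names from the home-manager module set, with placeholder values; exact options vary by release):

```nix
# home.nix (illustrative home-manager fragment)
{ ... }: {
  programs.git = {
    enable = true;                 # installs git and generates its config file
    userName  = "Alice Example";   # placeholder identity
    userEmail = "alice@example.com";
  };
  programs.alacritty.enable = true;  # desktop apps can be managed the same way
}
```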

wwwgem,
@wwwgem@lemmy.ml avatar

Thank you for this addition. I very much appreciate the fediverse community, who help people understand things, share their knowledge, and act nicely (if we exclude some rare people who are clearly not used to living within a sane community). I’ve seen home manager, but this raised one more question for me: what’s the added value compared to stow, for example? Thanks again for sharing your thoughts.

sashanoraa,

I’ve never used stow so I can’t speak to it specifically. Home manager is nice for two reasons. If you’re already using NixOS, you can have one unified config for your whole system. And because Nix is a programming language generating these configs, you may be able to do things you wouldn’t otherwise. It also has some nice defaults that you may not get without it.

wwwgem,
@wwwgem@lemmy.ml avatar

Due to NixOS still being in early development, home manager is in some ways very similar to nix-env, and flakes are still highly experimental. Also, the configuration parameters change quite significantly as the distro develops. I’m sure this will all settle down when the distro becomes more mature, but to be honest that’s also what attracts me. I like chaos ^^ Seriously, this shows me some potential for great achievements. I will continue testing NixOS, but for now I haven’t found THE reason to leave Arch yet. If I had to deploy my config on several machines or create dedicated configs from a common base, then I would be convinced. We’ll see where my journey leads me.

Auli,

Sure, but not everything can be defined in the Nix config. Firewalls have an issue, and some options for packages are not implemented yet. For example, systemd-networkd doesn’t have all its features implemented.

sashanoraa,

NixOS in its current form does have its limitations, but it’s ever improving. I personally have never had issues doing what I needed firewall-wise, but I’ve not done anything complex. Mostly just opening ports and a little port redirecting.

Evil_Shrubbery, in Unity’s Open-Source Double Standard: the ban of VLC

Who tf bans something so good, benevolent, and upstanding as VLC?

bloopernova, in Your favorite linux projects for weekend
@bloopernova@programming.dev avatar

Create a dotfiles repo in git. Gives you a way to track changes to your .bashrc or .zshrc

indigomirage,

That is the next item on my to-do list. I’ve already installed my own gitea container to run at home. Yes, I could use a public repo (set private), but I wanted to learn how to do this, and besides, I wanted to cast a wider net for which files to store without worrying about inadvertently publishing something with passwords embedded…

bloopernova,
@bloopernova@programming.dev avatar

With extra bonus: write an installer script that symlinks the files to the correct place. Use Ansible, plain old Bash, or Python depending on your preference.
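A minimal sketch of such an installer in plain Bash; the repo layout (top-level files like bashrc, vimrc) and the scratch directories are made up for illustration, so this demo is safe to run as-is:

```shell
#!/usr/bin/env bash
# Sketch: symlink every file in a dotfiles repo into a target dir as dotfiles.
set -eu
REPO=$(mktemp -d)     # stand-in for your cloned dotfiles repo
TARGET=$(mktemp -d)   # stand-in for $HOME, so the demo touches nothing real

printf 'alias ll="ls -l"\n' > "$REPO/bashrc"
printf 'set number\n'       > "$REPO/vimrc"

for f in "$REPO"/*; do
  name=$(basename "$f")
  ln -sfn "$f" "$TARGET/.$name"   # -sfn: symlink, force-replace stale links
done

ls -A "$TARGET"
```

For real use, point REPO at your checkout and TARGET at $HOME.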

RanceMcGrew,

rcm

github.com/thoughtbot/rcm

rcm will do the symlinking for you and is pretty awesome. I’ve been using it for this purpose for years.

Joker,

Or GNU stow.

indigomirage,

I’m waffling between that or just setting up a bare git repo. Am prepping a VM or two to explore the pros/cons of each approach and to dive into the implications.

It’s funny - this project idea seems to be bubbling up everywhere this past week. I’m sure I’m seeing the consequences of search algorithms, but on Lemmy, it’s nice to see what is a definite and pleasant coincidence.

lemmyvore,

When in doubt always do a git init . and a git add, git commit every once in a while. You’ll never regret it.
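For the bare-repo route mentioned above, a commonly used sketch looks like this (the ~/.dotfiles path and the dots alias are conventions, not requirements; the demo redirects HOME to a scratch dir so it touches nothing real):

```shell
set -eu
export HOME=$(mktemp -d)   # pretend home for the demo; drop this for real use
cd "$HOME"

git init --bare "$HOME/.dotfiles"
# 'dots' is git operating on $HOME, with the bare repo as its .git dir
dots() { git --git-dir="$HOME/.dotfiles" --work-tree="$HOME" "$@"; }
dots config status.showUntrackedFiles no   # keep 'dots status' quiet about $HOME

echo 'alias ll="ls -l"' > "$HOME/.bashrc"
dots add "$HOME/.bashrc"
dots -c user.email=me@example.com -c user.name=me commit -m "track .bashrc"
dots ls-files   # prints: .bashrc
```

No symlinks needed: the files live in place and git tracks them directly.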

krash,

I didn’t really see the benefit of this besides having a snapshot or backup of my home folder for my use case (I don’t have that many config/text files that need tracking), but I can recommend chezmoi for those interested.

troyunrau, in KDE 6 Megarelease - Release Candidate 1
@troyunrau@lemmy.ca avatar

Not sure if it’s still the same as it was back in my day, but KDE’s “release candidate” nomenclature was always a bit of a misnomer. You’d never see RC1 actually released as final. What it really means is that the alpha “feature refinement” and beta “bug fixing” phases are over, and it’s the final testing phase for showstoppers. However, the definition of showstopper always seemed to be very wide. Thus, a lot of bugs still get reported and fixed during this phase, and RC really means “beta, but towards the end of the pipeline”.

Which is in contrast to the Linux kernel where a RC can be declared ship-ready and simply get renamed.

Admittedly there’s a fairly large impact difference between kernel level bugs, and say a bug in Okular…

simple,

The nomenclature is actually correct here, and a lot of other software use it, at least from everything I’ve seen. Release candidate means it’s stable and (usually) feature complete but could have bugs and needs testing before they launch it.

Prototype --> Alpha --> Beta --> Release Candidate --> Release

Oisteink,

It’s still a misuse of the word - if your software needs testing it’s not a candidate you would release unless you’re a multi-billion gaming company or Cisco

FooBarrington,

Wiktionary: (software engineering) A version of a program that is nearly ready for release but may still have a few bugs; the status between beta version and release version.

Oxford: a version of a product, especially computer software, that is fully developed and nearly ready to be made available to the public. It comes after the beta version.

I couldn’t find more definitions from “big” dictionaries, but literally no definition I’ve seen agrees with you. I wonder why that is.

troyunrau,
@troyunrau@lemmy.ca avatar

If you’re as old as I am, you’ll recall software using the term “gamma” release instead of “release candidate” for that phase. ;)

hersh,

This is correct, albeit not universal.

KDE has a predefined schedule for “release candidates”, which includes RC2 later this month. So “RC1” is clearly not going to be the final version. See: community.kde.org/…/February_2024_MegaRelease

This is at least somewhat common. In fact, it’s the same way the Linux kernel development cycle works. They have 7 release candidates, released on a weekly basis between the beta period and final release. See: www.kernel.org/category/releases.html

In the world of proprietary corporate software, I more often see release candidates presented as potentially final; i.e. literal candidates for release. The idea of scheduling multiple RCs in advance doesn’t make sense in that context, since each one is intended to be the last (with fingers crossed).

It’s kind of splitting hairs, honestly, and I suspect this distinction has more to do with the transparency of open-source projects than anything else. Apple, for example, may indeed have a schedule for multiple macOS RCs right from the start and simply choose not to share that information. They present every “release candidate” as being potentially the final version (and indeed, the final version will be the same build as the final RC), but in practice there’s always more than one. Also, Apple is hardly an ideal example to follow, since they’ve apparently never even heard of semantic version numbering. Major compatibility-breaking changes are often introduced in minor point releases. It’s infuriating. But I digress.

funkajunk, in Linux tablet?
@funkajunk@lemm.ee avatar

It’s usually pretty easy to find secondhand Microsoft Surface devices for that price point (check craigslist, FB marketplace, and the like), and they absolutely rock when using a Gnome-based distro (like Fedora).

There’s a great project that works on Linux compatibility for Surface devices, you can check it out here: github.com/…/Supported-Devices-and-Features#featu…

The_Sasswagon,

I just installed Fedora with KDE plasma and Wayland last weekend using the surface kernel. Was pretty painless, after abandoning a couple other distros that did not play nice.

The instructions on the GitHub are also very good, though obviously every year’s Surface has its own challenges, I’m sure.

krash, (edited )

Just make sure you know what you’re getting yourself into: features such as IR facial detection and stylus input aren’t supported on all models OOTB by the surface kernel. Also, it is a bit messy with how Surface handles booting.

Linux on surface works (I got Ubuntu on both my Go gen 1 and pro 7) but expect to put in time to tinker and manage unexpected issues.

notabot, in Is there such a thing as split-screen grep?

Tmux is a very helpful terminal multiplexer, meaning it can split your terminal into multiple panes. So, create two side by side panes, then one way of doing it is:

  • on the left, run your cmd | tee >(grep 'denied' > error.log)
  • on the right, run tail -f error.log

The tee process takes its standard input and writes it both to standard out, so you see all the lines, and to the path it’s been given. The >(…) operator runs the grep in a subprocess and returns the path to its standard input pipe, so grep receives every line and writes the denied lines to a log file, which you display with tail in the other pane.

Rather than using a file for error.log you could also use a named pipe in much the same way.

canadaduane,
@canadaduane@lemmy.ca avatar

Thanks! I’m curious if there is a way to do this as a one-liner?

notabot,

Sorry for the slow answer, I’ve been away. There is a way, if it’s still useful to you:

First, create a named fifo, you only need to do this once:


    mkfifo logview

Run your rsync in one pane, with a filtered view in the second:


    tmux new 'rsync ...options... |& tee logview' \; split-window -h 'grep "denied" logview'

Replace …options… with your normal rsync command line.

That should give you a split view, with all the normal messages on the left, and only messages containing ‘denied’ on the right.

The |& makes sure we capture both stdout and stderr, tee then writes them to the fifo and displays them. split-window tells tmux to create a second pane, and display the output of grep.

canadaduane,
@canadaduane@lemmy.ca avatar

Thanks!

ook_the_librarian, in Is anyone using awk?
@ook_the_librarian@lemmy.world avatar

awk will always have a soft spot for me, but I can see why not many take the time to learn it. It tends to be needed right there at the border of problem complexity where you are probably better using a full-fledged scripting tool.

But learning awk is great for that “now you’re thinking in pipes” ah-hah moment.
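As a taste of that moment, here is a classic awk job that sits right at that border: aggregating a column by key in a single pass (the data is made up):

```shell
# Sum the second column per key in the first column.
printf 'alice 100\nbob 40\nalice 60\n' |
  awk '{ sum[$1] += $2 } END { for (u in sum) print u, sum[u] }' |
  sort
# prints:
# alice 160
# bob 40
```

The associative array and the END block do in one line what would take a small script elsewhere; the sort is only there because awk's for-in order is unspecified.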

thanks_shakey_snake, in Linux Boomers

Wow edgy.

callyral, (edited ) in Gentoo goes Binary (packages)
@callyral@pawb.social avatar

literally 2 days ago i tried installing gentoo in a vm but gave up because it would take too long to compile… and now this??? guess my timing was pretty bad

if i did use gentoo, i’d probably compile smaller programs from source and bigger things like kernel and web browser i would use as binaries.

GnomeComedy,

Wouldn’t the larger ones be the ones you’d get the most benefit from compiling?

Auli,

Not really. I think optimizing only gives you small performance gains.

pastermil,

I think this is the sign from the universe you’ve been waiting for!

bamboo,

Seems kinda pointless to compile most packages unless there are specific performance optimizations or non-default features that can be enabled. I think the way I would use this would be to do binary by default and build only on the occasional instance there is a tangible benefit.

circuitfarmer, in I've started building a TUI for Lemmy
@circuitfarmer@lemmy.sdf.org avatar

I love this. Anything that keeps me in the terminal and out of the browser is a blessing from Tux himself.

pruneaue, (edited ) in NixOS beginner resources

Im on my own NixOS journey right now, so i cant fully vouch for all this yet. Here are my bookmarks for Nix/NixOS learning resources:

Apart from all this, ive learned the most from a mix of looking at other people’s configurations and reading source code. The documentation is super messy, especially if you want to make use of flakes (which i personally recommend from the get go).

torgeir,

+1 for these! I’d also add nixos.asia/en

TheAnonymouseJoker, in Ending support for Windows 10 could send 240 million computers to the landfill. Why not install Linux on them?
@TheAnonymouseJoker@lemmy.ml avatar

LMAO the clickbait delusion… has nobody learnt from how long people stuck with Windows XP and 7? 10 is incomparably more secure and robust than 7 was, and 11 is almost a meaningless cosmetic upgrade. People that do not want to use Linux will not, and will keep using 10. Comfort and compatibility take precedence over security and privacy. People that do install Linux, however, will still want to keep 10 or 11 installed separately, and Microsoft officially suggests workarounds to install 11 on any computer.

victron,
@victron@programming.dev avatar

Exactly. Most people don’t care about linux, why is this so hard to understand?

kurisu, in "Combokeys" instead of hotkeys. [Feature/new command suggestion]
@kurisu@awful.systems avatar

Man I already can’t get my inputs right in games, If I ever whiff a fucking combo to start my browser I’m ending it all.

Rayspekt,

You need to get out of the parking garage from Driver to boot your PC from now on.

library_napper, in SSH protects the world’s most sensitive networks. It just got a lot weaker
@library_napper@monyet.cc avatar

So hardened SSH configs following best-practice cipher whitelists are unaffected, cool.
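For reference, the kind of whitelist meant here looks something like this in an OpenSSH sshd_config (an illustrative sketch, not official mitigation guidance): the vulnerable modes discussed in the article, ChaCha20-Poly1305 and CBC ciphers with encrypt-then-MAC, are simply never offered, leaving only the AES-GCM AEAD modes.

```
# /etc/ssh/sshd_config (illustrative hardening sketch)
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512,hmac-sha2-256
```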

mumblerfish, in SSH protects the world’s most sensitive networks. It just got a lot weaker

So you need a MitM situation to even be able to perform the attack, and the attack only works on two ciphers? The article says those ciphers are commonly enabled, but are they the default, or used in relatively modern distributed versions of OpenSSH?

gerdesj, (edited )

A scan performed by the researchers found that 77 percent of SSH servers exposed to the Internet support at least one of the vulnerable encryption modes, while 57 percent of them list a vulnerable encryption mode as the preferred choice.

That means a client could negotiate one or the other on more than half of all internet-exposed OpenSSH daemons.

I haven’t got too whizzed up over this yet, because I have no SSH daemons exposed without a VPN outer wrapper. However, it does look nasty.

bouh,

If you need a man in the middle to exploit this, it’s not that nasty.
