You can also add to the output. I use it frequently to pull a list of files, etc., from another file and then do something like generate another script from that output. This is a weak example, but one I can think of off the top of my head. Not firing up my work laptop to search for better examples until after the holidays. LOL.
awk '{print "ls -l "$1}'
And then send that to a file that I can then execute.
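For example, a throwaway sketch of that pattern (files.txt and run.sh are just placeholder names here):

awk '{print "ls -l " $1}' files.txt > run.sh
sh run.sh

Same idea scales to generating scp, chmod, or whatever other commands from whatever list you happen to have.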
The command line is a lot more powerful for a lot of cases. Most CLI programs are written with the idea that the caller might be another program, so they tend to be easy to chain with pipes and redirection. So you have tons of simple tools that you can combine however you need.
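The classic illustration of that chaining, just as a sketch (access.log is a placeholder for whatever log you have lying around):

awk '{print $1}' access.log | sort | uniq -c | sort -rn | head

Five tiny tools, none of which know anything about each other, and you get a top-talkers report.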
Since he doesn’t mention it in his ‘fantastic’ reporting, OpenSSH 9.6, which patches this attack, was released Monday.
I am tempted to delete this post just for the article’s stupid clickbait headline, but it still will probably cause some people to go update their OpenSSH installs, so… meh.
Anyone who actually wants to know details of the vulnerability should read the website about it which is obviously much better than this article.
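If you just want to confirm what you're running before and after updating, a quick sketch (the dpkg line assumes a Debian/Ubuntu-style system):

ssh -V                    # local OpenSSH client version
dpkg -l openssh-server    # installed server package version on apt-based distros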
Also, since he doesn’t mention it: on the Internet, the MITM would have to be installed at both endpoints (client side and server side) to be effective without the patch.
Huh? No. The attacker doesn’t need to be in two places, or even near either end per se; they could be located at any fully on-path position between the client and server.
Rechecked this, and it’s at about 5% now. The statistics seem a bit weird to me, unless there are some big seasonal changes. Your 12% was recorded in June and July. Maybe with less business activity during those months, the Windows share plummets in favor of home users, who are more likely to use Linux.
Percentages are not easy or helpful to work with when there are orders of magnitude of difference between the absolute numbers. They hide that difference, which is just a quiet way of lying with statistics.
People don’t hate on Ubuntu because it’s inherently bad. They hate on it because it’s a corporate distro and they do some questionable stuff sometimes. The OS runs fine.
Why not Debian unstable? It’s better than Ubuntu in pretty much every way, IMO. Somewhat less user-friendly, I guess.
I prefer software with defaults that are in line with my preferences. I’d rather have sensible defaults and a nice OOTB experience instead of fighting my distro and its packages.
It’s not actually unstable; more accurately, it’s tested and verified as much as Debian stable, meaning it’s fine for desktop use. I wouldn’t use it for a server or critical system I plan on running 24/7 without interruption, though, both because it may develop bugs after long-term use and because it gets more frequent updates, which will be missed and quickly leave it out of date if it’s running constantly.
It’s unstable in the sense that it doesn’t stay the same for a long time. Stable is the release that will essentially stay the same until you install a different release.
Sid is the kid next door (IIRC) from Toy Story who would melt and mutilate toys for fun. He may have been a different kind of unstable.
It’s relatively alright for something that’s called unstable. There is also testing which is tested for at least 10 days. And you can mix and match, but that’s not recommended either.
I wouldn’t put it on my server. And I wouldn’t recommend it to someone who isn’t okay with fixing the occasional hiccup. But I’ve been using it for years and I like it.
I used to run Debian testing on my servers. These days I don’t have much free time to mess with them, so they’re all running the stable release with unattended-upgrades.
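For anyone curious, the minimal setup is roughly this on a Debian stable box (a sketch following the stock defaults):

sudo apt install unattended-upgrades
# then enable the periodic runs in /etc/apt/apt.conf.d/20auto-upgrades:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";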
However, mind that it’s not supported and they do not pay attention to security fixes.
To be clear, it can still get security updates, but it’s the package maintainer’s responsibility to upload them. Some maintainers are very responsive while others take a while. On the other hand, Debian stable has a security team that quickly uploads patches to all officially supported packages (just the “main” repo, not contrib, non-free, or non-free-firmware).
Thanks for clarifying. Yeah I implied that but didn’t explain all the nuances. I’ve been scolded before for advertising the use of Debian testing. I’m quite happy with it. But since I’m not running any cutting edge things on my server and Docker etc have become quite stable… I don’t see any need to put testing on the server. I also use stable there and embrace the security fixes and stability / low maintenance. I however run testing/unstable on my laptop.
Unstable is pretty damn stable; it feels Arch-y to me, and Arch rarely has issues. If there are issues, they’re fixed fast.
Testing is the middle ground. Tested for a bit by the unstable peeps, but that’s it.
Debian unstable is not really unstable, but it’s also not as stable as Ubuntu. I’m told that when bugs appear they are fixed fast.
I ran Debian testing for years. That is a rolling release where package updates are a few weeks behind unstable. The delay gives unstable users time to hit bugs before they get into testing.
When I wanted certain packages to be really up-to-date I would pin those select packages to unstable or to experimental. But I never tried running full unstable myself so I didn’t get the experience to know whether that would be less trouble overall.
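For the record, the mechanics are simple enough; a sketch (the package name is a placeholder): keep testing as the default release, add an unstable entry to your sources, and pull individual packages with -t.

# /etc/apt/apt.conf.d/99default-release
APT::Default-Release "testing";
# with an unstable line in sources.list, pull just one package from it:
apt install -t unstable some-package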
Side question on this: why do people suggest Debian, a stable but “old” distro, but never mention RHEL / Rocky? They are on par with stability (and quite possibly RHEL wins on it). Did you know that you can get a free licence if you register as a developer?
As the other reply said, Fedora and RHEL harbor the same problem as Ubuntu in terms of corporate backing.
They’re all as stable as it gets when it comes to Linux distros; all those “server distributions”.
I guess people recommend Debian because that’s what they know. It’s got the biggest community, so the most support.
Nothing against Rocky, but I won’t recommend it if I’ve never used it.
If we pretend the issue is just the corporate aspects of Ubuntu/Canonical, Red Hat and RHEL have all of those and then some. People just try not to think about that because Fedora is so nice.
As for Rocky: its status is pretty much in massive flux, since Red Hat bounces between tolerating it and wanting it to be even deader than CentOS depending on the day.
Are we really back to the 00s? Are we going to start calling it Micro$hill next?
And “Legally it can’t be stopped” doesn’t really bode well for long-term support in the context of contributors and so forth. It won’t prevent me from using Rocky (I actually really like it for servers I will likely re-image sooner rather than later), but it also means I am not going to recommend it to people looking for a distro.
Looking at the 8.x and 9.x releases, Rocky is the most popular enterprise Linux distro. Even more popular than R hell, and yes, I’m still bitter about what they did to CentOS.
Technically they only have to give the code to people who use their product, and the general public is not it. Except I guess the free license one would be problematic, unless there is something in the license about your use.
You do not have to sign a licensing agreement when you pull the image from Iron Bank or spin up cloud VMs. In both of those cases you will get access to their source.
Every website knows what browser you’re using unless you change your user agent to pretend to be a different one (although even that won’t always work). The banner is a little weird, but it really is good advice: Chromium-based browsers are a huge danger to the open web.
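(It’s literally just a request header the client sends; e.g., a sketch of spoofing it with curl:)

curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" https://example.com/

Sites can still fingerprint you through JavaScript APIs, which is why spoofing alone doesn’t always work.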
I don’t need any of the advanced tools, I just want a cleaner interface for the tools that already exist. The only thing I’m able to do is make header pics for my posts. The 2.x UI is really, really old now. The time for a refresh was a few years back, but I do understand the limitations of a small team. Like others have said, I’ll likely run both and migrate to 3.x when it’s stable. Though, I do like the idea of non-destructive editing :)
Krita has been adding photo manipulation tools faster than GIMP is fixing their UX/UI, so at this point I think Krita will be the first to become the most viable FOSS alternative to Photoshop.
I know this thread is likely to quickly descend into 50 variants of “ew, snap”, but it’s a good write up of what is really a pretty interesting novel approach to the immutable desktop world.
As the article says, it could well be the thing that actually justifies Canonical’s dogged perseverance with snaps in the first place.
I actually don’t understand the issue people have with Snaps. The main gripe seems to be “It’s controlled by Canonical”.
But why is it an issue that Canonical controls a source of software for their own OS? Isn’t that the same with every distro’s repository?
No. You can add any other repository to apt, rpm, Flatpak, etc. You cannot do the same with Snap and that’s by design. Canonical wants to be the sole gatekeeper of Linux software, hoping that all developers have no alternative but to publish software on the Snap store (ideally only there) which works best on Ubuntu.
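To illustrate the difference, adding a third-party source to Flatpak or apt is a one-liner (the Flathub URL is the standard one; example.org is a placeholder):

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
echo "deb https://example.org/debian stable main" | sudo tee /etc/apt/sources.list.d/example.list

Stock snapd, as discussed further down, doesn’t let you add a second store alongside the default one.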
Exactly. I feel they want to sell it to a big player, but no big player will touch it unless they can fully control it. Hence snap as part of that plan. Ubuntu is a hell no for me.
How would they trap everyone in the ecosystem?
This isn’t Apple; there’s a gajillion other ways of getting software you can use on every single Linux distro.
Then I guess it’s a good thing they don’t control all other Linux distros.
But they would, to a degree, if the Snap Store actually succeeded in becoming the Linux app store (like Steam is for games, though that’s more because the other vendors don’t care to make a Linux client).
From reading this, that’s not the whole story. Someone working at Canonical successfully made a version of snap that could use alternative stores, but the default version does not allow it.
And honestly, at the point of installing that modified version you may as well just install a different package manager anyway.
You can; the issue is that you can’t add two snap repositories at once.
This is functionally pretty much the same thing, as nobody is likely to want to use snap while locking themselves out of the main snap repository, but it’s still important to make the distinction.
In theory I guess there’s nothing stopping you setting up a mirror of the main snap repo with automatic package scraping, but nobody’s really bothered exploring it seeing as no distro other than Ubuntu has taken any interest in running snap.
It’s all open source so there’s no reason you couldn’t fork it and add that functionality. Although it’d probably be a fairly involved piece of work; it wouldn’t be a simple one-line change.
It’s not all open source. Canonical merely made available a super simple reference implementation of the Snap server but the actual Snap Store is proprietary.
I was referring to snapd, which is the thing that actually has the hard limit on a single repository. That’s fully open source (and there’s one major fork of it out in the wild, in the form of Ubuntu Touch’s click). The tooling for creating snap packages is also all open source.
The APIs which snapd uses to interact with its repo are also open source. While there’s no turnkey Snap Store code for cloning the existing website, it’s pretty trivial to slap those APIs on a bog standard file server if you just want to host a repo.
Not open-sourcing the website code is a dick move, but there’s nothing about the current setup that would act as an obstacle for anyone wanting to fork snap if that’s what they wanted to do. It’s just that with flatpak existing, there’s not a lot of point in doing so right now.
I’m pretty excited about it. It’s a much cleaner solution to the problem immutable OSes are trying to solve. Dare I say it’s better even than the Android model because it covers the whole stack with a single system.
I appreciate that they try, and as much as I dislike some of snap’s design choices, I think it has a place. Flatpak appears to be the winner in this race, however, and I feel like this is Unity all over again. Just as the project gets good, they abandon it for the prevailing winds. I’ve been told the snap server isn’t open source, which is a big concern?
Unlike desktop environments where there were equivalent alternatives to Unity, Flatpak isn’t an alternative to Snap that can deliver an equivalent solution. You can’t build an OS on top of Flatpak. This is why I think that if Snap makes the lives of Canonical developers easier, they’ll keep maintaining it. We’ll know if Ubuntu Core Desktop becomes a mainstream flavor or the default one. I think there is a commercial value of it in the enterprise world where tight control of the OS and upgrade robustness are needed. In this kind of a future Snap will have a long and productive life. If it ends up being used only for desktop apps which Flatpak covers, it may fall by the wayside as you suggested.
Absolutely, and I think that’s why snap has a future at all. Immutability is the future, as are self-contained apps. We saw the explosive growth of Docker as an indication that this was the way. If they can make their tooling as easy as a Dockerfile, they will win just by reducing the work needed to support it.
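For comparison, the bar Docker set: a complete, working image definition can be this small (hello.sh is a hypothetical script with a #!/bin/sh shebang):

FROM debian:stable-slim
COPY hello.sh /usr/local/bin/hello
RUN chmod +x /usr/local/bin/hello
CMD ["hello"]

If writing and debugging a snap ever feels that lightweight, a lot of the packaging friction goes away.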
I don’t like Canonical pushing snaps as universal apps for all distros, because of issues like sandboxing not working on mainline kernels.
But it’s pretty interesting to see what a fully snap-based desktop OS could look like. It might have fewer limitations than rpm-ostree. Easy access to recent Mesa and similar would be awesome.
Snap makes a lot of sense for desktop apps in my opinion. There’s a conceptual difference between system level packages that you install using something like APT, and applications. Applications should be managed at the user layer while the base system should provide all the common libraries and APIs.
It’s also worth noting that this is a similar approach to what macOS has been doing for ages with .app bundles, where any shared libraries and assets are packaged together in the app folder. The approach addresses a lot of the issues you see with shared libraries, such as having two different apps that want different versions of a particular library.
The trade-off is that you end up using a bit more disk space and memory, but it’s so negligible that the benefits of having apps be self-contained far outweigh these downsides.
The problem here is that, for that purpose, Flatpak is better in nearly every way and far more universal.
I think Snap makes the most sense for something like Ubuntu Core, where it has the unique benefit of being able to provide lower-level system components (as opposed to Flatpak, which is more or less just for desktop GUI apps), but it doesn’t make sense for much else over other existing solutions.