
Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them 🏳️‍🌈


Max_P,

If we allow derivatives, I’d say SteamOS despite it being Arch-based. It’s putting Linux in non-technical people’s literal hands, and it’s not a locked-down, completely different platform that happens to run Linux the way Android is. It almost feels designed by Valve to give people a taste of Linux through its desktop mode, and people who would otherwise be modding consoles are now modding SteamOS and learning how much fun an open platform can be. I’ve seen people from sales talk about their Decks on my work Slack.

Otherwise, NixOS, no contest. It’s been a really long time since we last saw a fundamentally different distro with some real potential. For the most part, Arch, Debian and Fedora do similar things with varying degrees of automation and preconfiguration, but they’re still very package oriented. We’ve mostly been slapping tools like Ansible on top to configure them to our liking reproducibly, or answer files if the package manager supports them. And then NixOS is like: what if the entire system was derived from evaluating a function, and the same input always produced the exact same system? It’s incredibly powerful, especially when maintaining machines at scale. Updates are guaranteed to result in the exact same configuration, and they’re atomic too, so there’s no half-updated system because the user unplugged the machine in the middle of it.
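For a taste of what that looks like, here’s a minimal sketch of a configuration.nix (the hostname, user and packages are made-up examples, not a complete config):

```nix
# /etc/nixos/configuration.nix -- illustrative sketch only
{ config, pkgs, ... }:
{
  networking.hostName = "examplebox";          # hypothetical hostname
  services.openssh.enable = true;              # whole sshd setup from one declarative line
  environment.systemPackages = with pkgs; [ git htop ];
  users.users.alice = {                        # hypothetical user
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Evaluate it with nixos-rebuild switch and you get the same system every time; old generations stay in the boot menu, which is what makes updates atomic and easy to roll back.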

Max_P,

Internally it’s even stored as a vote of either +1 or -1, so sending an undislike of a like probably also results in the vote’s removal. Lemmy just sums up all the votes and you have the score.

A like and a dislike activity are also contradictory, so even if you don’t unlike something, if you send a dislike it replaces the like as well.

Max_P,

Yes, but by doing so you’re relying on the same principles as MBR boot. There’s still that one coveted boot sector Windows will attempt to take back every time.

What’s nice about EFI in particular is that the motherboard loads the file from the ESP, and it can load several of them and add them to its boot menu. Depending on the motherboard, you can even browse the ESP and manually execute a .efi from it.

That in turn makes bootloader fuckups a lot less likely, because you basically press F12, pick GRUB/sd-boot and you’re back in. Previously the only fix was to boot a USB stick and reinstall syslinux/GRUB.
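For example, assuming an EFI system, efibootmgr lets you inspect and fix those firmware entries from within Linux (the entry numbers below are made up):

```sh
# List the boot entries the firmware knows about
efibootmgr

# If Windows put itself first again, just restore the order
# (0001 = GRUB, 0000 = Windows Boot Manager in this made-up example)
sudo efibootmgr --bootorder 0001,0000
```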

Max_P,

Sometimes “ugly” really just means “not pretty and wealthy-looking”.

Wind turbines aren’t pretty, but they’re not any more of an eyesore than overhead power lines or whatever. And at least they’re a symbol of caring about sustainability.

A lot of people like to move all the “ugly” elsewhere, out of their sight, and then call those places shitholes. It doesn’t bother them that they’re just moving the infrastructure to where the less wealthy have to deal with it. They’d rather a coal plant drown a lower-class city in pollution than see wind turbines near their upper-class neighbourhood.

Max_P,

Both Docker and Podman pretty much handle all of those, so I think you’re good. The last aspect, networking, can easily be fixed with a few iptables/nftables/firewalld rules. One final addition could be NGINX in front of the web services, or something dedicated to handling web requests on the open Internet, to reduce potential exploits against the embedded web servers in your apps. But other than that, you’ve got it all covered yourself.

There are also options in docker/podman-compose files to limit CPU usage, memory usage, or generally prevent a container from using up all the system’s resources.
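Roughly something like this in a compose file (the exact keys vary a bit between Compose versions, and the service/image names are placeholders):

```yaml
services:
  myapp:
    image: myapp:latest            # placeholder image
    mem_limit: 512m                # cap memory
    cpus: 1.0                      # cap CPU
    ports:
      - "127.0.0.1:8080:8080"      # only listen on localhost; let NGINX face the Internet
    restart: unless-stopped
```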

If you want an additional layer of security, you could also run it all in a VM, so a container escape leads to a VM that does nothing else but run containers. So another major layer to break.

Max_P,

Kernel exploits. Containers logically isolate resources but they’re still effectively running as processes on the same kernel sharing the same hardware. There was one of those just last year: blog.aquasec.com/cve-2022-0185-linux-kernel-conta…

Virtual machines are a whole other beast, because the isolation is enforced at the hardware level. You’d have to exploit hardware vulnerabilities like Spectre, or a virtual device: a couple of years ago, for example, someone found a breakout bug in the old floppy emulation driver that QEMU still assigns to VMs by default.

Max_P,

Security comes in layers, so if you’re serious about security you do in fact plan for things like that. You always want to limit the blast radius if your security measures fail. And most of the big cloud providers do that for their container/kubernetes offerings.

If you run Portainer, for example, and that gets breached, it’s essentially a free container escape, because you can trick Docker into mounting and exposing whatever you need from the host. It’s also not uncommon for people to give a container more permissions than it really needs.
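To illustrate how cheap that escape is: anything that can reach the Docker daemon can ask it to start a container like this, which is basically a root shell on the host (purely illustrative):

```sh
# Mount the host's root filesystem and chroot into it
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```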

It’s not like making a VM dedicated to running your containers costs anything; it’s basically free. I don’t do it all the time, but if it’s exposed to the Internet and there’s other stuff on the box I want to be hard to get into, like when it runs on my home server or desktop, then it definitely gets a VM.

Otherwise, why even bother putting your apps in containers? You could also just make the apps themselves fully secure and unbreachable. Why do we need a container for isolation? One should assume the app’s security measures are working, right?

Max_P,

Of course it’s a 737 Max.

Boeing’s really been dropping the ball on the 737 Max upgrades: first the Max 8, now the Max 9.

At this point I kind of avoid airlines with Boeing fleets; the Airbus planes are generally nicer anyway.

Max_P,

(a) Yes. Instance admins have the ultimate say in what’s on their server. They can delete posts and entire communities, and ban or delete remote users. At least they had the decency to notify you!

Since lemmy.ca owns the post, lemmy.world can’t federate out the removal, so it’s only on lemmy.world.

(b) You have to appeal to lemmy.world. Each instance has its own independent appeal process.

That’s the beauty of the fediverse: instances can all have their own rules to tailor the experience to their users, and it doesn’t have to affect the entire fediverse. Other instances linked to lemmy.ca can still see and interact with your post just fine, just not lemmy.world.

Max_P,

Moderation does federate out, but only from the originating instance, the one that owns the post in question.

If someone posts spam on lemmy.ca and lemmy.world deletes it, it only gets deleted on lemmy.world. If a mod or admin on lemmy.ca deletes it, however, that federates and everyone deletes it as a result (unless an instance is modified to ignore deletions; by default Lemmy will accept them).

There are some interoperability problems with other software, notably Kbin, whose deletions don’t federate to Lemmy correctly, so those do need to be moderated by every instance. But between Lemmy instances it federates fine.

Max_P,

I think the best way to visualize it is in terms of who owns what and who has the authority to perform moderator actions.

  • As a user, you own the post, so you’re allowed to delete it no matter what. That always federates.
  • An admin always has full rights on what happens on their instance, because they own the server. The authority ends at their instance, so it may not federate out unless authorized otherwise.
  • An admin can nominate any user from the same instance to moderate any of its communities, local or remote. That authority also ends at that instance. In theory it should work for remote users too, but then it’d be hard to be from lemmy.ml and moderate lemmy.world’s view of a community on lemmy.ca.
  • The instance that owns the community can also do whatever they want even if the post originated from elsewhere, because they own the community. That federates out.
  • The instance that owns the community can nominate anyone from any instance as moderator. They’re authorized to perform mod actions on behalf of the instance that owns the community, therefore it will federate out as well.

From those you can derive what would happen under any scenario involving any combinations of instances.

Max_P,

You may disagree with it, and you may even be right; I didn’t bother watching all those videos. But the thing is, it’s always a potential liability for admins, and we’re at the mercy of what the law says and how a potential judge or jury would rule if it were brought to court.

And we all know how that goes when underage people are involved: everyone goes “but the children!”. So admins err on the side of caution, because nobody wants to deal with legal trouble if they don’t have to. Just blur it and make everyone happy.

Plus, in the current AI landscape, the mere availability of imagery of nude children, even if it’s not sexually suggestive at all, means someone can alter it to become so. People have already been arrested for that.

It has nothing to do with people being too prudish to see naked children. It’s about consent and what nasty people will inevitably do with it. Does that girl really want videos of herself naked all over the porn sites, however heroic her actions? Probably not.

That’s a very weird hill to blow alts on.

Max_P,

Not really different from any other M.2 SSD; the fact that it’s over USB doesn’t matter.

The only consideration for regular USB sticks is that they’re usually quite crap, so running a system off one tends to wear out the flash pretty quickly.

Max_P,

It indeed doesn’t; its purpose is to show the difference and clarify why/where OP might have heard that portable installs on USB sticks need special care.

All the guides and tutorials out there are overwhelmingly written with regular USB sticks in mind and not M.2 enclosures over USB. So they’ll tell you to put as much stuff on tmpfs as possible and avoid all unnecessary reads and writes.

Max_P,

How is it unrelated? Running MongoDB in a container so that it just works and you have a portable/reproducible dev environment is a perfectly valid approach.

Max_P,

We have to define what installing software even means. If you install a Flatpak, it basically does the same thing as Docker but somewhat differently. Snaps are similar.

“Installing” software generally means any way of getting the software onto your computer semi-permanently so you can run it. You still end up with its files unpacked somewhere; the main difference with Docker is that it ships the whole runtime environment along, in the form of a copy of a distro’s userspace.

But fair enough, sometimes you do want to run things directly. I’m just pointing out it’s not a bad answer, just not the one you wanted because that intent was missing from your OP. Some things are so finicky and annoying to get running on the “wrong” distro that Docker is the only sensible way to install them. I run the Unifi controller in a container, for example, because I just don’t want to deal with Java and MongoDB versions. It comes with everything it needs, and I don’t have to keep Java 8 around on my main system, potentially breaking things that need a newer version.
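For reference, the whole thing boils down to something like this (the image name, ports and volume are just an example of how one might run it, not an exact recipe):

```sh
docker run -d --name unifi \
  -p 8443:8443 -p 8080:8080 \
  -v unifi-data:/unifi \
  jacobalberty/unifi:latest   # example community image; its bundled Java/MongoDB stay inside the container
```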

Max_P,

Kind of but also not really.

Docker is one kind of container, and containers themselves are built out of a set of Linux namespaces.

It’s possible to run them as if they were a virtual machine with LXC, LXD, systemd-nspawn. Those run an init system and have a whole Linux stack of their own running inside.

Docker/OCI containers take a different approach: we don’t really care about the whole operating system, we just want the app to run in a predictable environment, usually network software listening on a specified port. So while the container does contain a good chunk of a regular Linux installation, it’s there so the application finds all the libraries it expects. Basically, “works on my machine” becomes “here’s my whole machine with the app already configured on it”.

And then we were like, well, this is nice, but what if we have multiple things that need to talk to each other to form a bigger application/system? That’s where docker-compose and Kubernetes pods come in. They describe a set of containers that form a system as a single unit, and link them up together. In the case of Kubernetes, it’ll even potentially run many, many copies of your pod across multiple servers.

The last one is usually how dev environments go: one container has all your JS tooling (npm, pnpm, yarn, bun, deno, or even all of them). That’s all it does, so you can’t possibly have a Python library that conflicts or whatever. And you can’t accidentally depend on tools you happen to have installed on your machine, because then the container won’t have them and it won’t work; you’re forced to add them to the container. That’s used to build and run your code, and now you need a database. You add a MongoDB container to your compose file, and now your app and your database are managed together, and containers can reach each other by name! Need a web server to run it in a browser? Add NGINX.

All isolated, so you can’t be in a situation where one project needs node 16 and an old version of mongo, but another needs node 20 and a newer mongo. You don’t care: each has a mongo container pinned to the exact version required, no messing around.
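A rough sketch of what one of those per-project compose files can look like (versions and names are examples, not a recommendation):

```yaml
services:
  app:
    image: node:20               # the exact toolchain this project needs
    working_dir: /app
    volumes:
      - ./:/app
    command: npm run dev
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4             # another project can pin mongo:7 with zero conflict
    volumes:
      - mongo-data:/data/db
  web:
    image: nginx:alpine
    ports:
      - "127.0.0.1:8080:80"
volumes:
  mongo-data:
```

Inside that compose network the app reaches the database simply as mongo, and none of it leaks into the next project’s environment.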

Typically you don’t want to use Docker as a VPS though. You certainly can, but the overlay filesystems will become inefficient and it will drift very far from the base image. LXC and nspawn are better tools for that and don’t use image stacking or anything like that. Just a good ol’ folder.

Those are just some applications of namespaces. Process, network, time, users/groups and filesystem/mount namespaces can each be managed independently, so containers can share a network namespace while sitting in different mount namespaces.

And that’s how Docker, LXC, nspawn, Flatpak and Snaps are all mostly the same thing under the hood, and why the line between isolation layers, bundled dependencies, containers and virtual machines is so blurry. There’s an infinite number of ways to set up the namespaces, ranging from just seeing /tmp as your own private /tmp all the way to basically a whole VM.

Max_P,

“Well, I’m currently using VMware on Ubuntu”

Well there’s your mistake: using VMware on a Linux host.

QEMU/KVM is where it’s at on Linux, mostly because KVM is built into the kernel, a bit like Hyper-V is built into Windows. It integrates much better with the Linux host, which leads to fewer problems.
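If you want to try it, something along these lines gets you a VM with virt-install (assuming libvirt and virt-install are installed; the name, sizes and ISO path are placeholders):

```sh
virt-install \
  --name testvm \
  --memory 4096 --vcpus 2 \
  --disk size=40 \
  --cdrom ~/Downloads/some-distro.iso \
  --os-variant generic
```

virt-manager gives you a GUI on top of the same libvirt stack if you’d rather click around.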

“Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.”

Maybe, but it’s still Linux; there’s always an escape hatch if the Ubuntu packages don’t cut it. I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they work just fine.

Max_P,

It’ll definitely run Kali well. Windows will be left without hardware acceleration for 2D/3D, so it’ll be a little laggy, but it’s usable.

VMware has its own driver that converts enough DirectX for Windows to run smoother and not fall back to the basic VGA path.

But VMware being proprietary software, changing distro won’t make it better, so either you deal with the VMware bugs or you deal with a stable but slow software-rendered Windows.

That said, on the QEMU side it’s possible to attach one of your host’s GPUs to the VM, where it gets full 3D acceleration. Many people straight up game in competitive online titles inside a QEMU VM. If you have more than one GPU, even an integrated GPU plus a dedicated one, as is common with most Intel consumer non-F CPUs, you can make that happen, and it’s really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Be sure your CPU and motherboard support it properly before investing though; it can be finicky, but it’s awesome when it works.
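The rough shape of the setup on an Intel box looks like this (the PCI IDs are placeholders; pull yours from lspci -nn, and on most distros you also need to regenerate the initramfs so vfio-pci loads early):

```sh
# Kernel command line: enable the IOMMU (use amd_iommu=on on AMD platforms)
#   intel_iommu=on iommu=pt

# /etc/modprobe.d/vfio.conf -- reserve the passthrough GPU (and its audio function)
# for vfio-pci so the host driver never grabs it
options vfio-pci ids=10de:1c81,10de:0fb9
```

After a reboot the card shows up as an attachable PCI host device in virt-manager/libvirt.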

Max_P,

They mostly don’t exist yet apart from this PR.

On Vista and up, there’s only the Display Only Driver (DOD), which gets resolutions and auto-resizing working but provides no graphical acceleration by itself.

Max_P,

Distro packages, and to some extent Flatpaks, use shared libraries which can be updated independently of your app.

So for example, if a vulnerability is discovered in, say, curl, imagemagick, ffmpeg or whatever library an app is using: with AppImages, it won’t be fixed until you update every one of your AppImages. In Flatpak, it can usually be fixed as part of a shared runtime update, or shipped as a rebuild of the Flatpak. With distro packages, you can usually just update the library itself and be done with it.
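You can see that dynamic linking in action on any distro-packaged binary (output obviously varies by distro):

```sh
# Every entry here is a shared library the package manager can patch system-wide,
# and the fix applies to this binary the next time it starts
ldd /usr/bin/curl
```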

AppImages are convenient for the user in that you can easily store them, move them around, and keep old versions forever. But it still doesn’t guarantee they’ll run on distros a couple of years from now, it does guarantee that a given version will stay vulnerable forever if any of its bundled dependencies are, it makes packages much, much bigger than they need to be, and you have to unpack and repack them if you need library shims.

Different kinds of tradeoffs and goals, essentially. Flatpak happens to be a compromise a lot of people agree on as it provides a set of distro-agnostic libraries while also not shifting the burden entirely onto the app developers. The AppImage developer is intentionally keeping Wayland broken on AppImage because he hates it and wants to fulfil his narrative that Wayland is a broken mess that won’t ever work, while Flatpak developers work hard on sandboxing and security and granular permission systems.

Max_P,

It is very unfortunate. It’s fine to point out problems, but then when you become part of the problem, that’s not amazing.

He had the same meltdown when fuse2 was deprecated in favor of fuse3, which, guess what, also broke AppImage, and we got a huge rant about that too.

Flatpak has a better chance of being forward compatible for the foreseeable future. Linux generally isn’t a very ABI/API compatible platform because for the most part you’re expected to be able to patch and recompile whatever you might want.

Max_P,

RAM is the kind of thing you’re better off having too much of than not enough. Worst case, the OS ends up with a very healthy, large file cache, which takes load off your storage and makes things a bit faster, or lets it spend the CPU on other things. If anything, your machine is future-proofed against the ever-increasing RAM hunger of web apps. But if you run out of it, you get killed apps, hangs or major slowdowns as it hits the swap.

The thing with RAM is that it’s easy for 99% of your workload to fit comfortably, and then there’s one thing that temporarily needs a bit more and you’re screwed. My machine usually uses 8-12 of its 32GB of RAM, yet I still ended up needing to add swap. Just opening the Lemmy source code and spinning up the Rust LSP can use a solid 8+GB alone. I’ve compiled some AUR packages that needed more than 16GB of RAM. I have 16 cores, so compiling anything with -j32 can very quickly bring the machine to its knees even if each compile thread only uses 256-512MB.

Another example: my netbook has 8GB. 99% of the time that’s fine, because it’s a web browsing machine, and I probably average 4GB of usage on a heavy day with lots of tabs open. But if I open VSCode and use any LSP, be it TypeScript or Rust, the machine immediately starts swapping aggressively. I barely managed to compile Lemmy on it, and only after logging out of my graphical session.
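And when you do get caught short, a swap file is a quick band-aid (the size is an example; btrfs needs a couple of extra steps):

```sh
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it survive reboots
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab
```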

RAM is cheap enough these days that it’s nice to have more than you need and never have to worry about it.

Max_P,

Yeah, it’s not really advertised as an init system anymore. It’s an entire system management suite, and seen from that angle it’s pretty good at it too. All of it is consistent, it’s fairly powerful, and it’s usually 10-20 lines of unit file to describe what you want. I’ve wanted that for a long time.

I feel like the hate always comes from people who treat the UNIX philosophy like religion. And even then, systemd is very modular, just also well integrated: networkd manages my network, resolved my DNS, journald my logs, timesyncd my NTP, logind my logins and sessions, and homed mounts my user profiles on demand.

Added complexity, yes, but I’ve been using the hell out of it. Start services when a specific peripheral is plugged in? Got it. Automatically assign devices to seats? Logind’s got you covered, no need to even mess with xorg configs. VM networking? networkd handles it. DNS caching? Out of the box. Split DNS? One command. Don’t want 2000 VMs rotating their logs at exactly midnight and trashing your Ceph cluster? Just slap RandomizedDelaySec=24h on the timer units. Isolate and pin a VM to dedicated cores dynamically? Yep, it’ll do that. Services that need to run on a specific NUMA node to stay close to PCIe peripherals? Easy. All of it very configurable with things like Ansible or bash provisioning scripts.
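That log rotation one, for example, is a tiny drop-in override (assuming the distro runs logrotate from a systemd timer):

```ini
# /etc/systemd/system/logrotate.timer.d/override.conf
[Timer]
RandomizedDelaySec=24h
```

systemctl edit logrotate.timer writes exactly that kind of file for you.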

Sure, it may not be for everybody, but it solves real problems real Linux admins have to deal with at scale. If you don’t like it, sysvinit still works just fine, and I hear good things about runit too. It’s an old and tired argument; it’s been over 10 years, we can stop whining about it and move on. There are plenty of non-systemd distros to use.

Alright, I'm gonna "take one for the team" -- what is with the "downvote-happy" users lately?

Title. “lmao internet points” and all, but what is the point of participating in a community that sees assumptions and other commonly non-harmful commentaries/posts as “bad” this easily? Are folks in here really that needy of self-validation, even if it means seeking it from something completely insignificant like...

Max_P,

I expected this to be “another one of those” but actually from what my instance has about you, you were indeed correct. Gaming distros with exclusive features lmao.

IMO that’s some of the gamer logic bleeding over into the Linux side now that Linux gaming is taking off. They’ll do anything, including installing dubious Linux distros barely held together with duct tape, for a perceived extra 2 FPS. Download software exclusively distributed on Discord? Hell yeah. I’m sure at least one of them boots with mitigations=off without clearly indicating that it does.

We’re seeing the same thing on the Windows side with modified Windows ISOs like AtlasOS, which rightfully made some security experts sound the alarm. Some of them do things like completely strip out updates, the antivirus and the firewall. Unless your system exclusively runs Steam and is firewalled off the network, that’s a certified bad idea.

I’d probably trust Nobara because the guy clearly knows his shit, but some of them really are just some other guy’s riced-up Arch snapshot. They may give the impression that everything just works at first, but I’ve definitely seen examples of them falling apart. Even bigger distros like Pop!_OS have had major snafus, like the whole “Steam uninstalls your DE” thing, and Manjaro still fucks up something basic every now and then. I tried some of them in a VM and they didn’t even install or boot correctly. Oh, my fault, that one only works with NVIDIA graphics cards, not AMD, my bad.

It’s not worth arguing with; it’s a user base with vastly different goals from mine. Just let them have their Bedrock Linux completely blow up in multi-package-manager hell, and soon enough they’ll come running back to a saner, more reliable distro.
