Max_P
@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them šŸ³ļøā€šŸŒˆ

Max_P,

The votes are public. Kbin displays them right in the UI. Lemmy semi-hides it, but it’s never been designed to be private in any way.

Changing instance won’t do shit if that’s a concern to you. As an admin I can see them even if my instance isn’t involved with the post at all:

https://lemmy.max-p.me/pictrs/image/6bae7aa5-20a3-497e-9012-dc4c8a869eb4.png

Max_P, (edited)

It would be if it wasn’t for NVIDIA, as usual. On Intel/AMD, you assign the seats, the displays light up and you’re good to go, pretty much works out of the box, especially on Wayland.

But for NVIDIA yeah maybe a VM is less pain since NVIDIA works well with VFIO.

Linux file transfer speed bottlenecks?

I’m currently watching the progress of a 4 TB rsync file transfer, and I’m curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved with the transfer. I know there’s a lot that can affect transfer speeds, so I guess I’m not asking why my transfer itself isn’t going faster....

Max_P,

SATA III is 6 Gbit/s, so the max usable speed is actually about 600MB/s.
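As a rough sanity check, the 600MB/s figure falls out of SATA III’s 6 Gbit/s line rate once you account for its 8b/10b encoding:

```python
# SATA III line rate is 6 Gbit/s, but 8b/10b encoding means
# every 8 data bits travel as 10 bits on the wire.
line_rate_bits = 6_000_000_000
payload_bits = line_rate_bits * 8 // 10  # usable data bits per second
max_bytes_per_sec = payload_bits // 8    # convert bits to bytes

print(max_bytes_per_sec // 1_000_000)    # -> 600 (MB/s)
```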

What filesystem? For example, on my ZFS pool I had to let ZFS use a good chunk of my RAM for it to be able to cache things enough that rsync would max out the throughput.

Rsync doesn’t process files in parallel, so at such speeds the cycle of open file, read chunks, write chunks, close file, repeat can add up. So you want the kernel to buffer as much of it as possible.

If you look at the disk graphs of both disks, you probably see a read spike, followed by a write spike on the target, instead of a smooth maxed out curve. Then the solution is increasing buffers and caching. Depending on the distro there’s a sysctl that may be on by default that limits the size of caches to prevent the ā€œI wrote a 4GB file to my USB stick and now there’s 4GB of RAM used for it and it takes hours after finishing the transfer before it’s flushed to the stickā€.
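The knobs in question are the kernel’s dirty-writeback sysctls. A sketch of what tuning them might look like; the values here are purely illustrative, not recommendations:

```ini
# /etc/sysctl.d/99-writeback.conf (illustrative values only)
# Cap the dirty page cache in absolute bytes instead of a % of RAM,
# so a big transfer can't eat all memory before being flushed:
vm.dirty_bytes = 4294967296             # force synchronous writeback past 4 GiB dirty
vm.dirty_background_bytes = 1073741824  # start background flushing at 1 GiB dirty
```

Setting the `*_bytes` variants automatically zeroes the percentage-based `vm.dirty_ratio`/`vm.dirty_background_ratio` counterparts, which are the defaults on most distros.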

Max_P,

Installing from source is fairly likely to work: wiki.ros.org/noetic/Installation/Source

It doesn’t seem to have any outrageously complicated dependencies to work, just C++, Boost and a few other recognizable names, at least at a glance. They also seemingly have an ArchLinux package, which means it’s likely to at least be buildable on latest everything. Mint will fall in between, so the odds it’ll compile are pretty good.

Max_P,

For maximum performance you probably want to skip virt-manager, virt-viewer has a hardcoded FPS cap.

If you use QEMU directly and use virtio-gpu paired with the sdl or gtk display, with OpenGL enabled, you can run Ubuntu at 4K144Hz no problem. The VM is near-imperceptible and it works out of the box, and that’s not even touching the crazy VFIO stuff.
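A minimal sketch of such an invocation (the device and display flags are from QEMU’s docs; the disk path and sizing are placeholders):

```shell
# virtio-gpu with OpenGL (virgl) acceleration and an SDL display with GL on.
# "-display gtk,gl=on" works as well. Tune -smp/-m to your machine.
qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 8 -m 8G \
  -device virtio-gpu-gl \
  -display sdl,gl=on \
  -drive file=ubuntu.qcow2,if=virtio
```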

Max_P,

It’s not even always necessarily about trust, but risk management as well. I’ve definitely coded a crash handler that exposed my database credentials in it. There’s also the network aspect of it: your ISP/job/coffee shop can see the DNS request and TLS server name from the telemetry ping. That can be used to track you, or maybe you trigger some firewall alarm at work because of the ping.

We’ve kind of just started accepting that most apps will phone home and that there’s constantly some chatter on the network from all those apps. But if you actually start looking at what all your devices and apps are doing in the background with say, a PiHole, it’s pretty shocking.

I’m not that paranoid and would certainly accept some level of telemetry if asked nicely. ā€œHey I’m a small dev, I appreciate receiving detailed crash reports to make the app betterā€. And as a developer, users might be willing to offer way more than what would be reasonable to do in the background. I might even agree to submit a screenshot on crash, but if and only if I’ve been asked before and told what it’s used for, and I get the option to disagree if I’m going to be handling private information and don’t want to risk my data being part of a stack trace.

VPN to home network options

I currently have a server running Unraid as the OS, which has some WireGuard integration built in. Which I’ve enabled and been using to remotely access services hosted on that server. But as I’ve expanded to include things like Octopi running on a Pi3 and NextcloudPi running on a Pi4 (along with AdGuardHome), I’m trying to...

Max_P,

Any reason the VPN can’t stay as-is? Unless you don’t want it on the unraid box at all anymore. But going to unraid over VPN then out the rest of the network from there is a perfectly valid use case.

Lemmy instance which has not defederated with any other instance.

Hi everyone. I have found many ghost comments in posts. Like one of the posts has 300+ upvotes and 28 comments but when I opened it, there were no comments. I tried different Lemmy apps and it’s the same in all of them. Which leads me to believe that it has something to do with defederation done by Lemmy.ml. Which instance has...

Max_P, (edited)

Keep in mind, defederation is bidirectional. You can end up on an instance that doesn’t defederate anybody but is being defederated by some major instances and end up worse off. Also, communities are bound to an instance so even if your instance doesn’t defederate with another, the instance that hosts the community might, which also doesn’t solve anything.

Also lemmy.ml had to restore from backup monday because postgres shat itself, so if the post is from monday or around, it’s possible it was simply lost due to the technical problems.

There’s also some federation problems with 0.19.0 and 0.19.1, so it’s possible delivery to lemmy.ml was attempted but failed due to load or whatever.

You didn’t give any details or examples, so we can only speculate. We troubleshoot federation by establishing patterns, like which instance the missing comments are from and which instance hosts the community.

Addendum: I’ve also been experiencing occasional ghost posts, and I’m on my own instance, so there might be some stuff going on that’s unrelated, because I sure didn’t do anything. If they were deleted or retracted I would see them because I’m admin, I see everything.

Max_P,

For KDE specifically I think there’s a dbus interface that can be called to switch it. You can find it with QDBusViewer or D-Feet.

I’d imagine XWayland would follow suit since it’s essentially a Wayland client. But if you ran the xmodmap under XWayland, it may have inverted it there while it was already inverted in KWin, which would double-invert it, i.e. put it back to default.

Otherwise doing it at the evdev level will definitely work. It’s a bit of a nuclear option but if it works…

Max_P,

The ads come from an ad network where there is very little visibility into what’s going to be displayed in your app. And bad actors keep managing to get their ads published even though the ad network doesn’t allow them.

And it all ties into targeted advertising: they make sure very few people get the bad ad, and try to target the people they think may be more susceptible to these kinds of tactics. Depending on the amount of interactivity allowed, the ad can even display two different things if it deems you too savvy to fall for it.

It’s basically inescapable unless you only use apps without ads, or pay for the ad-free versions.

The whole advertising industry is sketchy, more news at 10.

Max_P,

If we allow derivatives, I’d say SteamOS despite being Arch. It’s putting Linux in non-technical people’s literal hands and it’s not a locked down and completely different platform that happens to run Linux like Android is. It’s almost designed by Valve to give people a taste of Linux by the addition of its desktop mode, and people that would be modding consoles are now modding SteamOS and learning how much fun an open platform can be. I’ve seen people from sales talk about their Decks on my work Slack.

Otherwise, NixOS, no contest. It’s been a really long time since we’ve last seen a fundamentally different distro that’s got some real potential. For the most part, Arch, Debian and Fedora do similar things with varying degrees of automation and preconfiguring your packages, but they’re still very package oriented. We’ve been mostly slapping tools like Ansible on top to really configure them to our liking reproducibly, or answer files if your package manager has something like that. And then NixOS is like, what if the entire system was derived from evaluating a function, and the same input always resulted in the exact same system? It’s incredibly powerful, especially when maintaining machines at scale. Updates are guaranteed to result in the exact same configuration, and they’re atomic too: no half-updated system because the user unplugged it mid-update.

Max_P,

Internally it’s even stored as a vote of either +1 or -1, so sending an undislike of a like probably also results in the vote’s removal. Lemmy just sums up all the votes and you have the score.

A like and a dislike activity are also contradictory, so even if you don’t unlike something, if you send a dislike it replaces the like as well.
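A toy model of those semantics (my own sketch, not Lemmy’s actual code): one vote per user stored as ±1, where a new like/dislike replaces the old one, an undo removes it, and the score is just the sum:

```python
# Toy vote ledger: one entry per user, +1 (like) or -1 (dislike).
votes: dict[str, int] = {}

def like(user): votes[user] = 1        # replaces any existing dislike
def dislike(user): votes[user] = -1    # replaces any existing like
def undo(user): votes.pop(user, None)  # undoing either kind removes the vote

def score(): return sum(votes.values())

like("alice"); dislike("bob")
print(score())    # -> 0
dislike("alice")  # contradictory activity replaces the like
print(score())    # -> -2
undo("bob")
print(score())    # -> -1
```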

Max_P,

Yes but by doing so you’re using the same principles as MBR boot. There’s still this coveted boot sector Windows will attempt to take back every time.

What’s nice about EFI in particular is that the motherboard loads the file from the ESP, and can load multiple of them and add them to its boot menu. Depending on the motherboard, you can even browse the ESP and manually execute a .efi from it.

Which in turn makes bootloader fuckups a lot less likely, because you basically press F12, pick GRUB/sd-boot and you’re back in. Previously the only fix was to boot a USB and reinstall syslinux/GRUB.

Max_P,

Sometimes ā€œuglyā€ is even ā€œnot pretty and wealthy lookingā€.

Wind turbines aren’t pretty, but they’re not any more of an eyesore than overhead power lines or whatever. And at least they’re a symbol of caring about sustainability.

A lot of people like to move all the ā€œuglyā€ elsewhere out of their sight and then call those places shitholes. It doesn’t bother them; they’re just moving the infrastructure where the less wealthy have to deal with it. They’d rather have a coal plant choke a lower-class city in pollution than see wind turbines near their upper-class neighbourhood.

Max_P,

Both Docker and Podman pretty much handle all of those so I think you’re good. The last aspect about networking can easily be fixed with a few iptables/nftables/firewalld rules. One final addition could be NGINX in front of web services or something dedicated to handling web requests on the open Internet to reduce potential exploits in the embedded web servers in your apps. But other than that, you’ve got it all covered yourself.

There’s all the options needed to limit CPU usage, memory usage or generally prevent using up all the system’s resources in docker/podman-compose files as well.
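For example, in a compose file those caps look roughly like this (service name and values are arbitrary examples):

```yaml
services:
  web:
    image: nginx:alpine
    # Hard caps enforced by the container runtime:
    mem_limit: 512m    # OOM-kill the container past 512 MiB
    cpus: 1.5          # at most 1.5 CPU cores
    pids_limit: 200    # guards against fork bombs
    read_only: true    # optional hardening: read-only root filesystem
```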

If you want an additional layer of security, you could also run it all in a VM, so a container escape leads to a VM that does nothing else but run containers. So another major layer to break.

Max_P,

Kernel exploits. Containers logically isolate resources but they’re still effectively running as processes on the same kernel sharing the same hardware. There was one of those just last year: blog.aquasec.com/cve-2022-0185-linux-kernel-conta…

Virtual machines are a whole other beast because the isolation is enforced at the hardware level, so you have to exploit hardware vulnerabilities like Spectre, or a virtual device: a couple years ago some people found a breakout bug in the old floppy emulation driver that QEMU still assigns to VMs by default.

Max_P,

Security comes in layers, so if you’re serious about security you do in fact plan for things like that. You always want to limit the blast radius if your security measures fail. And most of the big cloud providers do that for their container/kubernetes offerings.

If you run portainer for example and that one gets breached, that’s essentially free container escape because you can trick Docker into mounting and exposing what you need from the host to escape. It’s not uncommon for people to sometimes give more permissions than the container really needs.

It’s not like making a VM dedicated to running your containers costs anything. It’s basically free. I don’t do it all the time, but if it’s exposed to the Internet and there’s other stuff on the box I want to be hard to get into, like if it runs on my home server or desktop, then it definitely gets a VM.

Otherwise, why even bother putting your apps in containers? You could also just make the apps themselves fully secure and unbreachable. Why do we need a container for isolation? One should assume the app’s security measures are working, right?

Max_P,

Of course it’s a 737 Max.

Boeing’s really been dropping the ball on the 737 Max upgrades, first the Max 8 now the Max 9.

At this point I kind of avoid airlines with Boeing fleets, the Airbus planes are nicer anyway in general.

Max_P,

(a) Yes. Instance admins have the ultimate say in what’s on their server. They can delete posts, entire communities, ban remote users and delete remote users. At least they had the decency of notifying you!

Since lemmy.ca owns the post, lemmy.world can’t federate out the removal, so it’s only on lemmy.world.

(b) You have to appeal to lemmy.world. Each instance has its own independent appeal process.

That’s the beauty of the fediverse: instances can all have their rules to tailor the experience to their users, and it doesn’t have to affect the entire fediverse. Other instances linked to lemmy.ca can still see and interact with your post just fine, just not lemmy.world.

Max_P,

Moderation does federate out, but only from the originating instance, the one that owns the post in question.

If someone posts spam on lemmy.ca and lemmy.world deletes it, it only deletes on lemmy.world. If a mod or admin on lemmy.ca deletes it however, it federates and everyone deletes it as a result (unless modified to ignore deletions, but by default Lemmy will accept it).

There’s some interoperability problems with some software, notably Kbin where their deletions don’t federate to Lemmy correctly, so those do need to be moderated by every instance. But between Lemmy instances it does federate.

Max_P,

I think the best way to visualize it is in terms of who owns what and who has the authority to perform moderator actions.

  • As a user, you own the post, so you’re allowed to delete it no matter what. That always federates.
  • An admin always has full rights on what happens on their instance, because they own the server. The authority ends at their instance, so it may not federate out unless authorized otherwise.
  • An admin can nominate any user from the same instance to moderate any of its communities, local or remote. That authority also ends at that instance. In theory it should work for remote users too, but then it’d be hard to be from lemmy.ml and moderate lemmy.world’s view of a community on lemmy.ca.
  • The instance that owns the community can also do whatever they want even if the post originated from elsewhere, because they own the community. That federates out.
  • The instance that owns the community can nominate anyone from any instance as moderator. They’re authorized to perform mod actions on behalf of the instance that owns the community, therefore it will federate out as well.

From those you can derive what would happen under any scenario involving any combinations of instances.
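Those rules can be condensed into a small decision sketch (my own summary of the list above, not Lemmy’s code):

```python
# Does a moderation action federate out? A rough model of the ownership rules:
# post authors and the community's home instance (including its appointed
# mods, local or remote) federate; everyone else acts locally only.
def federates(actor_instance: str,
              community_home: str,
              is_post_author: bool) -> bool:
    if is_post_author:
        return True  # users own their posts; their deletions always federate
    if actor_instance == community_home:
        return True  # the community's home instance owns the community
    # An admin/mod acting only with their own instance's authority:
    # the removal applies on their instance but doesn't federate out.
    return False

# lemmy.world removing spam from a lemmy.ca community: local-only.
print(federates("lemmy.world", "lemmy.ca", False))  # -> False
# lemmy.ca (the community's home) removing it: federates everywhere.
print(federates("lemmy.ca", "lemmy.ca", False))     # -> True
```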

Max_P,

You may disagree with it and may even be right, I didn’t bother watching all those videos. But the thing is, it’s always a potential liability for admins, and we’re at the mercy of what the law says and what a potential judge or jury would rule if brought to court.

And we all know how that goes when underage people are involved: everyone goes ā€œbut the children!ā€. Therefore, admins side with caution, because nobody wants to deal with legal trouble if they don’t have to. Just blur it and make everyone happy.

Plus, in the current AI landscape, the mere availability of nude children imagery even if it’s not sexually suggestive at all means someone can alter it to become so. People have already been arrested for that.

Nothing to do with people being too prude to see naked children. It’s about consent and what nasty people will inevitably do with it. Does that girl really want videos of her naked all over the porn sites even through heroic actions? Probably not.

That’s a very weird hill to blow alts on.

Max_P,

It indeed doesn’t, its purpose is to show the differences and clarify why/where OP might have heard you need special care for portable installs on USB sticks.

All the guides and tutorials out there are overwhelmingly written with regular USB sticks in mind and not M.2 enclosures over USB. So they’ll tell you to put as much stuff on tmpfs as possible and avoid all unnecessary reads and writes.

Max_P,

We have to define what installing software even means. If you install a Flatpak, it basically does the same thing as Docker but somewhat differently. Snaps are similar.

ā€œInstallingā€ software generally means any way that gets the software on your computer semi-permanently and run it. You still end up with its files unpacked somewhere, the main difference with Docker is it ships with the whole runtime environment in the form of a copy of a distro’s userspace.

But fair enough, sometimes you do want to run things directly. Just pointing out it’s not a bad answer, just not the one you wanted due to missing intents from your OP. Some things are so finicky and annoying to get running on the ā€œwrongā€ distro that Docker is the only sensible way to install them. I run the Unifi controller in a container for example, because I just don’t want to deal with Java versions and MongoDB versions. It comes with everything it needs, and I don’t have to needlessly keep Java 8 around on my main system, potentially breaking things that need a newer version.

Max_P,

Kind of but also not really.

Docker is one kind of container, which itself is a set of kinds of Linux namespaces.

It’s possible to run them as if they were a virtual machine with LXC, LXD, systemd-nspawn. Those run an init system and have a whole Linux stack of their own running inside.

Docker/OCI take a different approach: we don’t really care about the whole operating system, we just want apps to run in a predictable environment. So while the container does contain a good chunk of a regular Linux installation, it’s there so that the application has all the libraries it expects, usually network software that runs on a specified port. Basically, ā€œworks on my machineā€ becomes ā€œhere’s my whole machine with the app on it already configuredā€.

And then we were like, well this is nice, but what if we have multiple things that need to talk to each other to form a bigger application/system? And that’s where docker-compose and Kubernetes pods come in. They describe a set of containers that form a system as a single unit, and link them up together. In the case of Kubernetes, it’ll even potentially run many many copies of your pod across multiple servers.

The last one is usually how dev environments go: one container has all your JS tooling (npm, pnpm, yarn, bun, deno, or all of them even). That’s all it does, so you can’t possibly have a Python library that conflicts or whatever. And you can’t accidentally depend on tools you happen to have installed on your machine, because then the container won’t have them and it won’t work; you’re forced to add them to the container. Then that’s used to build and run your code, and now you need a database. You add a MongoDB container to your compose, and now your app and your database are managed together and you can even access the other containers by their name! Now you need a web server to run it in a browser? Add NGINX.

All isolated, so you can’t be in a situation where one project needs node 16 and an old version of mongo, but another one needs 20 and a newer version of mongo. You don’t care, each have a mongo container with the exact version required, no messing around.
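That dev-environment setup maps to a compose file along these lines (names, versions and ports are just examples):

```yaml
services:
  app:
    image: node:20      # pin the exact runtime this project needs
    working_dir: /src
    volumes:
      - ./:/src         # mount the source tree into the container
    command: npm run dev
    depends_on: [mongo]
  mongo:
    image: mongo:4.4    # another project can pin a different version
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"       # only the web server is exposed to the host
```

The app reaches the database at the hostname `mongo`, because compose wires the containers into a shared network by service name.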

Typically you don’t want to use Docker as a VPS though. You certainly can, but the overlay filesystems will become inefficient and it will drift very far from the base image. LXC and nspawn are better tools for that and don’t use image stacking or anything like that. Just a good ol’ folder.

That’s just some applications of namespaces. All of process, network, time, users/groups, and filesystems/mounts can be independently managed, so many containers can share the same network namespace while sitting in different mount namespaces.

And that’s how Docker, LXC, nspawn, Flatpak and Snaps are all mostly the same thing under the hood, and why it’s a very blurry line between which ones you consider isolation layers, bundled dependencies, containers, or virtual machines. There’s an infinite number of ways to set up the namespaces, ranging from seeing /tmp as your own personal /tmp to basically a whole VM.
