Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them šŸ³ļøā€šŸŒˆ

Max_P,

The votes are public. Kbin displays them right in the UI. Lemmy semi-hides them, but they were never designed to be private in any way.

Changing instances won’t do shit if that’s a concern for you. As an admin I can see them even if my instance isn’t involved with the post at all:

https://lemmy.max-p.me/pictrs/image/6bae7aa5-20a3-497e-9012-dc4c8a869eb4.png

Max_P, (edited)

It would be if it weren’t for NVIDIA, as usual. On Intel/AMD, you assign the seats, the displays light up and you’re good to go; it pretty much works out of the box, especially on Wayland.

But for NVIDIA yeah maybe a VM is less pain since NVIDIA works well with VFIO.

Linux file transfer speed bottlenecks?

I’m currently watching the progress of a 4 TB rsync file transfer, and I’m curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved in the transfer. I know there’s a lot that can affect transfer speeds, so I guess I’m not asking why my transfer itself isn’t going faster....

Max_P,

SATA III is 6 Gbit/s, so the max speed is actually about 600 MB/s after encoding overhead.

What filesystem? For example, on my ZFS pool I had to let ZFS use a good chunk of my RAM for it to be able to cache things enough that rsync would max out the throughput.
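
In case it helps, the knob for that is the ARC cap; a sketch (16 GiB is just an example, tune it to your machine and only if you have the RAM to spare):

```
# check the current ARC size and cap (OpenZFS on Linux)
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# raise the ARC cap to 16 GiB on the running system
echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# make it persistent across reboots
echo 'options zfs zfs_arc_max=17179869184' | sudo tee /etc/modprobe.d/zfs.conf
```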

Rsync doesn’t process files in parallel, so at these speeds the cycle of open file, read chunks, write chunks, close file, repeat can add up. So you want the kernel to buffer as much of it as possible.

If you look at the disk graphs of both disks, you probably see a read spike followed by a write spike on the target, instead of a smooth maxed-out curve. The solution there is increasing buffers and caching. Depending on the distro, there’s a sysctl that may be on by default that limits the size of the write-back cache, to prevent the ā€œI wrote a 4GB file to my USB stick and now there’s 4GB of RAM used for it, and it takes hours after the transfer finishes before it’s actually flushed to the stickā€ problem.
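
If you want to poke at that, these are the knobs I mean (the values are only examples, not a recommendation):

```
# current thresholds for dirty (not yet written back) data
sysctl vm.dirty_background_ratio vm.dirty_ratio

# let writeback buffer more before throttling writers
sudo sysctl -w vm.dirty_background_bytes=$((1 * 1024 * 1024 * 1024))  # start flushing at 1 GiB
sudo sysctl -w vm.dirty_bytes=$((4 * 1024 * 1024 * 1024))             # throttle writers at 4 GiB
```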

Max_P,

The ads come from an ad network, where there is very little visibility into what’s going to be displayed in your app. And bad people keep managing to get their ads published even though the ad network doesn’t allow them.

It all ties into the whole targeted-advertising machine, where they also make sure very few people get the bad ad, and try to target the people they think may be more susceptible to these kinds of tactics. Depending on the amount of interactivity allowed, the ad can even display two different things if it deems you too savvy to fall for it.

It’s basically inescapable unless you only use apps without ads, or pay for the ad-free versions.

The whole advertising industry is sketchy, more news at 10.

Max_P,

Internally it’s even stored as a vote of either +1 or -1, so sending an undislike of a like probably also results in the vote’s removal. Lemmy just sums up all the votes and you have the score.

A like and a dislike activity are also contradictory, so even if you don’t unlike something, if you send a dislike it replaces the like as well.
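
Over federation these are just ActivityPub activities, roughly along these lines (a simplified sketch, not the exact JSON Lemmy emits; the URLs are made up). Retracting a vote is an Undo wrapping the original activity.

```
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Dislike",
  "actor": "https://example-instance.tld/u/alice",
  "object": "https://another-instance.tld/post/123"
}
```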

Max_P,

Installing from source is fairly likely to work: wiki.ros.org/noetic/Installation/Source

It doesn’t seem to have any outrageously complicated dependencies, just C++, Boost and a few other recognizable names, at least at a glance. There’s also seemingly an Arch Linux package, which means it’s likely to at least be buildable on latest-everything. Mint falls in between, so the odds it’ll compile are pretty good.
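
From memory, the source install on that wiki page boils down to roughly this (double-check the page itself for the exact steps, and swap desktop for whichever variant you want):

```
# assumes rosdep, vcstool and rosinstall_generator are installed (apt or pip)
sudo rosdep init && rosdep update

mkdir -p ~/ros_catkin_ws/src && cd ~/ros_catkin_ws
rosinstall_generator desktop --rosdistro noetic --deps --tar > noetic-desktop.rosinstall
vcs import --input noetic-desktop.rosinstall ./src
rosdep install --from-paths ./src --ignore-src --rosdistro noetic -y
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
```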

Max_P,

For maximum performance you probably want to skip virt-manager, virt-viewer has a hardcoded FPS cap.

If you use QEMU directly with virtio-gpu paired with the SDL or GTK display and OpenGL enabled, you can run Ubuntu at 4K 144Hz no problem. The VM is nearly imperceptible, and it works out of the box; that’s not even touching the crazy VFIO stuff.
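
Something along these lines, assuming a reasonably recent QEMU (on older versions the device was virtio-vga with virgl=on instead of virtio-vga-gl); the disk path, RAM and core counts are just placeholders:

```
qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 8 -m 16G \
  -device virtio-vga-gl \
  -display sdl,gl=on \
  -drive file=ubuntu.qcow2,if=virtio \
  -nic user,model=virtio-net-pci
```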

Max_P,

It’s not even necessarily about trust, but risk management as well. I’ve definitely coded a crash handler that ended up exposing my database credentials. There’s also the network aspect of it: your ISP/job/coffee shop can see the DNS request and TLS server name from the telemetry ping. That can be used to track you, or maybe you trigger some firewall alarm at work because of the ping.

We’ve kind of just started accepting that most apps will phone home and that there’s constantly some chatter on the network from all those apps. But if you actually start looking at what all your devices and apps are doing in the background with say, a PiHole, it’s pretty shocking.

I’m not that paranoid and would certainly accept some level of telemetry if asked nicely. ā€œHey, I’m a small dev, I appreciate receiving detailed crash reports to make the app better.ā€ And as a developer, users might be willing to offer way more than what would be reasonable to collect in the background. I might even agree to submit a screenshot on crash, but if and only if I’ve been asked beforehand and told what it’s used for, and I get the option to decline if I’m handling private information and don’t want to risk my data being part of a stack trace.

Max_P,

Yes, but by doing so you’re using the same principle as MBR boot. There’s still this coveted boot sector Windows will attempt to take back every time.

What’s nice about EFI in particular is that the motherboard loads the bootloader file from the ESP, can load multiple of them, and adds them to its boot menu. Depending on the motherboard, you can even browse the ESP and manually execute a .efi from it.

Which in turn makes bootloader fuckups a lot less likely to lock you out, because you basically press F12, pick GRUB/sd-boot, and you’re back in. Previously the only fix would be to boot a USB stick and reinstall syslinux/GRUB.
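
Even in the worst case, recovery is usually just re-registering a boot entry with efibootmgr from a live USB rather than reinstalling anything (the device, partition and entry numbers here are examples):

```
# show current entries and boot order
efibootmgr -v

# add an entry pointing at an .efi already sitting on the ESP
sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "systemd-boot" -l '\EFI\systemd\systemd-bootx64.efi'

# put it first in the boot order
sudo efibootmgr -o 0003,0001,0000
```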

Lemmy instance which has not defederated with any other instance.

Hi everyone. I have found many ghost comments in posts. Like one of the posts has 300+ upvotes and 28 comments but when I opened it, there were no comments. I tried different Lemmy apps and it’s the same in all of them. Which leads me to believe that it has something to do with defederation done by Lemmy.ml. Which instance has...

Max_P, (edited)

Keep in mind, defederation is bidirectional. You can end up on an instance that doesn’t defederate from anybody but is itself defederated by some major instances, and end up worse off. Also, communities are bound to an instance, so even if your instance doesn’t defederate from another, the instance that hosts the community might, so switching still doesn’t solve anything.

Also, lemmy.ml had to restore from a backup on Monday because Postgres shat itself, so if the post is from around Monday, it’s possible it was simply lost to the technical problems.

There are also some federation problems with 0.19.0 and 0.19.1, so it’s possible delivery to lemmy.ml was attempted but failed due to load or whatever.

You didn’t give any details or examples, so we can only speculate. We troubleshoot federation by establishing patterns, like which instance the missing comments are from and which instance hosts the community.

Addendum: I’ve also been experiencing occasional ghost posts, and I’m on my own instance, so there might be some unrelated stuff going on, because I sure didn’t do anything. If they had been deleted or retracted I would see them, because I’m the admin and I see everything.

Max_P,

Moderation does federate out, but only from the originating instance, the one that owns the post in question.

If someone posts spam on lemmy.ca and lemmy.world deletes it, it’s only deleted on lemmy.world. If a mod or admin on lemmy.ca deletes it, however, it federates out and everyone deletes it as a result (unless an instance is modified to ignore deletions, but by default Lemmy will accept them).

There are some interoperability problems with some software, notably Kbin, where deletions don’t federate to Lemmy correctly, so those do need to be moderated by every instance. But between Lemmy instances it does federate.

Max_P,

I think the best way to visualize it is in terms of who owns what and who has the authority to perform moderator actions.

  • As a user, you own the post, so you’re allowed to delete it no matter what. That always federates.
  • An admin always has full rights over what happens on their instance, because they own the server. That authority ends at their instance, so it may not federate out unless otherwise authorized.
  • An admin can nominate any user from the same instance to moderate any of its communities, local or remote. That authority also ends at that instance. In theory it should work for remote users too, but then it’d be hard to be from lemmy.ml and moderate lemmy.world’s view of a community on lemmy.ca.
  • The instance that owns the community can also do whatever it wants, even if the post originated elsewhere, because it owns the community. That federates out.
  • The instance that owns the community can nominate anyone from any instance as a moderator. They’re authorized to perform mod actions on behalf of the instance that owns the community, so those actions federate out as well.

From those you can derive what would happen under any scenario involving any combinations of instances.

Max_P,

(a) Yes. Instance admins have the ultimate say in what’s on their server. They can delete posts, entire communities, ban remote users and delete remote users. At least they had the decency to notify you!

Since lemmy.ca owns the post, lemmy.world can’t federate out the removal, so it only takes effect on lemmy.world.

(b) You have to appeal to lemmy.world. Each instance has its own independent appeal process.

That’s the beauty of the fediverse: instances can all have their rules to tailor the experience to their users, and it doesn’t have to affect the entire fediverse. Other instances linked to lemmy.ca can still see and interact with your post just fine, just not lemmy.world.

Max_P,

Sometimes ā€œuglyā€ is even ā€œnot pretty and wealthy lookingā€.

Wind turbines aren’t pretty, but they’re not any more of an eyesore than overhead power lines or whatever. And at least they’re a symbol of caring about sustainability.

A lot of people like to move all the ā€œuglyā€ elsewhere, out of their sight, and then call those places shitholes. It doesn’t bother them that they’re just moving the infrastructure to where the less wealthy have to deal with it. They’d rather have a coal plant drown a lower-class city in pollution than see wind turbines near their upper-class neighbourhood.

Max_P,

For KDE specifically I think there’s a D-Bus interface that can be called to switch it. You can find it with QDBusViewer or D-Feet.

I’d imagine XWayland would follow suit since it’s essentially a Wayland client. But if you ran the xmodmap under XWayland, that may have inverted it in XWayland while it’s already inverted in KWin, which would double-invert it, i.e. put it back to the default.

Otherwise doing it at the evdev level will definitely work. It’s a bit of a nuclear option but if it works…
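
If you’d rather hunt for it from a terminal than QDBusViewer/D-Feet, qdbus does the same exploration (which exact interface/method does the switch is still something you’d have to find, I don’t remember it offhand):

```
# list everything on the session bus
qdbus

# then drill into KWin's object paths and their methods
qdbus org.kde.KWin
qdbus org.kde.KWin /KWin
```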

Max_P,

Of course it’s a 737 Max.

Boeing’s really been dropping the ball on the 737 Max upgrades: first the Max 8, now the Max 9.

At this point I kind of avoid airlines with Boeing fleets, the Airbus planes are nicer anyway in general.

Max_P,

Both Docker and Podman pretty much handle all of those, so I think you’re good. The last aspect, networking, can easily be handled with a few iptables/nftables/firewalld rules. One final addition could be NGINX in front of the web services, or something dedicated to handling web requests on the open Internet, to reduce potential exploits in the embedded web servers in your apps. But other than that, you’ve got it all covered yourself.

Docker/podman-compose files also have all the options needed to limit CPU usage, memory usage, or generally prevent a container from using up all the system’s resources.
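
For example, a service can be capped roughly like this (the image name and numbers are placeholders, and exactly which keys are honoured varies a bit between docker compose and podman-compose):

```
services:
  myapp:                    # hypothetical service
    image: myapp:latest     # hypothetical image
    read_only: true
    cap_drop: [ALL]
    mem_limit: 512m         # memory cap
    cpus: "0.50"            # CPU cap
    pids_limit: 200         # cap the number of processes
    restart: unless-stopped
```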

If you want an additional layer of security, you could also run it all in a VM, so a container escape leads to a VM that does nothing else but run containers. So another major layer to break.

Max_P,

Kernel exploits. Containers logically isolate resources but they’re still effectively running as processes on the same kernel sharing the same hardware. There was one of those just last year: blog.aquasec.com/cve-2022-0185-linux-kernel-conta…

Virtual machines are a whole other beast, because the isolation is enforced at the hardware level, so you have to exploit hardware vulnerabilities like Spectre, or a virtual device: a couple of years ago, some people found a breakout bug in the old floppy emulation driver that still gets assigned to VMs by default in QEMU.

Max_P,

Security comes in layers, so if you’re serious about security you do in fact plan for things like that. You always want to limit the blast radius if your security measures fail. And most of the big cloud providers do that for their container/kubernetes offerings.

If you run Portainer, for example, and that one gets breached, that’s essentially a free container escape, because you can trick Docker into mounting and exposing whatever you need from the host. It’s not uncommon for people to give a container more permissions than it really needs.

It’s not like making a VM dedicated to running your containers costs anything; it’s basically free. I don’t do it all the time, but if it’s exposed to the Internet and there’s other stuff on the box I want to be hard to get into, like when it runs on my home server or desktop, then it definitely gets a VM.

Otherwise, why even bother putting your apps in containers? You could also just make the apps themselves fully secure and unbreachable. Why do we need a container for isolation? One should assume the app’s security measures are working, right?

Max_P,

You may disagree with it, and may even be right; I didn’t bother watching all those videos. But the thing is, it’s always a potential liability for admins, and we’re at the mercy of what the law says and what a judge or jury would rule if it were brought to court.

And we all know how that goes when underage people are involved: everyone goes ā€œbut the children!ā€. Therefore, admins err on the side of caution, because nobody wants to deal with legal trouble if they don’t have to. Just blur it and make everyone happy.

Plus, in the current AI landscape, the mere availability of imagery of nude children, even if it’s not sexually suggestive at all, means someone can alter it to become so. People have already been arrested for that.

Nothing to do with people being too prude to see naked children. It’s about consent and what nasty people will inevitably do with it. Does that girl really want videos of her naked all over the porn sites even through heroic actions? Probably not.

That’s a very weird hill to blow alts on.

Max_P,

Not really different from any other M.2 SSD; the fact that it’s over USB doesn’t matter.

The only consideration for USB sticks is that they’re usually quite crap, so running a system off one tends to wear out the flash pretty quickly.

Max_P,

Distro packages, and to some extent Flatpaks, use shared libraries which can be updated independently of your app.

So, for example, if a vulnerability is discovered in, say, curl, ImageMagick, FFmpeg or whatever library an app is using: for AppImages, it won’t be fixed until you update all of your AppImages. With Flatpak, it can usually be updated as a shared dependency, or distributed as a rebuild and update of the Flatpak. With distro packages, you can usually update the library itself and be done with it.
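
You can see that in practice with Flatpak: the shared runtimes update on their own, separately from the apps that use them (nothing here is specific to any one app):

```
# runtimes and extensions installed alongside your apps
flatpak list --runtime

# updates apps and runtimes, so a patched library lands without
# every single app shipping a new build
flatpak update
```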

AppImages are convenient for the user in that you can easily store them, move them and keep old versions around forever. But that still doesn’t guarantee one will run on distros a couple of years from now, it does guarantee that a given version will forever be vulnerable if any of its bundled dependencies are, it makes packages much bigger than they need to be, and you have to unpack/repack them if you need library shims.

Different kinds of tradeoffs and goals, essentially. Flatpak happens to be a compromise a lot of people agree on as it provides a set of distro-agnostic libraries while also not shifting the burden entirely onto the app developers. The AppImage developer is intentionally keeping Wayland broken on AppImage because he hates it and wants to fulfil his narrative that Wayland is a broken mess that won’t ever work, while Flatpak developers work hard on sandboxing and security and granular permission systems.

VPN to home network options

I currently have a server running Unraid as the OS, which has some WireGuard integration built in, which I’ve enabled and have been using to remotely access services hosted on that server. But as I’ve expanded to include things like Octopi running on a Pi3 and NextcloudPi running on a Pi4 (along with AdGuardHome), I’m trying to...

Max_P,

Any reason the VPN can’t stay as-is? Unless you don’t want it on the Unraid box at all anymore. But going to Unraid over the VPN and then out to the rest of the network from there is a perfectly valid use case.
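
If you go that route, it’s mostly just the client’s AllowedIPs that has to cover the whole LAN instead of only the Unraid box, plus the usual IP forwarding/NAT on the server side. A rough sketch, with all addresses, names and keys as placeholders:

```
# client-side wg0.conf
[Interface]
PrivateKey = <client private key>
Address = 10.253.0.2/32

[Peer]
PublicKey = <server public key>
Endpoint = home.example.net:51820
# tunnel IP of the server plus the whole home subnet
AllowedIPs = 10.253.0.1/32, 192.168.1.0/24
PersistentKeepalive = 25
```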

Max_P,

If we allow derivatives, I’d say SteamOS, despite it being Arch. It’s putting Linux in non-technical people’s literal hands, and it’s not a locked-down, completely different platform that happens to run Linux like Android is. It’s almost as if Valve designed it to give people a taste of Linux through its desktop mode, and people that would have been modding consoles are now modding SteamOS and learning how much fun an open platform can be. I’ve seen people from sales talk about their Decks on my work Slack.

Otherwise, NixOS, no contest. It’s been a really long time since we last saw a fundamentally different distro with some real potential. For the most part, Arch, Debian and Fedora do similar things with varying degrees of automation and preconfiguration of your packages, but they’re still very package-oriented. We’ve mostly been slapping tools like Ansible on top to configure them to our liking reproducibly, or answer files if the package manager has something like that. And then NixOS is like: what if the entire system were derived from evaluating a function, and the same input always resulted in the exact same system? It’s incredibly powerful, especially when maintaining machines at scale. Updates are guaranteed to result in the exact same configuration, and they’re atomic too: no half-updated system because the user unplugged the machine in the middle of an update.
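
To give an idea of what ā€œthe system is a functionā€ looks like in practice, a configuration.nix is literally a function that takes the config/pkgs attribute set and returns the options describing the whole system (a minimal sketch; the hostname and packages are arbitrary):

```
# /etc/nixos/configuration.nix -- evaluating this is what produces the system
{ config, pkgs, ... }:
{
  networking.hostName = "example";                      # hypothetical hostname
  environment.systemPackages = [ pkgs.git pkgs.htop ];
  services.openssh.enable = true;
  system.stateVersion = "23.11";
}
```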

Max_P,

RAM is the kind of thing you’re better off having too much of than not enough. Worst case, the OS ends up with a very healthy, large file cache, which takes pressure off your storage and makes things a bit faster / lets it spend the CPU on other things. If anything, your machine is future-proofed against the ever-increasing RAM hunger of web apps. But if you run out of it, you get apps killed, hangs, or major slowdowns as it hits the swap.

The thing with RAM is that it’s easy for 99% of your workload to fit comfortably, and then there’s that one thing that temporarily needs a bit more and you’re screwed. My machine usually uses 8-12 of its 32GB of RAM, but I still ended up needing to add swap. Just opening up the Lemmy source code and spinning up the Rust LSP can use a solid 8+GB alone. I’ve compiled some AUR packages that needed more than 16GB of RAM. I have 16 cores, so compiling anything with -j32 can very quickly bring the machine to its knees even if each compile thread is only using like 256-512MB.
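
Adding swap is at least trivial when you do hit that wall; a swapfile sketch (the size is an example, and fallocate needs extra care on btrfs/ZFS):

```
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# keep it across reboots
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab
```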

Another example: my netbook has 8GB. 99% of the time it’s fine, because it’s a web-browsing machine, and I probably average around 4GB usage on a heavy day with lots of tabs open. But if I open up VSCode and use any LSP, be it TypeScript or Rust, the machine immediately starts swapping aggressively. I had to log out of my graphical session to compile Lemmy, and even then only barely.

RAM is cheap enough these days it’s nice to have more than you need to not ever have to worry about it.
