
Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them šŸ³ļøā€šŸŒˆ


Max_P,

The votes are public. Kbin displays them right in the UI. Lemmy semi-hides it, but it’s never been designed to be private in any way.

Changing instance won’t do shit if that’s a concern to you. As an admin I can see them even if my instance isn’t involved with the post at all:

https://lemmy.max-p.me/pictrs/image/6bae7aa5-20a3-497e-9012-dc4c8a869eb4.png

Linux file transfer speed bottlenecks?

I’m currently watching the progress of a 4 TB rsync file transfer, and I’m curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved with the transfer. I know there’s a lot that can affect transfer speeds, so I guess I’m not asking why my transfer itself isn’t going faster....

Max_P,

SATA III is 6 Gbit/s, so the max speed is actually about 600 MB/s.

What filesystem? For example, on my ZFS pool I had to let ZFS use a good chunk of my RAM for it to be able to cache things enough that rsync would max out the throughput.

Rsync doesn’t transfer files in parallel, so at these speeds the open file, read chunks, write chunks, close file, repeat cycle can add up. So you want the kernel to buffer as much of it as possible.

If you look at the disk graphs of both disks, you probably see a read spike, followed by a write spike on the target, instead of a smooth maxed out curve. Then the solution is increasing buffers and caching. Depending on the distro there’s a sysctl that may be on by default that limits the size of caches to prevent the ā€œI wrote a 4GB file to my USB stick and now there’s 4GB of RAM used for it and it takes hours after finishing the transfer before it’s flushed to the stickā€.
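If it helps, the knobs in question are presumably the kernel’s dirty page cache limits (the vm.dirty_* sysctls); a rough sketch of checking and loosening them, with purely illustrative values:

```
# Show the current dirty page cache limits (percent of RAM, or bytes if the *_bytes variants are set)
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Temporarily let more dirty data accumulate before writeback kicks in
sudo sysctl -w vm.dirty_background_ratio=10 vm.dirty_ratio=40
```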

Lemmy instance which has not defederated with any other instance.

Hi everyone. I have found many ghost comments in posts. Like one of the posts has 300+ upvotes and 28 comments but when I opened it, there were no comments. I tried different Lemmy apps and it’s the same in all of them. Which leads me to believe that it has something to do with defederation done by Lemmy.ml. Which instance has...

Max_P, (edited)

Keep in mind, defederation is bidirectional. You can end up on an instance that doesn’t defederate from anybody but is being defederated by some major instances, and end up worse off. Also, communities are bound to an instance, so even if your instance doesn’t defederate from another, the instance that hosts the community might, so that doesn’t solve anything either.

Also, lemmy.ml had to restore from a backup on Monday because Postgres shat itself, so if the post is from around Monday, it’s possible it was simply lost to the technical problems.

There are also some federation problems with 0.19.0 and 0.19.1, so it’s possible delivery to lemmy.ml was attempted but failed due to load or whatever.

You didn’t give any details or examples, so we can only speculate. We troubleshoot federation by establishing patterns, like which instance the missing comments are from and which instance hosts the community.

Addendum: I’ve also been experiencing occasional ghost posts, and I’m on my own instance, so there might be some stuff going on that’s unrelated, because I sure didn’t do anything. If they were deleted or retracted I would see them because I’m admin, I see everything.

Max_P,

The ads come from an ad network, where there is very little visibility into what’s going to be displayed in your app. And bad people also keep managing to get their ads published even though the ad network doesn’t allow them.

And it all ties into the whole targeted advertising machinery: they also make sure very few people get the bad ad, and they try to target people they think may be more susceptible to these kinds of tactics. Depending on the amount of interactivity allowed, the ad can even display two different things if it deems you too savvy to fall for it.

It’s basically inescapable unless you only use apps without ads, or pay for the ad-free versions.

The whole advertising industry is sketchy, more news at 10.

Max_P,

If we allow derivatives, I’d say SteamOS despite being Arch. It’s putting Linux in non-technical people’s literal hands, and it’s not a locked-down and completely different platform that happens to run Linux, like Android is. It’s almost as if Valve designed it to give people a taste of Linux through its desktop mode, and people who would have been modding consoles are now modding SteamOS and learning how much fun an open platform can be. I’ve seen people from sales talk about their Decks on my work Slack.

Otherwise, NixOS, no contest. It’s been a really long time since we’ve last seen a fundamentally different distro with some real potential. For the most part, Arch, Debian and Fedora do similar things with varying degrees of automation and preconfiguration of your packages, but they’re still very package oriented. We’ve mostly been slapping tools like Ansible on top to really configure them to our liking reproducibly, or answer files if your package manager has something like that. And then NixOS is like: what if the entire system was derived from evaluating a function, and the same input always resulted in the exact same system? It’s incredibly powerful, especially when maintaining machines at scale. Updates are guaranteed to result in the exact same configuration, and they’re atomic too: no half-updated system because the user unplugged the machine in the middle of an update.
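To make the ā€œevaluate a functionā€ idea concrete, here’s a minimal sketch of what a NixOS configuration.nix looks like; the hostname and package picks are just placeholders:

```
{ config, pkgs, ... }:
{
  networking.hostName = "example";                  # hypothetical hostname
  services.openssh.enable = true;                   # whole services are just options
  environment.systemPackages = with pkgs; [ git htop ];
  system.stateVersion = "23.11";
}
```

Evaluating the same file always yields the same system, and nixos-rebuild switch applies it atomically.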

Max_P,

Yes but by doing so you’re using the same principles as MBR boot. There’s still this coveted boot sector Windows will attempt to take back every time.

What’s nice about EFI in particular is that the motherboard loads the file from the ESP, and it can load multiple of them and add them to its boot menu. Depending on the motherboard, you can even browse the ESP and manually execute a .efi from it.

Which in turn makes bootloader fuckups a lot less likely, because you basically press F12, pick GRUB/sd-boot, and you’re back in. Previously the only fix was to boot from USB and reinstall syslinux/GRUB.
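To see that in action, the firmware’s boot entries can be listed directly (this assumes an EFI system; bootctl only applies if systemd-boot is installed):

```
# Each entry points at a .efi file on the ESP
efibootmgr -v

# sd-boot's view of its own menu entries
bootctl list
```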

Max_P,

It’ll definitely run Kali well. Windows will be left without hardware acceleration for 2D/3D, so it’ll be a little laggy, but it’s usable.

VMware has its own driver that translates enough DirectX for Windows to run more smoothly and not fall back to the basic VGA path.

But VMware being proprietary software, changing distro won’t make it better, so either you deal with the VMware bugs or you deal with a stable but slow, software-rendered Windows.

That said, on the QEMU side it’s possible to attach one of your host’s GPUs to the VM, where it gets full 3D acceleration. Many people are straight up gaming in competitive online games in a VM with QEMU. If you have more than one GPU, even an integrated GPU plus a dedicated one as is common with most Intel consumer non-F CPUs, you can make that happen, and it’s really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Be sure your CPU and motherboard support it properly before investing though; it can be finicky, but it’s awesome when it works.
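A rough pre-purchase sanity check for passthrough, assuming IOMMU (VT-d/AMD-Vi) is enabled in the firmware:

```
# Confirm the kernel actually brought up an IOMMU
sudo dmesg | grep -iE 'DMAR|IOMMU'

# The GPU you want to pass through should sit in its own IOMMU group (or one you can give up entirely)
find /sys/kernel/iommu_groups/ -type l | sort
```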

Max_P,

Well, I’m currently using VMware on Ubuntu

Well there’s your mistake: using VMware on a Linux host.

QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel, a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host, which leads to fewer problems.
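A quick way to confirm KVM is actually available on a host (assumes an Intel or AMD CPU with virtualization enabled in the firmware):

```
lsmod | grep kvm    # kvm_intel or kvm_amd should be loaded
ls -l /dev/kvm      # the device node QEMU uses for hardware-accelerated VMs
```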

Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.

Maybe, but it’s still Linux. There’s always an escape hatch if the Ubuntu packages don’t cut it. And I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they work just fine.

Max_P,

It is very unfortunate. It’s fine to point out problems, but then when you become part of the problem, that’s not amazing.

He’s had the same meltdown with fuse2 being deprecated in favor of fuse3 which, guess what, also broke AppImage and we had a huge rant for that too.

Flatpak has a better chance of being forward compatible for the foreseeable future. Linux generally isn’t a very ABI/API compatible platform because for the most part you’re expected to be able to patch and recompile whatever you might want.

Max_P,

Distro packages, and to some extent Flatpaks, use shared libraries which can be updated independently of your app.

So for example, if a vulnerability is discovered in say, curl, or imagemagick, ffmpeg or whatever library an app is using: for AppImages, this won’t be fixed until you update all of your AppImages. In Flatpak, it usually can be updated as part of a dependency, or distributed as a rebuild and update of the Flatpak. With distro packages, you can usually update the library itself and be done with it already.
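As a rough illustration, the shared parts show up as separate runtimes that update on their own, independently of the apps that use them:

```
flatpak list --runtime   # the shared runtimes installed apps depend on
flatpak update           # updates runtimes (and apps), so a library fix can land without every app being rebuilt
```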

AppImages are convenient for the user in that you can easily store them, move them, and keep old versions around forever. But that doesn’t guarantee they’ll still run on distros a couple years from now, it does guarantee that a given version will forever be vulnerable if any of its dependencies are (because they’re bundled in), it makes packages much, much bigger than they need to be, and you have to unpack/repack them if you need library shims.

Different kinds of tradeoffs and goals, essentially. Flatpak happens to be a compromise a lot of people agree on as it provides a set of distro-agnostic libraries while also not shifting the burden entirely onto the app developers. The AppImage developer is intentionally keeping Wayland broken on AppImage because he hates it and wants to fulfil his narrative that Wayland is a broken mess that won’t ever work, while Flatpak developers work hard on sandboxing and security and granular permission systems.

Max_P,

RAM is the kind of thing you’re better off having too much of than not enough. Worst case, the OS ends up with a very healthy, large file cache, which takes load off your storage and makes things a bit faster / lets it spend the CPU on other things. If anything, your machine is future-proofed against the ever-increasing RAM hunger of web apps. But if you run out of it, you get apps killed, hangs or major slowdowns as it hits the swap.

The thing with RAM is that it’s easy for 99% of your workload to fit comfortably, and then there’s one thing you temporarily need a bit more for and you’re screwed. My machine usually uses 8-12 of its 32GB of RAM, yet I still ended up needing to add swap. Just opening up the Lemmy source code and spinning up the Rust LSP can use a solid 8+GB alone. I’ve compiled some AUR packages that needed more than 16GB of RAM. I have 16 cores, so compiling anything with -j32 can very quickly bring a machine to its knees even if each compile thread only uses around 256-512MB.
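For what it’s worth, adding swap after the fact is only a few commands; this sketch assumes ext4/xfs (btrfs and ZFS need extra care with swapfiles):

```
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# add "/swapfile none swap defaults 0 0" to /etc/fstab to keep it across reboots
```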

Another example: my netbook has 8GB. 99% of the time it’s fine, because it’s a web browsing machine, and I probably average around 4GB usage on a heavy day with lots of tabs open. But if I open up VSCode and use any LSP, be it TypeScript or Rust, the machine immediately starts swapping aggressively. I barely managed to compile Lemmy, and only after logging out of my graphical session.

RAM is cheap enough these days it’s nice to have more than you need to not ever have to worry about it.

Max_P,

Yeah, it’s not really advertised as an init system anymore. It’s an entire system management suite, and when seen from that angle, it’s pretty good at it too. All of it is consistent, it’s fairly powerful, and it’s usually 10-20 lines of unit files to describe what you want. I wanted that for a long time.

I feel like the hate always comes from people who treat the UNIX philosophy like religion. And even then, systemd is very modular, just also well integrated: networkd manages my network, resolved manages my DNS, journald manages my logs, timesyncd manages my NTP, logind manages my logins and sessions, homed mounts my users’ profiles on demand.

Added complexity, yes, but I’ve been using the hell out of it. Start services when a specific peripheral is plugged in? Got it. Automatically assign devices to seats? Logind’s got you covered, don’t even need to mess with xorg configs. VM network? networkd handles it. DNS caching? Out of the box. Split DNS? One command. Don’t want 2000 VMs rotating their logs at exactly midnight and trashing your Ceph cluster? Yep, just slap a RandomizedDelaySec=24h on the units. Isolate and pin a VM to dedicated cores dynamically? Yep, it’ll do that. Services that need to run on a specific NUMA node to stay close to PCIe peripherals? Yep, easy. All very easily configurable with things like Ansible or bash provisioning scripts.
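For example, a couple of those as plain unit-file snippets; the unit names and values here are hypothetical, the directives themselves are real:

```
# Hypothetical drop-in, e.g. /etc/systemd/system/logrotate.timer.d/spread.conf:
# spread the nightly run over the day instead of firing at exactly midnight
[Timer]
RandomizedDelaySec=24h

# Hypothetical drop-in for a service/VM that should stay on certain cores and a NUMA node
[Service]
AllowedCPUs=8-15
NUMAPolicy=bind
NUMAMask=1
```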

Sure it may not be for everybody, but it solves real problems real Linux admins have to deal with at scale. If you don’t like it, sysvinit still works just fine and I heard good things about runit too. It’s an old and tired argument, it’s been over 10 years, we can stop whining about it and move on. There’s plenty of non-systemd distros to use.

Alright, I'm gonna "take one for the team" -- what is with the "downvote-happy" users lately?

Title. ā€œlmao internet pointsā€ and all, but what is the point of participating in a community that sees assumptions and other commonly non-harmful commentaries/posts as ā€œbadā€ this easily? Are folks in here really that needy of self-validation, even if it means seeking it from something completely insignificant like...

Max_P,

I expected this to be ā€œanother one of thoseā€ but actually from what my instance has about you, you were indeed correct. Gaming distros with exclusive features lmao.

IMO that’s some of the gamer logic bleeding over to the Linux side now that Linux gaming is taking off. They’ll do anything, including installing dubious Linux distros barely held together with duct tape, for a perceived extra 2 FPS. Download software exclusively distributed on Discord? Hell yeah. I’m sure at least one of them boots with mitigations=off and doesn’t clearly indicate that it does.

We’re seeing the same thing on the Windows side with modified Windows ISOs like AtlasOS, which rightfully made some security experts sound the alarm. Some did things like completely strip out the updates, antivirus and firewall. Unless your system is exclusively running Steam and firewalled off the network, this is a certified bad idea.

I’d probably trust Nobara because the guy clearly knows his shit, but some of them really are just some other guy’s riced-up Arch snapshot. They may give the impression everything just works at first, but I’ve definitely seen examples of it falling apart. Even bigger distros like Pop!_OS had major snafus like the whole ā€œSteam uninstalls your DEā€ thing, and Manjaro still fucks up something basic every now and then. I tried some of them in a VM and they didn’t even install or boot correctly. Oh, my fault, that one only works for NVIDIA graphics cards, not AMD, my bad.

It’s not worth arguing; it’s a user base with vastly different goals than mine. Just let them have their Bedrock Linux completely blow up in multi-package-manager hell, and soon enough they’ll come running for a saner, more reliable distro.

Max_P,

C bindings and APIs generally work much better in Rust because the language works a lot more like C than it does C++.

Qt depends a lot on C++ class inheritance, and even does some preprocessing of C++ files to generate code in those classes. That’s obviously not possible when using Rust. And it looks like you need a fair bit of unsafe here and there to use it at all, too.

Meanwhile, GTK being a C library, its integration with Rust is much more transparent and nice.

So if you’re making a GUI Rust app, you’re just kind of better off with GTK at the moment. It’s significantly easier and nicer.

Wanting to improve my Linux skills after 17 months of daily driving Linux

I’ve been daily driving Linux for 17 months now (currently on Linux Mint). I have gotten very comfortable with basic commands and many just-works distros (such as Linux Mint, or Pop!_OS) with apt as the package manager. I’ve tried Debian as a distro to challenge myself, but have always run into issues. On my PC, I could...

Max_P,

Arch is actually not as bad as many say. It’s pretty stable nowadays, I even run Arch on some servers and I never had any issues.

Not even just nowadays. My desktop is running a nearly 10 year old install. It’s so old, it not only predates the installer, it predates the ā€œtraditionalā€ way and used the old TUI installer. It even predates the sysvinit to systemd switch! The physical computer has been ship-of-Theseus’d twice.

Arch is surprisingly reliable. It’s not ā€œstableā€ in the sense that nothing ever changes: things change and you have to update some configs or even your own software. But it’s been so reliable I never even felt the need to go look elsewhere. It just works.

Even my Arch servers have been considerably more reliable and maintenance-free than the thousands I manage at work with lolbuntu on them. Arch does so little on its own, there’s less to go wrong. Meanwhile the work boxes can’t even update GRUB noninteractively: every now and then a GRUB update pops a debconf screen, hangs unattended-upgrades until it’s manually fixed, and hoses apt as a whole.

Max_P,

For me what planted the Linux seed is when I tried Mandrake Linux when I was 9-10ish. I didn’t end up sticking with it for all that long, but I absolutely loved trying out all those DEs. I had downloaded the full fat 5 CD version and checked almost everything during setup, so it came jam packed with all sorts of random software to try out. The games were nice, played the shit out of Frozen Bubble. I really liked Konqueror too, coming from Internet Explorer. It was pretty snappy overall. And there’s virtual desktops for more space! People were really helpful on IRC, even though I was asking about installing my Windows drivers in Wine. Unfortunately I kinda wanted games and my friends were getting annoyed we couldn’t play games on my computer.

It stuck with me however, so later on when some of my online friends were trying it out, I wanted to try it out again too. I wasn’t much into games anymore, had started coding a little bit. So on my computer went Kubuntu 7.10, and I’m still on Linux to this day.

But that seed is what taught me there’s more. I didn’t hate Windows, I wasn’t looking to replace it. I hadn’t fallen in love with FOSS yet. It was cool and different and fun. It wasn’t as sterile and as… grey as Windows 98. You could pop up some googly eyes that followed your mouse, because you could. There were all those weird DEs with all sorts of bars and features.

Max_P,

The only advice I have is to try to make it interesting for them and not just additional practical information they have to memorize. You don’t want to be the weird dad that insists on using stuff nobody else does, you have to show them what’s cool about it, and also accept maybe they’ll just stick with Windows for now.

I also think the main takeaway they should have out of it is that there’s many ways of doing the same thing and none is ā€œthe correct and only wayā€. They should learn to think critically, navigate unfamiliar user interfaces, learn some more general concepts and connect the dots on how things work, and that computers are logical machines, they don’t just do random things because they’re weird. Teach them the value of being able to dig into how it works even if it doesn’t necessarily benefit them immediately.

Maybe set up a computer or VM with all sorts of WMs and DEs, with express permission to wreck it if they want, or a VM they can set up themselves (even better if they learn they can make their own VMs as well!). Probably have some games on there as well. Maybe tour some old operating systems for the historical context of how we got where we are today. Show them how you can make the computer do things via a terminal, and that it does the same thing as in the GUI. Show different GUIs, different file managers, different text/document editors, maybe different DEs, maybe even tiling vs floating. What a file is, the ways you can organize them, how you can move them around, how some programs can open other programs’ files.

Teach them the computer works for them not the other way around. They can make the computer do literally anything they want if they wish so. But it’s okay to use other people’s stuff too.

Max_P,

Basically the idea is that if you have a lot of data, HDDs have much bigger capacities for the price, whereas large SSDs can be expensive. SSDs have gotten cheap, but you can get used enterprise drives on eBay with huge capacities for incredibly cheap. There’s 12TB HDDs for like $100. 12TB of SSDs would run you several hundreds.

You can slap bcache on a 512GB NVMe backed by an 8TB HDD, and you get 8TB worth of storage, 512GB of which will be cached on the NVMe and thus really fast. But from the user’s perspective, it’s just one big 8TB drive. You don’t have to think about what is where, you just use it. You don’t have to be like, I’m going to use this VM so I’ll move it to the SSD and back to the HDD when done. The first time might be super slow but subsequent use will be very fast. It also caches writes, so you can write up to 512GB really fast in this example and it’ll slowly get flushed to the HDD in the background. But from your perspective, as soon as it’s written to the SSD, the data is effectively committed to disk. If the application calls fsync to ensure data is written to disk, it’ll complete once it’s fully written to the SSD. You get NVMe read/write speeds and the space of an HDD.
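A minimal sketch of that exact setup; the device paths are hypothetical and make-bcache will wipe them, so double-check before running anything like this:

```
# /dev/sda = 8TB HDD (backing device), /dev/nvme0n1p2 = 512GB NVMe partition (cache)
sudo make-bcache -B /dev/sda -C /dev/nvme0n1p2

# The combined device is what you format and mount
sudo mkfs.ext4 /dev/bcache0

# Cache writes too, not just reads
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
```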

So you get one big disk for your Steam library, and whatever you play might be slow on the first load, but as you play, the game files get promoted to the NVMe cache and perform mostly at NVMe speeds, so your loading screens are much shorter.

Ubuntu 24.04 LTS Committing Fully To Netplan For Network Configuration (www.phoronix.com)

The Canonical-developed Netplan has served for Linux network configuration on Ubuntu Server and Cloud versions for years. With the recent Ubuntu 23.10 release, Netplan is now being used by default on the desktop. Canonical is committing to fully leveraging Netplan for network configuration with the upcoming Ubuntu 24.04 LTS...

Max_P,

Netplan’s been the default on the server side since 18.04, and the article says it’s coming to the desktop release with 24.04.
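For anyone who hasn’t seen it, a minimal Netplan config is just a small YAML file; the interface name here is hypothetical:

```
# /etc/netplan/01-lan.yaml
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: true
```

sudo netplan apply then renders it to whichever backend is in use (systemd-networkd or NetworkManager).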

Max_P,

It’s been slow for quite a while for me too, and it can’t even be the ads because I have Premium.

It’s usually fine when it’s loaded but it does take quite a while to load for some reason, and I’ve got gigabit fiber and 16 cores to process it.

I heard YouTube falls back to a very slow path on Firefox because it uses features that Chrome implemented but that never made it into the standard, and something else was adopted instead.

Max_P,

It’s mostly better, but not in every way. It has a lot of useful features, at a performance cost sometimes. A cost that historically wasn’t a problem with spinning hard drives and relatively slow SATA SSDs but will show up more on really fast NVMes.

For the snapshots, it has to keep track of what’s been modified. Depending on the block size, an update of just a couple bytes can end up as a few 4k writes, because it’s copy-on-write and it has to update a journal and the block list of the file. But at the same time, copying a 50GB file is instantaneous on btrfs because of the same CoW feature. Most people find the snapshots more useful than eking out every last bit of performance from their drive.
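The instant-copy part in concrete terms; the paths are just examples, and the snapshot assumes /home is a btrfs subvolume:

```
# A reflink copy shares the same extents until one side is modified, so it's instant
cp --reflink=always big-50GB-file big-50GB-file.copy

# Snapshots are the same idea applied to a whole subvolume
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)
```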

Even ZFS, often considered to be the gold standard of filesystems, is actually kinda slow. But its purpose isn’t to be the fastest, its purpose is throwing an array of 200 drives at it and trusting it to protect you even against some media degradation and random bit flips in your storage with regular scrubs.

Max_P,

Precisely. It’s not just ā€œit worksā€, it’s third-party hardware that Canonical tests, certifies and commits to support as fully compatible. They’ll do the work to make sure everything works perfectly, not just when upstream gets around to it. They’ll patch whatever is necessary to make it work. The use case is ā€œwe bought 500 laptops from Dell and we’re getting a support contract from Canonical that Ubuntu will run flawlessly on it for the next 5 years minimumā€.

RedHat has the exact same: catalog.redhat.com/hardware

Otherwise, most Linux OEMs just focus on first party support for their own hardware. They all support at least one distro where they ensure their hardware runs. Some may or may not also have enterprise support where they commit to supporting the hardware for X years, but for an end user, it just doesn’t matter. As a user, if an update breaks your WiFi, you revert and it’s okay. If you have 500 laptops and an update breaks WiFi, you want someone to be responsible for fixing it and producing a Root Cause Analysis to justify the downtime, lost business and whatnot.

Max_P,

Guess I should have said it cost me nothing extra because I already own the server.

Although Oracle’s free tier exists.

Max_P,

My instance exists for me and my friends to use. It’s not meant to attract anybody, it’s meant to serve me.

It costs me nothing and I’m permanently in control of my data, and it’ll live however long I want it to live, it updates when I decide I want to update it, if I want features I can just patch them in. When I make a PR, it goes on my instance first to try it out properly. I can post 10GB files from my instance if I want to, I’m the one that will pay for the bandwidth in the end.

I bet if you look at the profile of the admin of those ā€œabandonedā€ instances, you’ll find they’re active on Lemmy. They just have their own private instance just for themselves.

Doesn’t matter if lemmy.world or lemmy.ml or beehaw.org goes down: I still got all the content and they’ll eventually federate out when they come back up.

Max_P,

It is buzzword bullshit.

And a fad, probably. Everyone's trying to capitalize on the wow effect of ChatGPT.

Before AI it was neural network, and before that it was machine learning.
