Comments

Max_P, to privacy in The Boost android client for Lemmy is displaying these dark pattern ads pretending to be system notifications. What security/privacy conscious Lemmy clients do you recommend?

The ads come from an ad network, so there's very little visibility into what's actually going to be displayed in your app. And bad actors keep managing to get their ads published even though the ad network doesn't allow them.

And it all ties into the whole targeted-advertising machinery: they make sure very few people see the bad ad, and they try to target the people they think are most susceptible to these kinds of tactics. Depending on the amount of interactivity allowed, the ad can even display two different things if it deems you too savvy to fall for it.

It’s basically inescapable unless you only use apps without ads, or pay for the ad-free versions.

The whole advertising industry is sketchy, more news at 10.

Max_P, to linux in Why are there so many (rust) GTK apps and so little Qt ones?

C bindings and APIs generally work much better in Rust because the language works a lot more like C than like C++.

Qt depends heavily on C++ class inheritance, and even runs a preprocessor (moc) over C++ files to generate code in those classes. That’s obviously not possible from Rust, and it looks like you need a fair bit of unsafe here and there to use it at all.

Meanwhile, GTK being a C library, its integration with Rust is much more transparent and nice.

So if you’re making a GUI Rust app, you’re just kind of better off with GTK at the moment. It’s significantly easier and nicer.
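
To give an idea of how clean the C-based bindings end up, here's a minimal sketch of a GTK4 window using the gtk-rs bindings (the gtk4 crate; the application ID is just a placeholder). No unsafe, no preprocessor, just ordinary Rust:

```rust
// Minimal gtk-rs (gtk4 crate) window: builder pattern plus a signal handler,
// all in safe Rust. Add the crate with `cargo add gtk4` first.
use gtk4::prelude::*;
use gtk4::{Application, ApplicationWindow};

fn main() -> gtk4::glib::ExitCode {
    let app = Application::builder()
        .application_id("me.example.Hello") // placeholder ID
        .build();

    app.connect_activate(|app| {
        // Build a window tied to the application and show it.
        ApplicationWindow::builder()
            .application(app)
            .title("Hello from gtk-rs")
            .build()
            .present();
    });

    app.run()
}
```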

Max_P, to linux in Fedora 40 Will Enable Systemd Service Security Hardening

Yeah, it’s not really advertised as an init system anymore. It’s an entire system management suite, and when seen from that angle, it’s pretty good at it too. All of it is consistent, it’s fairly powerful, and it’s usually 10-20 lines of unit files to describe what you want. I wanted that for a long time.

I feel like the hate always comes from people who treat the UNIX philosophy like a religion. And even then, systemd is very modular, just well integrated: networkd manages my network, resolved manages my DNS, journald manages my logs, timesyncd manages my NTP, logind manages my logins and sessions, homed mounts my users’ profiles on demand.

Added complexity, yes, but I’ve been using the hell out of it. Start services when a specific peripheral is plugged in? Got it. Automatically assign devices to seats? Logind’s got you covered, you don’t even need to mess with xorg configs. VM network? networkd handles it. DNS caching? Out of the box. Split DNS? One command. Don’t want 2000 VMs rotating their logs at exactly midnight and trashing your Ceph cluster? Just slap RandomizedDelaySec=24h on the units. Isolate and pin a VM to dedicated cores dynamically? Yep, it’ll do that. Services that need to run on a specific NUMA node to stay close to PCIe peripherals? Easy. All of it is very easily configurable with things like Ansible or bash provisioning scripts.
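
For a concrete taste of the "couple of lines of unit file" point, here's roughly what the log-rotation spreading and the split-DNS examples look like (the unit name, interface and domain are illustrative):

```sh
# Spread a timer's activation over 24h via a drop-in, instead of everything
# firing at exactly midnight (illustrative: logrotate.timer).
sudo mkdir -p /etc/systemd/system/logrotate.timer.d
cat <<'EOF' | sudo tee /etc/systemd/system/logrotate.timer.d/override.conf
[Timer]
RandomizedDelaySec=24h
EOF
sudo systemctl daemon-reload

# Split DNS with systemd-resolved, the "one command" mentioned above
# (route *.internal.example queries to whatever DNS server wg0 uses).
sudo resolvectl domain wg0 '~internal.example'
```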

Sure, it may not be for everybody, but it solves real problems real Linux admins have to deal with at scale. If you don’t like it, sysvinit still works just fine, and I’ve heard good things about runit too. It’s an old and tired argument; it’s been over 10 years, we can stop whining about it and move on. There are plenty of non-systemd distros to use.

Max_P, to asklemmy in Why create an instance if you are not ready to post in it?

My instance exists for me and my friends to use. It’s not meant to attract anybody, it’s meant to serve me.

It costs me nothing and I’m permanently in control of my data. It’ll live however long I want it to live, it updates when I decide I want to update it, and if I want features I can just patch them in. When I make a PR, it goes on my instance first so I can try it out properly. I can post 10GB files from my instance if I want to; I’m the one paying for the bandwidth in the end.

I bet if you look at the profiles of the admins of those “abandoned” instances, you’ll find they’re active on Lemmy. They just keep a private instance for themselves.

It doesn’t matter if lemmy.world or lemmy.ml or beehaw.org goes down: I’ve still got all the content, and it’ll eventually federate back out when they come back up.

Max_P, to linux in As a normal, boring user that does nothing special other than browse the internet and the occasional "casual coding" -- what am I supposed to do with 32GiB of ram?

RAM is the kind of thing you’re better off having too much of than not enough. Worst case, the OS ends up with a very healthy, large file cache, which takes load off your storage and makes things a bit faster / lets the CPU spend its time on other things. If anything, your machine is future-proofed against the ever-increasing RAM hunger of web apps. But if you run out of it, you get apps killed, hangs, or major slowdowns as it hits the swap.

The thing with RAM is that it’s easy for 99% of your workload to fit comfortably, and then there’s that one thing you temporarily need a bit more for and you’re screwed. My machine usually uses 8-12 of its 32GB of RAM, yet I still ended up needing to add swap. Just opening up the Lemmy source code and spinning up the Rust LSP can use a solid 8+GB alone. I’ve compiled some AUR packages that needed more than 16GB of RAM. I have 16 cores, so compiling anything with -j32 can very quickly bring a machine to its knees even if each compile thread only uses around 256-512MB.
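
Adding swap after the fact is cheap insurance; a minimal swap-file sketch on ext4 (the size and path are arbitrary) looks like this:

```sh
# Create an 8GB swap file, restrict permissions, format and enable it.
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist it across reboots.
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab
```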

Another example: my netbook has 8GB. 99% of the time it’s fine, because it’s a web browsing machine, and I probably average around 4GB of usage on a heavy day with lots of tabs open. But if I open up VSCode and use any LSP, be it TypeScript or Rust, the machine immediately starts swapping aggressively. I had to log out of my graphical session just to compile Lemmy, and it barely made it.

RAM is cheap enough these days it’s nice to have more than you need to not ever have to worry about it.

Max_P, to privacy in Lemmy instance admin snooping at votes

The votes are public. Kbin displays them right in the UI. Lemmy semi-hides them, but they were never designed to be private in any way.

Changing instances won’t do shit if that’s a concern to you. As an admin I can see them even if my instance isn’t involved with the post at all:

https://lemmy.max-p.me/pictrs/image/6bae7aa5-20a3-497e-9012-dc4c8a869eb4.png

Max_P, (edited) to asklemmy in Lemmy instance which has not defederated with any other instance.

Keep in mind, defederation is bidirectional. You can end up on an instance that doesn’t defederate from anybody but is itself defederated by some major instances, and end up worse off. Also, communities are bound to an instance, so even if your instance doesn’t defederate from another, the instance that hosts the community might, which doesn’t solve anything either.

Also, lemmy.ml had to restore from a backup on Monday because Postgres shat itself, so if the post is from around Monday, it’s possible it was simply lost to those technical problems.

There are also some federation problems with 0.19.0 and 0.19.1, so it’s possible delivery to lemmy.ml was attempted but failed due to load or whatever.

You didn’t give any details or examples, so we can only speculate. We troubleshoot federation by establishing patterns, like which instance the missing comments are from and which instance hosts the community.

Addendum: I’ve also been experiencing occasional ghost posts, and I’m on my own instance, so there might be something unrelated going on, because I sure didn’t do anything. If they had been deleted or retracted I would still see them, because as the admin I see everything.

Max_P, to linux in Which distro in your opinion is the best for virtualization (Windows 10 on either KVM or VMware), stability, and speed?

Well, I’m currently using VMware on Ubuntu

Well there’s your mistake: using VMware on a Linux host.

QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel, a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host, which leads to fewer problems.
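
If you want to try it, here's a rough sketch using libvirt's virt-install (package names vary by distro; the ISO path, sizes and VM name are just placeholders):

```sh
# Create a Windows 10 guest on QEMU/KVM via libvirt.
virt-install \
  --name win10 \
  --memory 8192 \
  --vcpus 4 \
  --cdrom ~/isos/Win10.iso \
  --disk size=80 \
  --os-variant win10
```

virt-manager gives you a GUI on top of the same libvirt stack if you’d rather click through it.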

Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.

Maybe, but it’s still Linux; there’s always an escape hatch if the Ubuntu packages don’t cut it. I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they work just fine.

Max_P, to linuxmemes in Can't relate to be honest, I still use MBR boot

Yes but by doing so you’re using the same principles as MBR boot. There’s still this coveted boot sector Windows will attempt to take back every time.

What’s nice about EFI in particular is that the motherboard loads the bootloader file from the ESP, can load multiple of them, and adds them to its boot menu. Depending on the motherboard, you can even browse the ESP and manually execute a .efi from it.

Which in turn makes it a lot less likely that a bootloader fuckup locks you out, because you basically press F12, pick GRUB/sd-boot, and you’re back in. Previously the only fix was to boot from a USB stick and reinstall syslinux/GRUB.
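
You can poke at those firmware boot entries from Linux too; efibootmgr is the usual tool (the disk, partition and loader path below are illustrative):

```sh
# List the firmware's boot entries and their order.
efibootmgr -v

# Register a loader from the ESP by hand, e.g. after Windows clobbered the order.
sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 \
  --label "Linux Boot Manager" --loader '\EFI\systemd\systemd-bootx64.efi'
```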

Max_P, to linux in Flatpack, appimage, snaps..

Distro packages, and to some extent Flatpaks, use shared libraries which can be updated independently of your app.

So for example, if a vulnerability is discovered in, say, curl, ImageMagick, FFmpeg or whatever library an app is using: with AppImages, it won’t be fixed until you update every one of your AppImages. With Flatpak, it can usually be fixed by updating the shared runtime it comes from, or shipped as a rebuild of the Flatpak. With distro packages, you can usually just update the library itself and be done with it.
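
In practice that's why Flatpak updates are a single operation: the shared runtimes sit next to the apps and get patched on their own. A minimal sketch (installed apps and runtimes will obviously vary):

```sh
# Shared runtimes are separate installs from the apps that use them.
flatpak list --runtime

# One update pulls fixes for apps and runtimes alike,
# so a patched library lands without rebuilding every app.
flatpak update
```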

AppImages are convenient for the user in that you can easily store them, move them, and keep old versions around forever. But that still doesn’t guarantee an AppImage will run on distros a couple of years from now, it does guarantee that a given version stays vulnerable forever if any of its bundled dependencies are, it makes packages much bigger than they need to be, and you have to unpack/repack them if you need library shims.

Different kinds of tradeoffs and goals, essentially. Flatpak happens to be a compromise a lot of people agree on as it provides a set of distro-agnostic libraries while also not shifting the burden entirely onto the app developers. The AppImage developer is intentionally keeping Wayland broken on AppImage because he hates it and wants to fulfil his narrative that Wayland is a broken mess that won’t ever work, while Flatpak developers work hard on sandboxing and security and granular permission systems.

Max_P, to linux in Bcache is amazing!: Making HDD way faster!

Basically the idea is that if you have a lot of data, HDDs have much bigger capacities for the price, whereas large SSDs get expensive. SSDs have gotten cheaper, but you can still get used enterprise HDDs on eBay with huge capacities for incredibly cheap: there are 12TB HDDs for like $100, while 12TB of SSD would run you several hundred.

You can slap bcache on a 512GB NVMe backed by an 8TB HDD, and you get 8TB worth of storage, 512GB of which will be cached on the NVMe and thus really fast. But from the user’s perspective, it’s just one big 8TB drive. You don’t have to think about what lives where, you just use it. You don’t have to be like, I’m going to use this VM, so I’ll move it to the SSD and back to the HDD when done. The first time might be super slow but subsequent use will be very fast. It caches writes too, so you can write up to 512GB really fast in this example and it’ll slowly get flushed to the HDD in the background. But from your perspective, as soon as it’s written to the SSD, the data is effectively committed to disk: if the application calls fsync to ensure data is written to disk, it completes once it’s fully written to the SSD. You get NVMe read/write speeds and the space of an HDD.
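
Setting it up is only a couple of commands; here's a rough sketch (device names are illustrative, and this wipes both devices):

```sh
# Pair an HDD (backing device) with an NVMe partition (cache) in one go.
sudo make-bcache -B /dev/sda -C /dev/nvme0n1p2

# The combined device shows up as /dev/bcache0; format and mount it like any disk.
sudo mkfs.ext4 /dev/bcache0

# Optional: cache writes as well as reads (the behaviour described above).
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
```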

So you get one big disk for your Steam library: whatever you play might be slow on the first load, but as you play, the game files get promoted to the NVMe cache and perform mostly at NVMe speeds, and your loading screens get much shorter.

Max_P, to linux in Any experience with teaching kids Linux?

The only advice I have is to try to make it interesting for them, not just additional practical information they have to memorize. You don’t want to be the weird dad who insists on using stuff nobody else does; you have to show them what’s cool about it, and also accept that maybe they’ll just stick with Windows for now.

I also think the main takeaway they should have out of it is that there’s many ways of doing the same thing and none is “the correct and only way”. They should learn to think critically, navigate unfamiliar user interfaces, learn some more general concepts and connect the dots on how things work, and that computers are logical machines, they don’t just do random things because they’re weird. Teach them the value of being able to dig into how it works even if it doesn’t necessarily benefit them immediately.

Maybe set up a computer or VM with all sorts of WMs and DEs, with express permission to wreck it if they want, or a VM they can set up themselves (even better if they learn they can make their own VMs as well!). Probably have some games on there too. Maybe tour some old operating systems for the historical context of how we got to where we are today. Show them how you can make the computer do things via a terminal and that it does the same thing as the GUI. Show different GUIs, different file managers, different text/document editors, maybe different DEs, maybe even tiling vs floating. What a file is, the different ways you can organize them, how you can move them around, how some programs can open other programs’ files.

Teach them that the computer works for them, not the other way around. They can make the computer do literally anything they want, if they wish. But it’s okay to use other people’s stuff too.

Max_P, to news in Spotify spotted prepping a $19.99/mo 'Superpremium' service with lossless audio, AI playlists and more | TechCrunch

It is buzzword bullshit.

And a fad, probably. Everyone's trying to capitalize on the wow effect of ChatGPT.

Before AI it was neural networks, and before that it was machine learning.

Max_P, to linux in Why btrfs gets huge perf hit with background IO work?

It’s mostly better, but not in every way. It has a lot of useful features, sometimes at a performance cost: one that historically wasn’t a problem with spinning hard drives and relatively slow SATA SSDs, but that shows up more on really fast NVMe drives.

For the snapshots, it has to keep track of what’s been modified. Depending on the block size, an update of just a couple of bytes can end up as a few 4k writes, because it’s copy-on-write and it has to update a journal and the block list of the file. But at the same time, copying a 50GB file is instantaneous on btrfs because of the same CoW feature. Most people find the snapshots more useful than eking out every last bit of performance from the drive.
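
That instant copy is exposed directly through reflinks, so you can see the tradeoff from the command line (the file names here are illustrative):

```sh
# On btrfs this clones only metadata, so it returns immediately even for a
# 50GB file; the data blocks are shared until one of the copies is modified.
cp --reflink=always big-vm-image.qcow2 big-vm-image-clone.qcow2
```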

Even ZFS, often considered the gold standard of filesystems, is actually kind of slow. But its purpose isn’t to be the fastest; its purpose is to let you throw an array of 200 drives at it and trust it to protect your data, with regular scrubs, even against media degradation and random bit flips in your storage.

Max_P, to linux in Why aren't linux hardware shops on Ubuntu's certified hardware list?

Precisely. It’s not just “it works”, it’s third-party hardware that Canonical tests, certifies and commits to supporting as fully compatible. They’ll do the work to make sure everything works perfectly, not just when upstream gets around to it, and they’ll patch whatever is necessary to make it work. The use case is “we bought 500 laptops from Dell and we’re getting a support contract from Canonical guaranteeing that Ubuntu will run flawlessly on them for the next 5 years minimum”.

Red Hat has the exact same thing: catalog.redhat.com/hardware

Otherwise, most Linux OEMs just focus on first party support for their own hardware. They all support at least one distro where they ensure their hardware runs. Some may or may not also have enterprise support where they commit to supporting the hardware for X years, but for an end user, it just doesn’t matter. As a user, if an update breaks your WiFi, you revert and it’s okay. If you have 500 laptops and an update breaks WiFi, you want someone to be responsible for fixing it and producing a Root Cause Analysis to justify the downtime, lost business and whatnot.
