
Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them đŸłïžâ€đŸŒˆ


Max_P,

Maybe a Steam Deck if they’re into gaming, boy do people love to tinker with their Decks.

Max_P,

The only advice I have is to try to make it interesting for them and not just additional practical information they have to memorize. You don’t want to be the weird dad that insists on using stuff nobody else does, you have to show them what’s cool about it, and also accept maybe they’ll just stick with Windows for now.

I also think the main takeaway they should have out of it is that there’s many ways of doing the same thing and none is “the correct and only way”. They should learn to think critically, navigate unfamiliar user interfaces, learn some more general concepts and connect the dots on how things work, and that computers are logical machines, they don’t just do random things because they’re weird. Teach them the value of being able to dig into how it works even if it doesn’t necessarily benefit them immediately.

Maybe set up a computer or VM with all sorts of WMs and DEs with the express permission to wreck it if they want, or a VM they can set up themselves (even better if they learn they can make their own VMs as well!). Probably have some games on there too. Maybe tour some old operating systems for the historical context of how we got where we are today. Show them how you can make the computer do things via a terminal and how it does the same thing as in the GUI. Show different GUIs, different file managers, different text/document editors, maybe different DEs, maybe even tiling vs floating. What a file is, what ways you can organize them, how you can move them around, how some programs can open other programs’ files.

Teach them the computer works for them not the other way around. They can make the computer do literally anything they want if they wish so. But it’s okay to use other people’s stuff too.

Max_P,

For me what planted the Linux seed is when I tried Mandrake Linux when I was 9-10ish. I didn’t end up sticking with it for all that long, but I absolutely loved trying out all those DEs. I had downloaded the full fat 5 CD version and checked almost everything during setup, so it came jam packed with all sorts of random software to try out. The games were nice, played the shit out of Frozen Bubble. I really liked Konqueror too, coming from Internet Explorer. It was pretty snappy overall. And there’s virtual desktops for more space! People were really helpful on IRC, even though I was asking about installing my Windows drivers in Wine. Unfortunately I kinda wanted games and my friends were getting annoyed we couldn’t play games on my computer.

It stuck with me however, so later on when some of my online friends were trying it out, I wanted to try it out again too. I wasn’t much into games anymore, had started coding a little bit. So on my computer went Kubuntu 7.10, and I’m still on Linux to this day.

But that seed is what taught me there’s more. I didn’t hate Windows, I wasn’t looking to replace it. I hadn’t fallen in love with FOSS yet. It was cool and different and fun. It wasn’t as sterile and as grey as Windows 98. You could pop up some googly eyes that followed your mouse, because you could. There were all those weird DEs with all sorts of bars and features.

Max_P,

Distro packages, and to some extent Flatpaks, use shared libraries which can be updated independently of your app.

So for example, if a vulnerability is discovered in, say, curl, imagemagick, ffmpeg, or whatever library an app is using: for AppImages, this won’t be fixed until you update all of your AppImages. In Flatpak, it can usually be updated as part of a dependency, or distributed as a rebuild and update of the Flatpak. With distro packages, you can usually update the library itself and be done with it.
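You can actually see that sharing on disk: a distro-packaged curl links the system-wide libcurl, so patching libcurl once fixes everything linked against it (the exact path varies by distro):

    # distro binary: dependencies resolve to system-wide shared libraries
    ldd /usr/bin/curl | grep libcurl
    #   libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007f...)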

AppImages are convenient for the user in that you can easily store them, move them, and keep old versions around forever. But it doesn’t guarantee they’ll still run on distros a couple of years from now, it guarantees that a given version will forever be vulnerable if any of its dependencies are (because they’re bundled in), it makes packages much, much bigger than they need to be, and you have to unpack/repack them if you need library shims.
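The unpack/repack dance looks roughly like this. The --appimage-extract flag is built into the AppImage runtime and appimagetool reassembles the result; the AppImage name and libfoo are stand-ins for whatever you’re actually shimming:

    ./Some.AppImage --appimage-extract                # unpacks into ./squashfs-root
    cp /path/to/patched/libfoo.so.1 squashfs-root/usr/lib/   # swap in the shim
    appimagetool squashfs-root Some-patched.AppImage  # repack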

Different kinds of tradeoffs and goals, essentially. Flatpak happens to be a compromise a lot of people agree on as it provides a set of distro-agnostic libraries while also not shifting the burden entirely onto the app developers. The AppImage developer is intentionally keeping Wayland broken on AppImage because he hates it and wants to fulfil his narrative that Wayland is a broken mess that won’t ever work, while Flatpak developers work hard on sandboxing and security and granular permission systems.

Max_P,

It is very unfortunate. It’s fine to point out problems, but then when you become part of the problem, that’s not amazing.

He’s had the same meltdown with fuse2 being deprecated in favor of fuse3, which, guess what, also broke AppImage, and we got a huge rant for that too.

Flatpak has a better chance of being forward compatible for the foreseeable future. Linux generally isn’t a very ABI/API compatible platform because for the most part you’re expected to be able to patch and recompile whatever you might want.

Wanting to improve my Linux skills after 17 months of daily driving Linux

I’ve been daily driving Linux for 17 months now (currently on Linux Mint). I’ve gotten very comfortable with basic commands and many “just works” distros (such as Linux Mint or Pop!_OS) with apt as the package manager. I’ve tried Debian as a distro to challenge myself, but have always run into issues. On my PC, I could...

Max_P,

Arch is actually not as bad as many say. It’s pretty stable nowadays; I even run Arch on some servers and I’ve never had any issues.

Not even just nowadays. My desktop is running a nearly 10 year old install. It’s so old, it not only predates the installer, it predates the “traditional” way and used the old TUI installer. It even predates the sysvinit to systemd switch! The physical computer has been Ship-of-Theseus’d twice.

Arch is surprisingly reliable. It’s not “stable” in the sense that things do change and you sometimes have to update configs or even your own software. But it’s been so reliable I never even felt the need to go look elsewhere. It just works.

Even my Arch servers have been considerably more reliable and maintenance-free than the thousands I manage at work with lolbuntu on them. Arch does so little on its own that there’s less to go wrong. Meanwhile the work boxes can’t even update GRUB noninteractively: every now and then a GRUB update pops a debconf screen, hangs unattended-upgrades until it’s manually fixed, and hoses up apt as a whole.
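The usual band-aid, for what it’s worth, is forcing the noninteractive frontend and telling dpkg to keep existing config files, though it doesn’t catch everything:

    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" \
        dist-upgrade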

Max_P,

RAM is the kind of thing you’re better off having too much of than not enough. Worst case, the OS ends up with a very healthy, large file cache, which takes load off your storage and makes things a bit faster / lets it spend the CPU on other things. If anything, your machine is future-proofed against the ever-increasing RAM hunger of web apps. But if you run out of it, you get apps killed, hangs, or major slowdowns as it hits the swap.

The thing with RAM is that it’s easy for 99% of your workload to fit comfortably, and then there’s one thing that temporarily needs a bit more and you’re screwed. My machine usually uses 8-12/32GB of RAM, yet I still ended up needing to add swap. Just opening up the Lemmy source code and spinning up the Rust LSP can use a solid 8+GB alone. I’ve compiled some AUR packages that needed more than 16GB of RAM. I have 16 cores, so compiling anything with -j32 can very quickly bring a machine to its knees even if each compile thread only uses 256-512MB.

Another example: my netbook has 8GB. 99% of the time it’s fine, because it’s a web browsing machine, and I probably average 4GB of usage on a heavy day with lots of tabs open. But if I open up VSCode and use any LSP, be it TypeScript or Rust, the machine immediately starts swapping aggressively. I had to log out of my graphical session to compile Lemmy, and it barely managed even then.

RAM is cheap enough these days it’s nice to have more than you need to not ever have to worry about it.
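Adding that swap is only a few commands, if anyone needs it (adjust the size to taste; btrfs swap files need a couple of extra steps):

    sudo fallocate -l 16G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # make it permanent
    echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab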

Alright, I'm gonna "take one for the team" -- what is with the "downvote-happy" users lately?

Title. “lmao internet points” and all, but what is the point of participating in a community that sees assumptions and other commonly non-harmful commentaries/posts as “bad” this easily? Are folks in here really that needy for self-validation, even if it means seeking such from something completely insignificant like...

Max_P,

I expected this to be “another one of those” but actually from what my instance has about you, you were indeed correct. Gaming distros with exclusive features lmao.

IMO that’s some of the gamer logic bleeding over into the Linux side, now that Linux gaming is taking off. They’ll do anything, including installing dubious Linux distros barely hanging together with duct tape, for a perceived extra 2 FPS. Download software exclusively distributed on Discord? Hell yeah. I’m sure at least one of them boots with mitigations=off, and it’s not clearly indicated anywhere that it does.

We’re seeing the same thing on the Windows side with modified Windows ISOs like the whole AtlasOS thing, which rightfully made some security experts sound the alarm. Some do things like completely strip out updates, the antivirus and the firewall. Unless your system exclusively runs Steam and is firewalled off the network, that’s a certified bad idea.

I’d probably trust Nobara because the guy clearly knows his shit, but some of them really are just some other guy’s riced-up Arch snapshot. They may give the impression everything just works at first, but I’ve definitely seen examples of it falling apart. Even bigger distros like Pop!_OS had major snafus, like the whole “Steam uninstalls your DE” thing, and Manjaro still fucks up something basic every now and then. I tried some of them in a VM and they didn’t even install or boot correctly. Oh, my fault, that one only works with NVIDIA graphics cards, not AMD. My bad.

It’s not worth arguing; it’s a user base with vastly different goals from mine. Just let them have their Bedrock Linux completely blow up in multi-package-manager hell, and soon enough they’ll come running for a saner, more reliable distro.

Max_P,

Well, I’m currently using VMware on Ubuntu

Well there’s your mistake: using VMware on a Linux host.

QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host which leads to fewer problems.
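A minimal KVM guest is basically a one-liner, no VMware needed (the disk name here is a placeholder; virt-manager gets you the same thing with a GUI):

    qemu-system-x86_64 \
        -enable-kvm -cpu host -smp 4 -m 8G \
        -drive file=win10.qcow2,format=qcow2,if=virtio \
        -nic user,model=virtio-net-pci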

Ubuntu imho is unstable in and of itself because of the frequent updates, so I’m looking for another distro that prioritizes stability.

Maybe, but it’s still Linux. There’s always an escape hatch if the Ubuntu packages don’t cut it. I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they work just fine.

Max_P,

They mostly don’t exist yet apart from this PR.

On Vista and up, there’s only the Display Only Driver (DOD), which gets resolutions and automatic resizing working, but it has no graphical acceleration in itself.

Max_P,

It’ll definitely run Kali well. Windows will be left without 2D/3D hardware acceleration, so it’ll be a little laggy, but it’s usable.

VMware has its own driver that translates enough DirectX for Windows to run smoother and not fall back to the basic VGA path.

But VMware being proprietary software, changing distros won’t make it better, so either you deal with the VMware bugs or you deal with a stable but slow software-rendered Windows.

That said, on the QEMU side it’s possible to attach one of your host’s GPUs to the VM, where it will get full 3D acceleration. Many people are straight up gaming in competitive online games in a VM with QEMU. If you have more than one GPU, even an integrated GPU + a dedicated one as is common with most Intel consumer non-F CPUs, you can make that happen, and it’s really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Make sure your CPU and motherboard support it properly before investing though; it can be finicky, but it’s so awesome when it works.
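The rough shape of the passthrough setup, using a GTX 1050’s PCI IDs purely as an example (find yours with lspci -nn):

    # kernel command line: enable the IOMMU and reserve the guest GPU early
    # (amd_iommu=on on AMD platforms)
    intel_iommu=on vfio-pci.ids=10de:1c81,10de:0fb9

    # then hand the card (GPU + its HDMI audio function) to QEMU
    qemu-system-x86_64 -enable-kvm ... \
        -device vfio-pci,host=01:00.0 \
        -device vfio-pci,host=01:00.1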

Max_P,

Then just don’t start a community on a small one.

Mine is a minuscule instance. That’s fine. I like that I have control over it, over how it’s maintained and updated. If I want to convert it to Mbin because I like it more, I can. I know for sure it’s going to live at least as long as I’m interested in the fediverse. Nobody can take it away from me.

Big instances are expensive to run, and they’re not exactly immune to shutting down either; when they go poof, the impact is much bigger than a small instance with few communities going away.

Max_P,

How is it unrelated? Running MongoDB in a container so that it just works and you have a portable/reproducible dev environment is a perfectly valid approach.

Max_P,

C bindings and APIs generally work much better in Rust because the language works a lot more like C than it does C++.

Qt depends a lot on C++ class inheritance, and even does some preprocessing of C++ files to generate code in those classes. That’s obviously not possible when using Rust. And it looks like you need a fair bit of unsafe here and there to use it at all, too.

Meanwhile, GTK being a C library, its integration with Rust is much more transparent and nice.

So if you’re making a GUI Rust app, you’re just kind of better off with GTK at the moment. It’s significantly easier and nicer.

Max_P,

Isn’t that kind of AppImage’s whole thing, to behave like Mac apps that you just double click on regardless of where they are, and not have a package manager?

I’d go for the Flatpak if you want it to be managed and updated.

We went from distro packages to Flatpak to bare files, and now we’re circling back to reinventing the package manager


Max_P, (edited)

As an aside, the distro doesn’t matter, but you should make sure realtime is set up properly for optimal latency. That usually requires the linux-rt kernel. The default one isn’t quite as bad as it used to be, but linux-rt can guarantee low-latency processing without dropouts. It’s also worth tuning/hardcoding latencies in JACK or PipeWire if the audio delay is too big out of the box.
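On PipeWire, for example, you can pin the quantum (buffer size) at runtime to find what your hardware tolerates; 64 frames at 48kHz is about 1.3ms:

    pw-metadata -n settings 0 clock.force-quantum 64    # force a small buffer
    pw-metadata -n settings 0 clock.force-rate 48000    # pin the sample rate
    pw-metadata -n settings 0 clock.force-quantum 0     # back to default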

What dock do you use in Wayland?

I moved over to Wayland full time a couple of weeks ago (using KDE on Arch). I have finally rid myself of any X11 hangups apart from one. Latte will NOT respect my primary screen when changing monitor arrangement (i.e. turning my projector on and off) and seems to randomly pick a screen to call the primary....

Max_P,

Maybe you can set up a KWin window rule to force Latte to be where you want it to be?

Not that Plasma panels work that much better than Latte in that regard; they still sometimes shift monitors just because something is plugged in (not even enabled, just plugged in!).

I really wish we could pin things to the exact monitor via its physical port location or serial number or something from EDID.

Max_P,

Not really different from any other M.2 SSD; that it’s over USB doesn’t matter.

The only consideration for USB sticks is that they’re usually quite crap, so running a system off them tends to wear out the flash pretty quickly.

Max_P,

Yeah, it’s not really advertised as an init system anymore. It’s an entire system management suite, and when seen from that angle, it’s pretty good at it too. All of it is consistent, it’s fairly powerful, and it’s usually 10-20 lines of unit files to describe what you want. I wanted that for a long time.

I feel like the hate always comes from the people that treat the UNIX philosophy like religion. And even then, systemd is very modular, just also well integrated together: networkd manages my network, resolved manages my DNS, journald manages my logs, timesyncd manages my NTP, logind manages my logins and sessions, homed mounts my users’ profiles on demand.

Added complexity, yes, but I’ve been using the hell out of it. Start services when a specific peripheral is plugged in? Got it. Automatically assign devices to seats? Logind’s got you covered, no need to even mess with xorg configs. VM networking? networkd handles it. DNS caching? Out of the box. Split DNS? One command. Don’t want 2000 VMs rotating their logs at exactly midnight and trashing your Ceph cluster? Yep, just slap RandomizedDelaySec=24h onto the units. Isolate and pin a VM to dedicated cores dynamically? Yep, it’ll do that. Services that need to run on a specific NUMA node to stay close to PCIe peripherals? Yep, easy. All very easily configurable with things like Ansible or bash provisioning scripts.
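The log rotation one, for instance, is just a two-line drop-in on the stock logrotate.timer:

    sudo mkdir -p /etc/systemd/system/logrotate.timer.d
    printf '[Timer]\nRandomizedDelaySec=24h\n' | \
        sudo tee /etc/systemd/system/logrotate.timer.d/spread.conf
    sudo systemctl daemon-reload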

Sure, it may not be for everybody, but it solves real problems real Linux admins have to deal with at scale. If you don’t like it, sysvinit still works just fine, and I’ve heard good things about runit too. It’s an old and tired argument; it’s been over 10 years, we can stop whining about it and move on. There’s plenty of non-systemd distros to use.

Max_P,

It’s kind of useless if they won’t let you root it / install your own customized version.

Max_P,
  • August: 75GB
  • September: 94GB
  • October: 88GB
  • November: 80GB
Max_P,

I’ve never had to restart the Lemmy container, so tracking down the reason why you have to is probably a good idea.

Also, rule 5: this belongs in !lemmy_support

Max_P,

    sudo machinectl login the-user@localhost

That will handle all the PAM stuff as if you actually logged in.

Max_P,

There’s historically been some privilege escalations, such as cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3


But at the same time, they do offer increased security when they work correctly. It’s like saying we shouldn’t use virtualization anymore because historically some virtual devices have been exploitable in a way that you could escape the VM. Or lately, Spectre/Meltdown. Or a bit of an older one, Rowhammer.

Sometimes, security measures open a hole while closing many others. That’s how software works unfortunately, especially in something as complex as the Linux kernel.

Using namespaces and keeping your system up to date is the best you can do as a user. Or maybe add a layer of VM. But no solution is foolproof; if you really need that much security, use multiple devices, ideally airgapped ones whenever possible.
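Namespaces are cheap to use even ad hoc. Bubblewrap (the same tool Flatpak builds on) can sandbox a one-off command; the binary name here is a stand-in:

    # read-only root, private /dev and /proc, nothing shared (incl. network)
    bwrap --ro-bind / / --dev /dev --proc /proc \
        --unshare-all --die-with-parent \
        ./some-untrusted-binary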

deleted_by_author

Max_P,

You shouldn’t need sudo for this. What’s probably happening is that the Makefile you end up running doesn’t do what you think it does at all; it ends up clearing header files to rebuild them and then dies.

Removing sudo will at least give you an indication of what’s going on by means of permission errors. Find out why it’s trying to modify files it shouldn’t. It’s also a great example of why you shouldn’t compile anything as root, not even for building packages. Not even building kernel modules requires root; only installing and loading them does.
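For reference, the standard out-of-tree module workflow only needs root at the very end (the module name is whatever yours happens to be):

    # build as a regular user against the running kernel's headers
    make -C /lib/modules/$(uname -r)/build M=$PWD modules
    # root is only needed to install and load the result
    sudo make -C /lib/modules/$(uname -r)/build M=$PWD modules_install
    sudo modprobe my_module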
