skullgiver
@skullgiver@popplesburger.hilciferous.nl
Giver of skulls

skullgiver, (edited )

The source code responsible for this error message:


```rust
let target_user = LocalUserView::read_person(&mut context.pool(), target_id).await;
if target_user.map(|t| t.local_user.admin) == Ok(true) {
  Err(LemmyErrorType::CantBlockAdmin)?
}
```

You can’t block local instance admins. You can block external admins (those on other servers), and moderators, though.

Blocking admins doesn’t make much sense anyway, because admins can probably remove the block from the database if they wanted to be malicious.

As a workaround, you can try the following (requires Lemmy 0.19.0 or higher):

  1. Go to your account settings
  2. Export your user profile
  3. Add the user you wish to block to the blocked_users list (make sure to stick to the JSON format; see the sketch after this list)
  4. Import your backup
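
I haven’t verified the exact field layout of the export, so treat this as a rough sketch of the edited JSON rather than the authoritative Lemmy format:

```json
{
  "blocked_users": [
    "https://example.com/u/some_admin"
  ]
}
```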

It looks like the code for importing settings does not execute the admin check.

skullgiver,

I agree, I think it was just an oversight. Based on the context, it seems like the code assumes that error handling for anything but “database is gone” is unnecessary. Doing user lookups with those assumptions may be difficult, and I don’t think the Lemmy devs were trying to protect against shenanigans like these.

I don’t really see what you would gain from blocking a local admin (it’s not like blocking makes the admin any less able to take action against you), so I wouldn’t expect it to be that bad. I’m a little surprised it’s not possible to block admins in the first place.

I can’t be bothered to actually check if the import does indeed allow blocking local instance admins; I just assumed so based on a quick read of the code. This seems like a rather unimportant bug, but if someone cares they should probably file an issue (or, even better, a pull request).

skullgiver, (edited )

ChatGPT tried to generate a PNG image encoded in base64 (a method to encode binary files as text). In many Markdown implementations, this is a perfectly valid method to store images in comments and posts (although I’m not 100% sure about Lemmy?).
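
For reference, an inline data-URI image in Markdown looks something like this (truncated here; iVBORw0KGgo is just the base64 encoding of the PNG magic bytes):

```markdown
![tiny image](data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...)
```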

However, because ChatGPT is basically “what happens when you hit the first suggested word on your keyboard” but big, it got confused and started repeating itself halfway through. The PNG file starts out correct, but the binary data seems to be off. It’s possible the start of the image data is still valid, but most of the file is absent.

Someone with even less of a life than me could go through the individual bytes and see if they break the PNG format itself, but I’ll just assume the repetition is because of the AI model messing up.
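
For anyone curious, here’s a minimal sketch (plain Rust, no crates, file name made up) of what walking those bytes would involve: check the 8-byte signature, then step through the length/type/data/CRC chunks until IEND or the data runs out.

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let data = fs::read("suspect.png")?; // hypothetical file name
    // Every PNG starts with the same fixed 8-byte signature.
    const SIG: [u8; 8] = [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A];
    if data.len() < 8 || data[..8] != SIG {
        println!("not a PNG at all");
        return Ok(());
    }
    // Chunks are: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
    let mut pos = 8;
    while pos + 8 <= data.len() {
        let len = u32::from_be_bytes(data[pos..pos + 4].try_into().unwrap()) as usize;
        let kind = String::from_utf8_lossy(&data[pos + 4..pos + 8]);
        println!("chunk {kind}: {len} bytes");
        if kind == "IEND" {
            println!("reached the end marker: structurally complete");
            return Ok(());
        }
        pos += 8 + len + 4; // skip header, data, and CRC (CRC not verified here)
    }
    println!("file ends mid-chunk: truncated or corrupt");
    Ok(())
}
```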

skullgiver,

A list of advice and solutions I’ve had to come up with in the past:

Use ethernet where possible for the best results.

Put the router in a better spot for better reception. Use better insulation so the neighbour’s AliExpress baby monitor doesn’t wipe out your WiFi signal. Use 5GHz WiFi when possible for better speeds, sometimes even at lower signal to noise ratios.

Do not place WiFi routers behind metal objects or reinforced concrete if possible. Same for plants, not because WiFi causes some kind of cancer, but because plants contain water and water absorbs a lot of radiation.

Don’t buy WiFi hardware that sells itself as “high power” because WiFi that reaches four houses over is useless if your energy efficient phone doesn’t have enough power to actually send data back.

Look for WiFi 6, 6E, or 7 labels on boxes. MIMO is also very useful; it helps with network throughput. Higher AxB numbers are better (i.e. 3x3 is better than 2x1), but beyond 2x2 you’ll need multiple devices active at the same time to make use of all that bandwidth.

Do not use range extenders wirelessly; plug them into an ethernet cable. If you do use range extenders or mesh networks, don’t place them somewhere where you don’t get any signal; place them at the furthest point where the WiFi is still usable.

Never trust anything measuring in bars. Phones will overestimate the number of bars, WiFi drivers will lie about them to make it sound like their reception is better, and there is no standard for translating reception quality into the bars in your status icon. Measure dBm if you have to measure something (a negative number; closer to 0 is better).

Disable software that spams your entire WiFi network, such as the software for certain Logitech mice. These things will interrupt WiFi streams to push packets through, waking up the WiFi chip in all of your devices and draining the battery faster.

If your internet connection is slow, no amount of WiFi improvements will speed it up. Make sure your incoming connection and the cables to your WiFi equipment are good before you try to fix the WiFi signal.

For the best signal, buy decent WiFi access points and don’t rely on a router in a closet somewhere. If you’re somewhat technical, Ubiquiti is a decent balance between user friendliness and WiFi performance. Attach access points to your ceiling and hook them up with ethernet for the best results.

Mesh WiFi can help, but if you get it, don’t mix brands. Like with range extenders, put up mesh devices where they can still reach each other well. Mesh WiFi is much better than range extenders, even if the technology seems to be the same, because of differences in how well they’re integrated and how many WiFi antennae are contained within devices.

If you get multiple routers, try to configure them as access points and hook them up with ethernet. Make sure you don’t chain routers behind each other in standard router mode, unless you know what the downsides of double NAT are, or you’ll have all kinds of stupid issues (“only some computers can see the printer”, “my game only works on this WiFi network”, “why does the PlayStation report a different NAT type”).

Sometimes people blame IPv6 for their issues. IPv6 is very very rarely the cause and disabling it will hide the problem from you but cause issues in the background. If you disable IPv6 on your device, you’ll run into very weird errors (the “my photos app doesn’t start on Tuesdays” kind of weird, because apps don’t expect it to be off), so only do so on the network level. If you disable IPv6 on your network, make sure you (know how to) use an IPv4-only capable DNS server or you’ll get tons of error messages.

Another IPv6 thing: don’t disable IPv6 privacy extensions on your devices, and never disable the IPv6 firewall on your router entirely (you may want to disable it for specific devices, but that’s optional). I highly recommend learning about IPv6 if you haven’t already, because it’s inevitable but there’s still a huge lack of understanding even among the supposed experts.

skullgiver,

If they’re all hard wired, I don’t think you need to worry much about the mesh functionality at all. Wired access points can effectively be placed wherever they’re necessary; the only things mesh will solve for you are wireless extensibility in the future and automatic switching between access points. I don’t know for sure how the wired backhaul of different brands works (or whether they all even support wired interoperability), or if the devices can run without being in range of each other.

I haven’t needed to deal with this myself, but you may need to create some minor overlap for automatic hand-off to work well, so movable devices like phones automatically connect to the closest access point. You wouldn’t want two networks with equal, bad reception, because there’s a chance devices may flip-flop between access points constantly. I believe manufacturers should have documentation for this stuff if it’s important, though as long as the routers can “see” each other this should be dealt with automatically.

Close vicinity is only important if the routers use a wireless network to connect to each other. Even then the antennae and bands used for the interconnection are likely much more powerful than those of your phone or laptop. You’ll likely have some kind of app or web interface that’ll tell you how well connected the wireless devices are, and that’s probably the best guide to finding the boundaries of the signal.

That said, “close vicinity” can mean anywhere from 5 meters to 50 meters. The exact requirements depend on the interference you get and the obstacles between the different devices. Sometimes moving a mesh router 15 centimeters to the left can be the difference between spotty WiFi and perfect speeds.

skullgiver, (edited )

The kid was an idiot and a dickhead. He extorted companies and sim swapped people for his private gain, and was stupid enough to continue his hacking spree while he was on bail for another hack.

He could’ve made 7 figures, but decided to go down the criminal route again by using Samsung Dex over Miracast (which the news liked to present as some kind of amazing hacking feat).

He’s currently being held in hospital care for an indeterminate amount of time until the mental health tribunal can make up their minds. He’s violent, damaging property and injuring staff.

He’s going to be put away for a long time; hopefully he’ll change for the better over the years. I don’t get where this “he deserves a stellar salary” mentality comes from. This isn’t some high schooler who found a problem and got sued because they tried to get it fixed, this is a criminal who decided to try to take a shortcut to a life of riches.

Now, he will never work in cybersecurity again, and after his release his devices will probably be monitored for some time. Don’t extort companies, kids, companies don’t hire the “legendary hacker” guys if they can’t be trusted.

skullgiver,

Social engineering, SIM swapping, and basic data extortion are understood perfectly well, though. They happen all the time, whether it’s North Korean APTs, the Ukrainian Cyber Army, or some idiot kid in his basement.

There are plenty of stories about “genius hacker kid tries to do the right thing but gets arrested”, but this isn’t one of them.

skullgiver,

Linus Torvalds never extorted anyone. I don’t see the comparison here.

Yes, you can be a massive dick and have power. You generally need to do something useful first, though, or get born into money.

skullgiver,

It’s important to note that the CLA does not take ownership of your commit. To quote the CLA:

> (a) You retain ownership of the Copyright in Your Contribution and have the same rights to use or license the Contribution which You would have had without entering into the Agreement.

All the CLA does is say “I agree that Canonical can use my code under another license if they want to”. Your contribution is still yours.

I would also argue that by enforcing the AGPL rather than Apache, the community gets more ownership. I’d rather Canonical didn’t require a CLA that lets them sell the software you’re contributing, but the AGPL forces everyone but Canonical to open up their custom versions to their customers, who are free to rehost the code elsewhere and help bring the changes upstream, of course.

As for ownership: LXD was started by Canonical. The name and trademark are theirs, and they control the upstream project. Like always, you can reject their terms and provide your changes under another license, but as the article states, Oracle is free to take your Apache2 changes and use them for practically any purpose as long as they put a copy of the license next to their code and give you credit. That includes selling the software. And it’s not just Oracle, of course; the likes of IBM/Red Hat can do the very same.

> The community should own everything.

In a perfect world I agree, but then the community should take over doing the actual work. Right now, almost all of the work on open source is done by companies and organisations such as KDE and Gnome with their own committees and politics.

Some products don’t have company-supplied software (e.g. Asahi Linux, although they do have parts of the Darwin kernel for reference), but those devices also often take ages to become usable. There are also plenty of projects funded by charities and such, but most of those form some kind of organisation that owns the copyright by default.

> the only REALLY good company for the Linux ecosystem as of right now is Valve

They brought Linux to the mainstream by locking down most of the customisability and by promoting running proprietary software. That’s just the way things need to be, but I’m not sure if the Linux ecosystem should be that happy about these developments.

> Canonical fucks us with snap

It takes two minutes to install Flatpak and remove Snap. Their apt-to-snap transition is stupid and annoying, but it’s a very minor issue.

> Red Hat kills X11

Red Hat stops maintaining X.org for free now that a better replacement is (almost) ready. The community is free to set up an effort to prolong X11’s life span, of course.

> Google is closed

More and more closed (mostly on Android, because as it turned out, the people taking the Android source code and distributing their own forks didn’t contribute back much), but Android and Chrome are still almost completely open. Your alternatives are iOS (closed for all but the reference kernel), Sailfish (Android app support necessary for real world use but closed and paid), Ubuntu Touch (using the Android HAL, so about as open as Android), and the various Linux distros that can barely do power management and often lack features such as “placing a call”; Google still provides some of the best open source software for mobile devices.

Google’s ChromeOS is perhaps the weird exception here, being basically a closed-ish source Linux distribution. All the components that power it (Linux, Chromium, Android) are open source, but the special sauce that makes those useful for end users is closed.

> nVidia is closed and buggy

Buggy? Yes, 100%. One day I will be able to use Wayland without random stutters and crashes!

Closed? Not for recent GPUs. Starting at the GTX1650/1660, the drivers are completely open source, with only the firmware being closed like on every other GPU. The closed nature of earlier cards (which still requires something like the GPL condom) still sucks, but they’re clearly improving.

skullgiver,

C and C++ can’t be fixed retroactively because old code must remain compatible.

If you’re going to implement your own C dialect, you may as well just write a new language.

For C++ that’s Rust, for C that’s probably Zig (Zig will just let you import existing C files, which helps with porting). Carbon and experimental languages like Jakt may also work, it all depends on what your priorities are.

skullgiver,

You can load Rust into Python just fine. In fact, several packages have started requiring a Rust compiler on platforms that don’t get prebuilt binaries. It’s why I installed Rust on my phone.

The build files for Rust are bigger than you may expect, but they’re not unreasonably big. Languages like Python and Java like to put their dependencies in system folders and cache folders outside of their project so you don’t notice them as often, but I find the difference not that problematic. The binaries Rust generates are often huge but if you build in release mode rather than debug mode and strip the debug symbols, you can quickly remove hundreds of megabytes of “executable” data.
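
If binary size bothers you, the usual Cargo knobs look like this (standard release-profile options in recent Cargo; adjust to taste):

```toml
[profile.release]
strip = "debuginfo"  # drop debug symbols from the final binary
lto = true           # allow more dead-code elimination across crates
opt-level = "z"      # optional: optimise for size instead of speed
```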

Rust can be told to export things in the C FFI, which is how Python bindings are generally accomplished (although you rarely deal with those because of all the helper crates).
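
As a sketch of what those helper crates look like in practice (assuming PyO3; the module and function names here are made up):

```rust
use pyo3::prelude::*;

/// A plain Rust function, exposed to Python.
#[pyfunction]
fn sum_list(values: Vec<i64>) -> i64 {
    values.iter().sum()
}

/// The module Python imports; PyO3 generates the C FFI glue
/// (an extern "C" PyInit_* entry point) behind the scenes.
#[pymodule]
fn my_rust_ext(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_list, m)?)?;
    Ok(())
}
```

Build it with a tool like maturin and `import my_rust_ext` then works like any other extension module.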

Statically compiled code will also load into processes fine; it just takes up more RAM than you may like. The OS normally deduplicates dynamically loaded libraries across running processes, but with statically compiled programs you only get the one blob (which itself then gets deduplicated, usually).

Rust can also load and access standard DLLs. The safety assertions do break, because these files are accessed through the C FFI which is marked unsafe automatically, but that doesn’t need to be a problem.
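
A sketch of what that looks like, assuming the libloading crate (0.8-style API; libm and cos are real, but treat the snippet as illustrative):

```rust
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Loading and resolving are unsafe: the signature we claim here is
    // never checked against the library, which is exactly the safety
    // break mentioned above. On Windows this would be a .dll path.
    let lib = unsafe { Library::new("libm.so.6")? };
    let cos: Symbol<unsafe extern "C" fn(f64) -> f64> =
        unsafe { lib.get(b"cos\0")? };
    println!("cos(0) = {}", unsafe { cos(0.0) });
    Ok(())
}
```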

There are downsides and upsides to static compilation, but it doesn’t really affect glue languages like Python or Typescript. Early versions of Rust lacked the C FFI and there are still issues with Rust programs dynamically loading other Rust programs without going through the C FFI, but I don’t think that’s a common issue at all.

I don’t see Rust replacing all of C either, because I think Rust is a better replacement for C++ than for C. The C parts it does replace (parsers, drivers, GUIs, complex command line tools) weren’t really things I would write in C in the first place. There are still cases where Rust just fails (it can’t deal with running out of memory, for one), so languages like Zig will always have their place.

[Discussion] How do you feel about age verification on Porn sites? (lemmings.world)

Porn sites Pornhub, XVideos, and Stripchat face stricter requirements to verify the ages of their users after being officially designated as “Very Large Online Platforms” (VLOPs) under the European Union’s Digital Services Act (DSA)....

skullgiver,

There are various ways in which age verification can be done without sacrificing privacy. Sadly, that requires standardised digital ID combined with several privacy preserving technologies that I’ve only seen used experimentally (I think Yivi comes closest, but its protocol involves sending identifiers between their servers and the consuming service, and requires services to be registered rather than arbitrarily accessible).

At its core, these protocols use digitally signed attributes (could be “SSN” or “birthday”, but more practically this could also just be “is over 18”). With this system, you simply transmit the token that says “I’m an adult”, the website verifies it, and you’re done. To prevent abuse through infinite reuse, these apps need to be re-verified occasionally, and the secrets must not be stored somewhere they can be copied from easily, but phones solved that problem ages ago.
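
A minimal sketch of just the signature part, assuming the ed25519-dalek and rand crates (the claim format is made up, and a real deployment also needs nonces/expiry to stop token replay):

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

fn main() {
    // The ID provider signs a minimal claim once...
    let issuer: SigningKey = SigningKey::generate(&mut OsRng);
    let claim = b"attr=is_over_18;value=true";
    let signature: Signature = issuer.sign(claim);

    // ...and the website only ever sees the claim plus the signature,
    // never a name or a birth date.
    let issuer_public: VerifyingKey = issuer.verifying_key();
    assert!(issuer_public.verify(claim, &signature).is_ok());
}
```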

Unfortunately, I don’t think the governments who support these restrictions will also invest in these technologies. Perhaps efforts such as eIDAS will allow some kind of non-profit to generate these tokens based on your standard digital government ID, but as of now, there’s nothing.

Any technology can be bypassed by kids (just steal dad’s porn pass!), so there’s a limit to how effective age gates are. On the other hand, anything that requires stealing a device protected with a PIN will be sufficient in most cases.

skullgiver,

Perhaps, but too many parents are terrible at their jobs.

Would you argue the same thing with other age restrictions, such as buying alcohol/drugs, driver’s licenses, or child labour?

skullgiver,

Their prices for RAM and storage upgrades are dogshit, but Macbooks do have objectively superior audio quality, and some of the best screens available. You just need to pretend the 256GB/8GB models don’t exist and the lineup suddenly makes a lot of sense.

Apple Silicon showed up to wipe the floor with Intel and AMD. Both now have CPUs that beat the M1/2/3, at the cost of huge power consumption and heat generation. With every non-Apple Macbook competitor, you can pick two out of “screen quality, audio quality, battery life, CPU performance” that perform well, and the rest plain doesn’t compete.

You won’t see me buy one of those things, the price is just soo goddamn high, but if you have the money to waste on these things, they’re excellent products. Especially when you’re a normal consumer and don’t plan on running Linux anyway; macOS may be janky as hell (“what’s window snapping?”) but your alternatives are Windows 11 and ChromeOS.

This is in contrast to the Intel Macbooks, which still had great screens and speakers, but were gimped by awful CPUs, comically insufficient cooling, self destructing keyboards, and so many other design flaws.

skullgiver,

The 8GB models are manufactured e-waste, but the usable models are great machines. They’re practically unrepairable, but they’re built not to need repairs.

If you care about swapping out the SSD or replacing the RAM, you shouldn’t buy Apple. I promise you, though, that 99% of laptop users don’t, and that includes a significant part of Linux users.

Macs are expensive as balls but there simply aren’t any competitors for them. They’re the “overkill everything” segment that’s too small to target for other manufacturers. There are maybe one or two series of laptops that come close in speaker quality, and one of those consists of gaming laptops designed after 80s scifi spaceships, and the other comes with terrible battery and even worse Linux support, and both of them lack the battery life+performance quality Apple managed to squeeze out of their CPU.

I wish someone other than Apple would produce Macbooks. It’s an awful company that produces great hardware for a competitive price, if you care about everything the Macbook has to offer. And to be honest, that’s not because Apple is such an amazing manufacturer; it’s because AMD and Intel are behind the curve (Qualcomm even more so), and the laptop manufacturers that try to compete with Apple always try to squeeze just that little bit of extra cost cutting out of their models so their shit doesn’t cost more, then preload their top of the line hardware with Windows 11 Home (the one with Candy Crush pinned to the start menu) and their stupid GAMER software suite that works on three models and stops being maintained after two updates.

skullgiver,

The Optiplex 710 supposedly supports Ubuntu 10 and 11, so booting Linux should be possible. May require installing without UEFI, though.

I know some distros have issues with old Intel GPUs, try booting with nomodeset and the other my-graphics-card-doesnt-work kernel parameters, and figure out what driver options you may need from there. You may need to boot a kernel older than Linux 6.3 for some VERY old GPU hardware to work.
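
For reference, the usual way to test this (standard GRUB workflow, but double-check your distro’s docs): press e at the boot menu and append nomodeset to the line starting with linux; if it helps, make it permanent:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
# then: sudo update-grub                              (Debian/Ubuntu)
#   or: sudo grub2-mkconfig -o /boot/grub2/grub.cfg   (Fedora-style)
```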

skullgiver,

Xorg and Wayland are graphics systems. You may have seen a command line during boot with tons of text scrolling past, that’s all you get without either Xorg or some alternative running.

Xorg implements X11, a protocol designed a long time ago, during the mainframe era of computing. It was the standard for all GUIs on Linux, BSD, and other Unix-likes for ages.

However, modern computers are nothing like the computers X11 was originally designed for, and X11 started showing its shortcomings. So, years ago, people working on Linux decided it was time to design a new system, one built around modern computers and operating systems. That new system is Wayland, though Ubuntu sported their own Mir for a while too.

Wayland was designed not to be a network protocol (though you can still run applications on remote computers if you wish). It also has a bunch of security benefits, like “not every application can read your key strokes, copy your password, or record your screen without you noticing”.

This new system doesn’t have the benefit of multiple decades of hard work behind it. As you may imagine, this broke applications for a while. There’s a compatibility layer called XWayland that can run X11 applications on Wayland, so most programs just work, but things like screen recording are severely limited under that system.

On the other hand, if you’re on a laptop, Linux can now finally reliably use touchpad gestures with more than two fingers through Wayland. You could write scripts and tools to fake them before, but they’re actually part of the UI now.

Wayland does have APIs for almost everything now, but not all applications have been updated to use those APIs. For example, Zoom didn’t wait for the screen recording API to be standardised, so it implemented screen sharing under Wayland as “take a thousand screenshots”. Some programs work by listening for keyboard hotkeys (basically processing every key and checking if it’s their special hotkey) but that’s no longer supported unless the program has focus.

There were also issues with drivers (well, almost exclusively Nvidia), but those have mostly been solved. It’s not for everyone yet, but there’s no reason not to try Wayland anymore, especially if you don’t have a full Linux setup already.

As with any big change to the Linux ecosystem (systemd, anyone?) there’s a lot of fighting between people who want the shiny, better new thing, and people who liked the way things were before. Both sides have good arguments. Big parties in the Linux world, like Gnome and KDE, are moving towards a Wayland-only desktop. At the moment you can run Gnome and KDE on either, but that’ll be harder in the future. Other GUIs, like XFCE, are heavily geared towards X11, and it may take years before you can even run those on Wayland. Some, like i3, have replacements (Sway) that do almost the same thing.

Interestingly, hardware vendors also seem to go with Wayland for their custom Linux projects. For example, the Tizen watches Samsung used to sell run Wayland. The Steam Deck runs Wayland in game mode, using a purpose built Wayland compositor as a GUI, but X11 for desktop mode.

In practice, you shouldn’t need to worry about things too much right now. Some programs will not work on Wayland, others will not work on X11. Some games run a few fps faster on Wayland, others run faster on X11, but the differences aren’t huge. If both Xorg and Wayland are supported in your distro, you can switch between the two using a button on the login screen.

As for Firefox: Firefox has had native Wayland support for a while. It was already capable of using all the Wayland APIs and protocols at least a year ago. However, by default, it ran through XWayland, to remain as compatible as possible, while Wayland was still being tested. Now, with the upcoming release, Firefox goes native.

For Xorg users, nothing will change. For Wayland users with touchpads or touch screens, gestures like pinch to zoom will be smoother and there will be more of them. The only difference is that you don’t need to stuff MOZ_ENABLE_WAYLAND=1 in a script somewhere to activate the new capabilities on Wayland.

skullgiver,

Framework has a guide for you. You’ll need to disable either power-profiles-daemon or TLP depending on what CPU you have.

As for hibernation: TL;DR: it’s a mess. Fedora doesn’t support it out of the box. You can make it work with some elbow grease.

However, you shouldn’t need hibernation if your laptop goes to sleep like it should. If you can’t get it to sleep right, or still really want hibernation, here are some pointers:

  • many Linux distros don’t consider hibernation to be a stable feature. Fedora’s default setup doesn’t even come with a swap partition, which makes hibernation impossible. You’ll need to allocate some swap space before you can hibernate your computer.
  • make sure your swap partition is encrypted if the rest of your laptop is encrypted as well. If you use a swap file, you can make this work, too, but it’ll be slightly more complicated
  • make sure your swap partition is big enough (at least RAM size + the amount of swap in use at the point of hibernation)
  • if you’re fine with disabling zram and using normal swap, this guide will show you how to hibernate a normal Linux system: www.ctrl.blog/entry/fedora-hibernate.html
  • if you don’t have a partition for swap and don’t want to create one, or if you want to keep using zram (compressed memory, enabled by default on Fedora, probably recommended to keep enabled), then this guide and its comments will tell you how to get a swap file to work (a rough command sketch follows this list). Make sure you read the update with more details too, and there’s also a comment further down specifically about Intel Framework laptops (you need to disable a certain Intel driver that breaks hibernation).
  • disable secure boot in your BIOS. Linux doesn’t support the security features that Windows uses to validate the state of the boot process (even with custom secure boot keys), so when you’re running in secure boot mode (and the kernel is in lockdown mode), the Linux kernel disables hibernation. Alternatively, there are guides that’ll show you how to patch that check out, but that involves compiling your own kernel and that’s not worth the effort IMO.
  • configure your laptop for suspend-then-hibernate for best performance. I believe hybrid-sleep will also work. The Github gist I linked has details
  • you will probably need to enter your password when resuming from hibernation. This is a security feature. You can configure your laptop to use the TPM to decrypt the disk, skipping the encryption password entirely, if you don’t mind thieves having the ability to access your data when they steal the laptop.
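
For the swap-file route on a plain unencrypted ext4 system, the rough shape of the commands is the following; this is a sketch from memory, so lean on the linked guides for encrypted or btrfs setups:

```
sudo fallocate -l 20G /swapfile   # at least RAM size
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Then point the kernel at it with boot parameters:
#   resume=UUID=<UUID of the filesystem holding /swapfile>
#   resume_offset=<first physical offset shown by: sudo filefrag -v /swapfile>
```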

You may be wondering why this is so complicated. A big reason is that Linux wants to be secure, but hibernation comes with unique security challenges. Linux also wants to be fast and efficient (by compressing RAM rather than writing it to disk), but that messes with the assumptions the hibernation system makes. Fedora doesn’t support hibernation out of the box, but they’re working on it, albeit not as fast as you might hope: pagure.io/fedora-workstation/issue/121

skullgiver,

Modern Fedora doesn’t enable swap by default and configures zram instead. Of course, you can’t hibernate to zram, so getting basic hibernation to work involves either disabling zram and configuring swap, or using callbacks to temporarily disable zram and enable swap right before suspending.

Neither is very beginner friendly, unfortunately.
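
For the curious, the callback approach usually means a small unit ordered before hibernation; a hypothetical sketch (unit name and swap path made up):

```
# /etc/systemd/system/swapon-before-hibernate.service
[Unit]
Description=Enable disk swap before writing the hibernation image
Before=systemd-hibernate.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/swapon /swapfile

[Install]
WantedBy=systemd-hibernate.service
```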

skullgiver,

Controversial, but when it comes to hibernating the system, it actually is.

Unfortunately Windows suffers from the s0ix power consumption bugs that’ll drain your battery faster than Linux does, so neither is a very good experience. The current state of modern PC suspend behaviour is just rather terrible.

Is Ubuntu deserving the hate? (lemmy.ml)

Long story short, I have a desktop with Fedora, lovely, fast, sleek and surprisingly reliable for a near rolling distro (it failed me only once back around Fedora 34 or something where it nuked Grub). Tried to install on a 2012 i7 MacBook Air… what a slog!!! Surprisingly Ubuntu runs very smooth on it. I have been bothering all...

skullgiver,

To give credit where it’s due: Mir was pretty neat, actually. It had features that modern Wayland still lacks or has only recently gained. Ubuntu got an X replacement up and running in record time, but the rest of the ecosystem stuck with Wayland, so they cancelled their solution.

And you know what? Snap does solve some issues in interesting ways that Flatpak doesn’t. Unfortunately, the experience using Snap is rather inferior (and that goddamn lowercase snap folder in my home directory isn’t helping), but on a technical level I’m inclined to give this one to Snap.

Developing and maintaining Ubuntu costs money, and unlike Red Hat, Canonical isn’t selling many support contracts. Their stupid Amazon scope and the focus on Snap are part of that: they just want to give businesses a reason to pay Canonical.

They’re trying very hard, but it just doesn’t seem to take off. Their latest move, pushing Ubuntu Pro to everyone, seems rather desperate. I think Ubuntu is collapsing and I think Canonical doesn’t know how to stop it. I don’t know about you, but I’ve never paid for an Ubuntu license and I don’t know anyone who does, either.

skullgiver,

You don’t have to use Snap (except for LXC, I think?). Flatpak isn’t enabled by default, but you can enable it and everything will work fine. Flatpak has Firefox and Chrome and all the other applications that Canonical foolishly moved from their apt repos to their Snap repos.

There are some frustrating things about Snaps (loading all of them at boot time rather than at runtime, for quicker app start but slower boot, for example, and that stupid snap folder that can’t be moved) but honestly I don’t really see what the fuss is about as an end user. Nobody sets up a purely Snap based system anyway.

The problem with Snap is an ideological one. If you don’t care who runs your software store, and you don’t care about having the ability to add more software stores than the default, you’ll be fine with Snap. If you’re ideologically driven towards Linux, you’ll probably dislike the way Snap is set up.

Like it or not, Ubuntu is still one of the best supported distros out there. If you want drivers from any manufacturer, you get to pick between drivers tested for Ubuntu or Fedora. Every other distro repackages those drivers using their own scripts and compatibility layers, because nobody over at Intel is going to spend company time specifically getting Garuda to work when nobody sells hardware with it preinstalled.

Software like Discord and VS Code having the “.deb, maybe .rpm, or you figure it out yourself” approach of official distribution is pretty standard, I’d say, for better or for worse. It also helps that a lot of entry level Linux questions and answers online are about Ubuntu. Askubuntu may not be as vast and up to date as the Arch wiki, but at least the askubuntu people aren’t going to tell you off for not knowing advanced Linux stuff.

There are upsides and downsides to any Linux distro. You’re not “supposed” to think anything; try it out, keep an open mind, and pick what works for you.

skullgiver,

Open ten tabs in Chrome. Maybe even twelve!

I don’t think you need 32GB of RAM. 16GB should be enough, and 8 will still do for light tasks (though modern apps and websites are starting to push that, which is terrible). Your OS uses any RAM you don’t use to cache files, which speeds up your system, reduces power consumption, and could save you some SSD wear by caching the writes.

If you haven’t already, you can mount a tmpfs over your browsers’ cache directories (a bunch of them live in ~/.cache or ~/.config). It used to really speed up browsing back in the HDD days. I doubt it’s still necessary, but hey, you’ve got plenty of RAM, right?
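
If you want to try it, the one-off version looks like this (the path is an example; each browser has its own cache directory):

```
sudo mount -t tmpfs -o size=512m,mode=700,uid=$(id -u) tmpfs ~/.cache/mozilla
# or permanently, via /etc/fstab:
# tmpfs  /home/you/.cache/mozilla  tmpfs  rw,size=512m,mode=700,uid=1000,gid=1000  0  0
```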

If you really don’t do anything but browsing, you could boot your entire OS into RAM and have a 0 SSD latency browsing experience.

You could also use the RAM to run a bunch of VMs or containers. I used to run a separate Pihole VM, for example; virtual machines are nice and isolated, so you don’t risk ruining your /etc directory with a billion different configured services. The big downside of running such stuff on your machine is that you quickly end up with a whole bunch of duplicates (I have like four versions of postgres running on a server somewhere because I’m lazy) but if you have RAM to spare, that doesn’t matter.

One container that may be worth looking at is Waydroid (or Anbox if you’re on X11) to run Android apps on your desktop. I find that a bunch of different services have web interfaces that just don’t work as well as their apps, and running those can be nice. How much of a difference this makes will depend on the services you use, of course.

Lastly: don’t underestimate the advantages of plenty of RAM when programming. It’ll depend on what language you use, but many compilers will generate a million tiny files that will all be written to disk and read back. SSDs are fast, but random reads are still nowhere close to RAM speed. Your OS will hide most of this overhead, but I definitely felt the difference going from 16GB to 32GB because of file system caching alone.

Is there any future for the GTK-based Desktop Environments? (ludditus.com)

This article was written in the spirit of bashing Gnome, yet some of its points seem valid. It explains the history of GTK 1 to 4 and the influence of Gnome on GTK. I’m not saying Gnome is bad here; instead, I find this an interesting read and I’m sharing it.

skullgiver,

I know I’m part of the minority in liking the Gnome 3+ designs, but with so many people lamenting the death of GTK+2, why don’t they fork the toolkit? It’s not as if you’ll break any compatibility by backporting fixes and extending the classic UI components.

Perhaps you’ll need to rename your project (except for the system libraries) to avoid trademark issues, but if all the developers came together, I’m sure you could write a drop-in replacement for the old GTK+2 libraries. Such a project may have some difficult tasks ahead of it (bringing Wayland support and fractional scaling, for example) but they can copy Gnome’s homework, they don’t need to invent everything from scratch.

skullgiver,

The struct returned by stat doesn’t, but statx contains the creation time as well. I believe ext4 already tracks the creation time even if stat can’t provide it.

The stat command on modern distros should get you this additional metadata, unless you use an FS that doesn’t track or expose it, of course.
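
As a quick demonstration from Rust (the standard library uses statx on modern Linux under the hood, so the btime is reachable from safe code):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let meta = fs::metadata("/etc/hostname")?; // any existing path
    match meta.created() {
        Ok(btime) => println!("created: {btime:?}"),
        // e.g. the filesystem doesn't track btime, or the kernel is too old
        Err(e) => println!("creation time unavailable: {e}"),
    }
    Ok(())
}
```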
