@Max_P@lemmy.max-p.me avatar

Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them 🏳️‍🌈


Max_P, (edited )

It would be if it wasn’t for NVIDIA, as usual. On Intel/AMD, you assign the seats, the displays light up and you’re good to go, pretty much works out of the box, especially on Wayland.

But for NVIDIA yeah maybe a VM is less pain since NVIDIA works well with VFIO.

Max_P,

Internally it’s even stored as a vote of either +1 or -1, so sending an undislike of a like probably also results in the vote’s removal. Lemmy just sums up all the votes and you have the score.

A like and a dislike activity are also contradictory, so even if you don’t unlike something, if you send a dislike it replaces the like as well.
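A minimal sketch of that vote model (names and data are hypothetical, not Lemmy's actual code): one stored vote per user per post, a contradictory activity replaces it, and an Undo removes it.

```python
# One vote per (user, post); a Like replaces a Dislike and vice versa.
votes = {}  # (user, post_id) -> +1 or -1

def apply(user, post_id, activity):
    key = (user, post_id)
    if activity == "Like":
        votes[key] = 1
    elif activity == "Dislike":
        votes[key] = -1
    elif activity in ("Undo/Like", "Undo/Dislike"):
        # Either undo just removes the stored vote.
        votes.pop(key, None)

def score(post_id):
    # The score is simply the sum of all stored votes.
    return sum(v for (u, p), v in votes.items() if p == post_id)

apply("alice", 1, "Like")
apply("bob", 1, "Dislike")
apply("bob", 1, "Like")  # contradictory activity replaces the dislike
print(score(1))  # 2
```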

Max_P,

Moderation does federate out, but only from the originating instance, the one that owns the post in question.

If someone posts spam on lemmy.ca and lemmy.world deletes it, it only deletes on lemmy.world. If a mod or admin on lemmy.ca deletes it however, it federates and everyone deletes it as a result (unless modified to ignore deletions, but by default Lemmy will accept it).

There are some interoperability problems with some software, notably Kbin, where deletions don’t federate to Lemmy correctly, so those do need to be moderated by every instance. But between Lemmy instances it does federate.

Max_P,

I think the best way to visualize it is in terms of who owns what and who has the authority to perform moderator actions.

  • As a user, you own the post, so you’re allowed to delete it no matter what. That always federates.
  • An admin always has full rights on what happens on their instance, because they own the server. The authority ends at their instance, so it may not federate out unless authorized otherwise.
  • An admin can nominate any user from the same instance to moderate any of its communities, local or remote. That authority also ends at that instance. In theory it should work for remote users too, but then it’d be hard to be from lemmy.ml and moderate lemmy.world’s view of a community on lemmy.ca.
  • The instance that owns the community can also do whatever they want even if the post originated from elsewhere, because they own the community. That federates out.
  • The instance that owns the community can nominate anyone from any instance as moderator. They’re authorized to perform mod actions on behalf of the instance that owns the community, therefore it will federate out as well.

From those you can derive what would happen under any scenario involving any combinations of instances.
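Those rules can be sketched as a tiny hypothetical function (the names are mine, not Lemmy’s actual code): an action federates out when the actor owns the post, or when their authority derives from the instance that owns the community.

```python
# Hypothetical sketch of the ownership rules above:
# does a removal federate out to other instances?
def removal_federates(is_author, actor_instance, community_instance,
                      nominated_by_community_instance=False):
    if is_author:
        return True   # authors always own their posts
    if actor_instance == community_instance:
        return True   # the community's home instance owns the community
    if nominated_by_community_instance:
        return True   # remote mods act on the community instance's behalf
    return False      # a foreign admin's removal stays local

# lemmy.world admin removes a post in a lemmy.ca community: local only.
print(removal_federates(False, "lemmy.world", "lemmy.ca"))  # False
```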

Max_P,

(a) Yes. Instance admins have the ultimate say in what’s on their server. They can delete posts, delete entire communities, and ban or delete remote users. At least they had the decency to notify you!

Since lemmy.ca owns the post, lemmy.world can’t federate out the removal, so it’s only on lemmy.world.

(b) You have to go appeal to lemmy.world. Each instance has its own independent appeal process.

That’s the beauty of the fediverse: instances can all have their rules to tailor the experience to their users, and it doesn’t have to affect the entire fediverse. Other instances linked to lemmy.ca can still see and interact with your post just fine, just not lemmy.world.

Max_P,

You may disagree with it and may even be right, I didn’t bother watching all those videos. But the thing is, it’s always a potential liability for admins, and we’re at the mercy of what the law says and what a potential judge or jury would rule if brought to court.

And we all know how that goes when underage people are involved: everyone goes “but the children!”. Therefore, admins side with caution, because nobody wants to deal with legal trouble if they don’t have to. Just blur it and make everyone happy.

Plus, in the current AI landscape, the mere availability of nude children imagery even if it’s not sexually suggestive at all means someone can alter it to become so. People have already been arrested for that.

Nothing to do with people being too prude to see naked children. It’s about consent and what nasty people will inevitably do with it. Does that girl really want videos of her naked all over the porn sites even through heroic actions? Probably not.

That’s a very weird hill to blow alts on.

Max_P,

Kind of but also not really.

Docker is one kind of container, and a container is really just a particular combination of Linux namespaces.

It’s possible to run them as if they were a virtual machine with LXC, LXD, systemd-nspawn. Those run an init system and have a whole Linux stack of their own running inside.

Docker/OCI take a different approach: we don’t really care about the whole operating system, we just want apps to run in a predictable environment. So while the container does contain a good chunk of a regular Linux installation, it’s there so that the application has all the libraries it expects. Usually that’s network software listening on a specified port. Basically, “works on my machine” becomes “here’s my whole machine with the app already configured on it”.

And then we were like, well, this is nice, but what if we have multiple things that need to talk to each other to form a bigger application/system? And that’s where docker-compose and Kubernetes pods come in. They describe a set of containers that form a system as a single unit, and link them up together. In the case of Kubernetes, it’ll even potentially run many, many copies of your pod across multiple servers.

The last one is usually how dev environments go: one container has all your JS tooling (npm, pnpm, yarn, bun, deno, or even all of them). That’s all it does, so you can’t possibly have a Python library that conflicts or whatever. And you can’t accidentally depend on tools you happen to have installed on your machine, because then the container won’t have them and it won’t work; you’re forced to add them to the container. Then that’s used to build and run your code, and now you need a database. You add a MongoDB container to your compose file, and now your app and your database are managed together and you can even reach the other containers by their names! Now you need a web server to run it in a browser? Add NGINX.

All isolated, so you can’t be in a situation where one project needs Node 16 and an old version of Mongo, but another one needs 20 and a newer version. You don’t care: each project has a Mongo container with the exact version required, no messing around.
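The dev-environment setup described above can be sketched as a docker-compose.yml (image tags, service names and ports here are illustrative, not a recommendation):

```yaml
# Hypothetical compose file: Node tooling, a pinned Mongo, NGINX in front.
services:
  app:
    image: node:16          # the exact Node this project needs
    working_dir: /app
    volumes:
      - ./:/app             # your source code, mounted in
    command: npm run dev
  mongo:
    image: mongo:4.4        # pinned independently of any other project
    volumes:
      - mongo-data:/data/db
  web:
    image: nginx:alpine
    ports:
      - "8080:80"           # the only port exposed to the host
volumes:
  mongo-data:
```

Inside the compose network, `app` can reach the database simply as `mongo:27017`, which is the “access the other containers by their name” part.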

Typically you don’t want to use Docker as a VPS though. You certainly can, but the overlay filesystems will become inefficient and it will drift very far from the base image. LXC and nspawn are better tools for that and don’t use image stacking or anything like that. Just a good ol’ folder.

That’s just some applications of namespaces. All of process, network, time, users/groups, and filesystems/mounts can be independently managed, so several containers can share the same network namespace while being in different mount namespaces.

And that’s how Docker, LXC, nspawn, Flatpak and Snaps are kinda all mostly the same thing under the hood, and why it’s a very blurry line between which ones you consider isolation layers, bundled dependencies, containers, or virtual machines. There’s an infinite number of ways to set up the namespaces, ranging from just seeing /tmp as your own personal /tmp to basically a whole VM.

Max_P,

We have to define what installing software even means. If you install a Flatpak, it basically does the same thing as Docker but somewhat differently. Snaps are similar.

“Installing” software generally means any way that gets the software on your computer semi-permanently and run it. You still end up with its files unpacked somewhere, the main difference with Docker is it ships with the whole runtime environment in the form of a copy of a distro’s userspace.

But fair enough, sometimes you do want to run things directly. Just pointing out it’s not a bad answer, just not the one you wanted due to missing intent from your OP. Some things are so finicky and annoying to get running on the “wrong” distro that Docker is the only sensible way to install them. I run the Unifi controller in a container for example, because I just don’t want to deal with Java versions and MongoDB versions. It comes with everything it needs, and I don’t need to needlessly keep Java 8 around on my main system, potentially breaking things that need a newer version.

Max_P,

They mostly don’t exist yet apart from this PR.

On Vista and up, there’s only the Display Only Driver (DOD), which gets resolutions and auto-resizing to work, but it has no graphical acceleration in itself.

Max_P,

Then just don’t start a community on a small one.

I’m a minuscule instance. That’s fine. I like that I have control over it, how it’s maintained and updated. If I want to convert it to Mbin because I like it more, I can. I know for sure it’s going to live at least as long as I’m interested in the fediverse. Nobody can take it away from me.

Big instances are expensive to run, and they’re not exactly immune to shutting down either. When a big instance goes poof, the impact is much bigger than a small one with few communities.

Max_P,

There’s historically been some privilege escalations, such as cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3…

But at the same time, they do offer increased security when they work correctly. It’s like saying we shouldn’t use virtualization anymore because historically some virtual devices have been exploitable in a way that you could escape the VM. Or lately, Spectre/Meltdown. Or a bit of an older one, Rowhammer.

Sometimes, security measures open a hole while closing many others. That’s how software works unfortunately, especially in something as complex as the Linux kernel.

Using namespaces and keeping your system up to date is the best you can do as a user. Or maybe add a layer of VM. But no solution is foolproof, if you really need that much security use multiple devices, ideally airgapped ones whenever possible.

Max_P,

Isn’t that kind of AppImage’s whole thing, to behave like Mac apps that you just double click on regardless of where they are, and not have a package manager?

I’d go for the Flatpak if you want it to be managed and updated.

We went from distro packages to Flatpak to bare files, and are circling back to reinventing the package manager…

What dock do you use in Wayland?

I moved over to Wayland full time a couple of weeks ago (using KDE on Arch). I have finally rid myself of any X11 hangups apart from one. Latte will NOT respect my primary screen when changing monitor arrangement (ie. turning my projector on and off) and seems to randomly pick a screen to call the primary....

Max_P,

Maybe you can set up a KWin window rule to force Latte to be where you want it to be?

Not that Plasma panels work that much better than Latte in that regard, they still sometimes shift monitors just because something is plugged in (not even enabled, just plugged in!)

I really wish we could pin things to the exact monitor via its physical port location or serial number or something from EDID.

Max_P,

And a fifth, complete nobody instance! It’s almost like you can just slap Lemmy on a server and you can immediately join in the fun everywhere!

Max_P, (edited )

I don’t know, it’s going to depend a lot on usage pattern and cache hit ratio. It will probably do a lot more writes than normal to the cache drive as it evicts older stuff and replaces it. Everything has tradeoffs in the end.

Another big tradeoff: depending on the cache mode (i.e. writeback), if the SSD dies you can lose a fair bit of data. Not as catastrophic as RAID0 would be, but pretty bad. And you probably want writeback for the fast writes.

Thus I had 2 SSDs and 2 HDDs in RAID1, with the SSDs caching the HDDs. But it turns out my SSDs are kinda crap (they’re about as fast as the HDDs for sequential read/writes) and I didn’t see as much benefit as I hoped so now they’re independent ZFS pools.

Ubuntu 24.04 LTS Committing Fully To Netplan For Network Configuration (www.phoronix.com)

The Canonical-developed Netplan has served for Linux network configuration on Ubuntu Server and Cloud versions for years. With the recent Ubuntu 23.10 release, Netplan is now being used by default on the desktop. Canonical is committing to fully leveraging Netplan for network configuration with the upcoming Ubuntu 24.04 LTS...

Max_P,

If you’re just using DHCP, you won’t. What Netplan does is take a YAML input file and render it as a systemd-networkd or NetworkManager configuration file. It’s a very quick and easy way to configure your network, and it even has a try command that auto-reverts in case you get kicked out of your SSH session.
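For the sake of illustration, a minimal static-address Netplan file looks roughly like this (interface name and addresses are made up):

```yaml
# Hypothetical /etc/netplan/01-static.yaml, rendered to systemd-networkd
# (the default backend on servers).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1]
```

You’d apply it with `netplan try`, which rolls the change back automatically if you don’t confirm within the timeout.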

It seems like what they’re doing for the desktop is hacking up NetworkManager so that it saves back its config as Netplan configs instead of regular NetworkManager configs. That’s the part I’m confused about, because NetworkManager is huge and Netplan doesn’t support close to every option. Their featuresets are wildly different. And last time I checked, the NetworkManager renderer was the least polished one, with the systemd-networkd one being close to a 1:1 match and more reliable.

It made a lot more sense when it was one way only. Two way sounds like an absolute mess.

Max_P,

Kind of but not really? You’d have to federate out every vote individually. There’s no upvotes totals anywhere, there’s a vote table that contains who voted up/down on what, and it’s counted as needed. So if you want to send out 1000 votes, you need 1000 valid users and also send 1000 different activities to at least one instance.
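Each of those activities is a standalone ActivityPub object; the rough shape of a single vote looks something like this (the ids and domains here are hypothetical, and real payloads carry more fields):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example-instance.tld/activities/like/abc123",
  "type": "Like",
  "actor": "https://example-instance.tld/u/some_user",
  "object": "https://lemmy.ca/post/123456"
}
```

So faking 1000 votes means fabricating and delivering 1000 of these, each tied to a distinct actor.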

You can make it display 100000 votes on your own instance if you want, but it’s not going to alter the rating on other instances because they run their own tally.

If you really want this to work long term, you need a credible looking instance with credible looking users that are ideally actually subscribed to the target community, and credible activity patterns too. Otherwise, the community can detect what you’re doing and defederate you and purge all the activities from your instance, and also revert all those votes as a side effect.

Remember, all votes are individual activities, and all votes are replicated individually to every instance. On Kbin, you can even see all the votes right from the UI, they don’t even hide it! You can count them yourself if you want. So anyone with the dataset can analyze it and sound the alarm. And each instance can potentially have its own algorithm for that, so instead of having just one target to game, like Reddit and a subreddit, you have hundreds of instances to fool. There’s so many signals I could use to fight spam: instance age, instance user growth, the frequency and timing of the votes, are the users seemingly active 24/7, what other communities those users post into, what are they voting for, do they all vote in agreement with each other, and on and on.

So, you technically can manipulate votes but it takes a lot of effort and care to make it as hard as possible to detect in practice. We play the same cat and mouse game as Reddit, but distributed and with many more eyes on it.

Max_P,

People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (then BTRFS picks up a bad checksum and people scold it for being a bad filesystem)

ZFS made me discover that my onboard SATA controller sucks and returns bad data occasionally under heavy load. My computer cannot complete a ZFS scrub without finding errors, every single time.

Ext4, bcache and mdadm never complained about it, ever. There was never any sign that something was wrong at all.

100% worth it if you care about your data. I can definitely feel the hit on my NVMe but it’s also doing so much more.

Max_P,
  • Flip phone
  • HTC Legend
  • Galaxy Nexus
  • HTC One M8
  • Nexus 5
  • Alcatel OneTouch Idol 3 (boy that one sucked)
  • HTC One M8 (same device, just finally got S-OFF on it to use it with my carrier despite “incompatibility”)
  • Galaxy S7
  • OnePlus 8T
Max_P,

Because then you need to take care everywhere to decode it as needed and also make sure you never double-encode it.

For example, do other servers receive it pre-encoded? What if the remote instance doesn’t do that, how do you ensure what other instances send you is already encoded correctly? Do you just encode whatever you receive, at risk of double encoding it? And generally, what about use cases where you don’t need it, like mobile apps?

Data should be transformed where it needs it, otherwise you always add risks of messing it up, which is exactly what we’re seeing. That encoding is reversible, but then it’s hard to know how many times it may have been encoded. For example, if I type & which is already an entity, do you detect that and decode it even though I never intended to because I’m posting an HTML snippet?

Right now it’s so broken that if you edit a post, you get an editor… with escaped HTML entities. What happens if you save your post after that? It’s double encoded! Now everyone and every app has to make sure to decode HTML entities and it leads to more bugs.
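The double-encoding failure mode is easy to demonstrate with Python’s stdlib html module (the post body here is made up):

```python
import html

body = "npm & yarn"
once = html.escape(body)    # what the server stores: "npm &amp; yarn"
twice = html.escape(once)   # what a resave produces: "npm &amp;amp; yarn"
print(twice)

# Decoding once no longer recovers the original text:
print(html.unescape(twice))  # "npm &amp; yarn"
```

This is exactly why the number of encoding passes has to be known: a consumer that unescapes once still shows garbage for double-encoded posts.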

There is exactly one place where it needs to encode, and that’s in web clients, more precisely, when it’s being displayed as HTML. That’s where it should be encoded. Mobile apps don’t care they don’t even render HTML to begin with. Bots and most things using the API don’t care. They shouldn’t have to care because it may be rendered as HTML somewhere. It just creates more bugs and more work for pretty much everyone involved. It sucks.

Now we have an even worse problem: we don’t know which posts are encoded which way, so once 0.19 rolls out and there are version mismatches, it’s going to be a shitshow and may very well lead to another XSS incident.

Max_P,

It still leads to unsolvable problems, like: what is expected when two instances federate content with each other? What if you use a web app to use a third party instance and it spits out unsanitized data?

If you assume it’s part of the API contract, then an evil instance can send you unescaped content and you got an exploit. If you escape it you’ll double escape it from well behaved instances. This applies to apps too: now if Voyager for example starts expecting pre-sanitized data from the API, and it makes an API call to an evil instance that doesn’t? Bam, you’ve got yourself potential XSS. There’s nothing they can do to prevent it. Either it’s inherently unsafe, or safe but will double-escape.

You end up making more vulnerabilities through edge cases than you solve by doing that. Now all an attacker needs to do is find a way to trick you into thinking they have sanitized data when it’s not.

The only safe transport for user data is raw. You can never assume any user/remote input is pre-sanitized. Apps, even web ones, shouldn’t assume the data is sanitized, they should sanitize it themselves because only then you can guarantee that it will come out correctly, and safely.

This would only work if you own both the server and the UI that serves it. It immediately falls apart when you don’t control the entire pipeline from submission to display, and on the fediverse with third party clients and apps and instances, you inherently can’t trust anything.

Max_P,

There will be loss in the process so you should go a little above. You also need to account for the efficiency curve of your power supply: is it most efficient at 80% load? 90% load? Can it handle 120% momentarily in case of a spike?
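As a rough back-of-the-envelope (the load, efficiency and margin figures here are made-up examples, not specs for any particular amp):

```python
# Headroom estimate: supply losses plus margin for momentary spikes.
load_w = 100.0        # hypothetical amplifier draw in watts
efficiency = 0.85     # assumed supply efficiency at that load
margin = 1.2          # 20% margin for transients

required_w = load_w / efficiency * margin
print(round(required_w))  # 141
```

So a nominal 100 W load would suggest shopping for roughly a 150 W supply rather than one rated exactly at the draw.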

CV power supplies are the standard: constant voltage. It outputs, say, 12V, and trips on overcurrent. A CC supply instead limits current to, say, 20A, by dynamically adjusting the output voltage to hit that target. That’s a lot less common and usually used for battery charging or testing/troubleshooting. So, I guess, don’t plug it into a battery charger.

It should come with specs as to what input it can take. Follow the recommendations. If it says DC give it DC unless you’re absolutely sure of the circuit in there. The presence of a rectifier and caps doesn’t tell you much given it’s an amplifier, it could be part of the amp circuit for the MOSFETs and not its power supply.

Max_P,

Apart from Debian, I guess Alpine. It's quite popular in containers for its small size. Even Arch will be much bigger in that case because the packages are much less granular and install development libraries and headers for about everything.

Max_P,

First step would probably be to decouple healthcare from employment, so people realize how expensive their health plans are and how much they pay for stuff most people don't end up needing. Pretty sure for most people it's more expensive than their single yearly checkup would be out of pocket.

Then, make state-wide and state-owned insurance plans that are capped in profits, so the rates have to match the true cost of things.

Let it simmer for a bit, get people to get used to the idea that the government provided service is actually good and cheaper for once.

Then make it mandatory for every state resident to be covered by it.

The big problem with universal healthcare in the US is the strong individualistic mindset, those that go "but I don't want to pay for other people's hospital bills". Ease all those people that think they'll suddenly be paying way more to subsidize other people's health care into realizing it ends up cheaper because the costs are amortized over way more people. It needs to be spun up as a benefit to them, they're getting a better deal on their health insurance. Because they simply don't care about other people's problems.

One thing that struck me living in the US is just how much distrust there is for anything government operated, even though it's usually the companies they love so much that nickel and dime them. Although seeing how the politics are going right now, I kind of understand that sentiment. And pretty much every company does try to squeeze you out of your money, which makes people want to screw the companies over. Land of the fees.

Max_P,

My network randomly drops. A restart fixes but I can't even download Cyberpunk with my 1GB connection before it crashes. Klogs showed something about the network manager successfully shutting down but I can't find much else.

Share the output of sudo dmesg as well as sudo journalctl -u NetworkManager | cat. The first is the kernel logs about what's going on with your connection, and the second one is from the utility that manages networking on most systems (there are alternatives, but pretty sure Manjaro uses NM). It should give us more info as to the reason for the disconnections.

No Radeon software. I sometimes need to record clips/ stream so relive is nice but the biggest problem is my second 1080p monitor I Super Resolution to fit more programs on it. I can't find a way to replicate that functionality. I also do not know how to control Radeon anti-lag, chill, Smart Memory Access, etc.

Most of these things are more deeply integrated on Linux, so you don't need to worry about them for the most part. Some of them are also buzzwords for marketing purposes for features that really should be default on, which on Linux, when it's reasonable, do default to on. For example, you don't turn Smart Memory Access on: if it can use it, it will use it. Same with VRR, at least on Wayland: just on by default on KDE.

  • ReLive: you can use any screen recorder that will work on any GPU. Right now with the Wayland transition it's a bit weird and OBS is the better choice there, but on an Xorg session you can just use something like Simple Screen Recorder. On KDE, Spectacle, the default screenshot utility also has the ability to record short video clips but it can be a little buggy.
  • Super Resolution: just set the monitor's scaling to less than 100% in the display settings. It's technically probably better than Super Resolution for apps that support <100% scaling, because instead of making a fake 4K display for example, it'll still render everything at 1080p but cause apps to render smaller, achieving the same result with the potential of remaining pixel perfect. It won't be doing any AI scaling though, so YMMV.
  • Anti-lag: it's kind of a hack, and on Linux we're trying to get things right for the graphics stack with Wayland. But if you're running Wayland, KWin is already doing what it can to reduce lag on the desktop, and individual applications have to implement similar methods if they want to. Have you run into specific things where it's noticeable? Linux is generally pretty good when it comes to input lag already.
  • Chill: you can run games in Valve's gamescope wrapper to limit framerate. That's exactly how they do it on the Steam Deck. You can also use CoreCtrl to underclock the GPU.
  • Smart Memory Access: it's just marketing for Resizable BAR, and it's on by default. You can check with sudo dmesg | grep BAR=, if it's greater than 256M and equal to your GPU's memory size, it's working.

[    7.139260] [drm] Detected VRAM RAM=8176M, BAR=8192M
[    7.576782] [drm] Detected VRAM RAM=4096M, BAR=4096M

HDR controls. Nothing in the display settings so I'm lost

Yeah that one's still WIP unfortunately. It's technically possible on Xorg but you have to run everything HDR all the time and things break. It's coming along fairly well!

Alternative Software I haven't spent a lot of time looking but things like wallpaper engine, rainmeter, powertoys.

  • Wallpaper Engine -> KDE's desktop backgrounds have a lot of options to do similar stuff including animated wallpapers. Go to change your wallpaper, there's a button to download new modules and new backgrounds. For example: store.kde.org/p/1413010
  • rainmeter -> Conky, or KDE's desktop widgets. Right click on your desktop, add graphical component.
  • powertoys -> A lot of those have built-in and better equivalents. Fancy zones: we've had that as standard for a good decade here. You can also fairly easily make your own or use other people's KWin scripts, which lets you manipulate the desktop however you want. Here's some examples: store.kde.org/browse?cat=210&ord=latest

You can even download desktop effects, if you like your windows to burn down or have a glitch effect or whatever: store.kde.org/browse?cat=209&ord=latest


It takes some time to adjust, but welcome aboard! Depending on how much you customize, you may find it difficult to go back to Windows!
