Comments


IAm_A_Complete_Idiot, (edited) to selfhosted in External email server vs port forwarding/vpn

It’s not complicated until your reputation drops for a multitude of reasons, many not even directly your fault.

Neighboring bad-acting IPs, too many automated emails sent out while you were testing, a compromised account - pretty much any number of things can mean everyone on your domain is hosed. And email is critical.

IAm_A_Complete_Idiot, (edited) to linux in 7 Ways to Tweak Sudo Command in Linux

True, but that doesn’t necessarily matter if I can compromise the privileged app instead. I could replace it, modify it on disk, or do any number of other things to get myself a hook into a privileged position.

Just injecting code into some function call that launches malware.exe would do the trick. Of course, signature checks and the like can help here - but those aren’t a given. There are any number of ways you can elevate yourself on a system built around per-user security if your threat model is malicious processes. Linux (and Windows) will stop users from accessing each other’s crap by default, but not processes running as the same user.

Or: supply chain attacks. Now your official app without any modifications is malicious.

IAm_A_Complete_Idiot, to linux in 7 Ways to Tweak Sudo Command in Linux

Yep! You can also get pretty far even without containers. At the end of the day, containers are just sandboxing using namespaces; systemd can expose that pretty trivially for services, and tools like Bubblewrap / Flatpak let you do it for desktop apps. In an ideal world every package would only use the namespaces it needs, and stuff like this would largely not be a concern.
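As a sketch of what that looks like for a service: a handful of systemd’s sandboxing directives in a made-up unit file (the service name and binary path are hypothetical, the directives are real):

```ini
# Hypothetical unit for an imaginary "exampled" daemon.
[Service]
ExecStart=/usr/bin/exampled
# No way to gain new privileges via setuid binaries etc.:
NoNewPrivileges=yes
# Private /tmp and no view of any home directory:
PrivateTmp=yes
ProtectHome=yes
# Mount the rest of the filesystem read-only for this service:
ProtectSystem=strict
# No poking at kernel tunables or loading modules:
ProtectKernelTunables=yes
ProtectKernelModules=yes
```

Each of these maps onto the same namespace/mount machinery containers use, just scoped to one service.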

IAm_A_Complete_Idiot, (edited) to linux in 7 Ways to Tweak Sudo Command in Linux

The idea is that malware you installed would presumably run under your user account and have that access. You could explicitly give it a different UID or even containerize it to counteract that, but by default a process can access everything its UID can, which isn’t great. And even today, that’s how users run a lot of their processes.

Windows isn’t much better here, though.
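The same-UID point is easy to demonstrate in a couple of lines of shell - mode 600 protects a file from other *users*, not from other processes running as you (paths here are throwaway):

```shell
# Any two processes running as the same UID can read each other's files,
# regardless of which application created them.
tmp=$(mktemp -d)
echo "s3cret" > "$tmp/credentials"   # written by "app A"
chmod 600 "$tmp/credentials"         # only the owning *user* may read it
# "app B" (this shell) runs as the same UID, so mode 600 doesn't stop it:
cat "$tmp/credentials"               # prints: s3cret
rm -r "$tmp"
```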

IAm_A_Complete_Idiot, to linux in Am I wrong to assume that docker is perfect for single board computers that relies on low life expectancy drives (microsd)?

Containers don’t typically have an init; your process is the init (PID 1) - so no extra processes are started beyond the ones you care about.

IAm_A_Complete_Idiot, to linux in 7 Ways to Tweak Sudo Command in Linux

The proper way to handle issues like these is process-level permissions (i.e. capability systems) instead of user-level ones. Linux cgroups, namespaces, etc. are already moving that way, and in effect that’s the direction Windows is trying to head too. (Windows has its own form of containerization called AppContainers, which UWP apps use. Windows also has its own capability system.)
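On the Linux side you can already inspect a process’s capability sets directly; a quick, Linux-only way to peek at your own:

```shell
# Print this shell's capability sets from /proc (Linux-specific).
# CapEff is what the process can actually use right now; an unprivileged
# shell typically shows all zeros there, while root shows a full mask.
grep '^Cap' /proc/self/status
```

Tools like `capsh` can decode those hex masks into named capabilities.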

IAm_A_Complete_Idiot, to linux in The cost of maintaining Xorg

As a third party, my understanding is that both the implementation and the protocol are really hard, if not next to impossible, to iterate on. Modern hardware doesn’t work the way it did when X was designed, and X assumes a lot of things that made sense in the 90s but don’t now. Despite that, we cram a square peg into the round hole and it mostly works - and as the peg becomes a worse fit we just cram it harder. At this point no one wants to keep working on X.

And I know your point is that it works and we don’t need to, but we do need to. New hardware needs to support X - the Asahi folks, at least, found bugs in the X implementation that only show up on their hardware, and no one wants to fix them. Wayland and X are vastly different, because X doesn’t make sense in the modern day. It breaks things, and a lot of old assumptions aren’t true anymore. That sucks, especially for app devs who relied on those assumptions. But keeping X around isn’t the solution - iterating on Wayland is. Adding protocols to different parts of the stack with proper permission models, moving different pieces of X to different parts of the stack, etc. is a viable long-term strategy. Even if it is painful.

IAm_A_Complete_Idiot, to linux in Navigating around in your shell

manpages aren’t guides, though - they don’t help much in learning new tools, especially complicated ones. They’re comprehensive references; some literally span hundreds of pages. Useful when you know what you’re doing and what you’re looking for, not great for learning new tools.

IAm_A_Complete_Idiot, (edited) to linux in Fedora 40 Change Proposal: Systemd Security Hardening

This is good for precisely the single-user case - potentially malicious services on your system can’t view things they otherwise could, or access resources they don’t need. Even if they run under the same user.
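One concrete way a unit can opt out of "your user" entirely - a hypothetical drop-in (the service name is made up, the directives are real systemd ones):

```ini
# e.g. /etc/systemd/system/exampled.service.d/harden.conf (hypothetical path)
[Service]
# Run under a transient, unprivileged UID allocated just for this service,
# so even on a single-user machine it isn't running as *you*:
DynamicUser=yes
# Hide every home directory and give the service its own private /tmp:
ProtectHome=yes
PrivateTmp=yes
```

`systemd-analyze security <unit>` will score how locked down a given unit is, which is handy for checking what a proposal like this actually changes.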

IAm_A_Complete_Idiot, (edited) to linux in What happens when Linus dies/retires?

Linus has stepped away from kernel development before, and probably will again. Life continues on.

IAm_A_Complete_Idiot, to linux in Linus Torvalds Announces First Linux Kernel 6.7 Release Candidate

Second person excited for bcachefs here - I’m planning on swapping over as soon as it supports scrubbing.

IAm_A_Complete_Idiot, (edited) to linux in Firefox Development Is Moving From Mercurial To Git

Right, but squashed commits don’t scale to large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And when you do have a large PR, commit-by-commit review makes a lot of sense for keeping your history clean.

Large features that are relatively isolated from the rest of the codebase make perfect sense to do in a separate branch before merging - you don’t merge in half-broken code. Squashing a large feature into one commit throws away any useful history that branch had.
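As a tiny illustration of what squashing costs, here’s a throwaway repo with a 3-commit feature branch, merged both ways (all names and paths are made up for the sketch):

```shell
# Compare a squash merge with a regular merge of the same 3-commit branch.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > file; git add file; git commit -qm "base"

git checkout -qb feature
for i in 1 2 3; do
  echo "step $i" >> file; git add file; git commit -qm "feature: step $i"
done

# Squash merge: the branch's three commits collapse into one; history is gone.
git checkout -q main
git merge -q --squash feature >/dev/null
git commit -qm "feature (squashed)"
echo "after squash: $(git rev-list --count HEAD) commits"  # 2 (base + squash)

# Regular merge of the same branch: all three commits survive in history.
git checkout -qb alt HEAD~1
git merge -q --no-ff -m "merge feature" feature
echo "after merge: $(git rev-list --count HEAD) commits"   # 5 (base + 3 + merge)
```

After the squash, `git log` on main can no longer tell you which step introduced what - which is exactly the history a commit-by-commit review would have used.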

IAm_A_Complete_Idiot, (edited) to linux in Firefox Development Is Moving From Mercurial To Git

Yeah, but Phabricator and Gerrit are entirely separate workflows from GitHub, and a lot of people prefer those workflows because they encourage better histories and reviews. They help you get rid of the “fixed typos” type of commit while still letting you make larger PRs.

GitHub obviously does let you keep a clean git history, but the code review workflow in GH just doesn’t encourage reviewing commits.

IAm_A_Complete_Idiot, to linux in Firefox Development Is Moving From Mercurial To Git

How much of that is what GitHub encourages and how much is what users prefer? Plenty of users seem to enjoy Phabricator / Gerrit for code review in practice, precisely because of their workflows.

IAm_A_Complete_Idiot, to linux in Firefox Development Is Moving From Mercurial To Git

Also, GitHub PRs, at least to me, feel like they encourage reviewing the total diff of the entire PR rather than each commit. I don’t want a slog of commits that don’t add any value - it just makes things like reverts more annoying. Tools like Gerrit and Phabricator enforce reviews of individual commits / changes / whatever you want to call them, not branch diffs.
