
Atemu

@Atemu@lemmy.ml

Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


Atemu,

One “hammer” mitigation for most threats you could conceivably face when self-hosting is to never expose your services to the internet in the first place, using a firewall. “Securing” your services against a small circle of guests/friends/family members in your home network is a lot simpler than securing them against the entire world.
If you need to access your services remotely, there are ways to achieve that without permanently opening a single port to the internet, such as Tailscale or ZeroTier.

Otherwise, commonly used tools in self-hosting such as Docker or VMs usually offer quite decent separation even if a service is compromised.

Nothing replaces good security hygiene though. Keep your stuff up-to-date. Use secure methods of authentication such as hard-to-guess passwords or better. Make frequent backups (3-2-1). The usual.

Atemu,

Yeah, I’ve noticed the PayPal issue as well.

Atemu,

In my case I have a number of sockets from Spotify and Steam listening on 0.0.0.0. I would assume that these are only available to connections from the LAN?

That’s exactly the kind of thing I meant :)

These are likely for things like in-house streaming, LAN game downloads and remote music playing, so you may even want to consider explicitly allowing them through the firewall. They are, however, also potential security holes in applications running under your user that you have largely no control over.
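
If you want to check for yourself what is bound to all interfaces, here’s a minimal sketch using Python’s third-party psutil module (assuming it’s installed; ss -tlnp gives you much the same information):

    # sketch: list TCP sockets listening on all interfaces (0.0.0.0 / ::)
    # needs the third-party psutil package; run with enough privileges,
    # otherwise some process names may not be resolvable
    import psutil

    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.ip in ("0.0.0.0", "::"):
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
            print(f"{name:<20} {conn.laddr.ip}:{conn.laddr.port}")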

I'm an idiot (arm)

EDIT: Putting this at the top because not everyone is seeing what I actually need. I can unpack the rar archive just fine. What I can’t do (on arm) is add to/update the files in the rar archive. I have unrar already installed. What I can’t install is the rar package to create/update rar archives....

Atemu,

Damn rat files…

I just opened a nix-shell with unrar in it on aarch64-linux and am able to execute it, so yes, it can be made to work.
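
Something along these lines should do it (note that unrar is marked unfree in Nixpkgs, so it may need to be allowed explicitly):

    NIXPKGS_ALLOW_UNFREE=1 nix-shell -p unrar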

I'm so frustrated rn.

I have been distro hopping for about 2 weeks now, there’s always something that doesn’t work. I thought I would stick with Debian and now I haven’t been able to make my printer work in it, I think I tried in another distro and it just worked out of the box, but there’s always something that’s broken in every distro....

Atemu,

As an example, users of Debian are reporting tons of KDE Plasma bugs that were already fixed, but because they are running an ancient version, they still have the bugs.

The idea is that those bug fixes would be backported as patches; old feature version + new security/bug fixes.

In practice, that’s really expensive to do, so oftentimes bug fixes simply aren’t backported. I don’t even want to know the story with security fixes, though I’d hope they do better there.

Atemu,

Debian has an effective rolling distribution through testing that can get ahead of Arch.

I wouldn’t call a distro “branch” where maintainers say “don’t use this, it’s not officially supported and may even be insecure” an “effective” distribution. I’d consider it a test bed.

Debian tends to align its release with LTS kernel and Mesa releases so there have been times the latest stable is running newer versions than Ubuntu LTS.

Ubuntu’s regular channel releases every 6 months, similar to Fedora or NixOS. That in itself is already a “stable” distro, just not long-term stable (LTS).
So Debian can, for a short span of time after a release, be about as fresh as stable distros, which is… kinda obvious? I would not consider a month or so every 2 years significant enough to even mention though, especially if you consider that Debian users aren’t the kind to jump onto a new release early on.

For some, the priority is to run software that won’t have major bugs; that is what Debian, Ubuntu LTS and RHEL offer.

That’s not the point of those distros at all. The point is to have the same features as well as the same bugs for longer periods of time. This is because some functionality the user wants could depend on such bugs/unintended behaviour being present.

The fact that huge regressions have to be weeded out more carefully before an LTS release is obvious if you know that those “bugs” are expected to remain present throughout the release’s support window.

Atemu,

I was worried about possibly needing to change license.

I’d rather ask the contributors to consent to licensing their code under the new license. You don’t need the copyright in the hands of one entity to change the license; it’s enough if all copyright holders agree.

The situation is made seemingly complicated by the possible need to use copylefted images

WDYM by “images”?

As in art assets? I’m not sure those would even be infectious. I think it’s possible to even use non-free assets in a GPL’d application. It may be better to treat them as such to keep the licensing simple though.

Even then, it’s usually possible to “upgrade” permissively licensed code (such as Apache 2.0) to a copyleft license as long as the original license’s conditions are still met, which usually involves denoting which parts of the code are also available under the permissive license.

Atemu,

I’ll let you in on a little secret: fstab gets converted to systemd mount units anyway.
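
As an illustration (device and mount point made up), a plain fstab line ends up as a generated unit under /run/systemd/generator/ that looks roughly like this:

    # /etc/fstab (hypothetical line)
    UUID=1234-abcd  /data  ext4  defaults  0  2

    # roughly what systemd-fstab-generator writes to
    # /run/systemd/generator/data.mount at boot
    [Unit]
    SourcePath=/etc/fstab
    Documentation=man:fstab(5) man:systemd-fstab-generator(8)

    [Mount]
    What=/dev/disk/by-uuid/1234-abcd
    Where=/data
    Type=ext4
    Options=defaults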

Atemu,

Why does it need to be public-facing? There may be solutions that don’t require exposing it to billions of people.

Security is always about layers. The more independent layers there are, the smaller the chance that someone breaks through all of them. There is no one technology that will make your hosting reasonably secure; it’s the combination of multiple.

You’ve already mentioned software run inside an unprivileged sandbox.

There’s also:

  • A sandbox run unprivileged inside a VM
  • A VM run inside an unprivileged sandbox
  • A firewall only allowing applications to open certain ports
  • A server running all of that, hosted by someone else on their network with their own abstractions

Atemu,

Not really. It was publicly available information. It’s, by definition, not private.

Atemu,

You gave them an irrevocable license to use your content in basically any way they see fit. Them not showing posts you deleted is just them being nice, not them being obligated to do so. They could simply ignore your request or restore the posts later.

You should have thought about that when you gave them that license to your content.

Atemu,

If you need to set up a special dedicated subvolume, you might as well set up a partition instead; it’s just simpler.

With a swapfile you also can’t do multi-device setups, which is a limitation I personally couldn’t live with.

Atemu,

I haven’t used channels in years, but doesn’t that just refer to the running system, not using Nix to build projects?

I have no idea what you’re trying to say here.

Atemu,

How do you compose Guix projects?

Is there such a thing as split-screen grep?

I want to run a command and see all of its output on the left hand side, while simultaneously searching/grepping for particular lines on the right hand side. In other words, I want a temporary vertically split screen in my CLI, ideally with scrollback on each side of the split, but where I expect the left hand side to be...

Atemu, (edited)

That’s not at all grep-like. Grep is a line filter, not a character sequence highlighter.

Atemu,

The backend is the really interesting part, and the only way that we can be sure that “they cannot read the emails”.

While I’d still prefer it, OSS can’t really help with that because what’s really required here is remote attestation.
That is an unsolved problem to my knowledge; there is no way to know which software they’re actually running. Even if they published the source code, they could trivially apply a patch in their deployment that stores all incoming email somewhere and you’d be none the wiser.

Even if they published source code and could somehow prove to you that they’re running a version derived from it, you would still not be safe from surveillance as one could simply MITM all connections. See e.g. notes.valdikss.org.ru/jabber.ru-mitm/.

That’s likely one of the reasons they do everything they can to make PGP accessible to every user.

imap/smtp can be toggled with a warning, if that’s really their concern

It’s simply not how their service works. They’d have to build most of their service a second time, but unencrypted.

It’s like asking Signal to build in support for IRC; it does not make sense for them to do that in any way, no malicious intent needed.

no IMAP = no easy migration to somewhere else

You have IMAP access via the bridge. That’s what it’s for.

Atemu,

Note that bcache and bcachefs are different things. The latter is extremely new and not ready for “production” yet. This post is about bcache.

Atemu,

Why is it that GrapheneOS/CalyxOS always seem to attract these kinds of people?

Atemu,

I’m still in the process of optimizing stuff around Linux (e.g. media drive filesystem)

What do you mean by that?

Atemu,

Minor version bumps should be mostly trivial: change version and hash, package that into a commit+PR (check the guidelines on that!) and that’s it most of the time.
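
For a typical derivation, that’s a change along these lines (package name, URL and hashes are made up for illustration):

    # hypothetical package definition in Nixpkgs; a minor bump usually
    # only touches these two attributes
    stdenv.mkDerivation rec {
      pname = "somepkg";
      version = "1.2.4"; # was "1.2.3"

      src = fetchurl {
        url = "https://example.org/somepkg-${version}.tar.gz";
        hash = "sha256-..."; # update, e.g. taken from the hash-mismatch error
      };
    }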

The harder part is QA: ensuring it still works as expected. Therefore, even just testing update PRs as they come in would be a great help.
If the code change is trivial and a user of the package said it still works for them, a committer coming along is likely convinced of the PR’s quality and will just merge it.

It’s super easy to contribute to Nixpkgs in a meaningful manner :)

Atemu,

You don’t.

No, seriously. Let the distros package your software; they know how to do that best.

Atemu,

I homebrew the ROM on my personal phone and I can tell you from first-hand experience that you need the vendor dirs extracted from the OEM ROM. You can read up on that on the wiki pages for building any device’s ROM.

You can also come to that conclusion the other way around: How else would you (or LOS maintainers) get your hands on proprietary blobs full of secret sauce that vendors sometimes even try to actively block access to?
