
Atemu

@Atemu@lemmy.ml

Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

Nixpkgs committer.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


Atemu,

Depends. There was that one F2P COD clone which used TCP and IIRC it did fine?

Atemu,

It’s unknown whether he improved his temper or whether he just built a very good mail filter for himself, though.

Atemu,

I’d highly recommend setting up a swap partition instead.

Atemu,

There’s the WIP NixOS-based SnowflakeOS that aims to make NixOS approachable for mere mortals but that’s still declarative configuration and of course still NixOS under the hood.

There are a bunch of immutable distros out there that use OSTree or some other imperatively managed snapshotting mechanism, such as Fedora Silverblue or VanillaOS.

Atemu, (edited)

ifconfig.me. Can also be curl’d.
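For the curl route, e.g.:

    curl ifconfig.me       # prints your public IP as plain text
    curl -4 ifconfig.me    # force IPv4 specifically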

Easier to remember: just search “what is my ip” on clearnet DuckDuckGo (or Kagi if you have it).

they all ask for CAPTCHA which is an obvious attempt to obtain one’s true IP.

How exactly is a CAPTCHA supposed to discover your “true IP”?

Also note that your IP address is by far not the only thing used to fingerprint you. See abrahamjuliot.github.io/creepjs/ and browserleaks.com.

Use Tor Browser if you want your starting conditions to be reasonably anonymous.

Even more critical for fingerprinting is user behaviour though.

What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

So I have a nearly full 4 TB hard drive in my server that I want to make an offline backup of. However, the only spare hard drives I have are a few 500 GB and 1 TB ones, so the entire contents will not fit all at once, but I do have enough total space for it. I also only have one USB hard drive dock so I can only plug in one...

Atemu,

I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.

This will happen in any case unless you have enough capacity for redundancy.

What is in this 4TB drive? A Linux installation? A bunch of user data? Both? What kind of data?

The first step is to separate your concerns. If you had e.g. a 20GiB Linux install, 10GiB of loose home files, 1TiB of movies, 500GiB of photos, 1TiB of games and 500GiB of music, you could back each of those up separately onto separate drives.

Now, it’s likely that you’d still have more data of one category than what fits on your largest external drive (movies are a likely candidate).

For this purpose, I use git-annex.branchable.com. It’s a beast to get into and set up properly with plenty of footguns attached but it was designed to solve issues like this elegantly.
One of the most important things it does is separate file content from file metadata, making metadata available in all locations (“repos”) while content can be present in only a subset, thereby achieving distributed storage. You could have 4TiB of file contents distributed over a bunch of 500GiB drives, yet each of those repos would have the full file tree available (metadata of all files + content of the present ones), allowing you to manage your files in any place without having all the contents present (or even any). It’s quite magical.

Once configured properly, you can simply attach a drive, clone the git repo onto it and run git annex sync --content; it’ll fill that drive up with as much content as it can, or until each file’s numcopies and other configured constraints are satisfied.
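A minimal sketch of that workflow (names and paths made up; read the git-annex walkthrough before trusting real data to it):

    # on the machine with the 4TB drive
    cd /mnt/big/files
    git init && git annex init "main"
    git annex add . && git commit -m "import"
    git annex numcopies 2          # demand at least 2 copies of every file

    # per backup drive: clone the (tiny) metadata, then pull content
    git clone /mnt/big/files /mnt/backup1/files
    cd /mnt/backup1/files && git annex init "backup1"
    git annex sync --content       # fills the drive until it runs out of space
                                   # or the configured constraints are met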

Librewolf but like... for chromium?

My main browser is Librewolf but I keep a chromium browser just in case. Previously used brave but their flatpak is shit. Ungoogled chromium seems ok but it looks like they don’t change much from upstream chromium. Any good chromium browsers which harden their browsers like librewolf does for more privacy?

Atemu,

Why bother with such micro-optimisations for something whose whole purpose is to be used extremely infrequently, for compatibility reasons?

Atemu,

That and ease of deployment.

If you as a developer wanted a non-technical user to test a thing you fixed for them, you could ask them to try an AppImage from your CI pipeline and they would easily be able to install it. They’re great for that.

Also, trying out a package can leave unwanted system state around in traditional imperative system package managers. AppImages OTOH are self-contained and user-installable.

Atemu,

And now you get to be the only one who breaks your system on a regular basis ;)

Atemu,

No, not obviously.

People new to Nix/NixOS always seem to think that flakes are some kind of fundamental shift or something and if you don’t use flakes, you’re not going to be ready for the future or whatever.
No, they’re not. They’re “just” a standardised method of composing separate Nix projects.

In the most common NixOS case (and especially when starting out) you have exactly one external Nix project you depend on and that’s Nixpkgs. Flakes provide very little (if any) benefit in this specific case.

If you’re starting out, you don’t need to care one bit about flakes, experimental features and the documentation of features that are not intended to be commonly used yet (especially not for beginners).

Atemu,

systemd-boot discovers Windows automatically; no configuration needed.

Atemu,

Detecting extensions using web-accessible resources is not possible on Firefox, as Firefox extension IDs are unique for every browser instance. Therefore the URL of an extension’s resources cannot be known by third parties.

and also for Chrome:

in Manifest V3, extensions will be able to enable the ‘use_dynamic_url’ option, which will change the resource URL for each session (browser restart). This will render this detection method unusable.

Though it should be noted that this method isn’t the only way to detect extensions.

Atemu,

I use NixOS but I don’t bother with automatic deployment or even automatic formatting. I don’t feel it’s necessary in a homelab setting as hardware failure rarely happens at such small scale and the manual steps left aren’t that significant.

Proton Mail CEO Calls New Address Verification Feature 'Blockchain in a Very Pure Form' (tech.slashdot.org)

Proton Mail, the leading privacy-focused email service, is making its first foray into blockchain technology with Key Transparency, which will allow users to verify email addresses. From a report: In an interview with Fortune, CEO and founder Andy Yen made clear that although the new feature uses blockchain, the key technology...

Atemu,

So PM claims it has on the order of 10^8 users. Let’s assume each user has one email address with one public ed25519 key, both of which are likely false.

Each key is 32 bytes; 32 B × 10^8 = 3.2 GB.

Could someone do the math on how much fiat it’d take to store such an enormous amount of data on the Ethereum or Monero blockchains?
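For Ethereum at least, a back-of-the-envelope sketch is possible, assuming the classic 20,000 gas per 32-byte storage slot and leaving the gas price p (in gwei) symbolic, since the fiat conversion changes hourly:

    \[ 10^8\ \text{slots} \times 20{,}000\ \tfrac{\text{gas}}{\text{slot}} = 2\times10^{12}\ \text{gas} \]
    \[ 2\times10^{12}\ \text{gas} \times p\ \tfrac{\text{gwei}}{\text{gas}} \times 10^{-9}\ \tfrac{\text{ETH}}{\text{gwei}} = 2000\,p\ \text{ETH} \]

At p = 20 gwei that would already be 40,000 ETH, before any transaction overhead.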

Atemu,

nobody’s made a solution that is simple and effective

This one isn’t that either by the looks of it but it’s certainly a problem where something like blockchain could provide a solution.

Atemu,

except for hdds without cache

The “cache” on HDDs is extremely tiny; maybe a few seconds’ worth of sequential access at most. It does not exist to cache significant amounts of data for much longer than that.

At the sizes at which bcache is used, you could permanently hold almost all of your performance-critical data on flash storage while having enough space for tonnes of performance-uncritical data; all in the same storage “package”.
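For illustration, a bcache setup along those lines (device names hypothetical; this destroys existing data on both devices):

    # from bcache-tools: HDD as backing device, SSD partition as cache
    make-bcache -B /dev/sda -C /dev/nvme0n1p1     # creates and attaches both
    mkfs.ext4 /dev/bcache0                        # use the combined device as usual
    echo writeback > /sys/block/bcache0/bcache/cache_mode   # optionally cache writes too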

Atemu, (edited)

AMD platform support is coming to coreboot in the next few years, consumer platforms much later, and even then I’m doubtful it’d come to your laptop in particular.

Get a Frame.work with Intel chip if you want coreboot on a modern laptop soon-ish. I know the guy working on that port ;)

Atemu,

In regular FHS distros, an upgrade to libxyz can be done without an update to its dependants a, b and c. The libxyz.so is updated in-place and newly run processes of a, b and c will use the new shared object code.

In Nix’ model, changing a dependency in any way changes all of its dependants too. The package a that depends on libxyz 1.0.0 is treated as entirely different from the otherwise same package a that depends on libxyz 1.0.1 or libxyz 1.0.0 with a patch applied/new dependency/patch applied to the compiler/anything.

Nix encodes everything that could in any way influence a package’s content into that package’s “version”. That’s the hash in every Nix store path (e.g. /nix/store/5jlfqjgr34crcljr8r93kwg2rk5psj9a-bash-interactive-5.2-p15/bin/bash). The version number at the end is just there to inform humans of a path’s contents; as far as Nix is concerned, it’s just an arbitrary name string.

Therefore, any update to “core” dependencies requires a rebuild of all dependants. For very central core packages such as glibc, that means almost all packages in existence. Because those packages are “different” from the packages on your system without the update, you must download them all again and, because they have different hashes, they will be in separate paths in your Nix store.

This is what allows Nix to have parallel “installation” of any version of any package and roll back your entire config to a previous state because your entire system is treated as a “package” with the same semantics as described above.

Unless you have harsh data caps, extremely slow connections or are extremely tight on disk space, this isn’t much of a concern though.
Additionally, you can always “garbage collect” old paths that are no longer referenced and Nix can deduplicate whole files that are 1:1 the same across the whole Nix store.
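You can watch all of this from the CLI (store hashes will differ on every system):

    nix-build '<nixpkgs>' -A hello          # builds/fetches into a unique hashed store path
    readlink result                         # -> /nix/store/<hash>-hello-<version>
    nix-store --query --requisites result   # the full closure: every dependency, each hashed
    nix-collect-garbage -d                  # drop store paths nothing references anymore
    nix-store --optimise                    # hard-link identical files across the store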

Atemu,

Any distro that ships relatively recent libraries and kernels.

With the exception of Debian, RHEL, SLES and the like, pretty much everything.

Atemu, (edited)

Guix might also be able to do this but I don’t think the others can.

This relies on NixOS’ declarative configuration, which Silverblue and the like do not have; they are configured imperatively.

Atemu,

Post the journal after wakeup, not before.
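E.g. something like (the time window is a guess, widen as needed):

    journalctl -b --since "-15min" > journal-after-wakeup.txt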

Atemu, (edited)

I don’t want weird archives or anything, just to copy my filesystem to another drive.

For proper backups, you do want “weird archives” with integrity checks, versioning, deduplication and compression. Regular file copies cannot offer that (at least not efficiently).
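BorgBackup, for example, ticks all four boxes (repo path hypothetical):

    borg init --encryption=repokey /mnt/backup/repo
    borg create --stats --compression zstd /mnt/backup/repo::'{hostname}-{now}' /home
    borg check /mnt/backup/repo                                  # integrity verification
    borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/repo   # versioning policy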

What are the rules of buying used storage?

I am currently expanding my Homelab setup, and want to buy a 10TB drive, for media storage. It’s a Seagate Ironwolf disk, so perfect for the job. But, it’s second hand. It was originally bought in 2019, but stopped being used after 2022. Only used for static storage, it’s been booted less than 50 times. I can get it for...

Atemu,

Original price doesn’t matter; you need to compare it against current new offerings. A drive like that I’d buy for 8-10€/TB at most, because current new HDD pricing is 15€/TB at the low end.

What you also need is the SMART output. Watch out for high uncorrectable error counts, reallocated sectors and excessive total writes. I’d never buy a drive without having seen its SMART data.
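Asking the seller for something like this costs nothing (smartmontools; device name hypothetical):

    smartctl -a /dev/sdX    # full SMART report
    # attributes worth scrutinising: Reallocated_Sector_Ct,
    # Current_Pending_Sector, Offline_Uncorrectable, Power_On_Hours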
