Comments


Atemu, to linux in Cleanest way to maintain AppImage installations?

That and ease of deployment.

If you, as a developer, wanted a non-technical user to test a fix you made for them, you could point them at an AppImage from your CI pipeline and they’d easily be able to run it. They’re great for that.

Also, trying out a package can leave unwanted system state around in traditional imperative system package managers. AppImages OTOH are self-contained and user-installable.

Atemu, to linux in what caused you to get into Linux?

And now you get to be the only one who breaks your system on a regular basis ;)

Atemu, to privacy in Proton Mail CEO Calls New Address Verification Feature 'Blockchain in a Very Pure Form'

So PM claims it has on the order of 10^8 users. Let’s assume each user has exactly one email address with one public ed25519 key, both of which are likely underestimates.

Each key is 32 bytes; 32 B × 10^8 = 3.2 GB.

Could someone do the math on how much fiat it’d take to store that much data on the Ethereum or Monero blockchains?
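
For scale, here’s the shape of that calculation in Python. The ~20,000 gas per new 32-byte storage slot is the real protocol cost on Ethereum, but the gas price and ETH price below are placeholder assumptions, not current figures:

```python
# Back-of-the-envelope: one ed25519 public key per claimed user, stored on-chain.
users = 10**8                 # ~10^8 users, per PM's claim
key_size = 32                 # bytes per ed25519 public key

total_bytes = users * key_size
print(f"Raw key data: {total_bytes / 1e9:.1f} GB")        # 3.2 GB

# Ethereum charges roughly 20,000 gas per new 32-byte storage slot (SSTORE).
# Gas price and ETH price below are placeholders, not current market data.
gas_per_slot = 20_000
gas_price_gwei = 30           # assumption
eth_price_eur = 2_000         # assumption

total_gas = users * gas_per_slot
cost_eth = total_gas * gas_price_gwei * 1e-9
print(f"{total_gas:.2e} gas ≈ {cost_eth:,.0f} ETH ≈ {cost_eth * eth_price_eur:,.0f} EUR")
```

Plug in current prices to turn that into an actual fiat figure.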

Atemu, to privacy in Proton Mail CEO Calls New Address Verification Feature 'Blockchain in a Very Pure Form'

nobody’s made a solution that is simple and effective

This one isn’t that either, by the looks of it, but it’s certainly a problem where something like a blockchain could provide a solution.

Atemu, to linux in Bcache is amazing!: Making HDD way faster!

except for hdds without cache

The “cache” on HDDs is tiny: maybe a few seconds’ worth of sequential access at most. It does not exist to cache significant amounts of data for much longer than that.

At the sizes at which bcache is used, you could permanently hold almost all of your performance-critical data on flash storage while having enough space for tonnes of performance-uncritical data; all in the same storage “package”.

Atemu, to privacy in Gitlab now requires phone number/credit card verification

Not just read but modify even.

Atemu, (edited) to linux in What is the easiest way to try all the DEs?

Guix might also be able to do this, but I don’t think the others can.

This relies on NixOS’ declarative configuration, which Silverblue and the like do not have; they are configured imperatively.

Atemu, to linux in I'm an idiot (arm)

Only with the unfree unrar plugin.

Atemu, to linux in Linux 6.8 Network Optimizations Can Boost TCP Performance For Many Concurrent Connections By ~40%

Depends. There was that one F2P COD clone which used TCP and IIRC it did fine?

Atemu, to datahoarder in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.

This will happen in any case unless you have enough capacity for redundancy.

What is on this 4TB drive? A Linux installation? A bunch of user data? Both? What kind of data?

The first step is to separate your concerns. If you had, say, a 20GiB Linux install, 10GiB of loose home files, 1TiB of movies, 500GiB of photos, 1TiB of games and 500GiB of music, you could back each of those up separately onto separate drives.

Now, it’s likely that you’d still have more data of one category than what fits on your largest external drive (movies are a likely candidate).

For this purpose, I use git-annex.branchable.com. It’s a beast to get into and set up properly, with plenty of footguns attached, but it was designed to solve exactly this kind of problem elegantly.
One of the most important things it does is separate file content from file metadata: metadata is available in all locations (“repos”), while content can be present in only a subset of them, which is how it achieves distributed storage. I.e. you could have 4TiB of file contents distributed over a bunch of 500GiB drives, but each of those repos would still have the full file tree available (metadata of all files + content of the files that happen to be present), allowing you to manage your files from any location without having all of the contents there (or even any at all). It’s quite magical.

Once configured properly, you can simply attach a drive, clone the git repo onto it and run git annex sync --content; it’ll fill that drive with as much content as it can, or until each file’s numcopies and other configured constraints are satisfied.
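
If it helps to see the concept, here’s a toy sketch in Python; it is not git-annex’s actual placement algorithm, just the “metadata everywhere, content wherever it fits” idea with made-up file names, sizes and drives:

```python
# Toy model of the git-annex idea: every repo knows about every file (metadata),
# but a file's *content* only needs to live on enough drives to satisfy numcopies.
# Illustrative only; git-annex's real placement logic is far more sophisticated.

files = {                       # path -> size in GiB (hypothetical data set)
    "movies/a.mkv": 40,
    "movies/b.mkv": 35,
    "photos/2023.tar": 120,
    "music/flac.tar": 60,
}
drives = {"disk1": 150, "disk2": 150}   # drive -> free capacity in GiB
numcopies = 1

placement = {path: [] for path in files}    # the full file tree is known everywhere

# Greedily place the largest files first, onto the emptiest drive that fits them.
for path, size in sorted(files.items(), key=lambda kv: -kv[1]):
    for drive in sorted(drives, key=lambda d: -drives[d]):
        if len(placement[path]) >= numcopies:
            break
        if drives[drive] >= size:
            placement[path].append(drive)
            drives[drive] -= size

for path, where in placement.items():
    print(f"{path}: content on {where or 'no drive (metadata only)'}")
```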

Atemu, to privacy in Where can I see that websites can see my browser extensions?

Detecting extensions using web accessible resources is not possible on Firefox, as Firefox extension IDs are unique for every browser instance. Therefore the URL of an extension’s resources cannot be known by third parties.

and also for Chrome:

In Manifest V3, extensions will be able to enable the ‘use_dynamic_url’ option, which will change the resource URL for each session (browser restart). This will render this detection method unusable.

Though it should be noted that this method isn’t the only way to detect extensions.

Atemu, to linux in Automated deployment of systems

I use NixOS, but I don’t bother with automatic deployment or even automatic formatting. I don’t feel it’s necessary in a homelab setting, as hardware failure rarely happens at such a small scale and the manual steps that remain aren’t that significant.

Atemu, to linux in Comparison between NixOS vs blendOS vs Vanilla OS: what to pick and why?

In regular FHS distros, an upgrade to libxyz can be done without an update to its dependants a, b and c: libxyz.so is updated in place and newly started processes of a, b and c will use the new shared object code.

In Nix’s model, changing a dependency in any way changes all of its dependants too. The package a that depends on libxyz 1.0.0 is treated as entirely different from the otherwise identical package a that depends on libxyz 1.0.1, or on libxyz 1.0.0 with a patch applied, a new dependency, a patch applied to the compiler, or anything else.

Nix encodes everything that could in any way influence a package’s contents into that package’s “version”. That’s the hash in every Nix store path (e.g. /nix/store/5jlfqjgr34crcljr8r93kwg2rk5psj9a-bash-interactive-5.2-p15/bin/bash). The version number at the end is just there to inform humans of a path’s contents; as far as Nix is concerned, it’s just an arbitrary name string.

Therefore, any update to “core” dependencies requires a rebuild of all dependants. For very central packages such as glibc, that means almost every package in existence. Because those packages are “different” from the packages on your system without the update, you must download them all again and, because they have different hashes, they end up in separate paths in your Nix store.
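
Here’s a toy sketch of that idea in Python; it is conceptual only, not Nix’s real derivation-hashing scheme:

```python
import hashlib

# Toy model: a package's store path is a hash over *everything* that went into
# building it, including the store paths of its dependencies. Change any input
# anywhere in the graph and every path downstream of it changes too.
# Illustrative only; this is not Nix's actual algorithm.

def store_path(name: str, version: str, src: str, deps: list[str]) -> str:
    h = hashlib.sha256("|".join([name, version, src, *sorted(deps)]).encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}-{version}"

libxyz_old = store_path("libxyz", "1.0.0", "libxyz-src-v1", [])
libxyz_new = store_path("libxyz", "1.0.1", "libxyz-src-v2", [])

# The "same" package a built against the two libxyz versions gets two distinct
# store paths, so both can coexist and old system generations keep working:
print(store_path("a", "2.3", "a-src", [libxyz_old]))
print(store_path("a", "2.3", "a-src", [libxyz_new]))
```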

This is what allows Nix to have parallel “installation” of any version of any package and to roll back your entire config to a previous state, because your entire system is treated as a “package” with the same semantics as described above.

Unless you have harsh data caps, extremely slow connections or are extremely tight on disk space, this isn’t much of a concern though.
Additionally, you can always “garbage collect” old paths that are no longer referenced, and Nix can deduplicate files that are bit-for-bit identical across the whole Nix store.

Atemu, to linux in What is the best distro for gaming?

Any distro that ships relatively recent libraries and kernels.

With the exception of Debian, RHEL, SLES and the like, pretty much everything.

Atemu, to linux in What are the rules of buying used storage?

The original price doesn’t matter; you need to compare against current new offerings. A drive like that I’d buy for 8-10€/TB at most, because current new HDD pricing is around 15€/TB at the low end.

What you also need is SMART output. Watch out for high uncorrectable error counts, pending/reallocated sectors and lots of writes or power-on hours. I’d never buy a drive without having seen its SMART data.
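
If you want to eyeball that quickly, something like this works. It’s a rough sketch in Python that assumes smartmontools is installed, root privileges, and a drive at /dev/sda; the attribute names are just common ATA ones and vary by vendor:

```python
import subprocess

# Pull the SMART attribute table and print the values worth checking before
# buying a used drive. Attribute names differ between vendors; these are
# common ATA examples, not an exhaustive list.
SUSPECT = (
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
    "UDMA_CRC_Error_Count",
    "Power_On_Hours",
    "Total_LBAs_Written",
)

out = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],     # requires smartmontools, run as root
    capture_output=True, text=True, check=False,
).stdout

for line in out.splitlines():
    if any(attr in line for attr in SUSPECT):
        print(line)
```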
