> I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.
A failing drive will take data with it in any case unless you have enough spare capacity for redundancy.
What is on this 4TB drive? A Linux installation? A bunch of user data? Both? What kind of data?
The first step is to separate your concerns. If you had, say, a 20GiB Linux install, 10GiB of loose home files, 1TiB of movies, 500GiB of photos, 1TiB of games and 500GiB of music, you could back each of those up separately onto separate drives.
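In the simplest case that’s just one rsync per category; a minimal sketch, where the mount points are placeholders for however you name your drives:

```
# One category per external drive; mount points are illustrative.
rsync -a ~/Movies/ /mnt/backup-movies/
rsync -a ~/Photos/ /mnt/backup-photos/
rsync -a ~/Music/  /mnt/backup-music/
```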
Now, it’s likely that you’d still have more data in one category than fits on your largest external drive (movies are a likely candidate).
For this purpose, I use git-annex.branchable.com. It’s a beast to get into and set up properly, with plenty of footguns attached, but it was designed to solve issues like this elegantly.
One of the most important things it does is separate file content from file metadata, making metadata available in all locations (“repos”) while content can be present in only a subset, thereby achieving distributed storage. You could, for instance, have 4TiB of file contents distributed over a bunch of 500GiB drives, yet each of those repos would have the full file tree available (metadata of all files plus the content of the files present locally), allowing you to manage your files in any place without having all the contents present (or even any). It’s quite magical.
Once configured properly, you can simply attach a drive, clone the git repo onto it and run git annex sync --content; it’ll fill that drive up with as much content as it can, or until each file’s numcopies or other configured constraints are satisfied.
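A minimal sketch of that workflow, assuming an existing annex at ~/annex and a freshly mounted drive (the paths and the numcopies value are examples, not prescriptions):

```
# Clone onto the new drive; this copies only metadata, not file contents.
git clone ~/annex /mnt/old-500g/annex
cd /mnt/old-500g/annex
git annex init "old-500g"   # register this clone as a repo with a description

# Optional: require e.g. 2 copies of every file across all repos.
git annex numcopies 2

# Pull content until the drive is full or all constraints are satisfied.
git annex sync --content
```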
Unless some sandboxing or other explicit security measure is in place, any software you run typically has access to your entire home directory, including .ssh/. If any one of those programs were compromised somehow, the attacker would have access to your SSH keys.
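To make that concrete: an unsandboxed process running as your user can read your keys exactly like you can (the key filename here is just an example):

```
# Nothing stops an unsandboxed process from doing the equivalent of this:
cat ~/.ssh/id_ed25519
```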
If this is a VM, video playback stutters do not surprise me one bit. There are many layers between the video and the image you see on screen here, and they’re not optimised for viewing fidelity. This is likely not due to Linux but because you’re running this inside a VM with an emulated GPU. GUIs in VMs usually suck.
Optional codecs won’t help for YouTube since it serves royalty-free codecs such as VP9 or AV1 most of the time rather than patent-encumbered codecs such as H.264, and the free codecs are always installed.
That also wouldn’t fix stutters, only videos not playing back at all (because there’d be no decoder that could play them).
If this is a VM, installing the Nvidia driver also won’t do anything because the VM has no access to your host’s GPU. Not that the Nvidia driver would change anything about videos anyway, since no sane browser supports their proprietary crap driver, so it’s software decoding either way.
You should try this on real hardware. You technically don’t even need to install anything, as most GUI distros ship live images with a graphical installer and Firefox etc. pre-installed that you can use to test this.
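On real hardware you can also check whether the browser even has hardware decoding to work with; on hardware with VA-API support (typically Intel/AMD), something like this shows what the GPU can decode (vainfo ships in the libva-utils package on most distros):

```
# List codecs the GPU can decode via VA-API; look for VP9/AV1 entries.
vainfo | grep -i -e vp9 -e av1 -e h264
```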
If you have an Nvidia GPU, I’d recommend trying !pop_os.
These aren’t all versions per se but mostly variants, versions, and versions of variants. For example, we have packaged the xanmod kernel, a modified kernel optimised for desktop use, which itself has two variants: Main and LTS. We have packaged both.
Here are the names of all of our kernels currently to give you an idea (as a JSON list):
This is useful to have because users might have hardware constraints. It’s not hard to imagine a user with a WiFi chip that only works with kernel ABIs < 5.4 who also requires the legacy 470 Nvidia driver for their old GPU. Packaging just the latest kernel and just the latest Nvidia driver would leave this user unable to use their system.
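A hypothetical pairing of an old kernel ABI with the legacy driver, sketched with nixpkgs-style attribute names (the exact attributes available depend on your nixpkgs revision):

```
# Build the legacy 470 Nvidia module against the 5.4 LTS kernel's ABI.
nix-build '<nixpkgs>' -A linuxPackages_5_4.nvidiaPackages.legacy_470
```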
> there’s a different nvidia driver for each kernel version. Already a stupid design
That’s not a stupid design at all. An Nvidia kernel module artifact is only compatible with exactly one kernel ABI, so you need one binary Nvidia package for each kernel you ship.
Arch also has one package for every kernel ABI they ship: nvidia and nvidia-lts.
It should be noted, though, that their design assumes these two ABIs are the only possible ones, which isn’t strictly the case, as the zen, hardened or RT variants may sometimes lag behind their regular counterparts. That’s the stupid design if anything, as it increases the friction of kernel ABI upgrades for kernel package maintainers.
We at NixOS also ship the Nvidia module for each of our ~50 kernel variants; in fact, every major version of the Nvidia module compatible with that kernel.
The only way to access these Nvidia kernel modules is via a given kernel’s linuxPackages attribute set, which contains all packages that rely on that kernel’s ABI, such as kernel modules or tools like perf. That’s good design if you ask me, but I’m obviously biased ;)
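For illustration, accessing two such packages from the command line (attribute paths follow nixpkgs conventions; linuxPackages_xanmod is one of the variants mentioned above):

```
# The Nvidia module built against the xanmod kernel's ABI:
nix-build '<nixpkgs>' -A linuxPackages_xanmod.nvidia_x11

# perf depends on a kernel ABI too, so it lives in the same set:
nix-build '<nixpkgs>' -A linuxPackages_latest.perf
```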
If you as a developer wanted a non-technical user to test a fix you made for them, you could point them at an AppImage from your CI pipeline and they would easily be able to run it. They’re great for that.
Also, trying out a package can leave unwanted system state around in traditional imperative system package managers. AppImages OTOH are self-contained and user-installable.
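The entire “installation” on the user’s end amounts to something like this (the URL and filename are placeholders for whatever your CI produces):

```
# Download the CI artifact, mark it executable, run it. No root, no system state.
wget https://ci.example.com/artifacts/MyApp-fix.AppImage
chmod +x MyApp-fix.AppImage
./MyApp-fix.AppImage
```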