If you're lucky, borgbackup could deduplicate and compress the data enough to fit on a 1 TB drive. It depends on the content, of course, but its deduplication and compression are insanely efficient for certain cases. (I have 3 devices with ~900 GB each, so just shy of 3 TB in total, which all gets stored in a ~400 GB borg repository.)
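For a rough idea of what that looks like in practice (the repository path, compression level and source paths below are placeholders, not my actual setup):

```
# One repository receiving archives from every device, so identical
# chunks across devices are only stored once.
borg init --encryption=repokey /mnt/backup/borgrepo

# Run on each device; archive name pattern and paths are examples only
borg create --stats --compression zstd,9 \
    /mnt/backup/borgrepo::'{hostname}-{now}' /home /etc
```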
It’s going to take a little work here, but I have a large drive on my Plex box and a couple of smaller drives that I back everything up to. On the large drive, get a list of the main folders. You can run “du -h --max-depth=1 | sort -hk1” on the root folder to get an idea of how to split them up. Once you have an idea, make two files, each with its own list of folders (e.g. folders1.out and folders2.out), one for each destination drive. If you have both of the smaller drives mounted, just execute the rsync commands; otherwise, run each rsync command with the corresponding drive mounted.

Here’s an example of the kind of rsync commands I use. Keep in mind I am going from an ext4 filesystem to a couple of NTFS drives, which is why I compare by size only. Make sure to do a dry run or two, and you may or may not want the ‘--delete’ option in there. Since I don’t want to keep files I have deleted from my Plex library, I have rsync delete them on the target drive as well.
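Something along these lines (the mount points and source root below are placeholders for illustration, not my actual paths):

```
# --size-only because ext4 -> NTFS doesn't preserve timestamps/permissions cleanly
# --delete removes files on the target that were deleted from the source
# keep --dry-run until the output looks right, then drop it
rsync -rvh --size-only --delete --dry-run --files-from=folders1.out /mnt/plex/ /mnt/backup1/
rsync -rvh --size-only --delete --dry-run --files-from=folders2.out /mnt/plex/ /mnt/backup2/
```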
I’m going to say that doesn’t exist, and restoring from it would be a nightmare. You could cobble together a shell or Python script that does that, though.
You’re better off just getting a drive bay and plugging all the drives in at once as a single LVM volume.
You could also do the opposite: split the 4 TB into different logical volumes, each the same size as a drive.
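A minimal sketch of both approaches (device names, volume group names and sizes are assumptions):

```
# Option 1: pool the small drives into one big volume
pvcreate /dev/sdb /dev/sdc
vgcreate backup_vg /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n backup_lv backup_vg
mkfs.ext4 /dev/backup_vg/backup_lv

# Option 2: carve the big drive into drive-sized logical volumes
pvcreate /dev/sdd
vgcreate media_vg /dev/sdd
lvcreate -L 1800G -n part1 media_vg
lvcreate -L 1800G -n part2 media_vg
```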
It wouldn’t be so complicated to restore as long as they keep full paths and don’t split up subdirectories. But yeah, sounds like they’d need a custom tool to examine their dirs and solve a series of knapsack problems.
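As a rough illustration of that idea, a greedy “biggest folder first” pass over du output gets you most of the way (drive capacities and output file names here are made up):

```
#!/usr/bin/env bash
# Assign each top-level folder to whichever drive still has room,
# starting with the largest folders. Purely a sketch, not a real tool.
drive1_free=$((2 * 1024**4))   # assumed free space on drive 1, in bytes
drive2_free=$((2 * 1024**4))   # assumed free space on drive 2, in bytes
> folders1.out
> folders2.out

du -s --block-size=1 -- */ | sort -rn | while read -r size folder; do
    if (( size <= drive1_free )); then
        echo "$folder" >> folders1.out
        drive1_free=$(( drive1_free - size ))
    elif (( size <= drive2_free )); then
        echo "$folder" >> folders2.out
        drive2_free=$(( drive2_free - size ))
    else
        echo "WARN: $folder fits on neither drive" >&2
    fi
done
```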
I ran into the same problem some months ago when my cloud backups stopped being financially viable and I decided to recycle my old drives. For offline backups, mergerfs will not work as far as I understand. Creating tar archives of 130 TB+ also doesn’t sound like a good option. Some of the tape backup solutions looked possible, but they are often complex and use special archive formats…
I ended up writing my own solution in Python using JSON state files. It’s complete enough to run the backup, but otherwise very much a work in progress with no restore at all, so I don’t want to publish it.
If you find a suitable solution I am also very interested 😅
Before doing anything, if your screen allows it, switch the output from DP to HDMI or from HDMI to DP; that may get you far enough to actually boot and fix the issue properly.
I've had this before with drivers, where it would suddenly fail on one port but still run fine on another.
Sleep/wake issues with AMD GPU and platform drivers are super, super, super common. Fish back through your kernel journal after a reboot (journalctl -kb -1 should do it) and look for the driver errors immediately after the wake event. If this has been fixed in a later kernel release, update your kernel; if not, report it either to the Ubuntu folks or on the amdgpu GitLab.
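For example, something like this narrows it down quickly (the grep pattern is just a guess at likely keywords):

```
# Kernel messages from the previous boot, filtered for GPU/suspend-related noise
journalctl -k -b -1 | grep -iE 'amdgpu|suspend|resume|error'
```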
You might end up splitting files across drives, but I don’t think you’re likely to find a more “out of the box” solution. You might combine it with the compression flags to make sure things fit, and don’t forget to number your drives!
The errors do mention the GPU, so that would be the first thing I’d try, just to see if the errors change; beyond that I have no idea what’s going on here.
Not sure if this is the root cause of your boot failure, but underscores in hostnames are not allowed. A-Z, 0-9 and '-' are the only allowed characters.
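If the hostname turns out to be the problem, fixing it is a one-liner on a systemd-based system (the names here are just examples):

```
hostnamectl set-hostname my-server   # e.g. instead of my_server
hostnamectl status                   # confirm the new name
```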
You’re welcome to use whatever init system you want, but Systemd solves a lot of the bullshit problems and limitations that come from init.d init scripts. Systemd also has a lot of its own bullshit and bloat, but it does an excellent job at actually being an init system and service manager if you know how to properly use it.
Almost everything you said is mere brochureware perpetuated by a tribe stronger than the vi mafia.
Sysvinit starts fast, starts well, and doesn’t try to control mounts, cron, getty, and everything else.
The"but it retries things" whine was a solved problem in 2001. So easy.
The EL6 machines I have in storage start faster than the EL7 machines joining them. PCLinuxOS is a very valid non-systemd system; it only lacks a documented kickstart equivalent.
> solves a lot of the bullshit problems and limitations that come from init.d init scripts.
So do the other ~7 init systems developed since then. And, as far as I know, all of them print their relevant trouble directly to stderr. Who still cares about SysV?
Hey guys, why all the downvotes? Systemd is known for throwing all the irrelevant stuff at you, making it troublesome to debug, which is why I switched. And I can confirm: Runit, S6, OpenRC and even simple Dinit are way better in that regard (and they generally cause less trouble).