linux


Deckweiss, (edited ) in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

If you are lucky enough, borgbackup could deduplicate and compress the data enough to fit a 1 TB drive. It depends on the content of course, but its deduplication & compression are insanely efficient for certain cases. (I have 3 devices with ~900 GB each, so just shy of 3 TB in total, which all gets stored in a ~400 GB borgbackup.)
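
As a rough illustration of the kind of workflow Deckweiss describes (the repository path, compression setting and source path below are made up, not their actual setup):

# one-time setup of the deduplicating repository on the 1 TB drive
borg init --encryption=repokey /media/backup1tb/borgrepo

# each run only stores chunks it hasn't seen before; --stats prints the
# deduplicated + compressed size actually written
borg create --stats --progress --compression zstd,6 /media/backup1tb/borgrepo::maindrive-{now} /media/plex/maindrive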

FigMcLargeHuge, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

It’s going to take a little work here, but I have a large drive on my Plex server, and a couple of smaller drives that I back everything up to. On the large drive, get a list of the main folders. You can do a “du -h --max-depth=1 | sort -hk1” on the root folder to get an idea of how you should split them up. Once you have an idea, make two files, each with their own list of folders (eg: folders1.out and folders2.out), that you want to go to each separate drive. If you have both of the smaller drives mounted, just execute the rsync commands; otherwise, do each rsync command with the corresponding drive mounted. Here’s an example of my rsync commands. Keep in mind I am going from an ext4 filesystem to a couple of NTFS drives, which is why I use --size-only. Make sure to do a dry run or two, and you may or may not want the --delete option in there. Since I don’t want to keep files I have deleted from my Plex, I have it delete them on the target drive also.

sudo rsync -rhi --delete --size-only --progress --stats --files-from=/home/plex/src/folders1.out /media/plex/maindrive /media/plex/4tbbackup

sudo rsync -rhi --delete --size-only --progress --stats --files-from=/home/plex/src/folders2.out /media/plex/maindrive /media/plex/other4tbdrive
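
For the dry run mentioned above, adding rsync's -n (--dry-run) flag to the same command just reports what would be copied or deleted without touching the target:

sudo rsync -rhin --delete --size-only --progress --stats --files-from=/home/plex/src/folders1.out /media/plex/maindrive /media/plex/4tbbackup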

captcha, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

I’m going to say that doesn’t exist, and restoring from it would be a nightmare. You could cobble together a shell or Python script that does that, though.

You’re better off just getting a drive bay, plugging all the drives in at once, and pooling them with LVM.

You could also do the opposite: split the 4TB drive into separate logical volumes, each the same size as one of the smaller drives.
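
A minimal sketch of the drive-bay idea, pooling several small drives into one LVM volume group (device names and the mount point are hypothetical, and pvcreate will destroy whatever is on those partitions):

# make the small drives LVM physical volumes and pool them into one group
sudo pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo vgcreate backup_vg /dev/sdb1 /dev/sdc1 /dev/sdd1

# one big logical volume spanning all of them, then a filesystem on top
sudo lvcreate -l 100%FREE -n backup_lv backup_vg
sudo mkfs.ext4 /dev/backup_vg/backup_lv
sudo mount /dev/backup_vg/backup_lv /mnt/backup

Keep in mind a plain spanned volume like this has no redundancy: losing any one member drive loses the whole filesystem.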

lemmyvore,

It wouldn’t be so complicated to restore as long as they keep full paths and don’t split up subdirectories. But yeah, sounds like they’d need a custom tool to examine their dirs and solve a series of knapsack problems.
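
A toy version of that tool as a first-fit-decreasing shell script, reusing the folders1.out/folders2.out naming from FigMcLargeHuge's comment above (the source path and drive capacities are invented, and a real solver could pack tighter):

#!/usr/bin/env bash
# Assign each top-level directory to whichever backup drive still has room,
# largest directories first, writing one folder list per drive.
SRC=/media/plex/maindrive
CAP1=$((4000 * 1000**3))   # assumed ~4 TB usable per target drive
CAP2=$((4000 * 1000**3))
used1=0; used2=0
: > folders1.out
: > folders2.out

du -sb "$SRC"/* | sort -rn | while read -r size path; do
    name=$(basename "$path")
    if (( used1 + size <= CAP1 )); then
        echo "$name" >> folders1.out; used1=$(( used1 + size ))
    elif (( used2 + size <= CAP2 )); then
        echo "$name" >> folders2.out; used2=$(( used2 + size ))
    else
        echo "WARN: $name fits on neither drive" >&2
    fi
done

First-fit-decreasing isn't optimal, but for a handful of large media folders it usually lands close enough to the drive capacities.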

restlessyet, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

I ran into the same problem some months ago when my cloud backups stopped being financially viable and I decided to recycle my old drives. For offline backups, mergerfs will not work as far as I understand. Creating tar archives of 130TB+ also doesn’t sound like a good option. Some of the tape backup solutions looked like possible options, but they are often complex and use special archive formats…

I ended up writing my own solution in python using json state files. It’s complete enough to run the backup, but otherwise very work-in-progress with no restore at all. So I do not want to publish it.

If you find a suitable solution I am also very interested 😅

AbidanYre, (edited ) in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

Git annex can do that and keep track of which drive the files are on.

git-annex.branchable.com
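
A rough sketch of what that can look like (paths, descriptions and file names are made up, and this isn't necessarily how AbidanYre runs it):

# on the big drive: turn the collection into an annex
cd /media/plex/maindrive
git init
git annex init "main drive"
git annex add .
git commit -m "initial import"

# on each backup drive: clone the repo and pull over a subset of the content
git clone /media/plex/maindrive /media/backup1/plex
cd /media/backup1/plex
git annex init "backup drive 1"
git annex get movies/

# later, ask git-annex which drives hold a given file
git annex whereis movies/some-film.mkv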

Endorkend, in Super weird error, what's happening?

Before doing anything, if your screen allows it, swap DP to HDMI or HDMI to DP as output, that may fix this to the point of being able to actually boot and further fix the issue.

I've had this before with drivers, where suddenly it would fail on one port but would still run on one of the others.

Catsrules, in Debian Likely Moving Away From i386 In The Near Future

Can someone explain like I am 5?

Is this just talking about 32-bit processor support? Or are we also talking about 32-bit programs as well?

eutampieri,

The first

Catsrules,

Thanks

eutampieri,

☺️

iwasgodonce, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

www.gnu.org/software/…/Using-Multiple-Tapes.html

Might do kind of what you want.

seaQueue, in Super weird error, what's happening?

Shit’s broke yo.

Sleep/wake issues with AMD GPU and platform drivers are super, super, super common. Fish back through your kernel journal after a reboot (journalctl -kb -1 should do it) and look for the driver errors immediately after the wake event. If this has been fixed in a later kernel release, update your kernel; if not, go report it to either the Ubuntu folks or on the amdgpu GitLab.
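
For example, after rebooting from a failed wake, something along these lines should surface the relevant messages (the grep pattern is only a guess at what the amdgpu/drm errors look like):

# kernel messages from the previous boot, errors and worse only
journalctl -kb -1 -p err
# or search the same log for GPU-related lines
journalctl -kb -1 | grep -iE 'amdgpu|drm'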

Molecule5076, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

Something like mergerfs? I think this is what Unraid uses if I remember right.

github.com/trapexit/mergerfs
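
For reference, pooling two already-mounted drives with mergerfs is roughly a one-liner (mount points and options here are illustrative), though as pointed out below it doesn't help if only one disk can be attached at a time:

# present /mnt/disk1 and /mnt/disk2 as one combined tree at /mnt/pool
sudo mergerfs -o defaults,allow_other /mnt/disk1:/mnt/disk2 /mnt/pool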

HiddenLayer5,

Thank you!

rambos, (edited )

If OP can't use more than one disk at once, how can they benefit from mergerfs?

Molecule5076,

Yeah you’re right. Scratch that then

ricecake, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive?

www.gnu.org/…/Multi_002dVolume-Archives.html

You might end up splitting files across drives, but I don’t think you’re likely to find a more “out of the box” solution. You might compress the data beforehand to make sure things fit (GNU tar can’t compress a multi-volume archive directly), and don’t forget to number your drives!
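
A minimal sketch of that, assuming two ~4TB backup drives mounted one after another (the paths and the per-volume size are made up):

# write a multi-volume archive, starting on the first backup drive
tar --create --multi-volume --tape-length=3900G --file=/media/backup1/plex.tar /media/plex/maindrive

# when the first volume fills up, tar prompts for the next one; answering
#   n /media/backup2/plex.tar
# at that prompt continues the archive on the second drive

# restoring later starts from the first volume and prompts the same way
tar --extract --multi-volume --file=/media/backup1/plex.tar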

HiddenLayer5,

Thank you!

governorkeagan, in Super weird error, what's happening?

Have you tried booting with a live USB? You might be able to do some sort of recovery from there.

Having said that, I’m still very much a Linux noob.

makingStuffForFun, in Debian Likely Moving Away From i386 In The Near Future

Will that mean a bunch of my Steam games won’t work? Please don’t say that, as I love Debian, and I love my games.

NamelessGO,

Games aren’t affected by this change.

Patch,

No, this is just about the kernel and the installer/images.

You won’t be able to install Debian on an x86-32 computer anymore, but everything you can currently do on an x86-64 install will still continue to work.
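
If you want to double-check that your install is the 64-bit one, with i386 merely enabled as a foreign architecture for 32-bit libraries (which is what Steam uses), dpkg can tell you:

# native architecture of the install (should say amd64)
dpkg --print-architecture
# extra architectures enabled for multiarch; i386 shows up here if 32-bit libs are in use
dpkg --print-foreign-architectures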

IsThisLoss,

No

luthis, in Super weird error, what's happening?

Can you remove the GPU and use onboard?

It’s mentioning the GPU in the errors, so that would be the first thing I would try, to see if the errors change, because I have no idea what’s going on here.

SuperSpruce,

The computer is running a Ryzen 9 3900X, which does not have onboard graphics unfortunately.

madmaurice, (edited ) in Super weird error, what's happening?

None of what’s visible helps identify the error. Try journalctl -xb as suggested; it might show more relevant information

Edit: oops, should’ve been journalctl instead of journal

SuperSpruce,

I tried to do that, and it couldn’t find the journal package. So I tried to install it, but neither apt, flatpak, nor snap could find a package to install.

Penta,

This was probably supposed to say “journalctl -xb”

SuperSpruce,

Okay, that command works for me. The last line says that /etc/hosts:7: hostname “SuperSpruce_Iron_3900X” is not valid, ignoring.

SpaceCadet,

Not sure if this is the root cause of your boot failure, but underscores in hostnames are not allowed. A-Z, 0-9 and - are the only allowed characters.
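
If that turns out to be the culprit, one possible fix (the new name is just the underscore-free variant, pick whatever you like):

# set a valid hostname
sudo hostnamectl set-hostname SuperSpruce-Iron-3900X
# then update the matching entry on line 7 of /etc/hosts to the same name, e.g.
# 127.0.1.1   SuperSpruce-Iron-3900X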

madmaurice,

Oops 😬

UID_Zero,

Read the error again. It’s journalctl.

MonkderZweite,

See, stuff like that is why I started going non-systemd.

madmaurice,

I doubt other init systems can make the screen bigger so that the vital part of the kernel log is still on screen, but be my guest to prove me wrong 🙄

MonkderZweite,
madmaurice,

Thank you for proving my point.

pivot_root, (edited )

You’re welcome to use whatever init system you want, but Systemd solves a lot of the bullshit problems and limitations that come from init.d init scripts. Systemd also has a lot of its own bullshit and bloat, but it does an excellent job at actually being an init system and service manager if you know how to properly use it.

corsicanguppy,

Almost everything you said is mere brochureware perpetuated by a tribe stronger than the vi mafia.

Sysvinit starts fast, starts well, and doesn’t try to control mounts, cron, getty, and everything else.

The “but it retries things” whine was a solved problem in 2001. So easy.

The EL6 machines I have in storage start faster than the EL7 machines joining them. PCLinuxOS is a very valid non-systemd system that only lacks a documented kickstart equivalent.

MonkderZweite, (edited )

“solves a lot of the bullshit problems and limitations that come from init.d init scripts”

So do the other ~7 init systems developed since then. And, as far as I know, all of them print their relevant trouble directly to stderr. Who still cares about SysV?

Hey guys, why all the downvotes? Systemd is known for throwing all the irrelevant stuff at you, making it troublesome to debug, which is why I switched. And I can confirm: Runit, S6, OpenRC and even simple Dinit are way better in that regard (and they generally make less trouble).
