datahoarder

Appoxo, in Seagate Expansion external HDD power supply making a very faint alarm sound

My monitor status LED makes a faint beeping sound when in standby.
Also, the panel makes an audible whine when the on-screen content shows a certain amount of white.

As another user already said: probably low-quality electronics.

notthebees, in Seagate Expansion external HDD power supply making a very faint alarm sound

It’s probably just coil whine.

drdabbles, in Whatever happened to DNA-based storage research?

It wasn’t sensible, given the short life of DNA. One of those sci-fi ideas that caught media and technophile attention, but wasn’t ever going to go anywhere.

Project Silica appears to be attempting very high density, very long life storage, though.

tetris11, in Whatever happened to DNA-based storage research?

I remember there being a water-based storage solution for music that was under development, though it was said to drain entire ecosystems in the process. Sad, as it seemed promising.

penguin_knight, in Whatever happened to DNA-based storage research?

The truth of academia is that it is extremely slow. There are fewer than 20 minds total on all of Earth working on this idea, separately, in different countries. And these 20 people are in their 20s, severely underpaid, don’t necessarily have all the resources they want, and science may not be their #1 life priority.

anyways:

Reading and writing DNA is error-prone (which is what causes mutations), and that error rate is the main driver of evolution. You can imagine this is bad if you want to preserve the integrity of the data.

DNA storage would be okay if you were to, say, archive the entire internet for future generations, or genealogy records, etc.: things that do not need to be written or accessed quickly or often.

litchralee, (edited) in Whatever happened to DNA-based storage research?

I recall watching a documentary (on Curiosity Stream maybe? I’m no longer subscribed) on data storage longevity. It covered DNA storage; I think this PBS video w/ transcript provides more recent coverage of developments there. It also covered holographic storage, for which I could only find the Wikipedia page.

As for which one I think might be the future, it’s tough to say. Tape is pretty good and cheap but slow for offline storage. Archival media will probably end up all being offline storage, although I could see a case for holographic/optical storage being nearline. Future online storage will probably remain a tough pickle: cheap, plentiful, fast; pick at most two, maybe.

Nunya, in Help me acquire discontinued sound effect libraries please

I’m not able to help, but good luck with your search.

7Sea_Sailor, in A dozen or two TB of storage for media on the cheap?

May I ask: are you sure you need a media center with transcoding? Because it may be totally sufficient for you to access files through a file explorer and play them with VLC/mpv or whatever else. Having a media center is only really useful if you need external access to your media. I set all that stuff up once, then realized I never watch shows/movies on the go. And if I do, I know beforehand and can copy the raw files to the device I plan to watch on.

bdonvr,

On my PC direct play is possible 99% of the time.

I also watch a lot on my Apple TV which requires transcoding for some codecs.

But the big thing is my SO who does watch on the go often.

restlessyet, in A dozen or two TB of storage for media on the cheap?

Honestly, unless you can spend more $, one or two USB disks for the mini PC are probably your only choice.

bdonvr,

Yeah that’s probably true. Perhaps something like this?

Hardware RAID. Not too expensive.

Yeah it’s USB and not likely to be super performant, but I don’t think I need super fast read/write for media playback.

restlessyet,

If this fits your budget (you still need the actual disks…) it’s not a bad choice. Speed should be sufficient for HDDs, as it’s USB 3.

As the other poster suggested, don’t use its hardware RAID. Use it as a JBOD and configure the RAID in Linux with ZFS or similar.

And never forget: RAID is not a backup! You still need to do regular backups, at least for important data.

bdonvr,

Oh yeah, I back up all configs four times a day. The good thing about torrenting is that even if I had a catastrophic loss, as long as I have the list of torrents it should repopulate (assuming someone’s seeding).

Of course I also want to self host my personal photos/videos, and I can’t afford to lose those. I’ll have to look into seeing if any solutions support local storage plus maybe object storage as a backup.

stargazingpenguin,

This would be my recommendation as well. Either a shuckable external drive or a standard 3.5" drive with a USB 3.0 enclosure so you have the option to slot the drives into a NAS or server in the future.

Atemu, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.

This will happen in any case unless you have enough capacity for redundancy.

What is in this 4TB drive? A Linux installation? A bunch of user data? Both? What kind of data?

The first step to this is to separate your concerns. If you had, for example, a 20GiB Linux install, 10GiB of loose home files, 1TiB of movies, 500GiB of photos, 1TiB of games and 500GiB of music, you could back each of those up separately onto separate drives.

Now, it’s likely that you’d still have more data of one category than what fits on your largest external drive (movies are a likely candidate).

For this purpose, I use git-annex.branchable.com. It’s a beast to get into and set up properly with plenty of footguns attached but it was designed to solve issues like this elegantly.
One of the most important things it does is separate file content from file metadata; making metadata available in all locations (“repos”) while data can be present in only a subset, thereby achieving distributed storage. I.e. you could have 4TiB of file contents distributed over a bunch of 500GiB drives but in each one of those repos you’d have the full file tree available (metadata of all files + content of present files) allowing you to manage your files in any place without having all the contents present (or even any). It’s quite magical.

Once configured properly, you can simply attach a drive, clone the git repo onto it and then run git annex sync --content, and it’ll fill that drive up with as much content as it can, or until each file’s numcopies or other configured constraints are reached.
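
For illustration, here’s a minimal sketch of that “attach a drive and let it fill up” step, written as a small Python wrapper around the git-annex CLI. The paths, mount point and repo name are made up, and it assumes git and git-annex are already installed and the source repo is already an initialized annex; it’s just to show the shape of the workflow, not a polished tool.

```python
# Hypothetical sketch: clone an existing git-annex repo onto a freshly
# mounted external drive, then let `git annex sync --content` pull as much
# file content as fits (subject to numcopies / preferred-content settings).
# Paths and names below are illustrative only.
import subprocess
from pathlib import Path

SOURCE_REPO = Path("/data/annex")          # existing repo with the full file tree
DRIVE_MOUNT = Path("/mnt/external-500g")   # freshly attached drive
DRIVE_REPO = DRIVE_MOUNT / "annex"

def run(cmd, cwd=None):
    """Run a command, echoing it, and fail loudly if it errors."""
    print("+", " ".join(str(c) for c in cmd))
    subprocess.run([str(c) for c in cmd], cwd=cwd, check=True)

# 1. Clone the repo onto the drive: this copies only the metadata
#    (the full file tree), not the file contents themselves.
run(["git", "clone", SOURCE_REPO, DRIVE_REPO])

# 2. Initialize the clone as a git-annex repo with a recognizable description.
run(["git", "annex", "init", "external-500g"], cwd=DRIVE_REPO)

# 3. Sync: exchange metadata with the origin repo and copy file contents
#    over until the drive is full or the configured constraints are met.
run(["git", "annex", "sync", "--content"], cwd=DRIVE_REPO)
```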

the_q, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

This seems like a terrible idea.

keefshape, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

Mergerfs to combine the smaller ones.

athos77, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

What I do is, on the originating drive, I create new subdirectories and start categorizing items by content; like I'll put all the ebooks into one directory, and all the television into another. It just makes it easier for me to find things later if I can just head to the drive with all the television on it.

If there's a particular directory with a lot of content, I might create further divisions - maybe shows that are finished vs those that are still getting new episodes, or sitcoms vs dramas, that kind of thing.

Then I make a list of how big each master directory is, and I start copying them over to the most appropriate-sized drive. I usually find that I can fit in one large directory, and a couple of smaller ones, and then the last drive gets all the leftovers. I also tape a post-it note to each drive saying something like "2022-23 television" or "science fiction audiobooks" or whatever.

I also create a new directory on the originating drive called something like ++COPIED and, once I've copied content to a new drive, I move the original directory to ++COPIED: I'll still have access if I need it, but I don't have to keep track of it any longer. Once everything is successfully copied over, I can just delete that one directory.

It's a manual process, yes, but it does make it easier for me to find stuff when I want to look at it again later.

mindlessLump, in What's an elegant way of automatically backing up the contents of a large drive to multiple smaller drives that add up to the capacity of the large drive? (on Linux)

You could create a Python script to do this. There is a library called psutil that would help. Basically,

  • iterate over mounted drives and see how much each has available
  • based on these values, iterate over your backup files and separate them into chunks that will fit on each drive
  • copy chunks to respective drives

Would be a fun little project even for a beginner I think.
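
Something like this rough sketch, maybe. The mount points are hypothetical and the greedy largest-first placement is just one way to chunk things; a real script would also want error handling and post-copy verification (checksums).

```python
# Rough sketch of the idea above: check free space on each backup drive with
# psutil, greedily assign source files (largest first) to whichever drive
# still has room, then copy them over. Paths are hypothetical.
import shutil
from pathlib import Path

import psutil  # pip install psutil

SOURCE = Path("/mnt/big-4tb")                    # the large drive to back up
BACKUP_MOUNTS = [Path("/mnt/backup-a"),          # the smaller drives
                 Path("/mnt/backup-b"),
                 Path("/mnt/backup-c")]
HEADROOM = 1 * 1024**3                           # keep ~1 GiB free on each target

# 1. How much space does each mounted backup drive have available?
free_space = {m: psutil.disk_usage(str(m)).free for m in BACKUP_MOUNTS}

# 2. Assign each source file to a drive that still fits it.
files = [p for p in SOURCE.rglob("*") if p.is_file()]
plan = []  # (source_file, target_drive) pairs
for f in sorted(files, key=lambda p: p.stat().st_size, reverse=True):
    size = f.stat().st_size
    target = next((m for m in BACKUP_MOUNTS if free_space[m] - size > HEADROOM), None)
    if target is None:
        print(f"No drive has room left for {f}")
        continue
    free_space[target] -= size
    plan.append((f, target))

# 3. Copy each file, preserving its directory layout relative to SOURCE.
for src, drive in plan:
    dest = drive / src.relative_to(SOURCE)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)
```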

watson387, (edited) in What to do with extra HDDs
@watson387@sopuli.xyz avatar

WTF is an extra HDD?
