I’m sure plenty will disagree with me, but unless you have specific needs, I’d suggest spending more time sourcing your media rather than relying on transcoding. Most popular content is available in a variety of formats, and Jellyfin will happily play it natively.
Also be aware that transcoding is VERY CPU-intensive unless you have a compatible GPU/transcoder. I run an ML110 with a 6-core Xeon (12 threads), and if Jellyfin needs to transcode something, it uses all of that and still stutters badly when seeking.
If you do need to transcode because you can’t source the media in a compatible format, you may want to use something like Tdarr to transcode it before you try to play it, so it’s ready when you are.
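Tdarr drives ffmpeg/HandBrake under the hood; as a rough illustration of what pre-transcoding means, a one-off conversion to a widely compatible H.264/AAC MP4 might look like this (filenames and quality settings are placeholders, adjust for your library):

```shell
# Hypothetical one-off pre-transcode to a broadly compatible format
# (H.264 video + AAC audio in MP4), so Jellyfin can direct-play it.
ffmpeg -i input.mkv \
  -c:v libx264 -preset slow -crf 20 \
  -c:a aac -b:a 192k \
  -movflags +faststart \
  output.mp4
```

`-crf 20` trades file size for quality (lower = better), and `-movflags +faststart` moves the index to the front of the file so playback can start before the whole file is read.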
What if the cloud provider corrupts your data in transfer or, worse, shuts down its servers without notification? It can and has happened.
For example, I had a cloud backup, went to retrieve it, and the server could no longer be found. That was with Dropbox, mind you. I lost 10 GB of important files because of it. Never trust just one backup source. Always have a secondary, just in case.
A mirror array is not a backup. Therefore, I would use at least one of those extra drives for a weekly backup of your data. You want some sort of non-real-time backup in case you get hit by ransomware, for example.
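A minimal sketch of such a non-real-time backup, assuming made-up paths (a real setup would more likely use rsync snapshots or a proper tool like borg/restic):

```shell
# Minimal dated-snapshot backup sketch; paths are placeholders.
# Each snapshot is an independent copy, so last week's snapshot
# survives ransomware that encrypts the live data.
weekly_snapshot() {
  src="$1"                    # live data directory
  dest="$2"                   # backup drive mountpoint
  stamp=$(date +%Y-%m-%d)     # one dated directory per run
  mkdir -p "$dest/$stamp"
  cp -a "$src/." "$dest/$stamp/"
}

# Example: weekly_snapshot "$HOME/data" /mnt/backupdrive
# Schedule it weekly via cron (Sundays at 03:00):
#   0 3 * * 0  /usr/local/bin/weekly-backup.sh
```

Keeping several dated snapshots (and pruning the oldest) also gives you a window to notice corruption before every copy is overwritten.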
My monitor status LED makes a faint beeping sound when in standby.
Also the panel makes an audible whine when the content changes to a specific amount of white.
As another user already said: Probably low quality electronics.
It wasn’t sensible, given the short life of DNA. One of those sci-fi ideas that caught media and technophile attention, but wasn’t ever going to go anywhere.
Project Silica appears to be attempting very high density, very long life storage, though.
I remember there being a water based storage solution for music that was under development, though it was said to drain entire ecosystems by doing so. Sad, as it seemed promising.
The truth of academia is that it is extremely slow. There are fewer than 20 minds in total on all of Earth working on this idea, separately, in different countries. And these 20 people are in their 20s, severely underpaid, don’t necessarily have all the resources they want, and science may not be their #1 life priority.
anyways:
Reading and writing DNA is error-prone, and those copying errors (mutations) are the main driver of evolution. You can imagine this is bad if you want to preserve the integrity of your data.
DNA storage would be okay if you were to, say, archive the entire internet for future generations, or genealogy records: things that do not need to be written or accessed quickly or often.
I recall watching a documentary (on Curiosity Stream maybe? I’m no longer subscribed) on data storage longevity. It covered DNA storage, for which I think this PBS video with transcript provides more recent coverage of developments, as well as holographic storage, which I could only find the Wikipedia page for.
As for which one I think might be the future, it’s tough to say. Tape is pretty good and cheap but slow for offline storage. Archival media will probably end up all being offline storage, although I could see a case for holographic/optical storage being near line. Future online storage will probably remain a tough pickle: cheap, plentiful, fast; select at most two, maybe.
May I ask: are you sure you need a media center with transcoding? Because it may be totally sufficient for you to access files through a file explorer and play them with VLC/mpv or whatever else. Having a media center is only really useful if you need external access to your media. I set all that stuff up once, then realized I never watch shows/movies on the go. And if I do, I know beforehand and can copy the raw files to the device I plan to watch on.
Oh yeah, I back up all configs four times a day. The good thing about torrenting is that even if I had a catastrophic loss, as long as I have the list of torrents, it should repopulate (assuming someone’s seeding).
Of course I also want to self-host my personal photos/videos, and I can’t afford to lose those. I’ll have to look into whether any solutions support local storage plus maybe object storage as a backup.
This would be my recommendation as well. Either a shuckable external drive or a standard 3.5" drive with a USB 3.0 enclosure so you have the option to slot the drives into a NAS or server in the future.
I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.
This will happen in any case unless you have enough capacity for redundancy.
What is in this 4TB drive? A Linux installation? A bunch of user data? Both? What kind of data?
The first step is to separate your concerns. If you had, say, a 20GiB Linux install, 10GiB of loose home files, 1TiB of movies, 500GiB of photos, 1TiB of games and 500GiB of music, you could back each of those up separately onto separate drives.
Now, it’s likely that you’d still have more data of one category than what fits on your largest external drive (movies are a likely candidate).
For this purpose, I use git-annex.branchable.com. It’s a beast to get into and set up properly, with plenty of footguns attached, but it was designed to solve problems like this elegantly.
One of the most important things it does is separate file content from file metadata, making the metadata available in all locations (“repos”) while the content can be present in only a subset, thereby achieving distributed storage. I.e. you could have 4TiB of file contents distributed over a bunch of 500GiB drives, but in each of those repos you’d have the full file tree available (metadata of all files + content of the present files), allowing you to manage your files from any place without having all the contents present (or even any). It’s quite magical.
Once configured properly, you can simply attach a drive, clone the git repo onto it, and then run git annex sync --content, and it’ll fill that drive up with as much content as it can, or until each file’s numcopies or other configured constraints are reached.
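Assuming an existing annex at ~/annex and a drive mounted at /mnt/drive1 (both placeholder paths), that attach-and-fill workflow looks roughly like this untested sketch; see the git-annex walkthrough for the real details:

```shell
# Hypothetical attach-and-fill session; paths are placeholders.
git clone ~/annex /mnt/drive1/annex   # clone the repo onto the new drive
cd /mnt/drive1/annex
git annex init "drive1"               # register this clone as a repo
git annex numcopies 2                 # require >= 2 copies of each file
git annex sync --content              # pull as much content as fits/is needed
```

The clone carries only the metadata (the full file tree); `sync --content` is what actually moves file contents onto the drive, respecting numcopies and any preferred-content expressions you’ve set.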