Unlimited* plans are always sold on the assumption that a sizeable part of the user base isn’t going to use anything close to an unlimited amount of the resource.
Unless there’s a contract locking in a fee over a set period, there isn’t much users can do to compel a provider to keep offering a service it no longer wants to offer.
Absolutely! But I don’t think that’s the point of contention here. The problem is the “abuse” rhetoric, since it’s not just incorrect but disingenuous to basically claim that the users did anything wrong here. They’re imposing limits because they miscalculated how many heavy users they could handle.
Again, that’s a completely reasonable move, but framing it as anything but a miscalculation on their part is just a dick move.
One easy solution might be to look into a self-hosted search engine. I’ve used mnoGoSearch in the past, which worked well for spidering a single domain, but it only created the database and didn’t have a web front end. Still, if you let it go crazy across your Nextcloud pages and add a search bar to your website, it could provide what you’re missing. They provided enough examples at the time for me to write my own search page pretty easily.
Thank you for this! I have sent this suggestion off to our web wizard; it looks extremely promising. We had wanted to attempt something like this but couldn’t find a foothold to get started!
Good luck! And don’t get stuck on the software I use; you may find something else better suited to your type of data. For example, if your content is wrapped up in PDFs or some kind of zipped files, then the best solution is one that can peer into those files to give search hits on the contained text. Of course, if your content is already mostly plain text, then pretty much any solution would work.
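If you want a minimal starting point without committing to mnoGoSearch, SQLite’s built-in FTS5 full-text index gets you surprisingly far. A rough sketch, assuming plain-text content; the directory, database name, and query are placeholders, not anything from this thread:

```python
# Minimal self-hosted full-text search sketch using SQLite FTS5.
# Assumes content is plain-text files under a local directory.
import sqlite3
from pathlib import Path

DB = "search.db"          # placeholder index location
CONTENT_ROOT = "./pages"  # placeholder content directory

def build_index():
    con = sqlite3.connect(DB)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(path, body)")
    con.execute("DELETE FROM docs")  # rebuild from scratch each run
    for f in Path(CONTENT_ROOT).rglob("*.txt"):
        con.execute(
            "INSERT INTO docs (path, body) VALUES (?, ?)",
            (str(f), f.read_text(errors="ignore")),
        )
    con.commit()
    con.close()

def search(query):
    con = sqlite3.connect(DB)
    # snippet() returns matching context; bm25() ranks hits
    # (lower scores are better matches in SQLite's FTS5)
    rows = con.execute(
        "SELECT path, snippet(docs, 1, '[', ']', '...', 10) "
        "FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
        (query,),
    ).fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    build_index()
    for path, context in search("nextcloud"):
        print(path, "->", context)
```

Wiring the `search()` results into a web page is then just a small form handler in whatever your site already runs.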
Speaking of archive.today: since yesterday or so, I’ve been getting nothing but Cloudflare challenge loops. I recall maybe four or so years back they were adamantly against Cloudflare, and if you used 1.1.1.1 for DNS, the site would refuse to load or throw errors. I wonder what’s happening behind the scenes?
I was surprised the prices aren’t even that much higher than single-actuator drives of the same size. I might pick a few of these up for my next capacity increase.
What’s the best way to make an offsite backup of 42 TB at this point with 20 Mbps of upload bandwidth? It would take over six months to upload while maxing out my connection.
Maybe I could sneakernet an initial backup, then replicate incrementally?
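For reference, the six-month figure holds up; a quick sanity check, assuming 42 TB decimal and a fully saturated 20 Mbit/s uplink:

```python
# Rough transfer-time estimate: 42 TB over a 20 Mbit/s uplink.
bits_total = 42e12 * 8   # 42 TB (decimal) in bits
rate_bps = 20e6          # 20 Mbit/s, assumed fully saturated
seconds = bits_total / rate_bps
print(f"{seconds / 86400:.0f} days, ~{seconds / 86400 / 30:.1f} months")
# -> 194 days, ~6.5 months (real-world overhead only makes this worse)
```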
Outside my depth, but I'll give it a stab. Identify what data is important (is the full 42 TB needed?). Can the data be split into easier-to-handle chunks?
If it can, then I'd personally do an initial sneakernet run to get the first set of data over, then mirror the differences on a regular basis.
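For the mirror-the-differences step, rsync is the usual tool, since after the sneakernet seed it only ships the deltas. A rough sketch driving it from Python; the host, paths, and chunk names are placeholders, not anything from this thread:

```python
# Sketch: incremental mirror of pre-defined chunks after the initial
# sneakernet seed. Host, paths, and chunk names are hypothetical.
import subprocess

REMOTE = "backup@offsite.example:/mnt/backup"  # placeholder destination
CHUNKS = ["photos", "media", "projects"]       # placeholder data chunks

for chunk in CHUNKS:
    # -a preserves metadata, --delete mirrors removals,
    # --partial lets interrupted transfers resume
    subprocess.run(
        ["rsync", "-a", "--delete", "--partial",
         f"/data/{chunk}/", f"{REMOTE}/{chunk}/"],
        check=True,
    )
```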
The m1/m2 looks like a PowerEdge R630 equivalent with v3/v4 CPUs, while the m4 uses Xeon Scalable, which is one generation newer. All of them are great systems, especially when maxed out. The m4, being the newest, is probably the best all-around choice.
From a power point of view, they’re gonna be “less” energy efficient than consumer DIY builds, in the sense that they’re highly dense systems meant to be run in a data centre alongside thousands of similar servers, packing as much punch into as little space as possible.
Another thing I’d be wary about is noise… 1U means you’re stuck with itty bitty tiny fans that need to spin very quickly and make a lot of noise, should your components heat up. Again, that whole data centre high density thing… noise isn’t something they’re optimizing for.
I know, stupid question to ask a datahoarder. My point is that archiving all of Instagram or Threads would be impossible even for ArchiveTeam, much less a single person. Are these random posts, or ones you care about?
What if the cloud provider corrupts your data in transfer, or worse, shuts down its servers without notice? It can and has happened.
For example, I had a cloud backup, went to retrieve it, and the server could no longer be found. That was with Dropbox, mind you. I lost 10 GB of important files because of it. Never trust just one backup destination; always have a secondary just in case.
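One cheap guard against the silent-corruption case is keeping a checksum manifest alongside the backup and re-verifying it on a schedule. A minimal sketch; the manifest name and data root are placeholder paths:

```python
# Sketch: detect silent corruption by comparing stored SHA-256 hashes
# against a fresh scan. Manifest path and data root are hypothetical.
import hashlib
import json
from pathlib import Path

ROOT = Path("/mnt/backup")        # placeholder backup location
MANIFEST = Path("manifest.json")  # hash manifest written at backup time

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def scan():
    return {str(p.relative_to(ROOT)): hash_file(p)
            for p in ROOT.rglob("*") if p.is_file()}

if MANIFEST.exists():
    old = json.loads(MANIFEST.read_text())
    new = scan()
    for name, digest in old.items():
        if new.get(name) != digest:
            print("MISMATCH or MISSING:", name)
else:
    MANIFEST.write_text(json.dumps(scan(), indent=2))
```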