An F&P (Fisher & Paykel, a Haier brand) induction range was on our short list as an upcoming replacement for our aging gas range. It is now off the short list. Not sure how many API calls an $8,000 range would have paid for, but I'm sure they'll be happy to know my HA server won't be pinging them any time soon.
I hate how cease-and-desist letters are essentially blackmail. Even if you did nothing wrong, you can still get fucked over by the costs of a potential legal battle.
It's a bigger problem in the States than elsewhere. In the US, awarding legal costs is the exception, not the norm, so someone with a lot of money and access to lawyers can basically intimidate a defendant into avoiding court. In the rest of the world, courts are much more likely to award costs to a defendant who has done nothing wrong - if you file a frivolous lawsuit and lose, you'll probably have to pay the costs of the person you tried to sue.
This guy’s in Germany, so I think he’d be alright if he clearly won. The issue, however, is that courts aren’t really equipped for handling highly technical cases and often get things wrong.
Based on the verbiage of the threat from Haier, it kinda sounds like they don't have a leg to stand on, short of the sheer financial cost of fighting a blatantly bullshit lawsuit should they file one. The TOS isn't the law, so demanding the devs "cease all illegal activities" means nothing here.
You are right, TOS isn't the law. However, businesses will try to trick you with this technique, especially if they don't think you have any legal support. Something doesn't stop being a crime just because the victim agreed to it; no amount of contracts negates that. Employers often pull this trick to force employees to accept illegal practices.
The person hosting and publishing the code may never have agreed to the TOS, so they can't be bound by it. They can also revoke their agreement and no longer have to comply with it. However, continued use of the business's web services likely requires agreeing to the TOS, and this plugin may be using those web services to make the plugin work.
You can find a password-checking utility on haveibeenpwned.com (the tool doesn't send your password to the server, only the first 5 characters of the password's SHA-1 hash, which is very safe). There are CLI tools on GitHub you can use to bulk-test passwords. They also provide a downloadable list of hashes.
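If you want to script it yourself, the range API is simple enough to call directly. A minimal Python sketch (the api.pwnedpasswords.com endpoint and its response format are real; the rest is just illustration):

```python
import getpass
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (HIBP)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-char hash prefix is sent; the server answers with every
    # known hash suffix sharing that prefix (k-anonymity), and the match
    # happens locally on your machine.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    n = pwned_count(getpass.getpass("Password to check: "))
    print(f"seen {n} times in breaches" if n else "not in any known breach")
```

Only the 5-character prefix ever goes over the wire; the full hash, let alone the password, never leaves your machine.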
Alternatively, check if your password manager has a built-in tool for checking passwords against known breach databases.
Alternatively, just start changing passwords, regardless of whether they're in the breach. Prioritize the ones with financial information, then the ones with personal info, then the ones you visit frequently, versus some shitty site you visited once that made you make an account back in 2011, etc.
I know that’s a lot of accounts for some people but you don’t have to do them all at once. Go reset a password or two on a site today at lunch. Then do another one tomorrow. And a few the next day.
I actually remember reading about an app or feature on a password manager that would do something like this. Rather than bark at you to reset 100 different accounts at once, it would just give you 1 or 2 random accounts a day to go reset the password on.
What's more insane is that some of the passwords in those lists are from still-live intrusions that companies haven't acted on. For example, my Dropbox password is in there, and that's a new password I gave them just a few months ago, before I deleted my account.
This is the great thing about FOSS. Someone else will just take the code and reupload it. If they want it removed from GitHub, they can deal with Microsoft. At which point it’ll just be re-uploaded again. There’s nothing illegal about it.
So Haier suffers the Streisand effect, and the people who want to keep using it simply do.
Right… they claim hosting it is a violation of their TOS, but I'm not one of their customers. How can I violate their TOS if I don't even use their product?
Well, the issue here is that even though your backup may be physically in a different location (you can ask to have your S3 backup storage hosted in a different datacenter than the VMs), if the servers the services (VMs or S3) run on are managed by the same technical entity, then a ransomware attack on that company can affect both services.
So, get S3 storage for your backups from a completely different company?
I just wonder to what degree this will impact the bandwidth usage of your VM if, say, you do a complete backup of your VM every day to a host that will be considered "off-premises".
If you back up your VM data to the same provider you run your VM on, you don't have an "off-site" backup, which is one criterion of the 3-2-1 backup rule (3 copies, on 2 different media, with 1 off-site).
But man, I'll be able to work through all those TODO items that have been accumulating over the last 12 months and fix all those issues while rebuilding my RAID.
I mean, that's only if my Git repos aren't hijacked during the ransomware attack.
And I mean, I'll probably just push the same config to my server and send it on its merry way again.
Well, based on Samsy's advice, take a backup of your home-server network to a NAS on your home network. (I do hope that your server segment and your home segment are two separate networks, no?) Or better, set up your NAS at a friend's house (and require MFA or a hardware security key to access it remotely).
I’m more worried about what’s going to happen to all the self-hosters out there whenever Cloudflare changes their policy on DNS or their beloved free tunnels. People trust those companies too much. I also did at some point, until I got burned by DynDNS.
We start paying for static IPs. If Cloudflare shuts down overnight, a lot of stuff stops working, but no data is lost, so we can get it back up with some work.
They're just creating a situation where people forget how to do things without a magic tunnel or whatever. We've seen this with other things, and proof of this is the fact that you're suggesting you'll require a static IP when in fact you won't.
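To be concrete: a tiny dynamic-DNS updater on a cron schedule replaces a static IP for most self-hosting setups. A hedged sketch in Python (api.ipify.org is a real echo-your-IP service; the DNS provider URL and token are made-up placeholders for whatever update API your DNS host actually exposes):

```python
import urllib.request

# Made-up placeholders: swap in your DNS host's actual update API and token.
UPDATE_URL = "https://dns.example.com/api/update"
API_TOKEN = "changeme"
HOSTNAME = "home.example.com"

def current_ip() -> str:
    # api.ipify.org answers with your public IP as plain text.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def update_record(ip: str) -> None:
    # Hypothetical DynDNS-style call: point HOSTNAME at the current IP.
    req = urllib.request.Request(
        f"{UPDATE_URL}?hostname={HOSTNAME}&ip={ip}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    urllib.request.urlopen(req).close()

if __name__ == "__main__":
    update_record(current_ip())  # run from cron every few minutes
```

Most registrars and DDNS services expose some variant of this update call.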
Where I live, many ISPs that use CG-NAT only hand out public IPs as paid static IPs, so without paying there's no public address for a dynamic DNS record to point at. But of course there are other options as well. My point was that the other options don't disappear.
Though I do get the point that Cloudflare aren't giving away something for nothing. The main reason, to me, is to get hobbyists using it so they start using it (on paid plans) in their work, or otherwise get people to upgrade to paid plans. However, the "give something away for free until they can't live without it, then force them to pay" model is pretty classic in tech by now.
Yes, this is a problem, and a growing one, like a cancer. These new self-hosting and software-development trends are essentially someone reconfiguring and mangling development and sysadmin learning, tools, and experience to the point where people are required to spend more than ever for no reason other than profit.
Most popular backup programs encrypt backups, either by default or as an option (restic, borg, duplicati, Veeam, etc.). So that takes care of someone else getting their hands on your backup data.
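restic, for instance, encrypts client-side with a key you choose when you init the repository, so only ciphertext ever reaches the storage provider. A minimal wrapper script along these lines (the repo URL, paths, and credentials are made-up placeholders; the env vars are restic's real ones):

```python
import os
import subprocess

# Made-up repo URL and credentials; RESTIC_* and AWS_* env vars are real.
env = {
    **os.environ,
    "RESTIC_REPOSITORY": "s3:s3.example.com/my-backups",
    "RESTIC_PASSWORD": "correct horse battery staple",  # the encryption key
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "...",
}

# One-time `restic init` creates the encrypted repository; after that,
# run a backup on whatever schedule suits you.
subprocess.run(
    ["restic", "backup", "/home", "--exclude", "/home/*/.cache"],
    env=env,
    check=True,
)
```

Lose the RESTIC_PASSWORD and the backups are unrecoverable, so keep it somewhere other than the machine being backed up.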
I never store my actual files on a cloud service, only encrypted backups.
For local data on my devices: my laptop is encrypted with BitLocker, and my Android phone is by default. My desktop at home is not, though.
Indeed. Whatever you put in a cloud needs backups. Not only at the cloud provider, but also “at home”.
There was a case of a cloud provider shutting down a few months ago. The provider informed their customers, but only the accounting departments that were responsible for the payments. And several of those companies' accounting departments did not really understand the message beyond "this no longer needs to be paid".
So for the rest of the company, the service went down hard after a grace period, when the provider deleted all customer files, including the backups…
The real issue here is backups vs disaster recovery.
Backups can live on the same network. Backups are there for the day to day things that can go wrong. A server disk is corrupted, a user accidentally deletes a file, those kinds of things.
Disaster recovery is what happens when your primary platform is unavailable.
Your cloud provider getting taken down is a disaster recovery situation. The entire thing is unavailable. At this point you’re accepting data loss and starting to spin up in your disaster recovery location.
The fact they were hit by crypto is irrelevant. It could have been an earthquake, flooding, terrorist attack, or anything, but your primary data center was destroyed.
Backups are not meant for that scenario. What you’re looking for is disaster recovery.
On the other hand, most of the disaster scenarios you mention are solved by geographic redundancy: set up your backup / DR storage in a datacenter far away from the primary service. A scenario where all services, in all datacenters managed by a cloud provider, are impacted is probably new.
It is something that, considering the current geopolitical situation we are in now (and which I assume will only get worse), we had better keep in the back of our minds.
It should be obvious from the context here, but you don’t just need geographic separation, you need “everything” separation. If you have all your data in the cloud, and you want disaster recovery capability, then you need at least two independent cloud providers.
I have been looking into a way to copy files from our servers to our S3 backup storage without having the access keys stored on the server (as I think we can assume those will be one of the first things the ransomware toolkits will be looking for).
Perhaps a script on a remote machine that initiates an SSH session to the server and does an "s3cmd put" with the keys entered from stdin? So far, I have not found how to do this.
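The closest I've come up with is something like this (untested sketch; hostnames and paths are made up, and note that local-to-S3 upload is "s3cmd put", while "cp" is for S3-to-S3). The idea is to keep the keys on the trusted machine and feed s3cmd a throwaway config over the SSH channel, so they never touch the server's disk or show up in its argv:

```python
import getpass
import subprocess

# Hypothetical hosts/paths; adjust to your setup.
SERVER = "backup@server.example.com"
SOURCE = "/var/backups/nightly.tar.gz"
DEST = "s3://my-backups/nightly.tar.gz"

# Keys are entered (or pulled from a vault) on the trusted machine only.
access_key = getpass.getpass("S3 access key: ")
secret_key = getpass.getpass("S3 secret key: ")
s3cfg = f"[default]\naccess_key = {access_key}\nsecret_key = {secret_key}\n"

# ssh forwards our stdin to the remote command, so s3cmd reads its config
# from the SSH channel instead of a file on the server's disk, and the keys
# never appear in the remote process list.
subprocess.run(
    ["ssh", SERVER, f"s3cmd -c /dev/stdin put {SOURCE} {DEST}"],
    input=s3cfg.encode(),
    check=True,
)
```

I haven't verified that s3cmd will read its config from /dev/stdin; if it won't, writing it to a tmpfs file on the server and deleting it right after the run would be the fallback.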