Would an encrypted backup using something like Cryptomator or rsync fit your needs? It would let you use the cheaper cloud services without letting them see the content of your files.
I've been using Kagi since September as well and I can only recommend giving it a try.
Being able to personalize search rankings is something I definitely wouldn't want to miss anymore.
One thing I'd like to mention is that they're also really great at listening to feedback. For example, they recently added an indicator for potentially paywalled sites to the search results because users suggested it.
I use mega.nz. £50 a year for 400GB and it's encrypted with your private key. The Linux support is really good with a nice sync, file browser extension, access via web etc.
I've been using Kagi for a little over a month now and I would not want to go back. As a matter of fact, I switched to their yearly paid plan less than a week after starting the free trial. I was hooked.
Calyx with microG does have benefits, but it isn't quite as good as Graphene's sandboxed approach, and it also lacks some of the other de-Googling and security features Graphene has.
It depends on your threat model. If you simply want fewer targeted ads, there is a benefit. If you are a journalist under a dictatorship, there is little to no benefit.
This is in the UK, and about all benefits, not just pensions, but yeah, your hunch isn't far off: this is being implemented out of sheer cruelty, not for any justifiable financial reason.
AI really did that thing where you repeat a word so often that it loses meaning and the rest of the world eventually starts to turn to mush.
Jokes aside, I think I know why it does this: a stupidly easy prompt lets it rack up a huge amount of reward, and once it accumulates enough, it's no longer bound by the reward function and simply does whatever the easiest way to keep gaining points is. In this case, that's regurgitating its training data rather than the usual "machine learning" obfuscation it normally applies. Maybe repeating a word over and over produces an exponentially rising score until it eventually hits +INF, effectively disabling the reward function? Seems a little contrived, but it's an avenue worth investigating.
I watched a video from a guy who used machine learning to play Pokemon, and he did a great analysis of the process. The most interesting part to me was how small changes to the reward system could produce such bizarre and unexpected behavior. He gave out rewards for exploring new areas by taking a screenshot after every input and comparing it against every previous one. Suddenly the agent became very fixated on a specific area of the game, and he couldn't figure out why. It turned out there were both flowers and water animating in that area, so it triggered a lot of rewards without actually exploring. The AI literally got distracted looking at the beautiful landscape!
Anyway, that example helped me understand the challenges of this sort of software design. Super fascinating stuff.
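The failure mode described above is easy to reproduce. Here's a minimal Python sketch (my own illustration, not the video author's actual code) of a screenshot-novelty reward: frames are tiny pixel grids, and the agent earns a point whenever the current frame differs enough from every frame seen so far. An "animated water" tile that cycles a few pixels each step keeps generating reward even though the agent never moves.

```python
# Minimal sketch of a screenshot-novelty reward. Frames are tuples of
# pixel values; the function names and threshold are illustrative
# assumptions, not from the video.

def frame_diff(a, b):
    """Count pixels that differ between two equal-sized frames."""
    return sum(1 for x, y in zip(a, b) if x != y)

def novelty_reward(frame, seen, threshold=1):
    """Reward 1 if the frame differs from every stored frame by more
    than `threshold` pixels; otherwise 0."""
    if all(frame_diff(frame, s) > threshold for s in seen):
        seen.append(frame)
        return 1
    return 0

seen = []
base = (0,) * 9                       # a static 3x3 "screen"
print(novelty_reward(base, seen))     # first frame is always novel -> 1

# An "animated water" tile: the first three pixels cycle every step,
# so each frame looks novel even though nothing was explored.
for t in range(1, 4):
    wobble = tuple((i + t) % 5 if i < 3 else 0 for i in range(9))
    print(novelty_reward(wobble, seen))   # spurious reward every step
```

Hashing frames or down-sampling before comparison doesn't fix this either; any purely pixel-based novelty signal will happily reward staring at an animation loop.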