One option is a live CD or a VM; in either case you can restore the state after each session. Facebook still records your IP, timestamps, browser details, and everything you interact with on the site, but no new identifiers or cookies will persist on your device, since it is wiped each time.
Is your suggestion to self host your email or not use email? I’m not sure why you couldn’t find a company that you do trust, and proton seems to be one of the most likely candidates.
I suppose that’s a fair point, although I thought they didn’t have access to your data in terms of email content. I agree with the point about not putting all your eggs in one basket but I’d seriously consider them for email only.
@possiblylinux127 Okay, but self-hosting Posteo-style on a VPS? The VPS company can still see your mail if it's plain IMAP, and could hand it to the government as well.
I mean, yeah, everybody has their own threat model, so go for it; just understand what you will be giving up. Assuming you're not recreating accounts, if you want to isolate it further I would recommend using everything Meta-related through a privacy-friendly browser with ad/tracking protection, as well as an always-on VPN or DNS solution that has an option to block ads and trackers.
Edit: If you want a simple addition, something like TrackerControl might be another useful tool.
I use Whatsapp in that exact scenario that you are describing. It’s an old Android phone connected to wi-fi and the phone is never touched for anything outside of Whatsapp.
Having it set up that way, or for you with FB/IG: if there is no SIM in the phone, you don't install anything else, and there are no contacts on the phone, you can effectively use it anonymously.
I suggest you use an Android phone so you can install from downloaded APK files: disable the Play Store and similar network services, then maybe once a month download the latest APK and install it to update. That's how I update WhatsApp, since I have all Google services disabled on the phone.
I share with my wife and just got the family plan. It’s overkill probably but it makes it simpler and I don’t have to think about 2 separate subscriptions.
FYI: blockchain is only such a power waster because, for cryptocurrency use, miners churn out new blocks continuously as if there were no tomorrow.
Here, your public key changes relatively rarely. If you've had your Protonmail account for years, it probably hasn't changed even once.
Maybe I’m wrong in this, but this seems to be similar to what Keybase was doing, and that was a cool idea!
Blockchains are an immutable ledger, meaning any data initially entered onto them can’t be altered. Yen realized that putting users’ public keys on a blockchain would create a record ensuring those keys actually belonged to them – and would be cross-referenced whenever other users send emails. “In order for the verification to be trusted, it needs to be public, and it needs to be unchanging,” Yen said.
That I don’t know the answer to, and I would like more information about how it works. I am mostly familiar with how crypto blockchains work, and I still wouldn’t say I fully understand that either.
I am also a little confused when they say unchanging. Sure, blockchains are unchanging, but I assume you can add new data that takes priority over old data. I don’t think you would want a system where you can never change your key once you add it; that would be a bad design, because keys can and will get compromised eventually.
But in general, if Google can’t read it, few eyeballs will ever see it.
You bring up a good point. The Internet is full of spider bots that crawl the web to index it and improve search results (e.g. Google). In my case, I don’t want any comment I post here, or on big platforms like Reddit, Twitter, or LinkedIn, to be indexed. But I still want to be part of the conversation. At the very least, I would like to have the choice of whether or not any text I publish online is indexed.
@queermunist@moreeni I have to disagree. The plagiarism claims are unfounded, as the AIs are making their own artwork based on what they have learned, usually starting from noise and de-noising it into something that matches their learned associations with the keywords. In the case of the generative art AIs, anyway.
While there can be valid arguments against copyrighted material being used to train the AIs, plagiarism is not one of them.
Far be it from me to defend the concept of intellectual property, but if a chat bot can be argued to not plagiarize then that implies it has an intelligence. It really doesn’t. It’s plagiarism with extra steps.
It's illegal if you copy-paste someone's work verbatim. It's not illegal to, for example, summarize someone's work and write a short version of it.
As long as overfitting doesn't happen and the machine learning model actually learns general patterns instead of memorizing training data, it should be perfectly capable of generating data that's not copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?
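The verbatim-vs-summary distinction can even be checked mechanically. A crude (and admittedly simplistic) heuristic is word n-gram overlap: a copy-paste shares nearly all of its long n-grams with the source, while a genuine summary shares almost none. The texts below are made up for illustration:

```python
# Crude "verbatim copying" detector: what fraction of the candidate's
# word 5-grams also appear in the source?
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(source: str, candidate: str, n: int = 5) -> float:
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

source = ("the quick brown fox jumps over the lazy dog "
          "while the farmer watches from the old wooden fence")
verbatim = source  # a pure copy-paste
summary = "a fox jumps over a dog as a farmer watches"

print(overlap_ratio(source, verbatim))  # 1.0: every 5-gram matches
print(overlap_ratio(source, summary))   # 0.0: no shared 5-grams at all
```

Real plagiarism detection is far more sophisticated, but the point stands: a summary that shares no long word sequences with its source is not a verbatim copy.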
All these AIs do is algorithmically copy-paste. They don’t have original thoughts, original conclusions, or original ideas; all of it is just copy-paste with extra steps.
Learning is, essentially, "algorithmically copy-paste". The vast majority of things you know, you've learned from other people or other people's works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.
And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand new information derived from old stuff.
I've tortured various LLMs with short stories, questions and riddles, which I've written specifically for the task and which I've asked the models to explain or rewrite. Surprisingly, they often get things either mostly or absolutely right, despite the fact it's novel data they've never seen before. So, there's definitely some actual learning going on. Or, at least, something incredibly close to it, to the point it's nigh impossible to differentiate it from actual learning.
Not once did I claim that LLMs are sapient, sentient or even have any kind of personality. I didn't even use the overused term "AI".
LLMs, for example, are something like... a calculator. But for text.
A calculator for pure numbers is a pretty simple device all the logic of which can be designed by a human directly.
When we want to create a solver for systems that aren't as easily defined, we have to resort to other methods. E.g. "machine learning".
Basically, instead of designing all the logic entirely by hand, we create a system that can end up in a finite yet still near-infinite number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can't even break down into building blocks, due to the sheer complexity of the given system (such as a natural language).
And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input, and that particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that "apple slices + batter = apple pie", assuming it has been tuned (aka trained) right.
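The calculator analogy can be made concrete with a toy model. Here a one-line "model" (y = a·x + b) is fitted to a few training points and then queried with an input that was deliberately left out of the training set; the answer is derived from the learned parameters, not memorized:

```python
# Toy version of the argument: fit y = a*x + b to training points by
# ordinary least squares, then predict an input the model never saw.
def fit_line(points):
    """Ordinary least squares for a line through (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Training data follows y = 2x + 1; x = 3 is deliberately left out.
train = [(0, 1), (1, 3), (2, 5), (4, 9)]
a, b = fit_line(train)
print(round(a * 3 + b))  # predicts 7 for the unseen input x = 3
```

An LLM is astronomically more complex than this, of course, but the principle is the same: the output for unseen input comes from tuned parameters that capture a general pattern, not from a lookup of stored answers.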
I don’t think it’s a question to “hate” AI or not. Personally, I have nothing against it.
As always with privacy, it’s a matter of choice: when I publish something online publicly, I would like to have the choice of whether or not this content is going to be indexed or used to train models.
It’s a dual dilemma. I want to benefit from the hosting and visibility of big platforms (Reddit, LinkedIn, Twitter etc.) but I don’t want them doing literally anything with my content because lost somewhere in their T&C it’s mentioned “we own your content, we do whatever tf we want with it”.
So PM claims it has on the order of 10^8 users. Let’s assume each user has one email address with one public ed25519 key, both of which are likely false.
Each key is 32 bytes; 32 B × 10^8 = 3.2 GB.
Could someone do the math how much fiat it’d take to store such an enormous amount of data on the Ethereum or monero blockchains?
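A rough attempt at that math, with the caveat that the gas price and ETH price below are assumptions picked purely for illustration. The 20,000 gas figure is Ethereum's cost for an SSTORE writing a fresh 32-byte storage slot (per the yellow paper's fee schedule); Monero has no general-purpose contract storage, so this only covers the Ethereum case:

```python
# Back-of-the-envelope cost of storing 10^8 32-byte keys in Ethereum
# contract storage. Gas price and ETH/USD rate are ASSUMPTIONS.
users = 10**8
key_bytes = 32

total_bytes = users * key_bytes
print(total_bytes / 1e9)          # 3.2 GB of raw key material

gas_per_slot = 20_000             # SSTORE to a previously empty 32-byte slot
total_gas = users * gas_per_slot  # 2e12 gas in total

gwei_per_gas = 20                 # assumed gas price (varies wildly)
eth_price_usd = 2_000             # assumed ETH price
cost_eth = total_gas * gwei_per_gas / 1e9  # 1 ETH = 1e9 gwei
print(cost_eth)                   # 40,000 ETH
print(cost_eth * eth_price_usd)   # ~$80 million at the assumed prices
```

So on the order of tens of thousands of ETH just for the one-key-per-user case, before accounting for key rotations, multiple addresses per user, or transaction overhead; which suggests why a dedicated or private chain would be more plausible than a public one.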