touzovitch (@touzovitch@lemmy.ml)

touzovitch,

r3d4kt-U2FsdGVkX1/lGJZ5fHhIJPQ8w7fdKIrvJKGa4C6hVzgxa99BNXMr7LQFL9Rur05EFVITe2pREZaianyq1F5k4dQEovbUKXWwjoj7R2ZXmu3z836vItVgTHh/Wen4p0pp&&&

touzovitch, (edited )

Captcha was just an example :-)

What I’m trying to say is that any small change we add to the extension will have little (or no) effect on real users, but will force the scrapers to adapt. That might require significant human and machine resources to collect data at a massive scale.

EDIT: And thank you for your feedback <3

touzovitch, (edited )

You are absolutely right! Using a single public encryption key cannot be considered secure. But it is still better than having your content in the clear.

I intend to add more encryption options (sharable custom key, PGP), that way users can choose the level of encryption they want for their public content. Of course, the next versions will still be able to decrypt legacy encrypted content.

In a way, it makes online Privacy less binary:

Instead of having an Internet where we choose to have our content either “public” (in clear) or “private” (E2E encrypted), we have an Internet full of content encrypted with heterogeneous methods of encryption (single key, custom key, key pairs). It would be impossible to scale data collection at this rate!

touzovitch,

I don’t think AI is bad as a whole. At least I would like to choose whether or not the content I post online can be used to train models.

touzovitch,

Slowing them down and preventing them from scaling is actually not that bad. We are in the context of public content accessible to anyone, so by definition it cannot be bulletproof.

Online Privacy becomes less binary (public vs private) when the internet contains content encrypted using various encryption methods, making it challenging to collect data efficiently and at scale.

Thank you so much for your comment though <3

touzovitch,

You have a point. Or even malicious links!

We have to be careful with the decrypted output. Redakt is an open source and collaborative project, just saying… 😜

touzovitch, (edited )

You’re right. “Securing” is a bad word; “obfuscating” might be more appropriate. I actually had the same feedback from Jonah of Privacy Guides.

I use AES encryption with a single public key at the moment. That way, if I want to give users the option to encrypt with a custom key, I won’t have to change the encryption method.
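As a rough sketch of that single-shared-key flow (this is not Redakt’s actual implementation: it uses the third-party cryptography package’s Fernet as a stand-in for the AES mode, and the “bundled” key is generated on the fly just for the example):

```python
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC under the hood

# Hypothetical stand-in for the single key shipped with every install.
SHARED_KEY = Fernet.generate_key()
MARKER = "r3d4kt-"  # prefix tagging encrypted content

def redact(text: str) -> str:
    """Encrypt text with the shared key and tag it with the marker."""
    return MARKER + Fernet(SHARED_KEY).encrypt(text.encode()).decode()

def reveal(blob: str) -> str:
    """Decrypt a tagged blob; anyone running the extension holds the key."""
    if not blob.startswith(MARKER):
        raise ValueError("not redakted content")
    return Fernet(SHARED_KEY).decrypt(blob[len(MARKER):].encode()).decode()

print(reveal(redact("Captcha was just an example :-)")))
```

Swapping in a custom key later only changes where SHARED_KEY comes from; the encrypt/decrypt calls stay the same.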

EDIT: Editing the title of this thread ̶P̶r̶o̶t̶e̶c̶t̶

touzovitch, (edited )

Exactly!

For example, here’s a Medium article with encrypted content: https://redakt.org/demo/

touzovitch,

but in general, if google can’t read it–few eyeballs will ever see it.

You bring up a good point. The Internet is full of spider bots that crawl the web to index it and improve search results (e.g. Google). In my case, I don’t want any comment I post here or on big platforms like Reddit, Twitter or LinkedIn to be indexed. But I still want to be part of the conversation. At least I would like to have the choice whether or not any text I publish online is indexed.

touzovitch,

But on topic: I see the same problem as with link shorteners. One single service or extension disappears and all good content or links are gone.

Not exactly. The extension is open source so even if the official extension is gone, you would still be able to decrypt previously “redakted” content.

touzovitch,

😂😂😂

touzovitch, (edited )

But why? Why do you people hate AI so much?

I don’t think it’s a question of “hating” AI or not. Personally, I have nothing against it.

As always with Privacy, it’s a matter of choice: when I publish something online publicly, I would like to have the choice whether or not this content is going to be indexed or used to train models.

It’s a dual dilemma. I want to benefit from the hosting and visibility of big platforms (Reddit, LinkedIn, Twitter etc.) but I don’t want them doing literally anything with my content because, buried somewhere in their T&Cs, it says “we own your content, we do whatever tf we want with it”.

touzovitch,

Image injection is something I will need to stress-test.

touzovitch,

What do you mean by non private platforms?

In this POC, you can only encrypt content using Redakt’s public key. That way you are guaranteed to see the content since the key is already installed in the extension.

I intend to add the option to encrypt with a custom sharable key in the v.2.
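A custom sharable key could, for instance, be derived from a passphrase the author shares out of band. This is only an illustration of the idea (the KDF, salt and iteration count here are invented for the example, not Redakt’s design):

```python
import base64
import hashlib

from cryptography.fernet import Fernet

def key_from_passphrase(passphrase: str, salt: bytes = b"demo-salt") -> bytes:
    """Derive a Fernet-compatible key from a shared passphrase via PBKDF2."""
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

# The author encrypts with a key derived from the passphrase...
author_key = key_from_passphrase("correct horse battery staple")
token = Fernet(author_key).encrypt(b"only key-holders can read this")

# ...and any reader who was given the passphrase derives the same key.
reader_key = key_from_passphrase("correct horse battery staple")
print(Fernet(reader_key).decrypt(token).decode())
```

The point is that two parties never exchange the key itself, only the passphrase, and everyone else sees ciphertext.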

touzovitch, (edited )

Thank you 😊

I actually thought about this. Adapting the same approach to other kinds of content like image, audio or video would be a game changer!!

Imagine uploading videos to YouTube that only viewers with a key would be able to understand!

But it is a challenge, as it might require advanced knowledge in image and audio processing.

touzovitch, (edited )

You’re right, app traffic is something we’ll need to crack. But as a first step, any traffic going through a web browser is already significant.
