I think that only works for text. The method for restoring blurred text is to render and blur a lot of candidate strings and compare them to the blurred image. That's probably not a thing with faces, I guess…
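Something like this, as a rough sketch of the candidate-matching idea. It assumes you already know the font, the text position, and the blur radius, and uses a brute-force digit search and MSE comparison purely for illustration; it is not a real deblurring tool.

```python
# Toy "render candidates, blur them, compare" attack on blurred text.
# Assumptions (all illustrative): known font, known placement, known blur radius,
# and a small candidate space (4-digit strings).
from itertools import product

import numpy as np
from PIL import Image, ImageDraw, ImageFilter, ImageFont

FONT = ImageFont.load_default()
SIZE = (64, 16)
BLUR_RADIUS = 3

def render_blurred(text: str) -> np.ndarray:
    """Draw `text` on a white canvas and blur it the same way the target was blurred."""
    img = Image.new("L", SIZE, 255)
    ImageDraw.Draw(img).text((2, 2), text, fill=0, font=FONT)
    return np.asarray(img.filter(ImageFilter.GaussianBlur(BLUR_RADIUS)), dtype=float)

def best_match(target: np.ndarray, length: int = 4) -> str:
    """Brute-force all digit strings of a given length, keep the closest match."""
    candidates = ("".join(d) for d in product("0123456789", repeat=length))
    return min(candidates, key=lambda s: np.mean((render_blurred(s) - target) ** 2))

# Example: "recover" a blurred 4-digit code that we generated ourselves.
secret = render_blurred("4217")
print(best_match(secret))  # prints 4217
```

The whole trick only works because the candidate space is small and the rendering pipeline can be reproduced exactly, which is why it applies to text and not to faces.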
That does sound more effective. If you rely on blurring, you really have to trust that the algorithm cannot be reverse engineered. Removing the data outright seems more certain than just transforming it somehow.
I recall a story of a pedophile being caught because they posted pictures with a radial warp applied to the face. It wasn't too hard for law enforcement to code a filter that undoes the radial warp; they instantly saw the original face and were able to identify and lock away the creep.
To my knowledge, it’s kind of hard to quantify exactly how much information is lost with a normal blurring algorithm (Gaussian, box, etc.), but it’s usually less than you think. There are certain edge cases where no information is lost at all and the original image can be perfectly reconstructed if it’s simple enough. Even if it’s a normal photo of something complex, a deconvolution algorithm can work seemingly impossible magic on a blurry image without the need for an AI that will hallucinate details.
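As a minimal sketch of what deconvolution does, here is a toy Wiener deconvolution in plain NumPy: blur a test image with a known Gaussian kernel, then largely undo it in the frequency domain. The kernel size, sigma, regularisation constant K and the synthetic test image are all arbitrary choices for the example, not anyone's actual pipeline.

```python
# Blur with a known Gaussian PSF, then invert the blur with a Wiener filter.
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def wiener_deconvolve(blurred: np.ndarray, kernel: np.ndarray, K: float = 1e-3) -> np.ndarray:
    """Divide out the blur in the Fourier domain, damped by K to keep noise under control."""
    H = np.fft.fft2(kernel, s=blurred.shape)        # transfer function of the blur
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G   # Wiener estimate of the original spectrum
    return np.real(np.fft.ifft2(F_hat))

# Simple synthetic "photo": a bright square on a dark background.
image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0

psf = gaussian_kernel(15, sigma=3.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))

restored = wiener_deconvolve(blurred, psf)
# The error should shrink noticeably after deconvolution because most of the
# image's energy sits at frequencies the Gaussian blur only attenuates, not erases.
print("mean error after blur:   ", np.abs(blurred - image).mean())
print("mean error after deconv: ", np.abs(restored - image).mean())
```

The catch, of course, is that you need a decent estimate of the blur kernel; with a standard Gaussian or box blur that estimate is not hard to make.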
On the other hand, pixelating part of an image provably removes a large amount of information from that section, and no algorithm will be able to de-pixelate it without hallucinating details. Using one big box over the whole area is the absolute best, because it just deletes all the information from that part of the image (see the sketch below).
ETA: the problem is a lot worse in videos, because you can combine multiple frames with different offsets to reconstruct a higher-quality image even if it’s pixelated.
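Here is a small sketch of why pixelation is genuinely lossy: block averaging maps many different source regions to the same output, so there is nothing for a deblurring algorithm to invert. The block size, image size and the way the second region is constructed are arbitrary choices for the demonstration.

```python
# Pixelation as block averaging: many distinct inputs collapse to the same output.
import numpy as np

def pixelate(region: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean value."""
    h, w = region.shape
    out = region.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = region[y:y + block, x:x + block].mean()
    return out

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = rng.random((32, 32))

# Keep b's fine detail but give it a's per-block means: a completely different
# image that pixelates to exactly the same result as a.
a_pix = pixelate(a)
b_matched = b - pixelate(b) + a_pix
print(np.allclose(pixelate(b_matched), a_pix))  # True: indistinguishable once pixelated
```

For single images that many-to-one collapse is exactly what you want from redaction; the video caveat above is that each frame collapses the region slightly differently, which leaks information back.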
They had to ask Adobe, if I recall correctly. Which does mean it isn't as easy as it sounds to reverse engineer (since Adobe developed it, they obviously knew how to undo it).