It actually uses two fonts, each containing a random half of the characters and missing the other half. The CSS `font-family` stack lists both of them, so when a character isn't present in the first, the browser falls back to the second.
Everything looks perfect to the user, but it basically renders the fonts worthless for anything except web use, since system tools or Photoshop can't use character-based font fallbacks the way browsers can. Pretty clever, really.
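The splitting step can be sketched in a few lines of Python (the function name and the choice of ASCII letters are just illustration; a real site would presumably feed each half to a font subsetter such as fontTools to produce the two font files):

```python
import random
import string

def split_charset(chars, seed=None):
    """Randomly partition a sequence of characters into two halves.

    Each half would become its own font file; a glyph missing from the
    first font falls back to the second when rendered in a browser.
    """
    rng = random.Random(seed)
    chars = list(chars)
    rng.shuffle(chars)
    mid = len(chars) // 2
    return set(chars[:mid]), set(chars[mid:])

# Hypothetical example: split the ASCII letters between two font files.
font_a, font_b = split_charset(string.ascii_letters, seed=42)

# Together the two "fonts" cover everything; neither is usable alone.
assert font_a | font_b == set(string.ascii_letters)
assert not (font_a & font_b)
```

Either half on its own renders as a text full of missing-glyph boxes, which is exactly why the fonts become useless outside the browser.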
Now, obviously, the way my eyes reconstruct the image is temporal averaging. Thus a camera with a sufficiently long exposure would capture it just the same. What it does prevent is screenshots taken internally by the device, which generally don't average frames or have any concept of "exposure". Of course, that could be implemented, but it isn't standard at present.
He mentions using more than two frames - a less precise approach is to just add massive random noise over an indefinite number of frames (a noisy video of a static image), and instead of taking care that they cancel out, just rely on statistical convergence. Depending on how much noise is added, it gets more difficult to see the original.
I guess there's a risk that only a little noise gets added in some frames, revealing the person. So perhaps instead of adding random noise, you use offsets toward a legitimate face. That would make any single frame actively misleading, and perhaps not obviously distorted. Your eyes wouldn't be deceived (I think...), because they average the pixels, like a long exposure.
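The statistical-convergence idea can be sketched for a single pixel in plain Python (the pixel value and noise amplitude are made-up numbers; a real implementation would do this per pixel over whole frames):

```python
import random

rng = random.Random(0)
pixel = 0.5          # true brightness of one pixel in the source image
noise_amp = 0.4      # amplitude of the random noise added to each frame

# Each displayed frame is the true pixel plus independent random noise,
# with no guarantee that any particular pair of frames cancels out.
frames = [pixel + rng.uniform(-noise_amp, noise_amp) for _ in range(10_000)]

# A long exposure (or your eyes) averages the frames; the noise has
# zero mean, so the average converges on the true pixel value, with
# error shrinking roughly as noise_amp / sqrt(number of frames).
recovered = sum(frames) / len(frames)
assert abs(recovered - pixel) < 0.01
```

Any single frame can be wildly wrong, but the more frames you watch (or record), the closer the average gets to the source.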
Also, one could selectively apply the technique - as google street-view blurs out people's faces, but nothing else. This might reduce the flicker.
Hmm. It didn't work for me at all. On the latest Firefox beta it was flickering at about 10fps (ballpark guess), which was slow enough for me to see the individual images and made it look like the normal picture plus horrible static (i.e., the analog TV noise of old). It also took 40% of my CPU.
It looks better on Chrome (and only took 10% CPU), but there's still a lot of noise there. And the flickering is unbearable.
This definitely isn't universally true. The flickering was immediately obvious to me, and it induced a headache fairly quickly.
I wonder if there's a way to obscure the source image even more, though; while the details aren't clear in the source positive and negative frames, it's still pretty clear that it's an image of a face. Perhaps the addition of some sort of noise that could be cancelled out would work.
It would also be interesting to see this applied to video streams in addition to just static images.
EDIT: A very simple workaround that would be easy for the general public is to just take a picture of the screen with another device. Example: http://i.imgur.com/ZLdpp.jpg
I don't think that this technique (even in a more complicated form) could ever be exploit-proof. Even with more randomness or noise, a computer can ultimately compute the same averaging that our eyes are doing, and could always get the source image back.
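For the basic two-frame version, the averaging attack is trivial: if each pixel is shown as value+offset in one frame and value-offset in the other, averaging two screenshots cancels the offsets exactly (the pixel values below are arbitrary toy numbers):

```python
import random

rng = random.Random(1)
source = [rng.random() for _ in range(8)]        # hypothetical source pixels
offsets = [rng.uniform(-0.3, 0.3) for _ in source]

# The demo flickers between these two complementary frames.
frame_pos = [p + o for p, o in zip(source, offsets)]
frame_neg = [p - o for p, o in zip(source, offsets)]

# Two screenshots and one line of arithmetic recover the source exactly:
# ((p + o) + (p - o)) / 2 == p, whatever the offsets were.
recovered = [(a + b) / 2 for a, b in zip(frame_pos, frame_neg)]
assert all(abs(r - s) < 1e-12 for r, s in zip(recovered, source))
```

More frames and more noise only change how many screenshots the attacker has to average before the estimate converges.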
There's an even easier exploit too: just record the screen with another device.
(It took me a moment to locate it since I wasn't reading carefully.)
I'm particularly sensitive to this; back in the days of CRT monitors I could not use any with a refresh rate of less than about 75Hz without being consciously aware of the flicker.
On topic, the second thing I noticed (after the bloody obvious flicker even with no eye movement) was a pattern marking the boundary between the two halves of the images. Then I remembered that I was running my retina display at a higher than native resolution, and scaling of course introduces artifacts at edges.
Besides, who doesn't love 10% CPU usage to display a static image?
This provides 'good enough' security for most people's uses.
I don't remember there being as much flicker, but since the goal was to get more subtle colors the images were probably much closer to each other than this demo.
I didn't realise anybody ever bothered to do this on the Amiga - 4096 colours used to be enough for anybody - but of course there's no reason why you couldn't. The effect would probably only be improved with colours that are closer together.
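Temporal dithering of this kind is just averaging in colour space: flickering between two palette entries is perceived as a colour in between, one the palette itself can't represent (the RGB values below are arbitrary examples from a hypothetical 4-bits-per-channel palette):

```python
# Two adjacent colours from a hypothetical 12-bit palette (4 bits/channel).
color_a = (8, 4, 12)
color_b = (9, 4, 13)

# Rapidly alternating between them is perceived as their average.
perceived = tuple((a + b) / 2 for a, b in zip(color_a, color_b))
assert perceived == (8.5, 4.0, 12.5)
```

The closer the two palette entries, the smaller the per-frame difference, which is presumably why the Amiga trick flickered less than this demo.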
However, I said "more than 4096 colors"...
I doubt any technical control will solve the ephemeral picture problem because of the analog hole.
For this specific technique, if a photo of the screen doesn't defeat the dithering (in my quick tests the photos were worse than what I could see on the screen but better than either of the component frames) you could reconstruct the image from a movie fairly quickly.
Mission: Impossible had the best one: once you had read the message, your phone's battery could catch fire. Even then, you couldn't guarantee that nobody else was filming it.
Still a nice idea, and it does hinder screenshots, though someone could just take a few, grab both frames, and combine them.
The hack is cool anyway.
I teach in a public school, and a lot of my students have Android phones. The plurality of them have the Samsung Galaxy S III, actually.
But your point still stands. This technique just has to make it relatively inconvenient for the average non-technical user.
I would hate to see this on a film, it'd do my eyes in completely.
Still, really nice concept.
I for one really like the proof of concept for its creativity. I haven't seen this earlier.
It is also limited to some extent by the lack of remote attestation of non-jailbroken-ness using some sort of hardware crypto module.
No. It doesn't.
That's how it goes. The split second something transits the Internet, there are countless copies on servers, caches, proxies, etc. On top of that, TFA mentions the loophole that once something can be seen, it can be recorded by endless external means.
Oh, and that "everything sticks around forever" is not just for stuff on the Internet: it's much worse than that.
It's that "anything that is on any device connected to the Internet" can potentially stick around forever.
Ask Scarlett Johansson how that worked out for the private pics on her smartphone...
People saying that "our pics are never saved anywhere" are using deception to lure clueless people in.
No, we just took our experience from the real world and applied it to this new-fangled communications thingy. Physical things can "stick around forever" if cared for properly. There's no law that says I have to destroy my things after some period of time.
Now that we have digital copies of things, those copies are made of intangible, ephemeral bits that are copied repeatedly to get them from one place to another. I think the author's statement that you quote overlooks that digital things are analogues of physical things: physical things that we've expected to last for as long as we like.