Screenshot-proof images via temporal dithering (persistent.info)
161 points by olefoo on Dec 22, 2012 | 65 comments



You can see a form of "font dithering" in the Gotham fonts used on Obama's site.

It actually uses two fonts, each containing a random half of the characters and missing the other half. The CSS font stack (via `@font-face` declarations) lists both of them, so when a character isn't present in one, the browser falls back to the other.

Everything looks perfect to the user, but it basically renders the fonts worthless for anything except web use, since system tools or Photoshop can't use character-based font fallbacks the way browsers can. Pretty clever, really.


Interesting idea. How hard would it be to merge the two fonts back into a single one that has all characters?


It's amazing how well my eyes reconstruct the original image from the demo (http://persistent.info/web-experiments/temporal-dithering/). I had to blink (to capture a "screenshot" in my eyes) to see that the distorted frames were really there.

Now, obviously, the way my eyes do it is temporal averaging. Thus, a camera with a sufficiently long exposure would capture it just the same. What it does prevent is screenshots, taken internally by the device - which generally don't do averaging or have a concept of "exposure". Of course, that could be implemented, but it's not standard at present.

He mentions using more than two frames - a less precise approach is to just add massive random noise over an indefinite number of frames (a noisy video of a static image), and instead of taking care that they cancel out, just rely on statistical convergence. Depending on how much noise is added, it gets more difficult to see the original.

I guess there's a risk that only a little noise is added in some frames, revealing the person. So perhaps instead of adding random noise, you use offsets toward a legitimate face. This would make any single frame actively misleading, and perhaps not obviously distorted. Your eyes wouldn't be deceived (I think...), because they average the pixels, like a long exposure.

Also, one could selectively apply the technique - as google street-view blurs out people's faces, but nothing else. This might reduce the flicker.
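The statistical-convergence idea above can be sketched in a few lines of pure Python, for a single pixel (the noise level and frame count here are made up for illustration):

```python
import random
import statistics

random.seed(0)
original = 137.0  # one pixel's true gray value

# Each displayed "frame" is the pixel plus heavy zero-mean noise.
frames = [original + random.gauss(0, 80) for _ in range(5000)]

# A long exposure (or the eye) averages the frames; the noise cancels
# statistically: the standard error is 80 / sqrt(5000), about 1.1 levels.
recovered = statistics.fmean(frames)
print(abs(recovered - original))  # off by a gray level or two, at most
```

With fewer frames the residual noise is larger, which is the point: any single frame reveals almost nothing, but given enough frames the convergence is unavoidable.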


> It's amazing how well my eyes reconstruct the original image from the demo

Hmm. It didn't work for me at all. On the latest Firefox beta it was flickering at about 10fps (ballpark guess), which was slow enough for me to see the individual images; it looked like the normal picture plus horrible static (i.e., the analog TV noise of old). It also took 40% of my CPU.

It looks better in Chrome (and only took 10% CPU), but there's still a lot of noise there. And the flickering is unbearable.


> I had to blink (to capture a "screenshot" in my eyes) to see that the distorted frames were really there.

This definitely isn't universally true. The flickering was immediately obvious to me, and it induced a headache fairly quickly.


Really cool concept. While it wouldn't faze someone actively trying to retrieve the source image (I'm sure someone could write a screenshotting service for Android that gets around this pretty readily), I'm sure it would defeat all the basic screenshotting functionality currently on mobile (and otherwise).

I wonder if there's a way to obscure the source image even more, though; while the details aren't clear in the individual positive and negative frames, it's still pretty clear that it's an image of a face. Perhaps the addition of some sort of noise that could be cancelled out would work.

It would also be interesting to see this applied to video streams in addition to just static images.

EDIT: A very simple workaround that would be easy for the general public is to just take a picture of the screen with another device. Example: http://i.imgur.com/ZLdpp.jpg


Your very simple workaround is also mentioned by the author as 'the analog hole', if I understand correctly.


I was about to mention rapid screenshotting as an exploit, but you took care of that. However, couldn't a rapid-screenshotting application end up replacing the default screenshot function if this technique became commonplace? As the bar for exploiting the vulnerability gets lower, the technique grows more and more useless. Maybe it can be modified to eliminate the exploit somehow (my thoughts go towards adding more randomness in some way or another)?


Even if it were to become a commonplace way to time-protect images, I can't really see Apple making their screenshotting functionality actually deal with it - they'd be doing it just to circumvent this simple security system, it wouldn't have any other benefits. On Android, though (and jailbroken iOS), I can definitely see people writing mods to break this right away.

I don't think that this technique (even in a more complicated form) could ever be exploit-proof. Even with more randomness or noise, a computer can ultimately compute the same averaging that our eyes are doing, and could always get the source image back.

There's an even easier exploit too: just record the screen with another device.


This is about as good an idea as DRM. To those who want to get around it, it's easily surmountable. To everybody else, it's a nauseating pain in the ass.


Seriously, if your image content being stolen via screenshots is causing a problem for you, you should think about why it's being screenshotted in the first place and solve the problem at its root, instead of playing the cat-and-mouse DRM game.


My first thought for why someone would want to defeat screenshotting was banks trying to make their websites more secure. I confess I can't quite see how this technique would help their use case.


Direct link to his proof of concept: http://persistent.info/web-experiments/temporal-dithering/

(It took me a moment to locate it since I wasn't reading carefully.)


The flickering was highly noticeable on my MacBook Air until I lowered the screen brightness. An interesting side-effect of the rapid switching is that whenever I blinked, I could see the artifacts, and likewise when rapidly moving my head. I figure persistence of vision and peripheral vision might have something to do with it. Also, a smaller tile size appears to reduce flickering.


The same effect is seen with LED brake lights or even Christmas lights... they appear to flicker if you are moving when you look at them, particularly if they are in your peripheral vision.

I'm particularly sensitive to this; back in the days of CRT monitors I could not use any with a refresh rate of less than about 75Hz without being consciously aware of the flicker.


I'll bet you see rainbow artifacts on every last one-chip DLP/LCoS and yellow flashes on plasma, too.

On topic, the second thing I noticed (after the bloody obvious flicker even with no eye movement) was a pattern marking the boundary between the two halves of the images. Then I remembered that I was running my retina display at a higher than native resolution, and scaling of course introduces artifacts at edges.

Besides, who doesn't love 10% CPU usage to display a static image?


The side-effect I think is saccadic masking: http://en.wikipedia.org/wiki/Saccadic_masking


Viewing it in Chrome on Xubuntu made me have a seizure.


This is an interesting idea, but since the client ultimately gets both halves one can just recombine them out-of-band. (If each half is displayed 50% of the time, it shouldn't require many screenshots to get both. And that assumes you have a 100% secure runtime / SSL chain for delivering the images to the app, which is highly unlikely.)
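That out-of-band recombination can be sketched as follows, assuming the two frames are simple positive/negative offsets of the source (the values here are made up and chosen so nothing clips at 0 or 255; frames that do clamp would only recombine approximately):

```python
# Toy frames: source +/- a per-pixel offset, like the demo's
# positive/negative pair. Averaging one screenshot of each frame
# recovers the source exactly when nothing clips.
source = [120, 30, 255, 64]   # one row of gray pixels
offset = [40, -20, 0, 60]     # hypothetical dither offsets

def clamp(v):
    return max(0, min(255, v))

frame_a = [clamp(s + o) for s, o in zip(source, offset)]
frame_b = [clamp(s - o) for s, o in zip(source, offset)]

recovered = [(a + b) // 2 for a, b in zip(frame_a, frame_b)]
print(recovered == source)  # True
```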


I think you are over-estimating the technical capacity of most users of SnapChat.

This provides 'good enough' security for most people's uses.


But it's very easy to write an app that's called "SnapChat Self-Destructing Image Saver" and then the feature becomes pointless.

http://en.wikipedia.org/wiki/Smart_cow_problem


Thank you for that wiki reference, I'm adding 'smart cow' to my lexical bestiary.


Maybe you could generate a new set of randomized halves every second, so that getting two matching halves is less likely.


Could this have a negative side-effect on people with photosensitive epilepsy? https://en.wikipedia.org/wiki/Photosensitive_epilepsy#Bucha_...


In the UK this could be against Ofcom regulations. Television stations in the UK check for compliance with special software (http://en.wikipedia.org/wiki/Harding_test). It's not freely available, I think, so unfortunately we can't run it on Mihai Parparita's method.


The first time I heard the term "dithering" it was on my Amiga. It referred to this kind of temporal dithering, not the kind we all think about now. The point back then was to display more than 4096 colors on the screen at once. I believe they loaded up two hardware screen buffers with the two images and then page-flipped back and forth as fast as possible.

I don't remember there being as much flicker, but since the goal was to get more subtle colors the images were probably much closer to each other than this demo.
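The averaging trick behind this can be sketched with a toy example (not actual Amiga code): a display limited to integer levels fakes an in-between level by alternating its two neighbors every frame.

```python
# A display limited to integer gray levels can fake level 100.5 by
# alternating 100 and 101 every frame: the eye averages the two.
target = 100.5
frame_even = int(target)        # 100, shown on even frames
frame_odd = frame_even + 1      # 101, shown on odd frames

shown = [frame_even if i % 2 == 0 else frame_odd for i in range(60)]
perceived = sum(shown) / len(shown)
print(perceived)  # 100.5
```

Because the two alternating values differ by only one level, the flicker is far less visible than in this demo, where the frames differ wildly.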


The Amiga used HAM (Hold And Modify) to show 4096 colors, not dithering. IIRC HAM was similar to delta encoding: each pixel could hold the previous pixel's color and modify just one RGB channel, rather than storing an absolute RGB value.


If you flick between two images at 25Hz then you'll sort-of get more colours. I think Quantum Paint on the Atari ST did this, to get more than 512 colours on screen (at the cost of the thing looking like a flickering mess), or maybe it's some other package I'm thinking of.

I didn't realise anybody ever bothered to do this on the Amiga - 4096 colours used to be enough for anybody - but of course there's no reason why you couldn't. The effect would probably only be improved with colours that are closer together.


Yup, HAM was a weird encoding that was mostly good for still images, though I remember there was a game that made a big deal about using the HAM mode.

However, I said "more than 4096 colors"...


You can also use this technique to get more dynamic range in intensity (and avoid normal HDR compromises).


You could also use a variant of the effect to do something similar to spot motion, so that one or a few features would appear to move in an otherwise still picture.


I went ahead and used SimpleCV/python to defeat this. Perhaps when I get more time tonight I will have it automatically capture the screen shots. Creating variations on this method could be really fun. I bet someone could bake something like this into SimpleCV.js (https://demo.simplecv.org/).

Source: https://gist.github.com/4360664

Results: http://imgur.com/a/hBMed


An interesting technique; I think it's cool.

I doubt any technical control will solve the ephemeral picture problem because of the analog hole.

For this specific technique, if a photo of the screen doesn't defeat the dithering (in my quick tests the photos were worse than what I could see on the screen but better than either of the component frames [1]) you could reconstruct the image from a movie fairly quickly.

[1] http://imgur.com/jA5f6


It's really cool how you can reconstruct the original image in your browser by dragging one of the frames on top of the other. At least it works for me in Firefox on Ubuntu where the image I drag becomes see-through.


On my iPhone the flashing alternating images were extremely visible. I'll have to try it on my computer later. One thing is for sure: this won't work well on slower computers or mobile devices.


Same experience in the Android stock browser; the flickering is too slow for my eyes to fuse the frames into the picture.


For me the flicker gets better by reducing the block size and it's imperceptible at 1px - there is the occasional run of negatives or positives but that can be fixed with vsync


This is nice but does not prevent somebody using another smartphone to take a picture or video clip.

Mission Impossible had the best one: once you had read the message, your phone's battery could catch fire. Even then you could not guarantee nobody else was filming it.

Still, it's a nice idea and it does hinder screenshots, though they could just take a few and grab both frames to combine.


Obviously, it's easy to circumvent, a normal user would just record the screen instead of capturing one frame.

The hack is cool anyway.


How does a normal user record the screen on iOS?


There are video-recording devices that are not iPhones.


Given their respective install bases, I think Android should be the default OS for J. Random Average.

I teach in a public school, and a lot of my students have Android phones. The plurality of them have the Samsung Galaxy S III, actually.

But your point still stands. This technique just has to make it relatively inconvenient for the average non-technical user.


Does not work well on my phone (Galaxy S2 running CM10 nightlies) using Chrome. Animation is just not fast enough, or possibly the AMOLED screen is too quick to respond compared to an LCD. Maybe a combination of these. Anyway it's unusable (as is the HN comment text box but that's a different story)


Chrome and the Android browser are too slow; try Firefox. It works on my Galaxy Nexus.


I just took 3 screenshots of the page, and by layering them together you get the exact same image as the left one. I don't really think it's a solution. As long as the eye can see it, you can't prevent recording anyways. http://imgur.com/ab1gk


It looks really cool as a concept, but after playing around with the values my eyes are really messed up now, even since closing the tab.

I would hate to see this on a film, it'd do my eyes in completely.

Still, really nice concept.


I love the comments on the page itself. They just whine about the issues, rather than compliment the author for being creative.

I for one really like the proof of concept for its creativity. I haven't seen this earlier.


This doesn't work at all on a Retina MBP in Safari. First image didn't animate at all, "Lenna" image flickered but at a rate that didn't make it really visible.


Not trying to troll, but doesn't this all boil down to DRM? Why not simply create a DRM-enabled jpeg format that mobile devices support and be done with it?


How would a different (DRM'd) format do anything to prevent screenshots?


DRM is a function of the viewer, not the format, and your ability to create effective DRM on iOS is limited by the lack of a way to disable screenshots (unless it's a corporate-owned device with systemwide policies).

It is also limited to some extent by the lack of remote attestation of non-jailbroken-ness using some sort of hardware crypto module.


Hmm - is there any way to implement a version of this effect which uses more than two images, or even one which generates a new random image for each frame?
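One way such an N-frame scheme could work (a hypothetical sketch, not the author's code): make the first N-1 frames random and pick the last so the average comes out right. Note the balancing frame can fall outside 0..255, which a real display would have to clamp.

```python
import random

random.seed(1)

def dither_frames(pixel, n):
    """Split one pixel value into n frames that average back to it.

    The first n - 1 frames are random; the last one balances the sum.
    (A real display would also need each frame clamped to 0..255,
    which this sketch ignores.)
    """
    frames = [random.uniform(0, 255) for _ in range(n - 1)]
    frames.append(pixel * n - sum(frames))
    return frames

frames = dither_frames(180.0, 8)
print(sum(frames) / len(frames))  # 180.0, up to float rounding
```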


First, I must say that this is some awesome tech. Still, I think people should not be sharing images that would require a technique like this. Moreover, if it is applied to displaying sensitive information, the person who intends to steal it will not be deterred by something like this; they will certainly look for ways to circumvent it. It's like blocking right-click-to-save on an image or copying of text: those who want to can always grab the source and load the images directly from their links, or copy the text out of the source code.


I've thought of this before, but it's just too easy to get around, and it looks horrible. We'd need much higher screen refresh rates.


Works impressively well in Chrome.


Fails because ow my eyes are bleeding.


could you use this same technique to enhance the apparent resolution of a screen?


This is quite awesome.


“It seems odd that at the beginning of the Internet everyone decided everything should stick around forever,”

No. It doesn't.

That's how it goes. The split-second something transits the Internet there are countless copies on servers, caches, proxies, etc. In addition, TFA mentions the loophole that once something can be seen, it can be recorded by any number of external means.

Oh, and that "everything sticks around forever" is not just for stuff on the Internet: it's much worse than that.

It's that "anything that is on any device connected to the Internet" can potentially stick around forever.

Ask Scarlett Johansson how that worked out for the private pics on her smartphone...

People saying that "our pics are never saved anywhere" are using deception to lure clueless people in.


"...decided..."

No, we just took our experience from the real world and applied it to this new-fangled communications thingy. Physical things can "stick around forever" if cared for properly. There's no law that says I have to destroy my things after some period of time.

Now that we have digital copies of things, those copies are made of intangible, ephemeral bits that are copied repeatedly to get them from one place to another. I think the author's statement that you quote overlooks that digital things are analogies of physical things: physical things that we've expected to last for as long as we like.


This would look way better on an actual canvas; right now the demo is just an IMG tag whose z-index changes really fast, which is not a smart way to do it.


I tried this with a canvas (putImageData with alternatively the positive and negative frame) and it wasn't any faster on desktop Chrome (and slower on iOS devices).


Yep, it's true. Maybe you should try using an animated GIF.


I tried that too (https://github.com/mihaip/web-experiments/blob/master/tempor...). It still flickers, I'm guessing because the frame delay interval is specified as an integer, and both 16 and 17ms don't quite line up with 60Hz (or whatever the screen's refresh rate is).
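The mismatch is easy to quantify. Taking the comment's integer-millisecond framing, here is a sketch of how quickly an integer frame delay drifts out of step with a 60Hz refresh:

```python
# An integer frame delay can't match a 60 Hz refresh: the refresh
# period is 1000/60 ~= 16.667 ms, which falls between 16 and 17.
refresh_ms = 1000 / 60

for delay_ms in (16, 17):
    drift_per_frame = delay_ms - refresh_ms
    # After this many frames the animation has slipped a whole refresh,
    # so one dither frame is shown twice in a row: visible flicker.
    frames_to_slip = abs(refresh_ms / drift_per_frame)
    print(delay_ms, round(frames_to_slip))  # 16 -> 25, 17 -> 50
```

So at a 16ms delay the positive/negative pairing breaks down every 25 frames, i.e. a couple of times per second, which matches the residual flicker.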


The only suggestion I have left is to convert it to a video; that is always difficult to snapshot anyway.



