(There's a great image of a garage door opening & closing about 2/3 of the way down the page if you don't feel like reading the whole thing.)
That website was of some inspiration; great to see it's still online!
Anyway, in my case the hobby led me down the rabbit hole of slitscanning and other spacetime-warping artworks, such as Bill Spinhoven's "It's About Time". (Spinhoven was one of my tutors in art school, and AFAIK he was the first artist to do this in a real-time installation.)
These days it's quite easy to do with software. Here's a test of one of my own prototypes made with Processing:
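(For anyone curious what such a prototype might look like: here's a minimal slit-scan sketch of my own, not the actual prototype above. It assumes the standard Processing video library and copies one pixel column per frame, so the horizontal axis becomes time.)

    // Minimal slit-scan sketch (my own illustration, not the poster's prototype).
    // Each draw() copies the center column of the webcam feed into the next
    // output column, so x becomes a time axis.
    import processing.video.*;

    Capture cam;
    int x = 0;  // next output column

    void setup() {
      size(1024, 480);
      cam = new Capture(this, 640, 480);
      cam.start();
      background(0);
    }

    void draw() {
      if (cam.available()) {
        cam.read();
        // take a 1-pixel-wide slit from the middle of the frame
        copy(cam, cam.width/2, 0, 1, cam.height, x, 0, 1, height);
        x = (x + 1) % width;  // wrap around when the canvas fills
      }
    }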
It reminds me of the slitscan special effects technique that was used to create the stargate sequence at the end of 2001: A Space Odyssey.
I'm a big fan of this because (a) it gives you a photo with a time dimension! and (b) it happens to be similar to the technique some satellites use (a single row of pixels that sweeps across the sky), so it's cool to gain some intuition for how that works.
The questions he investigated: "Can we figure out the rate at which a propeller is spinning by analyzing this kind of photo? And can we figure out the real number of propeller blades in the photo?"
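(To build intuition for both questions, here's a forward simulation, my own sketch rather than his: it renders a spinning three-blade propeller and reads out one row per time step. The blade count and the rotation-per-row value `omega` are assumptions; sweeping `omega` until the simulated artifacts match a real photo is one crude way to estimate the spin rate.)

    // My own illustration: simulate a rolling shutter scanning a spinning
    // propeller. Blade count and rotation speed are assumed values.
    int row = 0;          // the row currently being "read out"
    float omega = 0.02;   // radians the prop turns per row-time (assumed)
    float angle = 0;
    PGraphics scene, captured;

    void setup() {
      size(400, 400);
      scene = createGraphics(width, height);
      captured = createGraphics(width, height);
      captured.beginDraw();
      captured.background(255);
      captured.endDraw();
    }

    void draw() {
      // Render the "real" scene at the instant this row is sampled.
      scene.beginDraw();
      scene.background(255);
      scene.translate(width/2, height/2);
      scene.rotate(angle);
      scene.stroke(0);
      scene.strokeWeight(10);
      for (int b = 0; b < 3; b++) {
        scene.line(0, 0, 160, 0);
        scene.rotate(TWO_PI / 3);
      }
      scene.endDraw();

      // The rolling shutter captures only one row of that instant.
      captured.beginDraw();
      captured.copy(scene, 0, row, width, 1, 0, row, width, 1);
      captured.endDraw();

      image(captured, 0, 0);
      angle += omega;
      row++;
      if (row >= height) noLoop();  // full frame scanned
    }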
If it were possible to get this in a consumer grade video product I would be very happy. Unfortunately these sensors are $1295.
There are a few cameras out there with global shutters; RED isn't one of them.
That, along with its colour reproduction, is one of the many reasons the Arri Alexa is popular despite having "less resolution".
The rolling shutter is also why stills from GoPro videos never quite live up to how clear the videos look in motion.
The cover photo from this month's Parachutist magazine is a great example:
Notice the right leg of the jumpsuit: it's flapping in the wind as the shutter rolls over the scene.
When people use the slow-mo feature on GoPro videos, everything kind of morphs rather than moving naturally. I've always found it to be a cool effect:
I've thought about this as well. I always assumed the lack of clarity of single frames extracted from video material is because the eye/brain incorporates several images shown in quick succession into one whole image. So when only a single frame is shown, there's not as much information in it as in, say, 10 frames shown quickly in succession.
Shutter speed/angle is also a creative choice. Anything faster than 1/50th of a second (a 180-degree shutter angle at 25 fps) will introduce a 'staccato' effect to the video. Anything slower will be mushy/blurry.
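(For reference, and these are my numbers rather than the parent's: shutter angle converts to exposure time as

    \[ t_\text{exp} = \frac{\theta_\text{shutter}}{360^\circ} \cdot \frac{1}{\text{frame rate}} \]

so a 180° shutter at 25 fps gives (180/360) × (1/25) = 1/50 s, and at 24 fps it gives 1/48 s.)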
Somewhat off topic, but interesting nonetheless!
Exposure is the total time our whole light-sensitive area is exposed to the light coming from our scene. You can think of it as the integral, over time, of the exposed sensor (or film) area, divided by the total sensor area.
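(In symbols, and this notation is mine: if a(t) is the area exposed at time t, A is the total sensor area, and T is the total scan duration, then

    \[ t_\text{eff} = \frac{1}{A} \int_0^{T} a(t)\,dt \]

which reduces to the ordinary shutter time in the global-shutter case, where a(t) = A for the whole exposure.)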
In the examples he uses the term "exposure" to describe the total scan time of the sensor, while his actual exposure (which is equal to the time each row of pixels samples the scene) seems to be much shorter.
It may sound like a small difference, but to reproduce the effect we essentially need to match two parameters: exposure and scan time. While exposure is easy to set, scan time is pretty much hardcoded and depends on the physical characteristics of the camera. Even a mechanical focal-plane shutter has a nonzero scan time at short exposure times.
Photo-finish shots also end up looking pretty weird:
Those caused by camera movement (often resulting in an image that looks skewed) are somewhat easier to "fix" with post-processing, as you say, since a correcting transformation can sometimes be applied uniformly to the entire image. Existing tools can do this with some success, but some camera moves still prove more difficult (zooms, irregular movement, etc.).
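(As a concrete illustration of the uniform-transformation case, here's a rough Processing sketch of my own; the file name and the per-row drift value are assumptions, and in practice the drift would have to be estimated from the footage. For a constant-speed horizontal pan, every row is displaced in proportion to its scan order, so shifting rows back by the same amount undoes the skew.)

    // Rough sketch (mine): undo the skew from a constant-speed horizontal pan.
    // Each row was captured slightly later, so it drifted a bit further;
    // shifting row y back by y * pixelsPerRow counters that drift.
    PImage img;
    float pixelsPerRow = 0.05;  // assumed horizontal drift per scanned row

    void setup() {
      size(800, 600);                 // assumed to match the image size
      img = loadImage("skewed.jpg");  // hypothetical input file
      background(0);
      for (int y = 0; y < img.height; y++) {
        int shift = round(y * pixelsPerRow);
        // copy this row back by `shift` pixels onto the canvas
        copy(img, shift, y, img.width - shift, 1, 0, y, img.width - shift, 1);
      }
    }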
Artifacts caused by objects moving in the scene are often much more difficult to remove, at least when it comes to providing a generic solution, because "reconstruction" of the image requires fairly accurate information as to how those objects were moving. In the case of the rotating prop or wheel, it may be somewhat simple (the algorithm would still probably require user input of things like the center of rotation and speed), but in other cases, the motion may be quite complex (e.g., multiple wheels rotating in different directions/speeds, linear vs. angular motion, etc.). And that doesn't even account for the fact that in most cases, there will be occlusion in the source image from artifacts of the rolling shutter. That is, in the propeller example, you have patches of the background that are covered up in the source, but wouldn't be in a "fixed" image, so they need to be filled somehow.
What I'm saying is that sophisticated software may be able to do a lot in helping to correct for rolling shutter artifacts, but I don't think there will be an automatic, fix-all solution from post-processing software any time soon.
The software will need to choose the final position of the blades in the rendered picture. There is no ideal position, since the blades have been moving throughout the scan time. Whichever position we choose, there will be information missing for one or more blades. Say the left face of blade X needs to be rendered, but the camera only captured its right face. Maybe assuming that all blades have the same shape, so information from one blade can be used to render another, would fix that problem.
The missing background will also have to be reconstructed; that's another issue.
There was a talk at last year's Google I/O with some examples:
This effect was exploited to extract more information for this: http://newsoffice.mit.edu/2014/algorithm-recovers-speech-fro...