

New Camera Has Real-time Focus of Near and Far Field  - J3L2404
http://www.sciencedaily.com/releases/2010/05/100504173823.htm

======
gjm11
So, what they actually have is N cameras focused at different distances, plus
a distance-measuring device; then the image they produce is basically a mosaic
of the N cameras' images, choosing whichever one is nearest to being in focus.
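
In code, that per-pixel selection might look something like this (a rough
sketch of my own, not anything from the article; it assumes grayscale frames,
a per-pixel depth map from the ranging system, and a known focus distance for
each of the N cameras):

    import numpy as np

    def omnifocus_mosaic(images, focus_distances, depth_map):
        # images: list of N (H, W) arrays, one per camera
        # focus_distances: length-N sequence of each camera's focus distance
        # depth_map: (H, W) per-pixel distances from the ranging system
        stack = np.stack(images)                          # (N, H, W)
        dists = np.asarray(focus_distances, dtype=float)  # (N,)
        # how far each camera's focus plane is from each pixel's measured depth
        error = np.abs(depth_map[None, :, :] - dists[:, None, None])
        best = np.argmin(error, axis=0)                   # per pixel, the sharpest camera
        return np.take_along_axis(stack, best[None, :, :], axis=0)[0]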

The clever bit is yet another camera, of a different kind, which somehow
determines distance-to-camera at high resolution. Supposedly the key to this
is the fact that intensity is proportional to 1/distance^2, but just how they
use this (given that different objects will be different in brightness) the
article doesn't say.

Fortunately, the lead researcher has a press release at
[http://individual.utoronto.ca/iizuka/research/OmnifocusVideo...](http://individual.utoronto.ca/iizuka/research/OmnifocusVideoCamera_newsrelease.pdf)
with a bit more detail: they illuminate the scene with two IR sources in turn,
one closer than the other, and look at how brightness changes between the two.
(So they have C/r^2 and C/(r+k)^2, where k is the known separation between the
IR sources, r is the distance to the nearer source, and C is the IR
reflectivity; from that they can compute r.) I guess this doesn't work so well
with anything that doesn't reflect enough IR.
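
Working that through: the ratio of the two measurements is ((r+k)/r)^2, so the
unknown reflectivity C cancels and r = k / (sqrt(ratio) - 1). A quick sketch of
that step (function and variable names are my own, not from the press release):

    import numpy as np

    def depth_from_ir_pair(i_near, i_far, k):
        # i_near, i_far: per-pixel intensities under the nearer and farther IR source
        # model: i_near = C / r**2, i_far = C / (r + k)**2; the ratio cancels C
        ratio = np.sqrt(i_near / i_far)   # equals (r + k) / r, so > 1 for finite r
        return k / (ratio - 1.0)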

At present N=2, and their demonstrations all show an image with some "near"
and some "far" bits and nothing in between. Presumably they've carefully
focused one camera on the "near" and one on the "far".

Using N cameras in this way gets you N times the depth of field, and 1/N as
much light into each camera. If instead you make your aperture N times
smaller, you again get N times the depth of field, but now 1/N^2 as much light
into the camera. So it does seem like it could be a win for applications where
you really want as much depth of field as possible, nothing in your scene is
moving much, and everything reflects enough IR.
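
To make the comparison concrete for N=2: splitting the incoming light between
two cameras leaves each with half of it, while halving the aperture diameter
(for the same doubling of depth of field) gathers only a quarter, since the
light collected scales with aperture area:

    # rough back-of-envelope comparison, not anything from the article
    N = 2
    light_per_camera_if_split = 1 / N     # 0.5  -- each camera keeps its full aperture
    light_if_stopped_down     = 1 / N**2  # 0.25 -- area goes as diameter squared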

~~~
wendroid
> Supposedly the key to this is the fact that intensity is proportional to
> 1/distance^2, but just how they use this (given that different objects will
> be different in brightness) the article doesn't say.

I'd imagine something not far from:

pixel(x, y) = max(intensity(x, y, ccds))

The IR distance system has been around for a while. Sony released a camera
which recorded distance per pixel too; they introduced it as an alternative to
green screening, using the distance map as a matte. I can't find what it's
called at the moment, and though I looked forward to using one, I've never
come across one in my film work.

~~~
gjm11
No, not that (unless I'm misunderstanding you, which of course I might be).
Here's what they do, according to the PDF that I linked to:

1. Illuminate with two IR sources, more or less on-axis and at known positions
relative to the lens.
2. From the ratio of the resulting pixel intensities, estimate per-pixel
distances.
3. Take each pixel from the image whose camera is focused nearest to that
pixel's distance.

~~~
wendroid
Ah right. I realised in bed that what I guessed was totally wrong.

------
wendroid
No, it doesn't. It has an array of CCDs and chooses the appropriate pixel to
capture at runtime based on inferred distance information, so the headline
should read:

"New Camera Simulates Real-time Focus of Near and Far Field"

Sounds like a fun piece of kit, though personally I think that, in general,
having all objects in the field of view in focus will be sensory overload and
perhaps a little disturbing, similar to the dissonance created by the inverse:
fake depth of field.

