
No lens? No problem for FlatCam - fezz
http://www.opli.net/opli_magazine/imaging/2015/no-lens-no-problem-for-flatcam-rice-nov-news/
======
davnicwil
This is absolutely awesome to see, having done (fairly modest, masters-level)
research in this field in the past.

For the uninitiated, I'll try to summarise my understanding of the general
concept of computational photography:

A 'traditional' camera uses a physical system of lenses to focus light to a
single point (really an area), where the light is captured and recorded using
either film or a digital sensor. The captured light information is then
processed, in whatever way the capture mechanism requires, to ultimately
produce a photograph. But all that's produced is one singular photograph, an
exact mapping of the physical properties of the lens system.

Computational photography tries to virtualise aspects of this process by
modelling the system's optics, so that extra information can be inferred from
the captured light using these models. This extra information can reveal
things about the scene being captured (more the computer vision side), or it
can be used to post-process different effects into the image after the fact,
in an intelligent way that uses real information about the scene (more the
photography/art side). The captured information then represents a space of
possible images of the scene, rather than just one singular point in that
space (one photograph).

To increase the quality and quantity of this information, extra sensors,
better sensors, or different sensors (light field capture etc.) are used, and
modifications can be made to the system, such as coded apertures or adjusting
parameters like focal length and exposure time by known amounts, with samples
collected over a range of these modifications. It looks like a coded aperture
is used in this technique, for example, but the article doesn't go into a lot
of detail on that.

When you start thinking about the possibilities of working with image 'spaces'
instead of photographs, both for practical computer vision applications and
just plain cool art applications, it quickly becomes obvious what a
fascinating and exciting (and important) field this is, and will be in the
future.

It's really, truly awesome to see an application of this relatively new field
of research that could hit the mainstream in a big way, namely the thinner,
lighter, solid-state phone cameras this research could ultimately lead to.
Bravo to everyone involved!

~~~
fallingfrog
So, score my understanding of this: a lens performs a Fourier transform,
right? So the idea here is that instead of doing the transform with a lens, we
grab the raw information and do an FFT in software?

~~~
Ono-Sendai
A lens doesn't compute the Fourier transform of the image; it inverts it
(geometrically speaking). At the wave level (far-field diffraction) it does
something like a Fourier transform.
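To see the far-field point numerically, here's a quick 1D toy (my own
unitless sketch, not a rigorous optics simulation): under Fraunhofer
diffraction the far-field amplitude is proportional to the Fourier transform
of the aperture, so a narrower slit spreads light into a wider pattern:

```python
import numpy as np

# Fraunhofer (far-field) diffraction in 1D: the intensity pattern is
# proportional to |FFT(aperture)|^2. Narrow slit -> wide pattern.
N = 1024
narrow = np.zeros(N); narrow[N//2 - 4:  N//2 + 4]  = 1.0   # 8-sample slit
wide   = np.zeros(N); wide[N//2 - 32: N//2 + 32]   = 1.0   # 64-sample slit

def far_field(aperture):
    return np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2

def main_lobe_width(pattern):
    # bins from the centre until intensity first drops below 1% of peak
    c = len(pattern) // 2
    k = 0
    while pattern[c + k] > 0.01 * pattern[c]:
        k += 1
    return k

# The 8x narrower slit diffracts into a correspondingly wider lobe.
print(main_lobe_width(far_field(narrow)), main_lobe_width(far_field(wide)))
```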

~~~
tortle
A lens is a convolution. Imagine a photograph as a 1D signal; a lens would
then convolve that signal with a square wave (the lens sliced along one
plane). When a square wave is convolved with a signal, if you look at the
magnitude of the Fourier transform, you'll notice that you are multiplying
the signal's frequencies by a sinc function. Sinc functions hit zero
amplitude at various points, thereby destroying information when multiplied.
So if you could create a 'lens' (known as a mask) that didn't destroy the
signal, but instead acted more like a band-pass filter, then you could
effectively recover the signal and adjust for amplitude changes digitally.
Take this model to 3D and you've just created a light field.
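Here's a quick numerical check of the sinc-zeros claim (my own toy example in
NumPy): the spectrum of a box kernel has exact zeros, i.e. frequency bands
that are destroyed outright:

```python
import numpy as np

# Spectrum of a box ("square") kernel: a sampled sinc with exact zeros.
N = 256
box = np.zeros(N)
box[:16] = 1.0 / 16                  # 16-sample box kernel

spectrum = np.abs(np.fft.rfft(box))  # magnitude response, |sinc|-shaped
zeros = np.where(spectrum < 1e-12)[0]
print(zeros[:4])                     # bins 16, 32, 48, 64: wiped-out bands

# Any scene content at those frequencies gets multiplied by ~0 during
# the convolution and can't be recovered by dividing the spectrum back.
```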

~~~
Ono-Sendai
Calling a lens a convolution is an idealisation. It only convolves the signal
on the focal plane/sensor, for perfectly focused light.

------
archimedespi
This is pretty amazing, like 'davnicwil said. Personally though, the coolest
thing about this is how easy it is to make if you have some way to make the
mask.

It uses a Raspberry Pi Camera Module[1], which is $30, with no modifications
other than removing the lens and adding the mask.

I wonder if you could make a mask by machining plastic down _very_ thin, and
then punching tiny holes through with the smallest endmill you could find.

[1]: [https://www.raspberrypi.org/products/camera-module/](https://www.raspberrypi.org/products/camera-module/)

~~~
saidajigumi
Offhand, this seems like a job for a laser cutter more than a milling machine.
(But hey, if that's what you've got...) Depending on the cutter, largely its
minimum kerf size, a single pass may produce an entirely acceptable result.

An alternative to direct hole cutting would be to use the laser to ablate an
etching mask medium, then etch the actual holes into the cut surface, which
_might_ produce a better end result. Has anyone given this approach a try for
high-detail work?

~~~
archimedespi
I had forgotten that we have a laser cutter at the hackerspace! The minimum
kerf on it is pretty bad though, so I bet it wouldn't work.

After looking at the pattern, it seems to be more than just holes, so it's
probably a bit harder to fabricate.

------
kakali
The mask kinda looks like a cosine filter rather than just a simple grid of
pinholes. If so, it might be a small version of a heterodyne light field
camera.
[http://web.media.mit.edu/~raskar/Mask/](http://web.media.mit.edu/~raskar/Mask/)

------
stared
It looks like a Lytro camera, but going one step further. In general the main
idea is the same: use a mask and then recover the full light field.

While it looks like Lytro was/is not a great commercial success, I see that
the field is progressing. And very likely we will see things that can't be
done in other ways (e.g. pseudo-3D cameras for video... or 3D
teleconferences).

~~~
sbierwagen
A Lytro camera uses many small image sensors with many lenses. No mask is
involved.

The conventional Lytro cameras are of pretty questionable utility, but light
field cameras show some promise for capturing VR video:
[https://www.lytro.com/immerge](https://www.lytro.com/immerge)

~~~
stared
In Lytro there is a mask, but of a different kind: an array of lenslets. See
e.g. this presentation:
[http://www.slideshare.net/cameraculture/3-intro-lightfields](http://www.slideshare.net/cameraculture/3-intro-lightfields)

------
Ono-Sendai
Interesting idea, but it seems like diffraction will be a serious problem for
this. Any time you have tiny little holes (apertures) you are going to get a
lot of diffraction that will result in very blurry images.

~~~
chrisBob
Their mask is next to the sensor. Diffraction isn't as much of an issue if
there is no propagation.

~~~
Ono-Sendai
You can't have it both ways. Either the mask hole is right by the sensor, in
which case there is no angular resolution. Or the mask hole is further away,
in which case you get lots of diffraction.
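For a rough sense of scale, a back-of-envelope single-slit estimate (with
made-up but plausible numbers, not values from the paper): diffraction
spreads light by roughly wavelength/feature-size radians, so the blur on the
sensor grows linearly with the mask-to-sensor gap:

```python
# Single-slit, small-angle estimate of the tradeoff: blur on the sensor
# is roughly gap * (wavelength / feature_size).
wavelength = 550e-9   # green light, metres (assumed)
feature = 30e-6       # mask feature size, metres (assumed)
for gap in (0.1e-3, 0.5e-3, 2e-3):       # mask-to-sensor distance, metres
    blur = gap * wavelength / feature    # first-order spread on sensor
    print(f"gap {gap * 1e3:.1f} mm -> blur ~{blur * 1e6:.1f} um")
```

Which is the tradeoff in a nutshell: at a fraction of a millimetre the blur
is a few microns (roughly pixel-sized), and it grows with the gap.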

------
rocky1138
Did I miss the example image produced by this camera? I didn't find one
anywhere on the site.

Edit: Example images are shown in the included video.

~~~
cschmidt
There are also images in the paper.

------
desireco42
As someone who got into photography over the last few years, I can't wait for
computation to be used more to produce better images and more innovative
systems. One example is the camera with 16 small sensors, which will probably
be a flop but will open doors for the many others that come after it.

I personally can't wait.

------
sbierwagen
Direct link to arxiv paper:
[http://arxiv.org/abs/1509.00116](http://arxiv.org/abs/1509.00116)

------
emeraldd
It seems like this is an abstraction of the idea of a light field camera.
(i.e. each hole in the mask acts as a tiny pinhole camera in the composite
...)

edit: fix typo

------
tomcam
Actual output can be seen in video starting here:
[https://www.youtube.com/watch?v=BdgwO_i5p54&feature=youtu.be...](https://www.youtube.com/watch?v=BdgwO_i5p54&feature=youtu.be&t=81)

Also... it's just amazing that a college lab can afford to fabricate these
prototypes.

------
pjc50
Reminds me of insect compound eyes.

~~~
jessaustin
ISTM that "experts" have long derided the compound eyes of arthropods, since
if they were "advanced" they would be like human eyes. Maybe developing
cameras that aren't just direct copies of vertebrate eyes will enable us to
perceive the benefits of other eye forms.

------
sandworm101
So it is a sheet of pinhole cameras? And the images are focused by selecting
from various diameters of pixels behind each pinhole? It looks like a great
idea, but some of the descriptions make it seem like more than it is.

------
peterclary
"We can make...wallpaper that’s actually a camera"

Uh oh.

------
100ideas
Similar lens-free "computational photography" techniques have been described
for super-resolution microscopy (see below).

In conventional imaging, a lens resolves a sharp image of the target onto the
sensor by focusing just the light emitted within the lens's "focal plane" (a
volume in front of the lens determined by the width of its aperture and the
wavelength of the light passing through it), while blurring light from
everywhere else into background noise. One can think of this system as
amplifying certain information (the light from the focal plane) while
filtering out other information (the light from everywhere else) from the
total input light.

In other words, the images formed by lenses are lossy by design. This isn't a
practical problem at macroscale, but at microscale the focal plane becomes
extremely thin (micron-scale), often 1/100th as thick as the sample being
imaged. To image large cells, for instance, a microscopist might be forced to
capture tens or hundreds of images of the sample at different focal depths (a
"z-stack"), then deconvolve them to generate an image in which most or all of
the sample is in focus.
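Here's a toy version of that merge step (my own sketch, picking the sharpest
slice per pixel by local gradient; real microscopy pipelines deconvolve with
the optical PSF, this is just the intuition):

```python
import numpy as np

# Merge a z-stack into an all-in-focus image by keeping, per pixel, the
# slice with the strongest local gradient (a crude sharpness measure).
def all_in_focus(stack):
    """stack: (depth, H, W) grayscale slices -> (H, W) merged image."""
    sharpness = (np.abs(np.gradient(stack, axis=1))
                 + np.abs(np.gradient(stack, axis=2)))
    best = np.argmax(sharpness, axis=0)   # sharpest slice index per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two toy slices: each is sharp (textured) in one half, flat elsewhere.
rng = np.random.default_rng(0)
h = w = 8
s0 = np.full((h, w), 0.5); s0[:, :4] = rng.random((h, 4))  # left half sharp
s1 = np.full((h, w), 0.5); s1[:, 4:] = rng.random((h, 4))  # right half sharp
merged = all_in_focus(np.stack([s0, s1]))
# Away from the seam, each half of `merged` comes from its sharp slice.
```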

Anyway, I think these new techniques are interesting because they
fundamentally attempt to capture more of the information present in the
incident light. The sensor does not capture an "image" that can be directly
visualized, but rather a fuller representation of the wave-field of all the
incident light. Generating a conventional image from the raw data requires
sophisticated additional processing.

The Ozcan lab has published some really neat (and eminently DIYable) work in
this area:

"Lensfree On-chip Tomographic Microscopy Employing Multi-angle Illumination
and Pixel Super-resolution", Serhan O. Isikman, Waheb Bishara, and Aydogan
Ozcan. J Vis Exp. 2012; (66): 4161. doi: 10.3791/4161.
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3487288/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3487288/)

 _"Existing 3D optical imagers generally have relatively bulky and complex
architectures, limiting the availability of these equipments to advanced
laboratories, and impeding their integration with lab-on-a-chip platforms and
microfluidic chips. To provide an alternative tomographic microscope, we
recently developed lensfree optical tomography (LOT) as a high-throughput,
compact and cost-effective optical tomography modality. LOT discards the use
of lenses and bulky optical components, and instead relies on multi-angle
illumination and digital computation to achieve depth-resolved imaging of
micro-objects over a large imaging volume. LOT can image biological specimen
at a spatial resolution of <1 μm x <1 μm x <3 μm in the x, y and z dimensions,
respectively, over a large imaging volume of 15-100 mm3, and can be
particularly useful for lab-on-a-chip platforms."_

Also see [http://spie.org/newsroom/technical-articles-archive/3979-mul...](http://spie.org/newsroom/technical-articles-archive/3979-multi-angle-illumination-with-pixel-super-resolution-enables-lensfree-on-chip-tomography?ArticleID=x84293)

