
Lensless camera creates detailed 3-D images without scanning - sgk284
https://phys.org/news/2017-12-lensless-camera-d-images-scanning.html
======
crusso
It depends on your definition of "lensless".

You know those kids' books that have the bumpy plastic coating and when you
turn the book one way you see one image - look at it from a different angle
and you see another image?

This is the same concept. They have a bumpy plastic coating that sends the
incoming light in different directions. They do some processing on standard
images to determine how the scattering works and then use that scattering
pattern to reconstruct new images.

I would view the bumpy coating as a myriad of lenses that change the character
of the incoming light.

We have one of those windows in our bathroom with the glass that is warped so
that it breaks up the light so much that it gives you privacy. I've often
thought that it would be a fun project to create a camera system that you
could calibrate to decrypt that scattered image by placing a known image
behind the window and pre-determining how the light waves are refracted. It's
cool to see that someone implemented something similar.

~~~
iaw
I don't think you know what the word 'lens' means[0]. It has a pretty solid
definition, and the objects used in the paper do not meet it. To be a lens the
object has to focus or disperse light via refraction, the objects used in the
paper and then referenced in the article diffuse[1] light.

Even relaxing the definition of lens to be an object designed to have
particular optical properties, the diffusion filters referenced in the article
weren't designed but chosen arbitrarily (they use the laminate from an ID
badge at one point). That's the bulk of the reason this is valuable: it
eliminates one of the most expensive components required to accomplish the
same result (the microlens array they reference in the article).

Now, the lenticular lens you referenced _is_ a lens because of how it
operates, but the technique referenced in the article definitively does not
require a lens, because lenses aren't diffusers. Does that make sense?

[0] https://en.wikipedia.org/wiki/Lens_(optics)
[1] https://en.wikipedia.org/wiki/Diffuser_(optics)

~~~
crusso
_I don't think you know what the word 'lens' means_

I'm not really interested in having a semantic argument about the meaning of
lens, but what the heck...

 _To be a lens the object has to focus or disperse light via refraction, the
objects used in the paper and then referenced in the article diffuse[1] light_

What do you think the "bumpy piece of plastic" in the article does, if not
focus and disperse light using refraction? There's no separate optical effect
called "diffusion" that isn't based upon dispersion. As a matter of fact,
often a diffuser is referred to as a "diffuser lens":

https://www.ylighting.com/element-lighting-accessory-lenses.html

 _the diffusion filters referenced in the article weren't designed but chosen
arbitrarily_

Nothing I said indicated that the specific bumpy plastic patterns needed to be
designed. See my related thoughts on decoding images seen through privacy
glass. It's the same concept, except they use the refracted images to
specifically select for multiple incoming light angles to create 3D images
(when they're doing the 3D image part). I actually understand how they did
what they did fairly well. I read the first couple of lines in the article and
knew that they had implemented the same concept that I had thought about years
ago.

 _Now, the lenticular lens you referenced is a lens because of how it
operates, but the technique referenced in the article definitively does not
require a lens, because lenses aren't diffusers. Does that make sense?_

I think maybe you're coming at this subject from photography terminology?
Perhaps that's why you think there's some kind of distinction between a "lens"
and a "diffuser". I'm coming at it from a physics perspective where these are
really the same thing.

~~~
iaw
Semantics is important when you're trying to change the definition of a word
that is rigorously defined. Your assertion that the bumpy piece of plastic in
the article focuses or disperses light is false because it does neither; it
diffuses light. Focus and disperse have _very_ specific meanings in physics
that are not met by an arbitrary piece of material.

If you're confident that a diffuser meets the physical definition of a lens,
could you point me to reference material from something a bit more rigorous
than ylighting's webpage? I have yet to see _any_ physics source refer to a
diffuser as a lens; Edmund Optics [0] is very precise in its aversion to
using that term for a diffuser.

I have yet to find _any_ physics text that indicates a diffuser is a lens.
Please correct me if you have a source, because ylighting looks like a
commercial supplier using the same imprecise language that you were using.

I am happy to stand corrected but I'm not okay with the top comment on a
science article undermining scientific definitions.

[0] https://www.edmundoptics.com/resources/application-notes/optics/diffuser-selection-guide/

~~~
crusso
_Semantics is important when you're trying to change the definition of a word
that is rigorously defined_

I'm not changing the meaning of "lens". First of all, I don't have that kind
of authority. Second of all, the bumpy plastic is clearly being used as a
lens.

The bumpy plastic is used to focus light onto the sensor array in a novel way
that allows the resulting sensor image to be used to construct 2D and 3D
images. A lens focuses or disperses light, normally to form an image. That's
what is happening here in this device. Just because the lens shape is
irregular, unplanned, and not a standard one you'd find in a camera shop
doesn't mean it's not a lens.

 _reference material from something a bit more rigorous than ylighting's
webpage_

That was just one of many examples of references to "diffuser lens". Feel free
to do some googling.

------
alanfalcon
Brilliant and amazing. "This is a very powerful direction for imaging, but
requires designers with optical and physics expertise as well as computational
knowledge." It’s a little crazy to me how much can be accomplished by tackling
hard problems in one domain by leveraging ideas and expertise from a seemingly
unrelated[1], unexpected domain. Specialization is at the same time super
important and a potential bottleneck to innovation. This fascinates me.

[1]Not that physics expertise in imagery is unrelated, but I feel like it’s
being used in very non-traditional ways here.

------
saycheese
Here's the research paper:
https://pdfs.semanticscholar.org/9cff/c0c80b1ae3c1b773b761f37c66e58890639e.pdf

------
foota
This is really neat. It seems to me like they use a see-through material of
some sort that scatters light randomly as a filter in front of the camera.
They then move around a small light and use that to figure out the pattern in
which the light is scattered by the material?

~~~
eeZah7Ux
> They then move around a small light

...or wave around a checkerboard pattern of known size to make calibration
faster perhaps?

~~~
solarkraft
Maybe, but they might need absolute positions.

------
mrow84
Can someone explain why they can't apply the same reconstruction technique to
the data that would be captured without a diffuser; i.e. why the diffuser is
required?

~~~
frumiousirc
It's a good question and I think strikes at the heart of the idea.

A lens transforms a family of rays to a pixel location. Given knowledge of
that pixel's intensity there is a degenerate solution for the original ray (in
terms of it's location and direction at some plane). This degeneracy is one
thing that leads to blurry photos.

The micro lens camera referenced in the article spreads this family over more
pixels in a known, analytic way to make the solution more unique. In principle
it suffers from the same degeneracy but each micro lens limits the possible
location of rays so if any of the pixels under it are hit then the general
location is set and the exact pixel in the group determines the direction.

The diffuser works similarly but spreads direction and position information
over many pixels in a random way (seeded by the material and its precise
placement). While this spread cannot be calculated analytically, it can be
discovered through calibration with known point sources.

In both these latter cases one inverts this analytic or calibrated ray->pixel
matrix and applies that to the measured pixels to reconstruct the rays that
have likely caused the measurement.

In the case of the diffuse "lens", the required matrix inversion can be
computationally expensive at best and impossible at worst. However, the
methods of compressed sensing (in particular L1 regularization) allow an
approximate inversion to be done in a relatively fast manner.
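
To make that inversion concrete, here's a minimal sketch (my own illustration
in Python/NumPy, not code from the paper) of L1-regularized reconstruction via
ISTA, assuming a calibrated ray->pixel matrix A and a measured pixel vector b:

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the L1 norm: shrink each entry toward zero by t
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam=0.1, n_iter=500):
        # Approximately solve  min_x 0.5*||A x - b||^2 + lam*||x||_1
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant (spectral norm squared)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)    # gradient of the least-squares term
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy demo: a sparse "scene" measured through a random "diffuser" matrix
    rng = np.random.default_rng(0)
    A = rng.normal(size=(128, 512))     # calibrated ray -> pixel matrix
    x_true = np.zeros(512)
    x_true[rng.choice(512, 10, replace=False)] = 1.0
    b = A @ x_true                      # measured pixel intensities
    x_hat = ista(A, b, lam=0.05)

The soft-threshold step is what encodes the sparsity prior that lets an
underdetermined system (more rays than pixels) be inverted approximately.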

~~~
mrow84
Ok, thank you. As I understand what you have written, the diffuser does as its
name suggests, and spreads light rays over more pixels than they would
otherwise have struck, making it easier to construct the pixel -> ray
function.

As a matter of interest, do you think it would still be possible to apply the
technique without the diffuser, presumably obtaining a lower-fidelity
reconstruction, by leaning more heavily on the regularisation?

~~~
chowells
Without the diffuser, the only information you have is roughly "a light ray
hit the sensor at location (x, y)". You can't derive from that information
what direction the photon hit the sensor from.

This technique gives you "a light ray hit the sensor at locations (x1, y1)
through (xn, yn)". You can deconvolve that list to get an approximate vector
along which the ray hit the diffuser.

Obviously there's a lot of calculation involved to apply this deconvolution
over the entire image at once, but it's the same thing light field cameras
have been doing for a while. The innovative bit here is working with a random
diffuser, rather than a very precise lens configuration.
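
As a toy illustration (my own sketch in Python/NumPy, not from the paper): if
you've calibrated the spread pattern produced by each incoming direction,
recovering a single ray's direction can be as simple as a matched filter over
those patterns (the `patterns` list here is hypothetical):

    import numpy as np

    def estimate_direction(measurement, patterns):
        # 'patterns' is a hypothetical list of flattened sensor responses,
        # one per calibrated incoming direction; pick the best match by
        # normalized cross-correlation.
        m = measurement / np.linalg.norm(measurement)
        scores = [np.dot(m, p / np.linalg.norm(p)) for p in patterns]
        return int(np.argmax(scores))

The real reconstruction has to untangle many overlapping rays at once, which
is where the compressed-sensing inversion mentioned elsewhere in the thread
comes in.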

~~~
mrow84
Ah yes, of course, I see what you mean. Presumably there is some minimal
number of pixels required to stand any chance of resolving the orientation of
a particular bundle of light rays (I would imagine 3)?

Also, would it be true to say that the more pixels you manage to spread a
given ray bundle over, the better the reconstruction, and that the main trade-
off is between the accuracy and the density of the reconstructed ray bundles,
for a fixed number of pixels?

~~~
chowells
I have to admit you're moving past my level of knowledge on the topic. Both of
your suppositions seem likely correct, but my understanding of the calculation
technique involved is superficial at best.

~~~
mrow84
Ok no worries - thanks for helping me to understand what's going on.

------
FractalNerve
This makes the Holographic Imager [1] a reality, hooray!

[1] http://memory-alpha.wikia.com/wiki/Holographic_imager

To me this is the biggest breakthrough I've heard of in recent years, one
that can and hopefully will affect everything. Using the extra CMOS chip on
your flagship smartphone will allow for taking 3D and soon holographic
pictures!

How amazing! I remember there was an AI trained to turn 2D pictures into 3D
[2]; combining that with the NPU chip on smartphones could truly make this
happen very soon.

[2] http://www.dailymail.co.uk/sciencetech/article-4904298/The-AI-turn-selfie-3D-image.html

------
punnerud
The research article (PDF):
https://www.osapublishing.org/DirectPDFAccess/3ADDF00A-E071-C39B-B3DA82E301EA25C6_380297/optica-5-1-1.pdf?da=1&id=380297&seq=0

------
visarga
I especially like the applications in cheap and compact 3D sensing and brain
neural interfaces. Very exciting!

------
kelvin0
This could also be used to analyze the material used as the diffuser.

The 'shape' of the caustics captured by the sensor under a given
electromagnetic 'light source' can probably yield some interesting
information about the diffuser. Kinda like how a spectrograph works.

------
jaclaz
For a moment I thought they could manage to have something _like_ the ESPER
machine in Blade Runner.

~~~
sp332
Lytro's light-field cameras can do this.
[https://vimeo.com/102302646](https://vimeo.com/102302646)

~~~
jaclaz
Yes, I know, thanks, but at the end of the day Lytro's nice technology allows
for focus and depth-of-field correction (besides some slight shift in
perspective); once the "wow" effect has faded away (I mean for "plain"
photography), that's it.

For cinema, VR, and CGI it is simply great, of course.

~~~
sp332
Based on the movie, I'm not sure what features you want besides "move slightly
to the right".

~~~
jaclaz
In the movie, from a plain photograph, the ESPER machine manages, somehow, to
enter into a reflection and then expand within it, seeing things that are
"not there" in the original image; in the fiction, the whole 3D space (even
what is on the other side of the door) is navigable and viewable:

http://thelegalgeeks.com/2017/06/28/admissibility-of-zhoras-photo-in-blade-runner/

What I mean can be better appreciated in this reconstruction:

[https://typesetinthefuture.com/2016/06/19/bladerunner/](https://typesetinthefuture.com/2016/06/19/bladerunner/)

https://typesetinthefuture.files.wordpress.com/2016/06/bladerunner_esper_room_layout_full.png

[https://vimeo.com/169392777](https://vimeo.com/169392777)

~~~
romwell
I think this is close to the Blade Runner camera tech you want:
http://web.media.mit.edu/~raskar/cornar/

~~~
jaclaz
>I think this is close to the Blade Runner camera tech you want:
http://web.media.mit.edu/~raskar/cornar/

Yes, that's it. I'd never heard of Femto-Photography before, thanks.

------
bhouston
Reminds me of insect eyes.

------
eeZah7Ux
I wonder if unscrewing the lens on a cheap USB camera could lead to some
interesting pictures.

Any lensless camera Open Source project around?

EDIT: A library, not the camera itself

~~~
crusso
You still need the diffraction grating-like piece in front of the sensor
array.

------
p1mrx
Would it be possible to run this system in reverse, and make a holographic
display?

------
ToJans
TL;DR: this is an approach that simplifies the production of light field
cameras (cameras that measure both the color and the angle of incoming light
beams): instead of building a grid of microscopic lenses, you use a "random"
piece of translucent plastic, like Scotch tape, and figure out how it
modifies incoming light during a calibration phase.
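
As a sketch of what that calibration could look like (hypothetical helper
names, Python/NumPy, my illustration rather than the paper's procedure):
record the sensor's response to a point source at each known position, and
stack those responses as columns of the forward matrix that later gets
inverted:

    import numpy as np

    def build_forward_matrix(capture, positions):
        # capture(pos) is a hypothetical acquisition function returning the
        # flattened sensor image for a point light source at 'pos'.
        # Each calibration image becomes one column of the ray -> pixel matrix.
        columns = [capture(pos) for pos in positions]
        return np.stack(columns, axis=1)   # shape: (n_pixels, n_positions)

Inverting (or pseudo-inverting) that matrix is then the reconstruction step
described elsewhere in the thread.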

~~~
tischler
Here is a similar approach using water droplets instead of a diffuser film,
but they don't go as far as performing 3D reconstruction from the light
field, and the ray directions are calibrated not with a calibration pattern
but by inferring a 3D model of the droplets:
https://light.cs.uni-bonn.de/4d-imaging-through-spray-on-optics/

------
solarkraft
The article and especially the title are of pretty low quality. Assuming the
voxels are surface voxels, how is it even theoretically possible to turn 1
million pixels into 100 million voxels? That would mean getting 100x the xy
resolution AND depth information out of this process. I'm sceptical of that
claim.

As crusso already mentioned, lenses and scanning are essential parts of image
capture: a lens is needed to direct the light to the sensor somehow, and
scanning to actually read out the image. "Using diffuse foils to replace
microlens arrays" would probably be a more fitting and still teasing
headline. Or "Diffuse foils can replace microlens arrays for 3D imaging",
perhaps.

Article aside, the research seems very sound and very cool. It demonstrates
another case of extracting high-quality information from low-quality sensors,
something I think we'll be seeing a lot more of. Another previously precisely
manufactured piece of hardware is being replaced by a software-supported
low-quality part, through optimizations that in spirit remind me of the
Google Pixel's camera and that drone that can fly (steer) with one rotor.

~~~
rleigh
We already have commercial systems which do this. See PhaseFocus, for example
(http://www.phasefocus.com/technology-virtual-lens/). It uses proprietary
deconvolution algorithms to reconstruct a 3D volume from diffraction patterns
in unfocussed (or partially focussed) 2D planes. See also X-ray
crystallography, which samples in Fourier space and uses a computed transform
to reconstruct the 3D image.

This new diffuser technology is very neat, but the processing problem has
been solved for several different applications for several years now.
Reconstructing a 3D volume is absolutely possible, and while this new
development will undoubtedly require new algorithms to work, it's already got
a sound basis in existing technology in use today.
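
For a flavor of the frequency-domain reconstruction such systems build on,
here is a minimal Wiener-style deconvolution sketch (my own illustration in
Python/NumPy, not PhaseFocus's proprietary method), assuming a known
point-spread function psf:

    import numpy as np

    def wiener_deconvolve(measured, psf, eps=1e-3):
        # Divide out the point-spread function in the frequency domain;
        # eps regularizes frequencies where the PSF carries little energy.
        M = np.fft.fft2(measured)
        H = np.fft.fft2(psf, s=measured.shape)
        X = M * np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.real(np.fft.ifft2(X))

Real systems add a proper noise model, but the regularized division is the
core idea.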

