
Researchers create focus-free camera with new flat lens - dnetesn
https://phys.org/news/2020-03-focus-free-camera-flat-lens.html
======
barbegal
Depth of field for a normal camera is a function of the relative aperture size
(f-number), focal length, and distance to the subject. This camera is able to
create a huge depth of field that a conventional camera could only achieve
with a far smaller aperture.
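
That relationship can be sketched with the standard thin-lens depth-of-field
formulas. This is a rough illustration only, not from the article; the 50 mm
focal length, the f-numbers, and the 0.03 mm circle of confusion are arbitrary
example values.

```python
def hyperfocal(f_mm, n, c_mm=0.03):
    """Hyperfocal distance in mm for focal length f_mm at f-number n."""
    return f_mm ** 2 / (n * c_mm) + f_mm

def dof_limits(f_mm, n, s_mm, c_mm=0.03):
    """Near and far limits of acceptable focus (mm) at subject distance s_mm."""
    h = hyperfocal(f_mm, n, c_mm)
    near = h * s_mm / (h + (s_mm - f_mm))
    far = h * s_mm / (h - (s_mm - f_mm)) if s_mm < h else float("inf")
    return near, far

# A 50 mm lens focused at 3 m: stopping down from f/2 to f/16
# stretches the zone of acceptable focus dramatically.
print(dof_limits(50, 2, 3000))   # roughly (2802, 3228)
print(dof_limits(50, 16, 3000))  # roughly (1922, 6834)
```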

So how does it break the laws of physics? Well, it doesn't; it cheats by not
preserving the phase of the light. In a normal lens the phase is preserved:
light that must travel further to reach the point of focus passes through less
lens material (where light travels slower than it does in air), so all the
wavefronts arrive in step.

The lens is also made on a flat disc, a little like a Fresnel lens. The
difference with this lens is that it doesn't have a single point of focus;
instead, light is focused onto many planes simultaneously, with equal power
focused onto each plane. In this paper the planes were chosen to lie between
5 mm and 1200 mm. A clever computer algorithm calculates the exact shape of
the lens that produces this distribution for a specific wavelength of light.

The end result is a lens with amazing depth of field, but with the trade-offs
of operating well only at a single wavelength and of poor efficiency: most of
the incoming light is focused onto planes the sensor isn't on. Indeed, at
1200 mm it has an equivalent f-number of 555, requiring insanely bright
lighting and a long exposure time to get a good image.
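
For a sense of scale: required exposure time grows with the square of the
f-number (same scene, same sensitivity). The f/8 baseline and the 1/100 s
shutter speed below are my own illustrative numbers, not from the paper.

```python
def exposure_ratio(n_target, n_base):
    """How many times more exposure n_target needs compared with n_base."""
    return (n_target / n_base) ** 2

ratio = exposure_ratio(555, 8)
print(round(ratio))        # ~4813x more light needed than at f/8
print(ratio * (1 / 100))   # a 1/100 s shot at f/8 becomes ~48 s at f/555
```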

I can't see many real-world applications for this camera given the downsides,
but in a world of ever more computational photography it could be one part of
a better imaging system.

~~~
thaumasiotes
> The end result is a lens which has amazing depth of field but the trade off
> of only operating well at a single frequency and poor efficiency. Most of
> the incoming light is being focused on a plane that the sensor isn't on.
> Indeed at 1200mm it has an equivalent f-number of 555 requiring some
> insanely bright lighting and long exposure time to get a good image.

How does this compare to a pinhole camera, which (ideally) has perfect depth
of field and focuses all incoming light onto the plane of the sensor, but
which achieves that at the cost of admitting very little incoming light? (Thus
requiring insanely bright lighting and long exposures.)

It seems like the exact same tradeoff.
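
The numbers bear that out: a pinhole's f-number is just focal length divided
by hole diameter, and with a textbook optimal pinhole (d ≈ 1.9·√(fλ), a
standard rule of thumb, not from the paper) it lands in the same tiny-aperture
regime. A quick sketch with illustrative values:

```python
import math

def optimal_pinhole_diameter(f_mm, wavelength_mm=0.00055):
    """Rule-of-thumb optimal pinhole diameter (mm), assuming green light."""
    return 1.9 * math.sqrt(f_mm * wavelength_mm)

def pinhole_f_number(f_mm):
    """Effective f-number of an optimally sized pinhole."""
    return f_mm / optimal_pinhole_diameter(f_mm)

# At a 50 mm pinhole-to-sensor distance this works out to roughly f/160:
# the same "tiny effective aperture, huge depth of field" regime as the
# f/555 quoted above.
print(round(pinhole_f_number(50)))
```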

------
kragen
The abstract:
[https://www.osapublishing.org/optica/abstract.cfm?uri=optica...](https://www.osapublishing.org/optica/abstract.cfm?uri=optica-7-3-214)

The paper:
[https://www.osapublishing.org/optica/viewmedia.cfm?uri=optic...](https://www.osapublishing.org/optica/viewmedia.cfm?uri=optica-7-3-214&seq=0)

I've only skimmed it, but it looks super interesting.

I was expecting some kind of computational holography wizardry, but no, it's
just a multilayer diffractive lens with circular symmetry which produces a
beam with a really stretched-out waist.

Usually diffractive lenses are limited to a single wavelength; they're using
850 nm here. Is there a way something like this can work over the whole
visible spectrum? If not, this might be more like a lightsaber than a camera.

~~~
kuprel
If they can get it to work well with one wavelength, couldn't you just use
three of them positioned in a triangle for RGB?

~~~
dTal
The problem is that three narrowband wavelengths don't add up to very much
light. If we call the visible range 400-700 nanometers and the bandwidth 1 nm
per channel, then you are throwing away about 99% of visible light.
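
The arithmetic, using those same assumed numbers (400-700 nm visible range,
three 1 nm-wide bands):

```python
visible_bandwidth_nm = 700 - 400   # assumed visible range width
bands = 3                          # one narrowband lens per R, G, B channel
band_width_nm = 1                  # assumed bandwidth per lens

kept = bands * band_width_nm / visible_bandwidth_nm
print(f"{kept:.1%} of visible light kept, {1 - kept:.1%} thrown away")
# 1.0% of visible light kept, 99.0% thrown away
```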

~~~
_0ffh
Also, if the incoming light is itself banded, you might end up throwing away
100% of it if the sensor bands happen to fall into the gaps.

------
glitchc
They are probably experimenting with sandwiches of high refractive index
material.

There are no statements regarding transmissivity, which leads me to believe
that it's actually lower than a typical lens wide open, i.e. the incident
irradiance (light on the sensor) is equivalent to a regular lens set to a
narrow aperture. There's usually no free lunch in optics, or physics generally.

~~~
bravo22
Looks like they just etch a pattern into a single material.

~~~
glitchc
So it's a diffraction grating. I'll dig more into it.

------
jfengel
It sounds like kind of a cross between a light-field camera and a Fresnel
lens. It uses the shape of the surface to make patterns on the chip, which
then have to be decoded with a lot of processing. Instead of microlenses on
the chip, you put microlens replacements in the flat surface.

------
aj7
Show us ONE image.

~~~
IanCal
The paper is open access and linked from the article, here you go:
[https://doi.org/10.1364/OPTICA.384164](https://doi.org/10.1364/OPTICA.384164)

------
dcanelhas
A pinhole camera is also focus-free :)

------
branko_d
Could this be used for glasses?

Could the same glasses serve both as reading glasses and as long-distance
glasses?

~~~
warkdarrior
No, unless you replace your eyes with chips that perform the post-processing
of the image.

~~~
rini17
Brain could certainly learn to do it :)

~~~
red75prime
Maybe it can, but it doesn't do a thing. At least for me. Deconvolution of
astigmatic eye lens image? Nope. Spatial superresolution by temporal
integration? Nope.

Low-level visual processing in our retinas and brains seems to be too
specialized to learn new tricks.

~~~
rini17
The brain does spatial superresolution by temporal integration all the time!
Look up "saccade".

No idea about deconvolution, but I doubt computers can do a better job of
identifying things from blurry images than brains can.

~~~
red75prime
Saccades just compensate for low peripheral resolution and the shortcomings
of rods and cones. I had in mind getting more finely resolved images than the
number of photoreceptors allows.

~~~
rini17
There are no "resolved images" that can be measured by pixels anywhere in the
humans' visual system. Only talking about identifying things makes sense here,
and that too, it is capable:

"Because the disalignments are often much smaller than the diameter and
spacing of retinal receptors, vernier acuity requires neural processing and
"pooling" to detect it."

[https://en.wikipedia.org/wiki/Vernier_acuity](https://en.wikipedia.org/wiki/Vernier_acuity)

------
css
A lens that keeps everything in focus (i.e., a very large or infinite depth of
field) is not desirable for most photography applications; this mainly has
important implications for industrial use. It is always interesting to see
innovations in this space, though.

~~~
altcognito
Software can fill in many of the gaps required for photography.

~~~
glup
A common opinion among software engineers, but less common among serious
photographers. "Portrait mode" looks like a cheap instagram filter. To be
fair, it does let a much larger population experiment with effects that
otherwise take a bit of specialized gear and knowledge.

~~~
ansgri
Modern phones get you 90% of the way there. Bokeh quality apparently wasn't a
priority until very recently. For indoor use, a Huawei P20 Pro with artificial
bokeh and good onboard image processing beats my APS-C-format Sony camera with
an f/1.4 lens most of the time, not even counting cases where the depth of
field of that f/1.4 is too shallow! No way can the mobile thing compete with a
good lens in good lighting, though. But soon the remaining gap won't be
because phone camera hardware isn't competitive; rather, the phone's
computational photography pipeline and lack of dedicated controls won't allow
enough flexibility in shooting technique. Mobile phone manufacturers are only
just starting to advance into higher-end photography; the science and
computing power are basically there, and you can do a lot with deep learning
combined with a real understanding of image formation and photography.

I wouldn't call myself anything serious as a photographer; OTOH, some serious
photographers whom I follow are starting to turn into 3D artists or
videographers.

Photography as a profession seems to be in a crisis: not that it isn't needed,
but that many adjacent things (3D, video, design, marketing) are also needed.
It's not unlike programming: there's almost no such thing as a senior Python
developer per se; there are senior Web, Data, etc. developers who primarily
use Python along with other tools (JS, SQL, etc.).

~~~
altcognito
Yes, you get where I'm coming from, and I think both of us agree that nothing
will ever replace a photographer who knows how to compose a photo. Taking a
picture is different!

------
econcon
Has anyone ever used an electromagnetic diaphragm to control lens shape?

~~~
banana_giraffe
I think you're referring to a so-called electrically tunable lens. Most every
camera these days uses an electromagnetic diaphragm to control the aperture.

And yes, electrically tunable lenses are a thing. Optotune is one company I've
heard of making hardware for the industrial space; there are probably others.

------
kazinator
Screw phone cameras; help people with visual impairments see better!

Phone cameras are already twice as good as they need to be.

~~~
tal8d
Which market will bring costs down and feed into the next cycle of
development: one that potentially includes everyone, or one restricted to
desperate legally blind people? I'm not a fan of consumerism, but it goes a
lot further in improving everyone's lives than conspicuous virtue projects do.

------
anovikov
Someone just reinvented the Fresnel lens?

------
jungletime
This seems like it would increase the distance at which you need to hold the
phone to take a selfie. And it won't have bokeh.

Bokeh is the effect of blurring out the background. It's important because it
lets you isolate the subject, like a human face, from the background.

Still, if it does one thing better than other lenses, it will be useful.

~~~
ska

      Bokeh is the effect of blurring out the background.
    

This is slightly incorrect in an important way. Depth of field has the effect
of blurring out the background and/or foreground; it is a property of optical
systems. However, the design of lenses changes how this _looks_. The latter
part is what "bokeh" means, and it is why people talk about "the bokeh" of a
particular lens.

In other words, bokeh is fundamentally an aesthetic characteristic.

~~~
hatsunearu
[http://jtra.cz/stuff/essays/bokeh/](http://jtra.cz/stuff/essays/bokeh/)

Here's an extremely in-depth look at what that means.

It explains what people mean by "soft bokeh" (~= bokehlicious!), apodized
lenses, and the "cat's eye" effect you sometimes find.

