
Imaging Without Lenses - Gatsky
https://www.americanscientist.org/article/imaging-without-lenses
======
hliyan
As I understand, one of the biggest costs of optical telescopes is the mirror,
which needs to be meticulously ground to a paraboloid. Ever since I was a
child, I always wondered if we could use a semi-cylindrical mirror (which is
much easier to fabricate) to focus light into a linear sensor array and
achieve the remaining focus computationally. If this is possible, we might be
able to build much larger, cheaper versions of the James Webb Telescope.

I discovered five years ago that cylindrical reflectors are not viable, purely
optically: [https://physics.stackexchange.com/questions/29853/are-
cylind...](https://physics.stackexchange.com/questions/29853/are-cylindrical-
mirror-telescopes-possible)

~~~
sp332
I can't quite visualize the problem. As long as you make the focal length of
the first cylinder equal to the distance to the second cylinder + the distance
from there to the sensor, and make the focal length of the second cylinder
just the distance to the sensor, then both dimensions should converge at the
right point?

~~~
hliyan
I thought so too. But apparently that doesn't give you a _point focus_, but
rather a square focus, which doesn't work.

~~~
sp332
Ok, I get it. A circular mirror focuses all directions equally, while two
cylindrical ones focus only exactly left-right and exactly up-down correctly,
leaving every diagonal out of focus to varying extents.
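For the on-axis part at least, the focal-length bookkeeping does work out. A quick paraxial ABCD sketch (the numbers are illustrative, not from any real design) shows both axes converging at the sensor; the failure lives outside this first-order picture, in the diagonal rays:

```python
# Paraxial check: two crossed cylindrical lenses, object at infinity.
# f1 and d are made-up illustrative values.
f1 = 1.0          # focal length of first cylinder (bends x only)
d = 0.3           # separation between the two cylinders
f2 = f1 - d       # second cylinder (bends y only); sensor sits f2 behind it

def propagate(y, theta, L):   # free-space transfer over distance L
    return y + L * theta, theta

def lens(y, theta, f):        # thin-lens refraction
    return y, theta - y / f

for h in (0.01, 0.02, 0.03):  # parallel input rays at height h
    # x-axis: bent by lens 1, then drifts d + f2 to the sensor
    x, tx = lens(h, 0.0, f1)
    x, tx = propagate(x, tx, d)
    x, tx = propagate(x, tx, f2)
    # y-axis: drifts past lens 1 untouched, bent by lens 2, drifts f2
    y, ty = propagate(h, 0.0, d)
    y, ty = lens(y, ty, f2)
    y, ty = propagate(y, ty, f2)
    print(f"h={h}: x at sensor = {x:.2e}, y at sensor = {y:.2e}")
```

Every ray lands at (essentially) zero height in both axes, so the separable first-order analysis really does predict a point focus; it's the skew/diagonal rays, which this separable model can't represent, that smear it out.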

------
anfractuosity
I'm curious whether you could DIY the mask used in, for example, the FlatCam
they mention; it looks like a really interesting area.

Apparently for the mask they use "a custom-made chrome-on-quartz photomask
that consists of a fused quartz plate". I was wondering if you could make a
very poor equivalent by just printing on acetate.

It vaguely reminds me of an old-fashioned technique such as:

[https://en.wikipedia.org/wiki/Zone_plate](https://en.wikipedia.org/wiki/Zone_plate)
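For anyone wanting to play with the idea, a zone plate mask is easy to generate: zone boundaries fall at r_n ≈ sqrt(n·λ·f), with alternate zones blocked. A minimal sketch (the wavelength, focal length, and mask size here are made-up illustrative values, not anything from FlatCam):

```python
import numpy as np

# Binary Fresnel zone plate mask. All parameters are illustrative.
wavelength = 550e-9   # green light, in metres
f = 0.05              # design focal length, metres
N = 512               # mask resolution in pixels
extent = 2e-3         # half-width of the mask, metres

coords = np.linspace(-extent, extent, N)
xx, yy = np.meshgrid(coords, coords)
r2 = xx**2 + yy**2

# Zone index n = r^2 / (lambda * f); even zones open, odd zones opaque.
zone = np.floor(r2 / (wavelength * f)).astype(int)
mask = (zone % 2 == 0).astype(np.uint8)  # 1 = transparent, 0 = chrome/ink
```

Printing that pattern on acetate would give you the geometry, though the feature sizes near the edge (a few microns for visible light) are likely beyond what an office printer can resolve.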

~~~
stochastician
You can actually just place a random diffuser directly on a sensor -- this is
work from some colleagues at UC Berkeley which I (surprisingly) didn't see
mentioned in this article [https://www.medgadget.com/2018/01/diffusercam-
lensless-3d-im...](https://www.medgadget.com/2018/01/diffusercam-
lensless-3d-imaging-without-scanning.html)

DiffuserCam was previously discussed on HN but I couldn't find the link.

~~~
anfractuosity
Cheers, I'll have to look more at that. It's neat the source is available too!

------
taeric
The title makes it sound almost like we are just ditching lenses. Makes much
more sense to say that we have managed to build much more sophisticated
sensors.

The part about "computational imaging" rings awkward to me. In a very real
sense, lenses are just computation systems. They are crafted to manipulate
their inputs in a controlled way. We just now can do more sophisticated
computation in other parts of the systems we have built.

------
kmill
Just yesterday someone was telling me about using diffraction for imaging. He
also mentioned the work of Laura Waller, who apparently is able to get 3D
images of a scene by putting translucent-but-not-transparent tape directly
over the lens-free sensor. It seems the idea is that each point source of
light has its own caustic pattern.
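The per-point caustic idea boils down to a linear inverse problem: if every scene point produces its own distinct sensor pattern, the measurement is y = A·x and you recover the scene by inverting A. A toy sketch, with a random matrix standing in for the calibrated caustic patterns (nothing here models real tape or optics):

```python
import numpy as np

# Toy lensless recovery: column i of A is the sensor pattern that scene
# point i would produce; the bare sensor records their weighted sum.
rng = np.random.default_rng(0)
n_scene, n_sensor = 16, 64                 # more sensor pixels than unknowns
A = rng.normal(size=(n_sensor, n_scene))   # stand-in for calibrated patterns
x_true = rng.random(n_scene)               # unknown scene brightness
y = A @ x_true                             # what the sensor records
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares recovery
print(np.allclose(x_hat, x_true))  # -> True (noise-free, well-conditioned)
```

Real systems add noise and a scene far bigger than the sensor, which is where priors and regularization come in, but the linear-mixing picture is the core of it.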

~~~
winkywooster
A talk was given at the last CCC about free-electron lasers, and in it the
speaker talks about using diffraction to image chemical reactions. It's a
great watch.

[https://media.ccc.de/v/34c3-8832-free_electron_lasers](https://media.ccc.de/v/34c3-8832-free_electron_lasers)

------
birdman3131
Why does this sound like the real version of "Zoom. Enhance."

~~~
ryandamm
I used to dismiss those obvious Hollywood cliches, but the more I looked into
the space professionally, the less silly it sounds. Images-as-data are far
richer than images-as-images, it turns out, and clever algorithms can extract
a lot more information from them than you would naively expect.

So yeah, you're right!

~~~
cooper12
They are silly, though. You can't create data out of thin air, and this
article even notes multiple times that capturing more complexity/detail
requires ingesting more data at shooting time.

~~~
IanCal
Yes, though you can do better than is commonly claimed:

You can make use of known properties to improve the picture/extraction as you
aren't taking pictures of random data.

You can improve resolution with multiple frames, having video gives you a lot
more info.

You can extract information even at very low resolution in limited cases,
reading number plates for example. Very limited possible inputs (one font, a
known character set, limited combinations) mean you can find a closest match
more easily.

Of course some TV things are silly, but sometimes they're also far worse than
we can actually do.
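The number-plate point can be sketched as a nearest-template search: downsample the known glyphs the same way the observation was degraded and pick the closest. The 5x3 glyph bitmaps below are toy sketches, not a real plate font:

```python
import numpy as np

# Toy closest-match identification of a heavily downsampled character.
GLYPHS = {
    "1": ["010", "110", "010", "010", "111"],
    "7": ["111", "001", "010", "010", "010"],
    "8": ["111", "101", "111", "101", "111"],
}

def bitmap(rows):
    return np.array([[int(c) for c in r] for r in rows], float)

def downsample(img):
    # crude 'low resolution': average the top and bottom halves of each column
    return np.vstack([img[:3].mean(axis=0), img[3:].mean(axis=0)])

templates = {ch: downsample(bitmap(rows)) for ch, rows in GLYPHS.items()}

def identify(low_res):
    # nearest template under squared error
    return min(templates, key=lambda ch: ((templates[ch] - low_res) ** 2).sum())

# a noisy low-res observation of "8"
obs = downsample(bitmap(GLYPHS["8"])) + 0.1
print(identify(obs))  # prints 8
```

With only a handful of candidates, even a 2x3 blur leaves enough signal to pick the right one; with arbitrary input images, the same blur would be unrecoverable.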

------
bob_theslob646
An extremely interesting read, but a very long article; a 10-minute read at
minimum.

------
neves
I believe that astronomers must have very special techniques for data
visualization, since almost all their data nowadays is non-visual, yet they
make stunning images. Sure, some images are made just to be stunning, but
they probably have techniques for gathering insights. These could be used in
other high-dimensionality scenarios.

Do you know any good references on astronomical data visualization?

