
Show HN: Wave-based non-line-of-sight computational imaging in Julia - krrutkow
https://www.analytech-solutions.com/analytech-solutions/blog/nlos.html
======
gugagore
"Honestly, the quality is probably on par with a 40 year old digital camera.
On the other hand, it hopefully means that in about 40 years we will have
GoPro’s with this capability strapped to our helmets showing us what’s around
the corner before we even get there!"

I would have guessed there are some resolution limits. For a traditional [line
of sight?] camera, the resolution is limited by the imager and
[https://en.wikipedia.org/wiki/Angular_resolution](https://en.wikipedia.org/wiki/Angular_resolution)
. What limits are at play for non-line-of-sight imaging?

~~~
mohn
The quantization of light is going to be a serious limit. The detector
elements of a camera could be made pretty sensitive, but they can't detect
fewer than one photon.

In NLOS (and traditional photography) the scene to be imaged is illuminated
with N photons but the detector only receives M photons worth of signal back,
where M<<N. So future people are going to need:

1) helmet-mounted lasers capable of sustained very high power output, to
increase the signal enough to get over the quantum detection threshold

2) to slow their roll, so a lot of imaging can occur before they round the
corner

or both. There are already practical limitations on laser power output in air
because the air will turn into plasma along the beam path. Similarly, for NLOS
you need to not burn up and destroy the surface that you're using to bounce
the light around the corner.
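The scale of that signal loss is easy to sketch with back-of-envelope numbers. A minimal calculation, where every hardware figure is an illustrative assumption and not from the article:

```python
import math

# Back-of-envelope photon budget for NLOS imaging.  Every number here is
# an illustrative assumption, not a figure from the article.

PLANCK = 6.626e-34       # Planck constant, J*s
C = 3.0e8                # speed of light, m/s

wavelength = 532e-9      # green pulsed laser, m
pulse_energy = 1e-6      # 1 microjoule per pulse, J

photon_energy = PLANCK * C / wavelength        # J per photon
photons_out = pulse_energy / photon_energy     # N photons leaving the laser

# Each diffuse (Lambertian) bounce scatters light over a hemisphere; the
# fraction recaptured by an aperture of area A at distance r is roughly
# A / (pi * r^2).  Assume a 1 cm^2 aperture, 1 m bounce legs, and 50%
# reflective surfaces, with two bounces (wall -> hidden object -> wall).
aperture = 1e-4          # m^2
r = 1.0                  # m
albedo = 0.5
capture_per_bounce = albedo * aperture / (math.pi * r ** 2)

photons_back = photons_out * capture_per_bounce ** 2   # M photons detected

print(f"photons out:  {photons_out:.2e}")
print(f"photons back: {photons_back:.2e}")   # M << N by ~10 orders of magnitude
```

With these assumed numbers, a trillion-photon pulse comes back as only a few hundred photons, which is why single-photon detectors and long integration times show up in this field.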

~~~
etatoby
> _The detector elements of a camera could be made pretty sensitive, but they
> can't detect fewer than one photon._

Or can they?

If the detector elements were part of a quantum computer (or a quantum
computing "chip", whatever that will turn out to be) they would be able to
analyze all the photon paths (Feynman paths) bouncing back from the subject,
even those that would decohere / collapse away in a traditional detector.

IANAP, but wouldn't a quantum chip be able to perform some amount of NLOS by
analyzing the paths of even a single photon?

~~~
darkmighty
As soon as your photon interacts with a realistic scene (thermal), I believe
it loses coherence with any apparatus you might have prepared in the detector.
In short, no, I don't think this is possible, unless you consider idealized
environments.

------
krrutkow
Direct link to the Julia project repository: [https://github.com/analytech-solutions/fkMigration.jl](https://github.com/analytech-solutions/fkMigration.jl)

~~~
billconan
Wait, why can it see color? If this algorithm is based on the time of flight
of light (similar to ultrasound), I would think it wouldn't show color, yet
the resulting image shows the printed 1, 2, 3 on that paper.

Is that because light reflects differently across the same paper due to
different colors (black vs. white)?

~~~
elihu
I think the mouseover text on the color picture is a mistake. The article
labels the blurry grayscale image as a reconstruction and the color image as a
regular image.

~~~
Arnavion
The two panels in the README are a single image, which is why they share the
mouseover text.

------
User23
This reminds me of Feynman’s QED, which is a must read for laymen who want to
know more about things like why shadows have fuzzy edges and other ways light
doesn’t exactly travel in a straight line.

~~~
whatshisface
You don't need QED for that, only Maxwell's equations. QED is necessary if you
want to explain how those effects jibe with the fact that you can count
individual photons.

~~~
Judgmentality
If I am wrong, please correct me, but I don't believe traditional
electromagnetism can explain the fuzziness (which I think is better described
as noisiness) of shadows.

~~~
thisrod
It depends what you mean by "traditional" and "fuzziness".

In ray optics, shadows from point sources are black and white. (If half of the
sun is obscured, the surface it illuminates is half as bright, but that's
pretty obvious.) That's very old school. Newton could get it right, even
though he got particles and waves entirely wrong.

In wave/electromagnetic optics, shadows are shades of grey. Just like water
waves can be ripples at one end of a beach, crashing surf at the other, and
vary continuously in between.

In quantum optics, shadows are a superposition of different shades of grey,
due to shot noise from the photons. That matters if you're doing very precise
interference measurements, but not when you're taking photographs, no matter
how short the exposure.
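The wave-optics claim can be checked numerically. A minimal Huygens-Fresnel sketch (all parameters are arbitrary illustrative choices): summing wavelets over an open half-plane gives a shadow edge that fades through shades of grey, with intensity about a quarter of the lit value right at the geometric edge.

```python
import numpy as np

# Knife-edge (straight-edge) diffraction via a direct Huygens-Fresnel sum:
# light passes the open half-plane x' > 0 and falls on a screen a distance
# z away.  The geometric shadow edge is at x = 0.

wavelength = 500e-9                      # m
z = 1.0                                  # aperture-to-screen distance, m
k = 2 * np.pi / wavelength

xs = np.linspace(-2e-3, 2e-3, 401)       # positions on the screen, m
xp = np.linspace(0.0, 2e-2, 50000)       # samples across the open aperture
dxp = xp[1] - xp[0]

# Fresnel approximation: each aperture point contributes a wavelet with
# phase k * (x - x')^2 / (2 z); sum them coherently.
field = np.array([
    np.sum(np.exp(1j * k * (x - xp) ** 2 / (2 * z))) * dxp for x in xs
])
intensity = np.abs(field) ** 2
intensity /= intensity[xs > 1e-3].mean()    # normalize to the lit side

edge = intensity[np.argmin(np.abs(xs))]     # at the geometric shadow edge
print(f"intensity at the geometric edge: {edge:.2f}")   # ~0.25, a grey
```

Deep in the shadow the intensity is small but nonzero, and on the lit side it oscillates around 1 (Fresnel fringes), exactly the "shades of grey" picture above.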

------
SubiculumCode
I can't help but wonder to what extent biological systems could theoretically
implement an analogous algorithm. Knowing a tiger is behind the tree would
sure be advantageous.

~~~
hirundo
First, evolve a high power laser.

~~~
lopmotr
There are passive NLOS imaging technologies based on shadows:

[https://arxiv.org/pdf/1807.02444.pdf](https://arxiv.org/pdf/1807.02444.pdf)

[https://www.quantamagazine.org/the-new-science-of-seeing-aro...](https://www.quantamagazine.org/the-new-science-of-seeing-around-corners-20180830/)

But animals can already "see" round corners using sound and smell. Dogs don't
need any sort of line of sight to identify who's nearby and cats have night
vision.

------
MrSpiffy
This is quite analogous to exploration seismic imaging, including the Stolt
migration. Thank you for sharing this.

~~~
woeirua
Yeah, it's strange to me that people working in this subfield propose exotic
approaches instead of basic time migration, which is ideal for this problem
and has been known for more than 50 years now.

------
sbhn
Perhaps this can be done with multi-frequency audio pulses. That would be
easier to build than an NLOS camera.

~~~
woeirua
It definitely can be done. A process similar to this is used for ultrasonic
imaging.

------
xyzal
I wonder whether there are already any military applications based on wave-
based imaging. I assume one would gain a significant advantage from having
such a capability, especially in urban warfare.

------
amingilani
There was a similar technique discussed in a TED Talk on femto-photography,
which had an animation showing how it was done, for an ELI5[0].

The talk mentions future technology to see around corners. This article may
have been inspired by the talk, but it certainly makes the technique much
more accessible to the rest of us.

[0]: [https://youtu.be/Y_9vd4HWlVA?t=354](https://youtu.be/Y_9vd4HWlVA?t=354)

------
eximius
It is interesting that the reconstructed image seems rotated relative to the
color image. Is that change an artifact of the algorithm or of being NLOS, or
is it the angle of the perspective (for lack of a better word, since I'm not
really sure how you would define your viewpoint), i.e., a difference between
where the color image was taken and where the special sensor was (you could
place the normal camera in front of a mirror to get a similar perspective to
the special sensor's)?

~~~
GistNoesis
The captured data is a 3D voxel grid (not a 2.5D image like the Kinect's). In
theory it can see behind the objects themselves.

The reproduced image is an orthogonal projection of this 3D grid of voxels.
Any projection could have been chosen; for example, the camera's projection
could have been used to obtain exactly the same image.
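To make the projection point concrete, here is a toy NumPy sketch (unrelated to the fkMigration.jl code): a 3D voxel grid with a hypothetical hidden object, rendered as two different orthogonal maximum-intensity projections.

```python
import numpy as np

# An NLOS reconstruction is a 3D grid of voxel intensities; the displayed
# image is just one projection of it.  Here we place a bright "object" in a
# volume and render it from two directions.

volume = np.zeros((64, 64, 64))          # (x, y, z) voxel intensities
volume[20:30, 40:50, 10:15] = 1.0        # a hypothetical hidden object

front_view = volume.max(axis=2)          # max-intensity projection along z
top_view = volume.max(axis=1)            # max-intensity projection along y

print(front_view.shape, top_view.shape)  # two different 64x64 images
```

Any other projection direction (including the physical camera's) could be rendered from the same volume.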

~~~
eximius
Do they have a lower effective resolution at different projections due to
light reflected being somehow less direct (more reflections?)?

~~~
GistNoesis
Yes. The best views will be from projections whose camera position is at a
point scanned by the laser. With the f-k migration algorithm, the output is in
fact the aligned sum of the 2.5D images that would have been produced by
cameras located at the laser-scanned locations. The algorithm is based on a
simple model which doesn't take obstructions or secondary ray reflections
into account.

In theory, slower O(N^5) algorithms, like inverting the rendering engine, can
take obstructions and ray bounces into account, but the effective resolution
will depend on the dynamic range of your measurements, which you can improve
a little by increasing exposure time. In practice, reflections don't carry
enough information.

You can also incorporate phase information (as with holograms) instead of
just intensity, if you are able to measure it, to get a sharper image thanks
to coherent imaging. Once you use phase information, you can improve the
spatial resolution further by using multiple wavelengths.

------
amingilani
Wait, what is the hardware required for capturing this kind of data? Can I do
this with a point-and-shoot camera?

~~~
snazz
The picosecond shutter timescale and high-speed control of a powerful laser
would make that awfully difficult.

~~~
amingilani
I agree, sorry. I went back to a video I watched a few years ago on something
similar and realized that the defining feature of their camera was femto-
photography.

------
samstave
Can you apply these theories to using a Wi-Fi AP to read the 3D layout of the
room/space based on RSSI and its various interference patterns?

Could you make a 3D scanner app in conjunction with UBNT and see what could be
done....

~~~
mbarronj
Not only wavelength, but also beam width. The laser is spatially very
compact. Radio waves need larger antenna apertures to be that focused. It's
hard to get tight spatial resolution with a beam that looks like a balloon
animal.

Also, multipath and time-of-flight is difficult with radio waves, and the EM
reflectance is weirder. Light is just much more convenient for this purpose.
But - it's not outside of the realm of possible.
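The "balloon animal" point follows from the diffraction limit: beam divergence scales roughly as wavelength over aperture, theta ~ lambda/D. A quick comparison using assumed, illustrative hardware numbers:

```python
# Diffraction-limited beam divergence: theta ~ lambda / D (small-angle
# estimate).  All hardware numbers are illustrative assumptions.

laser_wavelength = 532e-9    # m, green laser
laser_aperture = 1e-3        # m, 1 mm beam waist

wifi_wavelength = 0.125      # m, ~2.4 GHz Wi-Fi
wifi_aperture = 0.1          # m, 10 cm antenna

laser_div = laser_wavelength / laser_aperture   # radians
wifi_div = wifi_wavelength / wifi_aperture      # radians

# Approximate spot size after 10 m of travel (divergence angle * distance)
print(f"laser spot after 10 m: ~{10 * laser_div * 1e3:.1f} mm")
print(f"wi-fi spot after 10 m: ~{10 * wifi_div:.1f} m")
```

Millimetres for the laser versus metres for the radio beam, with these assumptions: the radio "spot" is bigger than the room.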

~~~
orbital-decay
_> It's hard to get tight spatial resolution with a beam that looks like a
balloon animal._

Would compressed sensing help? It doesn't need the focus to be precise to
reconstruct the scene from a single pixel detector.

~~~
woeirua
No. Compressed sensing only lets you attempt to reconstruct the original
signal from a limited subset of the data, sampled in a specific way, as long
as you can assume some kind of sparsity on the data in another domain.

Fundamentally, you cannot get something from compressed sensing that you
couldn't in principle obtain otherwise.
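For a concrete picture of what the sparsity assumption buys, here is a minimal sparse-recovery sketch (all sizes and values are illustrative, not tied to NLOS hardware): a signal with 3 nonzeros out of 100 is recovered from only 40 random measurements via orthogonal matching pursuit (OMP), one standard compressed-sensing reconstruction algorithm.

```python
import numpy as np

# Compressed-sensing toy: recover a sparse signal from fewer measurements
# than unknowns, exploiting sparsity via greedy orthogonal matching pursuit.

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3                        # unknowns, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(3.0, 1.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                  # the 40 measurements

support, residual = [], y.copy()
for _ in range(10):                             # a few greedy iterations
    j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
    if j not in support:
        support.append(j)
    # least-squares fit restricted to the selected columns
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
    if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(y):
        break

x_hat = np.zeros(n)
x_hat[support] = coef

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)   # near zero when recovery is exact
```

Without the sparsity assumption, 40 equations in 100 unknowns are hopelessly underdetermined, which is the parent's point: compressed sensing trades samples for a prior, it doesn't create information.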

------
lokimedes
Fascinating. I have been using the same approach to generate synthetic
aperture radar images, but having a non-radar platform to test the processing
algorithms with is quite useful.

------
nurettin
It appears we are escaping Plato's cave allegory.

What would be really interesting is reconstructing what is behind walls using
run-of-the-mill hardware.

------
gurumeditations
Does anyone have a more ELI5 explanation?

~~~
cjhanks
When the light bounces off the target surface it scatters. The light which
bounces back to the sensor array arrives with time offsets that depend on the
geometry of the scene; those offsets can be modeled as curves.

It's similar (in that it's the inverse case) to a bucket of water: if you
drop something in, it creates ripples. If you only observe the ripples at
the edge of the bucket, you can reasonably derive where the thing was likely
dropped.
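The ripple analogy above can be turned into a toy computation: given wave arrival times at a few known sensor positions, score every candidate location by how well it predicts those times. This is the same backprojection idea NLOS methods build on (a simplified sketch with made-up positions, not the article's algorithm):

```python
import numpy as np

# Locate the "drop point" from arrival times at known sensors, by brute-
# force backprojection over a grid of candidate locations.

speed = 1.0                                # wave speed (arbitrary units)
source = np.array([0.7, 0.3])              # unknown "drop" location
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Measured arrival times = distance / speed
times = np.linalg.norm(sensors - source, axis=1) / speed

# Grid search: the best-scoring cell is the estimated source position
best, best_err = None, np.inf
for x in np.linspace(0, 1, 101):
    for y in np.linspace(0, 1, 101):
        predicted = np.linalg.norm(sensors - np.array([x, y]), axis=1) / speed
        err = np.sum((predicted - times) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated source:", best)   # close to (0.7, 0.3)
```

The NLOS case adds a round trip (laser out, light back) and a 3D grid, but the scoring-by-travel-time idea is the same.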

~~~
zimpenfish
> If you only observe the ripples in the bucket, you can reasonably derive
> where the thing was likely dropped.

Akin to the Polynesian wave navigation using stick charts:
[http://thenonist.com/index.php/thenonist/permalink/stick_cha...](http://thenonist.com/index.php/thenonist/permalink/stick_charts/)

