Show HN: Wave-based non-line-of-sight computational imaging in Julia (analytech-solutions.com)
375 points by krrutkow 52 days ago | 71 comments



"Honestly, the quality is probably on par with a 40 year old digital camera. On the other hand, it hopefully means that in about 40 years we will have GoPro’s with this capability strapped to our helmets showing us what’s around the corner before we even get there!"

I would have guessed there are some resolution limits. For a traditional [line of sight?] camera, the resolution is limited by the imager and https://en.wikipedia.org/wiki/Angular_resolution . What limits are at play for non-line-of-sight imaging?


The quantization of light is going to be a serious limit. The detector elements of a camera could be made pretty sensitive, but they can't detect fewer than one photon.

In NLOS (and traditional photography) the scene to be imaged is illuminated with N photons but the detector only receives M photons worth of signal back, where M<<N. So future people are going to need:

1) helmet-mounted lasers capable of sustained very high power output, to increase the signal enough to get over the quantum detection threshold

2) to slow their roll, so a lot of imaging can occur before they round the corner

or both. There are already practical limitations on laser power output in air because the air will turn into plasma along the beam path. Similarly, for NLOS you need to not burn up and destroy the surface that you're using to bounce the light around the corner.
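
For a rough sense of scale, here is a back-of-the-envelope photon budget (a sketch with assumed numbers, not figures from the article):

    # Rough photon budget, illustrative values only.
    h = 6.626e-34          # Planck constant, J*s
    c = 3.0e8              # speed of light, m/s
    λ = 532e-9             # green laser wavelength, m
    E_photon = h * c / λ   # ≈ 3.7e-19 J per photon

    P_laser = 1.0          # average laser power, W (assumed)
    t_exp   = 1.0          # exposure time, s (assumed)
    N = P_laser * t_exp / E_photon   # photons sent, ≈ 2.7e18

    # After two diffuse bounces and a small detector aperture, only a tiny
    # fraction returns; 1e-12 is an assumed figure for illustration.
    M = N * 1e-12                    # photons detected, ≈ 2.7e6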


> The detector elements of a camera could be made pretty sensitive, but they can't detect fewer than one photon.

Or can they?

If the detector elements were part of a quantum computer (or a quantum computing "chip", whatever that will turn out to be) they would be able to analyze all the photon paths (Feynman paths) bouncing back from the subject, even those that would decohere / collapse away in a traditional detector.

IANAP, but wouldn't a quantum chip be able to perform some amount of NLOS by analyzing the paths of even a single photon?


As soon as your photon interacts with a realistic scene (thermal), I believe it loses coherence with any apparatus you might have prepared in the detector. In short, no, I don't think this is possible, unless you consider idealized environments.


> "In NLOS (and traditional photography) the scene to be imaged is illuminated with N photons but the detector only receives M photons worth of signal back, where M<<N."

How about "averaging" several attempts to compose a single shot? Would it be possible to shine lasers of multiple wavelengths to achieve better results? Maybe varying angles as well?


Averaging N images should (roughly?) make the noise go down by sqrt(N), so long as you don't introduce more sources of noise during collection. So the camera would have to be still, or the images registered to a common perspective.
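
A toy demonstration of the sqrt(N) behaviour (synthetic noise, nothing to do with the actual NLOS pipeline):

    using Statistics

    # Average N noisy frames of the same flat "scene" and watch the
    # residual noise fall roughly as 1/sqrt(N).
    truth = fill(0.5, 64, 64)
    frame() = truth .+ 0.1 .* randn(64, 64)   # per-frame noise σ = 0.1

    for N in (1, 4, 16, 64)
        avg = sum(frame() for _ in 1:N) ./ N
        println("N = ", N, "  residual std ≈ ", std(avg .- truth))
    end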


The camera or subject can move; as long as the movements are known, the reconstruction is still perfect.

High end mobile phone cameras take hundreds of shots over a second or more to reduce noise, and rely on the fact they can estimate movement of things in the frame with optical flow and a gyroscope.


This is what I meant by "registration", for the case when the subject isn't changing.


At the very least, anisotropically scattering surfaces, highly absorbing surfaces, and the temporal resolution of the laser and shutter will all affect the image's spatial resolution.


Direct link to the Julia project repository: https://github.com/analytech-solutions/fkMigration.jl


Wait, why can it see color? If this algorithm is based on light's travel time (similar to ultrasound), I would think it wouldn't show color. The resulting image shows the printed 1, 2, 3 on that paper.

Is that because light reflects differently across the same paper due to the different colors (black vs. white)?


It uses the time of arrival to determine where a particular reflection came from, but it can also observe the intensity of that reflection to determine reflectivity.
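
A minimal sketch of that idea, assuming a confocal setup where the known wall-to-camera travel time has already been subtracted (hypothetical numbers):

    # Round-trip time from the wall to the hidden point and back gives range;
    # the photon count in that time bin acts as a reflectivity estimate.
    c = 3.0e8                       # m/s
    arrival_time = 6.0e-9           # s, a hypothetical time bin
    counts = 1234                   # photons counted in that bin

    range_to_point = c * arrival_time / 2   # ≈ 0.9 m from the wall
    reflectivity   = counts                 # up to scaling and falloff factors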


You could easily generate a colour image by using a red, green and blue laser sequentially. But... they haven't done that. It's all black and white so I don't know why you think it can see colour?


I think the mouseover text on the color picture is a mistake. The article labels the blurry grayscale image as a reconstruction and the color image as a regular image.


The two panels in the README are a single image, which is why they share the mouseover text.


Not color, but reflectivity.


super cool!

Why Julia?


From my experience it is one of the most productive programming languages to develop in, and the code can be iterated on to reach performance that is as good as C or Fortran code.


Note that @krrutkow is the author so this answer is fairly definitive.


I cannot answer for the author, but for projects which build a lot of expensive numerical routines from scratch, Julia is a good fit: The JIT overhead is a small price compared to the high-level feel of the code and the resulting post-compiled speed.


This seems to be one of those “poster child” projects for Julia, in that there are likely lots of custom primitives or numerical routines that would be impossible to make perform well in Python/NumPy, while still saving development effort over coding a custom C++ implementation (and requiring much less C++ expertise to do so).


This reminds me of Feynman’s QED, which is a must read for laymen who want to know more about things like why shadows have fuzzy edges and other ways light doesn’t exactly travel in a straight line.


You don't need QED for that, only Maxwell's equations. QED is necessary if you want to explain how those effects jibe with the fact that you can count individual photons.


The previous poster is referring to this book [0], which is a series of pop science lectures.

[0] https://en.wikipedia.org/wiki/QED%3A_The_Strange_Theory_of_L...


If I am wrong, please correct me, but I don't believe traditional electromagnetism can explain the fuzziness (which I think is better described as noisiness) of shadows.


That's just diffraction. Any kind of wave spreads slightly into the shadow behind an obstacle; the waves described by Maxwell's equations are no exception.


It depends what you mean by "traditional" and "fuzziness".

In ray optics, shadows from point sources are black and white. (If half of the sun is obscured, the surface it illuminates is half as bright, but that's pretty obvious.) That's very old school. Newton could get it right, even though he got particles and waves entirely wrong.

In wave/electromagnetic optics, shadows are shades of grey. Just like water waves can be ripples at one end of a beach, crashing surf at the other, and vary continuously in between.

In quantum optics, shadows are a superposition of different shades of grey, due to shot noise from the photons. That matters if you're doing very precise interference measurements, but not when you're taking photographs, no matter how short the exposure.


I suspect "fuzziness" here is something specific? E.g. fuzzy-edged shadows from the sun seems pretty obvious - it's not a point light source. So this is referring to... nano-scale edge fuzziness or something? Or the shenanigans needed to do sub-wavelength features (like we do for silicon lithography)?


This doesn't actually take advantage of the wave-like nature of light. There's no diffraction or refraction happening.


> why shadows have fuzzy edges

? Shadows have fuzzy edges in geometric optics as well (unless you mean diffraction effects, which are usually micrometer-scale). It's because illuminators are not point-like, so as you move toward the edge of a shadow, the visible portion of the light source shrinks until it disappears (penumbra to umbra).

https://en.wikipedia.org/wiki/Umbra,_penumbra_and_antumbra
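
For scale, a rough penumbra-width estimate from pure geometry (assumed distances, no diffraction involved):

    # The Sun subtends about 0.53 degrees, so a shadow edge cast on a surface
    # 1 m behind the occluder is blurred over roughly 9 mm.
    sun_angular_size = deg2rad(0.53)
    d = 1.0                                      # occluder-to-surface distance, m (assumed)
    penumbra_width = d * tan(sun_angular_size)   # ≈ 0.009 m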


I can't help but wonder to what extent biological systems could theoretically implement an analogous algorithm. Knowing a tiger is behind the tree would sure be advantageous.


First, evolve a high power laser.


There are passive NLOS imaging technologies based on shadows:

https://arxiv.org/pdf/1807.02444.pdf

https://www.quantamagazine.org/the-new-science-of-seeing-aro...

But animals can already "see" round corners using sound and smell. Dogs don't need any sort of line of sight to identify who's nearby and cats have night vision.



This is quite analogous to exploration seismic imaging, including the Stolt migration. Thank you for sharing this.


Yeah, it's strange to me that the people working in this subfield are proposing more exotic alternatives to basic time migration, which is ideal for this problem and has been known for more than 50 years now.


I wonder whether there are already any military applications based on wave-based imaging. I assume one would gain a significant advantage from having such a capability, especially in urban warfare.


Perhaps this could be done with multi-frequency audio pulses. That would be easier to build than an NLOS camera.


It definitely can be done. A process similar to this is used for ultrasonic imaging.


There was a similar technique discussed in a TED Talk on femto-photography, which had an animation showing how it was done, as an ELI5 [0].

The talk references future technology to see around corners. This article may have been inspired by the talk, but it certainly makes the technique much more accessible to the rest of us.

[0]: https://youtu.be/Y_9vd4HWlVA?t=354


It is interesting that the reconstructed image seems rotated relative to the color image. Is that an artifact of the algorithm or of being NLOS, or is it just the angle of the perspective (for lack of a better word, since I'm not really sure how you would define the viewpoint), i.e., a difference between where the color image was taken and where the special sensor was placed (e.g., you could place the normal camera in front of a mirror to get a perspective similar to the special sensor's)?


The captured data is a 3D voxel grid (not a 2.5D image like a Kinect produces). In theory it can view behind the objects themselves.

The reproduced image is an orthographic projection of this 3D grid of voxels. Any projection could have been chosen; for example, the camera's projection could have been chosen to obtain exactly the same view.
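
A minimal sketch of such a projection, assuming a reconstructed intensity volume (this is not the fkMigration.jl API, just an illustration):

    # Hypothetical: collapse a 3D intensity volume to a 2D front view by
    # taking the maximum along the depth axis (an orthographic projection).
    vol = rand(Float32, 128, 128, 128)              # stand-in for the reconstruction
    image = dropdims(maximum(vol; dims=3); dims=3)  # 128×128 image

    # Any other projection, e.g. the color camera's perspective,
    # could be rendered from the same volume instead.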


Do they have a lower effective resolution at different projections because the reflected light is somehow less direct (more reflections)?


Yes. The best views will be from projections whose camera position is at a point scanned by the laser. With the f-k migration algorithm, the output is in fact the aligned sum of the 2.5D images that would have been produced by cameras located at the laser-scanned locations. The algorithm is based on a simple model which doesn't take obstructions or secondary ray reflections into account.

In theory, slower O(N^5) algorithms, like inverting the rendering engine, can take obstructions and ray bounces into account, but the effective resolution will depend on the dynamic range of your measurements, which you can improve a little by increasing exposure time. In practice, secondary reflections don't carry enough information.

You can also incorporate phase information (like with holograms) instead of just intensity, if you are able to measure it, to get a sharper image through coherent imaging. Once you use phase information, you can further improve the spatial resolution by using multiple wavelengths.
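
For intuition, one standard slow alternative in this space is plain backprojection, roughly O(N^5) for an N^3 volume and N^2 scan points; a hedged sketch (hypothetical array shapes and names, not the fkMigration.jl API):

    # Naive confocal backprojection: for every hidden-scene voxel, sum the
    # transient samples whose round-trip time matches that voxel.
    # transient[i, j, k] = photon counts at wall scan point (i, j), time bin k.
    function backproject(transient, wall_pts, voxel_pts, c, Δt)
        nbins = size(transient, 3)
        vol   = zeros(length(voxel_pts))
        for (vi, p) in enumerate(voxel_pts)       # hidden points (x, y, z)
            for w in CartesianIndices(wall_pts)   # scanned wall points (x, y, 0)
                q   = wall_pts[w]
                d   = sqrt((p[1]-q[1])^2 + (p[2]-q[2])^2 + p[3]^2)
                bin = round(Int, 2d / (c * Δt))   # round-trip time -> time bin
                1 <= bin <= nbins && (vol[vi] += transient[w, bin])
            end
        end
        return vol
    end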


They used some sort of wall to reflect off of. I'm sure that when they moved / removed the wall to take the regular picture, it wasn't in the same alignment.


Wait, what is the hardware required for capturing this kind of data? Can I do this with a point-and-shoot camera?


The picosecond shutter timescales and high-speed control of a powerful laser would make that awfully difficult.
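
To put numbers on the timing requirement (assumed values): light covers about 0.3 mm per picosecond, so the timing resolution directly sets the depth resolution.

    # Depth resolution from timing resolution: Δd = c * Δt / 2 (round trip).
    c  = 3.0e8          # m/s
    Δt = 10e-12         # 10 ps timing resolution, assumed for illustration
    Δd = c * Δt / 2     # ≈ 1.5 mm of depth resolution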


I agree, sorry; I went back to a video I saw a few years ago on something similar and realized that the defining feature of their camera was femto-photography.


Can you apply these theories to using a wifi AP to read the 3d layout of the room/space based on RSSI and its various interferences?

Could you make a 3D scanner app in conjunction with UBNT and see what could be done....


Not only wavelength, but also beam width. The laser is spatially very compact. Radio waves need larger antenna apertures to be that focused. It's hard to get tight spatial resolution with a beam that looks like a balloon animal.

Also, multipath and time-of-flight are difficult with radio waves, and the EM reflectance is weirder. Light is just much more convenient for this purpose. But it's not outside the realm of the possible.
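
The divergence scales roughly as wavelength over aperture (θ ≈ λ/D), which is why the laser wins; a quick comparison with assumed apertures:

    # Diffraction-limited spot size: θ ≈ λ / D, spot ≈ θ * distance.
    spot(λ, D, dist) = (λ / D) * dist

    spot(532e-9, 2e-3, 5.0)   # green laser, 2 mm aperture, 5 m away: ≈ 1.3 mm
    spot(0.125,  0.1,  5.0)   # 2.4 GHz WiFi (λ ≈ 12.5 cm), 10 cm antenna: ≈ 6 m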


> Light is just much more convenient for this purpose

With RADAR you get phase information, which helps a lot, and now that 60GHz wifi is a thing the balloon animal has a manageable size :)


>It's hard to get tight spatial resolution with a beam that looks like a balloon animal.

Would compressed sensing help? It doesn't need the focus to be precise to reconstruct the scene from a single pixel detector.


No, compressed sensing only allows you to attempt to reconstruct the original signal from a limited subset of the data, sampled in a specific way, as long as you can assume some kind of sparsity of the data in another domain.

Fundamentally, you cannot get anything out of compressed sensing that you wouldn't otherwise be able to measure.
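
A toy illustration of what compressed sensing does give you, on synthetic data (iterative soft-thresholding; nothing to do with the NLOS setup):

    using LinearAlgebra, Random

    # Recover a sparse signal from fewer measurements than unknowns.
    soft(v, t) = sign.(v) .* max.(abs.(v) .- t, 0)

    function ista(A, y; iters = 2000, λ = 0.01)
        xhat = zeros(size(A, 2))
        step = 1 / opnorm(A)^2              # 1/L for the gradient step
        for _ in 1:iters
            xhat = soft(xhat + step * (A' * (y - A * xhat)), step * λ)
        end
        return xhat
    end

    Random.seed!(0)
    n, m, k = 200, 60, 5                    # unknowns, measurements, nonzeros
    x = zeros(n); x[randperm(n)[1:k]] .= randn(k)
    A = randn(m, n) / sqrt(m)               # random sensing matrix
    y = A * x                               # only 60 measurements of 200 unknowns
    norm(ista(A, y) - x) / norm(x)          # small relative error, thanks to sparsity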


You could build a ghetto imaging RADAR using RSSI and phase inference, sure, but once you started optimizing it past the parlor trick phase (pun intended), you'd quickly find yourself reinventing traditional imaging RADAR, which is very much a thing.


Fairly sure I've seen an MIT paper on passive WiFi imaging, including seeing humans through walls.


Is this[1] the paper you're talking about?

[1]: https://www.mit.edu/~ty20663/SAR_files/Ralston_Charvat_Peabo...



I don't think the wavelengths involved would be the best choice for it. The question is how much reflection is exhibited at WiFi frequencies by the surfaces in a room; probably not enough to be observable.


Fascinating. I have been using the same approach to generate synthetic aperture radar images, but having a non-radar platform to test the processing algorithms with is quite useful.


It appears we are escaping Plato's cave allegory.

What would be really interesting is to reconstruct what is behind walls using run-of-the-mill hardware.


Does anyone have a more ELI5 explanation?


When the light bounces off of the target surface it scatters. The light which bounces back to the sensor array has time offsets that depend on the geometry of the scene; those can be modeled as curves.

It's similar (in that it's the inverse case) to a bucket of water: if you drop something in it, it creates rippling waves. If you only observe the ripples in the bucket, you can reasonably derive where the thing was likely dropped.
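
A toy version of the bucket analogy (made-up numbers): given ripple arrival times at a few sensors, a brute-force search recovers the drop point.

    # Locate where the object was dropped from ripple arrival times.
    v = 0.3                                        # ripple speed, m/s (assumed)
    sensors = [(0.0, 0.0), (0.4, 0.0), (0.0, 0.4), (0.4, 0.4)]
    drop = (0.13, 0.27)                            # unknown in a real experiment
    t(s, p) = hypot(s[1] - p[1], s[2] - p[2]) / v
    times = [t(s, drop) for s in sensors]          # what we actually observe

    grid = [(x, y) for x in 0:0.01:0.4, y in 0:0.01:0.4]
    err(p) = sum((t(s, p) - ti)^2 for (s, ti) in zip(sensors, times))
    argmin(err, grid)                              # ≈ (0.13, 0.27)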


> If you only observe the ripples in the bucket, you can reasonably derive where the thing was likely dropped.

Akin to the Polynesian wave navigation using stick charts: http://thenonist.com/index.php/thenonist/permalink/stick_cha...


This site — which appears to be the source data for OP as well — has some really nice visualizations. It looks like it's more a scan than a flash, combined with extraordinarily fast frame rates to precisely measure the light's travel time and allow for reconstruction.

http://www.computationalimaging.org/publications/nlos-fk/


Imagine you have a cave whose ceiling is covered by a thin film of water. At t=0, there is an explosion which shakes the water and makes it fall vertically to the ground as droplets.

You have a grid of water detectors on the ground, which can tell you where and when a droplet hit the floor.

If at t1 you detect a drop of water at floor coordinates (x,y), you know the height of the ceiling at (x,y) is h1, which you can compute from t1. So you can make an image of the ceiling.

The paper's algorithm is a fancier version of this, where the droplets are not falling straight down to the floor, but spread out like spherical ripples. So you need some FFT math to deconvolve.

The added benefit of using waves is that you can see behind objects, and you get a real 3D image rather than a 2.5D one.
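
In the straight-down droplet version, the arrival time maps directly to height via free fall (a trivial sketch, ignoring air resistance):

    # h = g * t^2 / 2: a droplet that lands after 0.78 s fell from ≈ 3 m up.
    g = 9.81
    height(t) = 0.5 * g * t^2
    height(0.78)    # ≈ 2.98 m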


My guess: echolocation, but with electromagnetic waves (light) rather than sound waves


Makes me wonder if the results of this test could be improved by layering in sound waves for additional resolution


Why not use other frequencies of the EM spectrum, some that say penetrate walls?

I've heard of WiFi being used to 3D map buildings for example.


My guess is that it doesn't create the echo.


Sound-wave SNR deteriorates much faster than light-wave SNR. That said, there are companies using sound for generating maps.


Imagine turning a wall into a mirror. Essentially, it uses a laser beam and fancy math to see around a corner, similar to looking at a mirror.



