I would have guessed there are some resolution limits. For a traditional [line of sight?] camera, the resolution is limited by the imager and https://en.wikipedia.org/wiki/Angular_resolution . What limits are at play for non-line-of-sight imaging?
In NLOS (and traditional photography) the scene to be imaged is illuminated with N photons but the detector only receives M photons worth of signal back, where M<<N. So future people are going to need:
1) helmet-mounted lasers capable of sustained very high power output, to increase the signal enough to get over the quantum detection threshold
2) to slow their roll, so a lot of imaging can occur before they round the corner
or both. There are already practical limitations on laser power output in air because the air will turn into plasma along the beam path. Similarly, for NLOS you need to not burn up and destroy the surface that you're using to bounce the light around the corner.
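To put rough numbers on that photon budget: the shot-noise-limited SNR only grows as sqrt(M), the number of photons you actually detect, which is why you need either more laser power or more integration time. A minimal sketch in Python (pulse energy, losses, and albedo below are made-up illustration values, not from the article):

    import math

    # Hypothetical illustration values -- not from the article.
    pulse_energy_j  = 1e-6        # 1 microjoule laser pulse
    wavelength_m    = 532e-9      # green laser
    photon_energy_j = 6.626e-34 * 3e8 / wavelength_m
    n_emitted       = pulse_energy_j / photon_energy_j  # N photons sent out

    # Each diffuse bounce spreads light over ~2*pi steradians and the detector
    # subtends a tiny solid angle, so very few photons make it back.
    geometric_loss = 1e-10        # combined loss over three bounces (made up)
    albedo         = 0.5          # surface reflectivity per bounce
    m_detected     = n_emitted * geometric_loss * albedo**3  # M photons back

    # Shot-noise-limited SNR grows only as sqrt(M), so you either crank up
    # the laser (bigger N) or average many pulses (slow your roll).
    snr = math.sqrt(m_detected)
    print(f"N emitted  ~ {n_emitted:.2e} photons")
    print(f"M detected ~ {m_detected:.2e} photons")
    print(f"shot-noise SNR ~ {snr:.1f}")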
Or can they?
If the detector elements were part of a quantum computer (or a quantum computing "chip", whatever that will turn out to be) they would be able to analyze all the photon paths (Feynman paths) bouncing back from the subject, even those that would decohere / collapse away in a traditional detector.
IANAP, but wouldn't a quantum chip be able to perform some amount of NLOS by analyzing the paths of even a single photon?
How about "averaging" several attempts to compose a single shot? Would it be possible to shine lasers of multiple wavelengths to achieve better results? Maybe varying angles as well?
High-end mobile phone cameras take hundreds of shots over a second or more to reduce noise, and rely on the fact that they can estimate movement of things in the frame with optical flow and a gyroscope.
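The statistics behind averaging are simple: K independent exposures cut zero-mean noise by sqrt(K). A minimal synthetic sketch (the scene, noise level, and frame count are made up, and a real burst pipeline would also have to align the frames first):

    import numpy as np

    rng = np.random.default_rng(0)

    scene = rng.uniform(0.0, 1.0, size=(64, 64))   # hypothetical ground truth
    noise_sigma = 0.5
    n_frames = 100

    # Each exposure is the same scene plus independent noise; frame alignment
    # (optical flow, gyro) is skipped in this toy version.
    frames = scene + rng.normal(0.0, noise_sigma, size=(n_frames, 64, 64))

    single_frame_error = np.std(frames[0] - scene)
    averaged_error     = np.std(frames.mean(axis=0) - scene)

    print(f"noise in one frame:        {single_frame_error:.3f}")
    print(f"noise after averaging {n_frames}: {averaged_error:.3f}")  # ~sigma/sqrt(100)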
Is that because light reflects differently across the same paper due to different colors (black vs. white)?
In ray optics, shadows from point sources are black and white. (If half of the sun is obscured, the surface it illuminates is half as bright, but that's pretty obvious.) That's very old school. Newton could get it right, even though he got particles and waves entirely wrong.
In wave/electromagnetic optics, shadows are shades of grey. Just like water waves can be ripples at one end of a beach, crashing surf at the other, and vary continuously in between.
In quantum optics, shadows are a superposition of different shades of grey, due to shot noise from the photons. That matters if you're doing very precise interference measurements, but not when you're taking photographs, no matter how short the exposure.
? Shadows have fuzzy edges in geometric optics as well (unless you mean diffraction effects, which are usually micrometer-scale). It's because illuminators are not point-like: viewed from near the shadow's edge, the visible part of the light source shrinks until it disappears entirely (penumbra to umbra).
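The penumbra size follows from similar triangles: a source of diameter D at distance d_source from the occluder, with the screen d_screen behind the occluder, gives a fuzzy band roughly D * d_screen / d_source wide. A quick sketch with the Sun and a hand held 1 m above the ground (illustrative numbers, pure ray optics, no diffraction):

    # Penumbra width from similar triangles (ray optics only, no diffraction).
    source_diameter_m    = 1.39e9   # the Sun
    source_distance_m    = 1.5e11   # Sun to hand
    occluder_to_screen_m = 1.0      # hand to ground

    penumbra_width_m = source_diameter_m * occluder_to_screen_m / source_distance_m
    print(f"penumbra width ~ {penumbra_width_m * 1000:.1f} mm")  # ~9 mm of fuzzy edge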
But animals can already "see" round corners using sound and smell. Dogs don't need any sort of line of sight to identify who's nearby and cats have night vision.
The talk references future technology to see around corners. This article may have been inspired by the talk, but it certainly makes the topic much more accessible for the rest of us.
The reproduced image is an orthographic projection of this 3D grid of voxels. Any projection could have been chosen; for example, the camera projection could have been chosen to obtain exactly the same image.
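As a minimal sketch of what "any projection could have been chosen" means (hypothetical data, not the paper's pipeline): once you have the reconstructed voxel grid, an orthographic view is just a reduction along one axis, and a perspective view would be a resampling of the same grid.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical reconstructed volume: intensity per (x, y, z) voxel.
    volume = rng.random((128, 128, 64))

    # Orthographic projection: collapse the depth axis.
    # Max-intensity and sum projections are both common choices.
    ortho_max = volume.max(axis=2)
    ortho_sum = volume.sum(axis=2)

    print(ortho_max.shape, ortho_sum.shape)  # both (128, 128)

A perspective (camera) projection would instead trace one ray per output pixel through the same grid; the data doesn't change, only the rendering.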
In theory, slower algorithms in O(N^5), like inverting the rendering engine, can take obstructions and ray bounces into account, but the effective resolution will depend on the dynamic range of your measurements, which you can improve a little by increasing exposure time. In practice the reflections don't carry enough information.
You can also incorporate phase information (as with holograms) instead of just intensity, if you are able to measure it, and get a sharper image thanks to coherent imaging. Once you use phase information, you can improve the spatial resolution further by using multiple wavelengths.
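One concrete way to read the multi-wavelength remark, borrowing the standard coherent-ranging relation rather than anything from the article: with phase in hand, the achievable depth resolution improves with the optical bandwidth (spread of wavelengths) you measure over, delta_z = c / (2 * delta_f).

    c = 3e8  # speed of light, m/s

    # Depth resolution of coherent ranging vs. swept optical bandwidth.
    # Bandwidth values are illustrative.
    for delta_f_hz in (1e9, 100e9, 10e12):
        print(f"bandwidth {delta_f_hz:.0e} Hz -> depth resolution {c / (2 * delta_f_hz) * 1e3:.3f} mm")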
Could you make a 3D scanner app in conjunction with UBNT and see what could be done....
Also, multipath and time-of-flight are difficult with radio waves, and the EM reflectance is weirder. Light is just much more convenient for this purpose. But it's not outside the realm of the possible.
With RADAR you get phase information, which helps a lot, and now that 60GHz wifi is a thing the balloon animal has a manageable size :)
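Rough numbers behind "manageable size": the wavelength sets the scale of detail you can resolve, and at 60 GHz it drops to half a centimetre.

    c = 3e8  # speed of light, m/s
    for f_ghz in (2.4, 5.0, 60.0):
        print(f"{f_ghz:5.1f} GHz -> wavelength {c / (f_ghz * 1e9) * 100:.2f} cm")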
Would compressed sensing help? It doesn't need the focus to be precise to reconstruct the scene from a single-pixel detector.
Fundamentally, you cannot get anything from compressed sensing that you wouldn't otherwise be able to get.
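For what it's worth, here's a toy single-pixel-camera sketch of the compressed-sensing idea (the sizes, sparsity, and use of random +/-1 masks are all illustrative assumptions): a sparse scene can be recovered from fewer measurements than pixels, but only because the sparsity prior supplies information the measurements alone don't.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy single-pixel camera: the "scene" is a sparse 1D signal of length n,
    # measured through m random +/-1 masks (m < n). All sizes are made up.
    n, m, k = 256, 100, 8
    scene = np.zeros(n)
    scene[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

    masks = rng.choice([-1.0, 1.0], size=(m, n))  # one mask per exposure
    measurements = masks @ scene                  # one number per exposure

    # Sparse recovery with ISTA (iterative soft thresholding) on the LASSO.
    lam = 0.1
    step = 1.0 / np.linalg.norm(masks, 2) ** 2
    x = np.zeros(n)
    for _ in range(2000):
        grad = masks.T @ (masks @ x - measurements)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

    print("relative reconstruction error:",
          np.linalg.norm(x - scene) / np.linalg.norm(scene))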
What would be really interesting is to reconstruct what is behind walls using run-of-the-mill hardware.
It's similar (in that it's the inverse case) to a bucket of water: if you drop something into it, it creates ripples. If you only observe the ripples in the bucket, you can reasonably derive where the thing was likely dropped.
Akin to the Polynesian wave navigation using stick charts: http://thenonist.com/index.php/thenonist/permalink/stick_cha...
You have a grid of water detectors on the ground, which can tell you where and when a droplet hit the floor.
If at t1 you detect a drop of water at floor coordinates (x,y), you know the height of the ceiling at (x,y) is h1, which you can compute from t1. So you can make an image of the ceiling.
The paper's algorithm is a fancier version of this, where the droplets don't fall straight down to the floor but spread out like spherical ripples, so you need some FFT math to deconvolve.
The added benefit of using waves is that you can see behind objects, and you get a real 3D image rather than a 2.5D one.
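Here's a minimal sketch of the back-projection idea behind the droplet analogy, reduced to 2D with a single hidden point (this is generic time-of-flight back-projection, not the paper's exact FFT-based algorithm):

    import numpy as np

    # Toy 2D version of "ripples on the floor": one hidden point scatterer,
    # a line of detectors, and circular back-projection. All geometry and the
    # wave speed are made-up illustration values.
    c = 1.0                               # wave speed
    hidden_point = np.array([0.3, 0.7])   # the thing we want to locate
    detectors = np.stack([np.linspace(0, 1, 32), np.zeros(32)], axis=1)  # along y=0

    # Forward model: each detector records the arrival time of the ripple.
    arrival_times = np.linalg.norm(detectors - hidden_point, axis=1) / c

    # Back-projection: score every candidate location by how well its distance
    # to each detector matches that detector's measured arrival time.
    xs = np.linspace(0, 1, 200)
    ys = np.linspace(0, 1, 200)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    image = np.zeros_like(gx)
    for det, t in zip(detectors, arrival_times):
        dist = np.hypot(gx - det[0], gy - det[1])
        image += np.exp(-((dist - c * t) ** 2) / (2 * 0.01 ** 2))  # thin circular shell

    best = np.unravel_index(np.argmax(image), image.shape)
    print("recovered:", xs[best[0]], ys[best[1]], " true:", hidden_point)

Each measurement only constrains the hidden point to a circle (a sphere in 3D) around its detector, and the circles agree only at the true location; the FFT deconvolution mentioned above is roughly a faster way of doing this kind of smearing and summing.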
I've heard of WiFi being used to 3D map buildings for example.