The techniques presented here only work if you set up the environment, or at least know a lot about it. The only case where they actually reconstructed an object was an extremely elaborate setup: a very simple scene scanned with a controlled laser.
Just letting your algorithm run over random photos won't reveal a thing.
Sorry, which techniques are you referring to? It's probably this one, but this is just one small part of the article:
While Freeman, Torralba and their protégés uncover images that have been there all along, elsewhere on the MIT campus, Ramesh Raskar, a TED-talking computer vision scientist who explicitly aims to “change the world,” takes an approach called “active imaging”: He uses expensive, specialized camera-laser systems to create high-resolution images of what’s around corners.
The article starts by describing an observation of very poor quality camera obscuras (the faint image of the outside world you sometimes get through a window) and goes on to talk about many other ways to get information out of images that isn't easily seen. Most of those techniques seem to use two images of the same scene, with some change, to actually get access to that information. That means most old photos cannot be analysed, but old videos can be.
Imagine you’re filming the interior wall of a room through a crack in the window shade. You can’t see much. Suddenly, a person’s arm pops into your field of view. Comparing the intensity of light on the wall when the arm is and isn’t present reveals information about the scene. A set of light rays that strikes the wall in the first video frame is briefly blocked by the arm in the next. By subtracting the data in the second image from that of the first, Freeman said, “you can pull out what was blocked by the arm” — a set of light rays that represents an image of part of the room. “If you allow yourself to look at things that block light, as well as things that let in light,” he said, “then you can expand the repertoire of places where you can find these pinhole-like images.”
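The subtraction Freeman describes can be sketched in a few lines of NumPy. The arrays and values here are purely illustrative (a flat wall and a made-up occluder), not data from the actual experiment:

```python
import numpy as np

def occluder_difference(frame_before, frame_after):
    """Subtract the occluded frame from the unoccluded one.

    Pixels where the occluder (the arm) blocked light show up as
    a positive residual: the set of rays the occluder intercepted,
    which together form an image of part of the hidden scene.
    """
    return frame_before.astype(np.float64) - frame_after.astype(np.float64)

# Toy example: a wall uniformly lit by the whole scene...
wall_unblocked = np.full((4, 4), 10.0)
# ...then the arm appears and blocks some of the rays reaching
# a patch of the wall, dimming it slightly.
wall_blocked = wall_unblocked.copy()
wall_blocked[1:3, 1:3] -= 3.0

residual = occluder_difference(wall_unblocked, wall_blocked)
print(residual)  # nonzero exactly where the blocked rays landed
```

In practice the residual is tiny compared to the ambient light, which is why the technique needs two frames of the same scene (video, not a single photo) and careful noise handling.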
EDIT: Rereading the quotes from the script, it seems like Deckard was using the reflection from a mirror to look into another room. However, in the movie it felt like he was going around a corner to get to the mirror.
It was definitely sci-fi photographic tech.
1) There were huge amounts of recordings
2) It generated plenty of clear shadows (and presumably sun-shadows can be recovered as well)
3) For high-intensity scenes, many cameras automatically shorten their exposure frame by frame, so each scan line samples a shorter period; a collection of cameras can thus be treated as performing random oversampling at a higher effective frame rate (as some oscilloscopes do when not in single-shot mode)
4) The recordings are taken from many vantage points in a wide region, potentially allowing a 3d reconstruction of the meteor as it breaks up
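The oversampling idea in point 3 can be sketched numerically: several cameras record at the same nominal frame rate but with random phase offsets, so merging their timestamps gives a denser effective sampling of the event, much like equivalent-time sampling on an oscilloscope. All numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

frame_rate = 30.0   # each camera's nominal frame rate (Hz), assumed
duration = 1.0      # seconds of overlapping recording, assumed
n_cameras = 8

# Each camera starts at a random phase within one frame period,
# so its sample times interleave with the other cameras'.
period = 1.0 / frame_rate
all_times = []
for _ in range(n_cameras):
    offset = rng.uniform(0, period)
    all_times.append(np.arange(offset, duration, period))

merged = np.sort(np.concatenate(all_times))

# Effective sampling rate of the merged record:
effective_rate = len(merged) / duration
print(effective_rate)  # -> 240.0, i.e. n_cameras * frame_rate
```

The catch, as with equivalent-time sampling, is that the offsets must be recovered after the fact (e.g. from a shared event visible in all recordings) and the exposures overlap rather than tile neatly, so the gain in temporal resolution is statistical, not exact.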