I wonder how this line of research will be commercially exploited.
Although the depth of field achieved was much shorter, this work reminds me of the Lytro [0] Illum, a camera that could refocus an image after it was taken. Announced in 2014, it received significant hype but never reached commercial success. One factor that hindered adoption was the absence of a standard file format and corresponding viewers for its "living pictures": users were forced to publish their images on Lytro's website just to let others refocus them.

[0] https://en.wikipedia.org/wiki/Lytro
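The refocusing trick itself boils down to shift-and-add over the sub-aperture views. A minimal numpy sketch of that textbook idea (not Lytro's actual pipeline; it assumes you've already extracted the views as a (U, V, H, W) stack of grayscale images):

    import numpy as np

    def refocus(views, alpha):
        """Shift-and-add refocus over a (U, V, H, W) stack of grayscale
        sub-aperture views; alpha selects the virtual focal plane."""
        U, V, H, W = views.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view proportionally to its offset from the
                # aperture center, then accumulate.
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

Sweeping alpha moves the focal plane; alpha = 0 reproduces the nominal focus.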
I recall reading somewhere that this same type of optics can be used to make heads-up displays with depth of field that's perceptible to the eye, but I can't find the reference. It does sort of make sense, though.
> I recall reading somewhere that this same type of optics can be used to make heads-up displays with depth of field that's perceptible to the eye, but I can't find the reference. It does sort of make sense, though.
Yeah, the lack of this effect is one reason VR games cause nausea. Unfortunately, even if light field displays were cheap (they're not), rendering to them was extremely expensive the last time I checked (around 50x more expensive than rendering in stereo).
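Back-of-the-envelope on where that gap comes from, assuming rendering cost scales linearly with the number of distinct views (the 10x10 viewpoint grid is my assumption, not the spec of any shipping panel):

    # Stereo renders 2 views per frame; a light field display renders one
    # view per discrete viewpoint it supports.
    views_lightfield = 10 * 10  # assumed 10x10 viewpoint grid
    views_stereo = 2
    print(views_lightfield / views_stereo)  # -> 50.0, matching the ~50x above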
VR video (not only 360°, but also with a bit of wiggle room for your head to move) comes to mind. See this impressive demo presented at SIGGRAPH two years ago (with a VR headset, if possible):
https://augmentedperception.github.io/deepviewvideo/
Not the same thing, but a friend of mine invented an infrared lens based on lobster eyes; its application is focusing IR energy from heaters down to where it's needed. Ordinary glass doesn't pass much IR, so you can't make an IR lens for this purpose out of glass, and quartz is impractical for other reasons. His lens uses metal reflectors arranged in what amount to tubes, on the model of lobster eyes, to focus the IR. It acts like a lens rather than a mirror, in the sense that it sits in front of the IR source, not behind it. The company is Radiant Optics.
> The camera utilizes a multiscale convolutional neural network-based reconstruction algorithm to eliminate optical aberrations that result from the camera's 'metalens' array
This is definitely cool, but I wonder what effect it has on image quality under unusual lighting conditions. Now you have to contend not just with optical aberrations, but also with the correction algorithm.
In general, you're already contending with correction algorithms any time you use a digital camera, so that's not as big a problem as you might think.
Even if you shoot in "raw", there's probably a proprietary or pseudo-proprietary bit of software in your image editor that runs demosaicing, lens correction, etc. on the data from the camera sensor.
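For the curious, here's roughly what the demosaicing step does: each sensor pixel records only one color, and the two missing channels are interpolated from neighbors. A toy bilinear version for an RGGB Bayer mosaic (real raw converters use far fancier, usually proprietary, algorithms):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
        H, W = mosaic.shape
        # Sampling masks for each plane of the RGGB pattern.
        r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        # Green lies on a quincunx grid, red/blue on rectangular grids,
        # hence the two different interpolation kernels.
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        r = convolve(mosaic * r_mask, k_rb)
        g = convolve(mosaic * g_mask, k_g)
        b = convolve(mosaic * b_mask, k_rb)
        return np.dstack([r, g, b])

Lens correction is the same in spirit: a per-lens model of distortion and vignetting gets inverted and applied on top of this before you ever see the "raw" image.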