Hacker News
What Is Computational Photography? (dpreview.com)
7 points by jenthoven on April 6, 2022 | 4 comments



The other aspect of this subject that interests me is the idea of a virtual focal plane. With an array of cameras (or, if you have time instead of budget, one camera moved around), you can align the images and select for a specified focal plane, which can sit at almost any angle relative to the cameras.
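The align-and-select step is basically shift-and-add: shift each image in proportion to its camera's offset, then average. A minimal sketch in NumPy (the fronto-parallel plane model, integer-pixel shifts, and all names here are my own simplifications, not any particular tool's API):

```python
import numpy as np

def refocus(images, offsets, disparity):
    """Shift-and-add synthetic-aperture refocus.

    images    -- list of HxW arrays from an aligned camera array
    offsets   -- (dx, dy) position of each camera, same order as images
    disparity -- pixels of parallax per unit of camera offset; choosing
                 a disparity chooses the focal plane's depth
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        # Undo each camera's parallax for the chosen plane.  np.roll is
        # a stand-in for proper sub-pixel interpolation.
        shift = (-round(dy * disparity), -round(dx * disparity))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(images)
```

Objects at the chosen disparity line up and come out sharp; everything else gets averaged into blur. A tilted focal plane is the same idea with a disparity that varies across the image instead of a single constant.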

Lytro made a camera with an array of microlenses inside that captured a lightfield. As above, this allows focus shifting AFTER the photo was taken. The camera itself had no focusing mechanism; it was fixed, and all the focusing happened in postprocessing. You never had to worry about a blurry shot.
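For an idealised lenslet sensor you can see why no focus mechanism is needed: each microlens tile records the same scene point from several aperture positions, so the raw mosaic can be sliced into a grid of sub-aperture views and refocused afterwards. A toy sketch (the perfectly regular n x n tile layout is an assumption on my part; a real Lytro raw needs calibration and demosaicing first):

```python
import numpy as np

def subaperture_views(lenslet_img, n):
    """Slice an idealised lenslet mosaic into an n x n grid of views.

    lenslet_img -- 2-D array in which every n x n tile holds the
                   samples recorded under one microlens
    Returns views[u][v], each a (H/n) x (W/n) image as seen from
    aperture position (u, v).
    """
    # Taking every n-th sample starting at (u, v) collects the (u, v)
    # sample of every microlens into one pinhole-like image.
    return [[lenslet_img[u::n, v::n] for v in range(n)]
            for u in range(n)]
```

Refocusing is then the same shift-and-add over these views as over a physical camera array, which is why the focus can be chosen entirely in post.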

Once you get into experimenting with cameras in different locations, you start to consider photogrammetry, the measurement of locations from photographs. With sufficient knowledge of the lens used and a rough idea of the location of each picture, you could assemble an array of images of an object over time and measure even small shifts in position: for example, brick walls shifting position as buildings settle, etc.
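Measuring those small shifts between photos taken over time can be done with phase correlation, which recovers the translation between two overlapping images from the phase of their Fourier spectra. A minimal integer-pixel version (function name is mine; real photogrammetry would add sub-pixel refinement and lens-distortion correction on top):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation taking image `a` to image `b`.

    Works for pure cyclic/integer shifts; real images need windowing
    and sub-pixel peak interpolation as well.
    """
    # Normalised cross-power spectrum: its inverse FFT is (ideally) a
    # delta function located at the translation.
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap: peaks past the halfway point are negative displacements.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```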


Here is an example set of photos I put up on Flickr showing the results of combining the same set of images to generate different focal planes.

https://www.flickr.com/photos/---mike---/albums/721777202979...


> Everywhere, including Wikipedia, you get a definition like this: computational photography is a digital image capture and processing techniques that use digital computation instead of optical processes. Everything is fine with it except that it's bullshit. The fuzziness of the official definitions kinda indicates that we still have no idea what are we doing.

Seems rather harsh. "digital computation instead of optical processes" makes complete sense to me: capture information about the light at a scene, not in a simple plane but akin to a hologram, then later, 'in post', compose the image from data that can reconstruct the scene from many perspectives or under other conditions determined after capture.


Seems like you could have Lightroom-type software to do this in post if you had the data. Some modern mirrorless cameras can capture quite a lot of frames with an electronic shutter. You'd just need a new raw format to simplify the process and to save the buffer for post later. Maybe it wouldn't work, though, since it isn't really clear what magic happens in a traditional digital camera's ASIC. Also, a vendor like Sony knows both sides of this coin (it makes phones, sensors, and cameras) and is likely already doing some magic with that knowledge.



