
The future of photography is code - hug
https://techcrunch.com/2018/10/22/the-future-of-photography-is-code/
======
slr555
The implicit subtitle of this article is: In Smartphone Cameras.

The modern smartphone certainly adds computational strength that likely
exceeds the image-processing sophistication of even pro-level DSLRs. After
all, the performance gap between desktop and mobile CPUs is quite narrow at
this point. The author rightly implies that the form factor of the phone
creates an inherent set of limitations.

Outside the phone realm there are fewer and somewhat different limitations to
deal with and that is where really interesting things are happening in
photography today. Modern sensors have made great strides towards closing the
gap between film and digital in terms of dynamic range. Full frame sensors
with a large number of pixels allow for far greater resolution in images.
Looking at DxOMark to see how far sensors have come in the last decade is
amazing. The images created on my Nikon D200s were very good
and acceptable for a broad range of applications. Compared to the images from
my D850, however, there is a quantum difference. Shooting RAW files gives me
unprecedented creative control over the final image using a laptop instead of
the expansive requirements of a full darkroom. While I shoot Nikon, other
companies like Sony and Canon are more or less in the same place. We have
reached the point where output from a DSLR sized body compares very favorably
to a medium format sensor.

While computational adjuncts to image acquisition, whether in the form of
phone software or Adobe-like products, will play an increasingly important
role in photography, there are still areas where hardware such as sensors,
lenses, and physical stabilizers will improve.

~~~
devindotcom
Certainly true (I'm the author). As a photographer I look forward to more
interesting techniques in the non-smartphone world too but ultimately I think
what will advance them is also code, not a major advance in optics or sensor
tech.

It's kind of a lame argument in a way (mine, that is) because code underlies
just about everything these days. But I do think we've mostly tapped out the
physical side of things, barring clever new constructions like the L16 and
successfully wrangling hyper-sensitive, hyper-noisy sensors.

Anyway it's exciting no matter which one is advancing the art. Consumers are
winning (as with the consolidation around the mirrorless form factor, which is
another piece I'm working on). Thanks for reading!

~~~
slr555
I agree that at this point everything is computational. And you are right that
consumers are winning big time. I'm pretty serious about photography and there
are far fewer situations that arise where I think, "crud, I've only got my
phone". My hope is that there is still room to squeeze low light, low noise
performance out of future sensors. Again, great article, a good read!!!

------
teekert
I like this piece, and I agree that a large part of the future is in code,
but there are still fancy optical techniques not tapped yet. Like wavefront
shaping (for the flash), or using other parts of the spectrum, or deformable
lenses, or ultra-thin lenses made from metamaterials (with a negative
refractive index?). And then there is the Lytro camera (which can be combined
with the techniques mentioned before). I've never heard of smartphone cameras
with a Fresnel lens; is this because we can't manufacture them with molecular
precision yet? Perhaps there will be cameras with ultrasound sensors, or non-
optical (active?) sensors that still yield optical information. Perhaps we'll
find smarter ways to use polarization information... etc. I'm not an expert,
but I have spent some time in the optics field and saw some quantum-optics
talks that may at some point lead to significant improvements of the physical
aspects of cameras again, imho. Of course, combining all the input together
into something humans would like will require... computation :)

------
mips_avatar
I wish there were better software support for robotics cameras. You can buy
small CSI/MIPI modules to capture frames from a ton of imaging sensors. But
all the nice tricks Apple/Google use to improve image quality, like capturing
multiple underexposed images to create a clearer composite, are not
implemented in OpenCV. All these machine vision SDKs are stuck on the idea
that you will only get a single matrix of pixels to glean info from,
forgetting that you need to integrate pixel accuracy into your machine vision
code!
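
A minimal numpy sketch of that burst-merge idea (`merge_burst` is a made-up
helper, not an OpenCV API; real pipelines like HDR+ add per-tile alignment and
motion-robust merging, which is exactly the part that's missing off the shelf):

```python
import numpy as np

def merge_burst(frames, gain=4.0):
    """Merge a burst of deliberately underexposed frames (toy version).

    Averaging N aligned frames reduces zero-mean noise by roughly sqrt(N);
    the gain then brings the short exposures back up to normal brightness.
    Alignment is assumed to have happened already.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0) * gain  # average noise down, restore exposure
    return np.clip(merged, 0, 255).astype(np.uint8)
```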

~~~
rtkwe
I think it's a question of division of responsibilities: OpenCV focuses on the
CV part of the equation. You don't have to feed OpenCV the raw camera feed;
there can be a separate step that does the kind of image processing you're
talking about.

~~~
mips_avatar
I usually use OpenCV capture, which is nice because it works pretty well.
Adding some capture modes like the one I listed would be nice, but I see how a
separate capture software module might be better.

edit: The CV part isn't really separate: how you capture affects the chance
that a given pixel is incorrect. That information is useful; maybe you'd want
to integrate it into your CV.
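
One hedged sketch of what that could look like: derive a per-pixel reliability
map from the temporal variance across a burst, and let downstream CV weight
pixels by it (`pixel_confidence` is a hypothetical helper, not an existing SDK
call):

```python
import numpy as np

def pixel_confidence(frames):
    """Per-pixel reliability from a burst: low temporal variance = trustworthy.

    Returns values in (0, 1]. Downstream vision code could use this map to
    down-weight flickering or noisy pixels, instead of trusting every entry
    of a single matrix of pixels equally.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    var = stack.var(axis=0)  # variance over time, per pixel
    return 1.0 / (1.0 + var)
```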

------
KineticLensman
Just a few thoughts.

For the vast majority of people, the future of photography will depend on how
many of the techniques in the article are implemented in smart phones (as they
don't have a dedicated camera).

For high-end camera users, the main providers are still in the early phases of
introducing mirrorless devices as an alternative to DSLRs. Early mirrorless
bodies offer advantages (e.g. frame rates, weight, silent operation, previews)
but are still largely 'conventional' with respect to the computational
techniques in the article. Many of these users are motivated to keep their
existing lenses, which may impact the types of computational technique that
are relevant (why compute bokeh when you can apply it in glass?)

I wonder what the sweet spot is for devices that really push the computational
side.

------
Cynddl
> What those devices do with that light, however, is changing at an incredible
> rate. This will produce features that sound ridiculous, or pseudoscience
> babble on stage, or drained batteries. That’s okay, too. Just as we have
> experimented with other parts of the camera for the last century and brought
> them to varying levels of perfection, we have moved onto a new, non-physical
> “part” which nonetheless has a very important effect on the quality and even
> possibility of the images we take.

I'm confused. What do “those” devices do, or could do, except capture an
ersatz, a frozen view of the world? What is this new non-physical “part”?
Cameras capture photons (from one [1] to many) and display a composite image,
post-processing or no.

[1] Photon-efficient imaging with a single-photon camera:
[https://www.nature.com/articles/ncomms12046](https://www.nature.com/articles/ncomms12046)

~~~
russdill
Point and shoot photography will move more and more to gathering information,
and then painting a scene. Is a detail blurred? Try to guess what it is based
on training data and fill it in. Is someone blinking in a photo? Look back
through previous photos of that person and figure out what they'd look like
with their eyes open. etc.

[https://www.youtube.com/watch?v=HvH0b9K_Iro](https://www.youtube.com/watch?v=HvH0b9K_Iro)
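
The "guess and fill it in" step can be caricatured without any training data
at all. This toy diffusion fill just propagates surrounding pixels into a
masked region; learned inpainting replaces the diffusion with a prior learned
from data (`diffuse_fill` is a made-up name, not a real library call):

```python
import numpy as np

def diffuse_fill(img, mask, iters=200):
    """Fill masked (unknown) pixels by diffusing in neighbouring values.

    A classical stand-in for learned inpainting: instead of hallucinating
    detail from training data, repeatedly replace each unknown pixel with
    the average of its 4-neighbourhood until values flow in from the edges.
    """
    out = img.astype(np.float32).copy()
    out[mask] = 0.0  # unknown pixels start at an arbitrary value
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neigh[mask]  # only unknown pixels get updated
    return out
```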

------
gmiller123456
Photography has never really been about the limits of the camera. Your average
consumer will look at megapixels and just assume more is better. A more astute
consumer might look at things like aperture size, dynamic range, noise levels,
etc. And those things do matter in extreme cases, but a good photographer will
be able to produce great photos even with a well below average camera.

People like Ansel Adams aren't known because they had the best camera of the
day. Adams is mostly known for what he did in the darkroom; that's where he
used his ability to take the piece of crap the film originally recorded and
turn it into a work of art.

Any serious photographer will tell you it's a lot more about what you do
before and after you press the button than the equipment you have. And some
really good photographers don't do any post-processing, just like some
photographers won't move a leaf. But post-processing, both digital and in the
darkroom, has been at the heart of most photographers' toolboxes since the
beginning.

------
tempodox
This should be pretty obvious for almost everyone outside professional
photography. Jane & John Doe clearly don't have the time or inclination to
learn a few things before they start taking photos (or even afterwards). If
instead you can whip up some software that makes every last shitty shot look
passable, you've found yourself a money-printing machine.

------
MIKarlsen
The future of everything seems to be code. And yet, surprisingly few are aware
of it, it seems...

~~~
criddell
That doesn't mean there isn't money to be made with old tech though. Jack
White (yeah, that Jack White) recently opened a film processing lab.

[https://thirdmanrecords.com/photo-studio](https://thirdmanrecords.com/photo-studio)

~~~
RmDen
This was mentioned in the book The Revenge of Analog: Real Things and Why They
Matter [https://t.co/5HJdzJPGC5](https://t.co/5HJdzJPGC5)

------
VikingCoder
I'm very much reminded of this image of a camera with a slew of lenses:

[https://www.notebookcheck.net/fileadmin/Notebooks/News/_nc3/...](https://www.notebookcheck.net/fileadmin/Notebooks/News/_nc3/MANY_CAMERAS.jpg)

I keep thinking about house fly and spider eyes. I suspect their 3D view of
the world is much better than we realize. They're basically Light Field
Cameras.

------
ratsimihah
Can it really outperform my analog Leica M6 + Summicron 35mm bokeh?

~~~
adrr
Eventually. Smart cameras will soon remove unwanted objects from pictures, so
you no longer have to wait for the person(s) behind the subject to get out of
the way.

~~~
lifeformed
One day they'll just generate the exact image you wanted to see and you won't
even have to have the actual experience of the real scene.

Then they'll just generate the corresponding chemical releases in your brain
to make you feel the same pleasure of seeing the image, so you won't even have
to open your eyes.

Finally, they'll invent a perfect, euphoric euthanization process that'll be
objectively more pleasurable than being alive.

