
Computational photography from selfies to black holes - dsego
https://vas3k.com/blog/computational_photography/
======
sansnomme
On the extreme end, if you live in the countryside you can replace GPS
entirely using celestial navigation/star tracking. This is commonly used for
rockets and satellites, but right now, if you own a truck or pickup, you can
easily go completely off-grid by mounting a lens on the rooftop. E.g.
[http://nova.astrometry.net/](http://nova.astrometry.net/)

For implementing the system on an embedded device (e.g. a toy drone, Raspberry
Pi, etc.), the main data structure you want is a k-d tree together with some
sort of evergreen star chart (it doesn't have to be extremely evergreen; current
astronomy libraries can easily predict orbits for a couple of decades without
significant skew/deviation unless you are aiming for centimeter-level
geolocation accuracy).
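
To make the lookup concrete, here's a minimal sketch of the idea (in Python with SciPy rather than the Rust kdtree crate linked below; the catalog entries are made up for illustration): index the catalog stars as 3D unit vectors in a k-d tree, then query it with a star direction detected in an image.

    import numpy as np
    from scipy.spatial import cKDTree

    def radec_to_unit(ra_deg, dec_deg):
        """Convert right ascension/declination (degrees) to a 3D unit vector."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    # Hypothetical catalog: (ra, dec) pairs, e.g. loaded from the HYG database.
    catalog_radec = [(101.3, -16.7), (279.2, 38.8), (213.9, 19.2)]
    catalog_vecs = np.array([radec_to_unit(ra, dec) for ra, dec in catalog_radec])
    tree = cKDTree(catalog_vecs)

    # A detected star direction (unit vector in the same frame); find its match.
    observed = radec_to_unit(101.4, -16.6)
    dist, idx = tree.query(observed)
    print(f"nearest catalog star: index {idx}, chord distance {dist:.4f}")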

For the hardware, you can either use existing consumer-grade stuff followed by
a ton of image processing with ML as suggested above, or you can use an
industrial-grade tracker, which easily exceeds four figures.

[https://blog.satsearch.co/2019-11-26-star-trackers-the-cutting-edge-celestial-navigation-products-available-on-the-global-space-marketplace](https://blog.satsearch.co/2019-11-26-star-trackers-the-cutting-edge-celestial-navigation-products-available-on-the-global-space-marketplace)

It's a pretty fun weekend project. Here are some links to get started:

[https://github.com/mrhooray/kdtree-rs](https://github.com/mrhooray/kdtree-rs)
[https://github.com/astronexus/HYG-Database/blob/master/README.md](https://github.com/astronexus/HYG-Database/blob/master/README.md)

Instead of jacking up your truck, add celestial nav to it. Nothing screams
freedom and independence more than cutting dependency on state-funded
satellite systems. Caveats: it needs more signal processing during the daytime,
and you'll want a fallback to inertial navigation when it is cloudy.

~~~
greglindahl
Rockets and LEO satellites often use GPS these days because it's easier, but
I am impressed by this DIY star tracker.

------
ipsum2
This is a great article!

> In fact, that's how Live Photo implemented in iPhones, and HTC had it back
> in 2013 under a strange name Zoe.

A reference to zoetropes, which were arguably one of the first "movies".
[https://en.wikipedia.org/wiki/Zoetrope](https://en.wikipedia.org/wiki/Zoetrope)

> To solve the problem, Google announced a different approach to HDR in a
> Nexus smartphone back to 2013. It was using time stacking.

I don't think time stacking is the appropriate term to use here, as standard
HDR is also doing "time stacking", in that it takes multiple photos with
different exposures across a small interval of time. Maybe "Fixed exposure
fusion"?

I think there's a lot more to be done in computational photography, in
research and engineering. In fusion of images from multiple cameras, we're
barely scratching the surface. Exciting times ahead!

------
gdubs
I dropped my previous iPhone while cycling home from work and lived for a
while with a busted lens. It made for some interesting photos, but was mostly
annoying.

As a result I started using my DSLR again, and I rediscovered how beautiful
the photos were. They also print nicer.

This fall I finally replaced the busted phone with an iPhone Pro. The camera
on this thing is great, but the computational photography enhancements are
particularly nice.

But my DSLR still smokes it.

There’s the old maxim of “the best camera is the one you have on you.” I’m
happy I went with the pro, and in isolation, it’s an amazing all around
snapshot camera. Occasionally I get lucky with some stunning shot.

But the DSLR just wins hands down when it comes to shooting something like the
foliage of a Japanese maple. The bokeh is beautiful straight out of the
camera. The iPhone still struggles, and there’s a ton of matte noise around
edges that needs to be cleaned up. It’s much better at things it was designed
for, like portraits.

So, anecdotally, for me at least, it’s a mixed bag. I love both cameras for
different reasons but ultimately the DSLR still has the edge on quality. But
the iPhone is always there, and has some tricks (especially low light) that
the DSLR can’t compete with.

For the foreseeable future, I don’t see my phone fully replacing my DSLR.

------
numbol
Sorry for my bad English, and maybe I am deeply wrong, but:

In extreme cases it is not even a "photo", in the sense of information about
photons received by some optical system with noise reduction applied afterwards.
No, it is just a picture, based on recognised faces, objects and stars. And I
don't know why, but I feel painfully bad about it. It is not an approximation of
the world-how-it-is, but some expectation of the world-how-people-want-it. It
can recognise a constellation from a few stars and will draw a nice picture of a
great starry sky, but it will delete a Starlink satellite, a meteor or a
supernova as unexpected noise.

~~~
GuB-42
You can think of these enhanced pictures as an artistic interpretation, like a
robot painter drawing your portrait. That's not a bad thing: good artists, and
most likely good robots, know how to make something look good while preserving
the essentials. Unless I am doing astronomy, I'd rather have a beautiful night
sky than a speck of dust that might be a Starlink satellite. And if I am doing
astronomy, having all the super-resolution features is really nice.

Unprocessed camera modes will continue to exist for people who want that.
Maybe with some built-in digital signature in case it is used as proof.

Photography, and earlier than that, drawing, has always been world-how-people-
want-it. Accuracy is just one of the things you may want.

------
roywiggins
> This approach pioneered by the guys, who liked to take pictures of star
> trails in the night sky. Even with a tripod, it was impossible to shot such
> pictures by opening the shutter once for two hours

Yes, but also you can't leave a digital sensor collecting for two hours; the
pixels start saturating and the noise builds up, unlike film, which can do
truly long exposures.

~~~
ryandamm
For what it's worth, film also suffers from reciprocity failure (nonlinear
response at extended exposure times with low flux).

Some things were never easy.

~~~
foldr
Reciprocity failure doesn't stop you from doing long exposures. It actually
helps, if your goal is to make the exposure as long as possible. See e.g.
[http://itchyi.squarespace.com/thelatest/2010/7/20/the-longest-photographic-exposures-in-history.html](http://itchyi.squarespace.com/thelatest/2010/7/20/the-longest-photographic-exposures-in-history.html)

Digital sensors obey the reciprocity law to a much greater extent than film:
[https://photo.stackexchange.com/questions/37241/does-reciprocity-failure-schwarzschild-effect-exist-in-digital-photography](https://photo.stackexchange.com/questions/37241/does-reciprocity-failure-schwarzschild-effect-exist-in-digital-photography). The problems the
OP mentions are not related to reciprocity failure.

------
tomxor
This is all really cool, but there's one thing you can't make up for enough
with processing: optical zoom (digital zoom, however much temporal super-resolution
trickery it uses, is a different thing).

~~~
m463
DSLRs have exceptional lenses paired with exceptional but large sensors, which
means the zoom reach is modest (up to around ~600mm).

So the really interesting long-focal-length cameras are the all-in-one
superzoom cameras like the Canon SX70 HS and Nikon P1000.

They accomplish the high magnification by using a sort-of-good lens with a
sort-of-good small sensor, achieving up to 3000mm "equivalent" zoom.

Unfortunately, the "prosumer" design gives you an electronic viewfinder,
slower and less accurate focus, and all kinds of other non-DSLR mediocrity.

sigh.

~~~
LegitShady
The smaller cameras never really convinced me as real DSLR replacements - they
lacked the versatility and ergonomics of the DSLR, and the smaller sensors
meant more noise, especially in dark scenes.

The first camera I've used that has started being OK is the Pixel phone camera,
using Night Sight to improve overall resolution and detail.

~~~
rimliu
Define "smaller cameras". I (not so) recently replaced my Nikon D7100 with a
collection of pro lenses with Olympus cameras (OM-D E-M1markII and OM-D
E10markII to be specific). I could not be happier. The first one is small, has
one of the best water/dust/freeze proofing available, extremely capable and
customizable. Yes it has even smaller sensor than my APS-C Nikon had, but I
can get better images with it.

One interesting thing about Olympus cameras, that they do have a lot of
computational photography things described in the article:

\- there is a "Live Composite" mode, which allows you to get those star
trails, to do light painting, etc.

\- there is build-in focus stacking, very useful for macro photography

\- there is hi-res mode implemented with pixel (or rather sensor) shifting

\- there is IBIS and digital image stabilization.
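
Roughly speaking, Live Composite behaves like a running "lighten" blend; a minimal sketch of that idea (my approximation, not Olympus's actual firmware):

    import numpy as np

    def live_composite(frames):
        """Keep, per pixel, the brightest value seen so far across the frames."""
        composite = frames[0].astype(np.float32)
        for frame in frames[1:]:
            composite = np.maximum(composite, frame)
        return composite

    # `frames` would be a sequence of equally exposed HxWx3 arrays shot on a
    # tripod; star trails and light painting accumulate without blowing out
    # the rest of the scene.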

~~~
LegitShady
Smaller = anything smaller than a consumer grade DSLR.

The form factor makes the smaller units far more difficult to hold properly
while shooting, they lack an optical viewfinder in most cases, etc.

The only thing a person needs for star trails is a tripod and the sky. Stars
generally start to "trail" at exposure times (in seconds) longer than about
500 divided by the effective focal length.
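
Expressed as a tiny helper (the 500 is just the usual heuristic constant, and "effective" means 35mm-equivalent):

    def max_untrailed_exposure(focal_length_mm, crop_factor=1.0):
        """Approximate longest exposure (seconds) before stars visibly trail."""
        return 500.0 / (focal_length_mm * crop_factor)

    print(max_untrailed_exposure(20))        # ~25 s at 20mm on full frame
    print(max_untrailed_exposure(18, 1.5))   # ~18.5 s at 18mm on APS-C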

Built in focus stacking does sound useful if it works well and macro shots are
of interest.

Hi-res modes are fine, but they can never make your pixels bigger - bigger
pixels of the same generation are more sensitive to light, so there's less
noise at higher ISOs.

I have Canon glass, so I tried one of their mirrorless cams and found its
ergonomics completely different to DSLRs, and personally not in a good way.

------
someguyorother
> Yes, it opens up a lot of possibilities for us today, but there is a hunch
> we're still trying to wave with hand-made wings instead of inventing a
> plane. One that will leave behind all these shutters, apertures, and Bayer
> filters.

> The beauty of the situation is that we can't even imagine today what it's
> going to be.

Optical phased arrays of nanoantennas?

------
sandGorgon
Here's my question - why isn't there a computational photography app startup
that works on these phones (either Android or iPhone)?

Let's take Android - the Snapdragon 855 is a very standard flagship CPU and is
on tons of phones (including the $450 Xiaomi K20 Pro). Why isn't there a
computational photography app that works on these phones?

Why is the Pixel 4 - which uses the same 855 chip - the only one that has this
software? Is it patent encumbered? Or is there some massive dataset / deep
learning stuff involved?

I'm surprised that there isn't a startup building these apps out there.

~~~
StingyJelly
The issue may be that the algorithms are trained on a specific camera unit, and
gathering a large database of data from all possible cameras may be difficult
even for a big guy like Google. Google puts a ton of work into their camera app,
and you can get clones that work on other devices with varying degrees of
success. For example, I have a Mi 5s that uses the same sensor as the first
Pixel and get quite decent results with arnova's GCam clone.

~~~
sandGorgon
I'm not referring to a GCam clone. I'm asking: can't someone build this
commercially? Like an app company that does computational photography apps for
each kind of camera sensor?

------
anta40
So someday we can expect a $1000-ish smartphone whose image quality can
compete with medium-format gear like Fuji or Hasselblad?

~~~
rocqua
I'd expect that at some point mirrorless cameras will get close to feature
parity on the computational photography side, and will beat smartphones
handily by having the space for a good lens and slightly bigger sensors.

All the tricks used on smartphones still work if you have a better lens and
sensor; they will still yield these improvements. It's just going to take a
while for the manufacturers to catch up, especially on the DSLR end, because
of slow shutters and massive momentum.

------
pabs3
I wonder if there are any open source computational photography tools.

------
vas3k
Usually I don't mind my articles being reposted, but only with a direct
reference to the original at the beginning:
[https://vas3k.com/blog/computational_photography/](https://vas3k.com/blog/computational_photography/)

Let's be respectful to the original author; I spent a couple of months writing
it :(

~~~
katecatkitty
Hi Vas3k!

My name is Kate and I am Head of Content at Let's Enhance.

We are very thankful for the material, because it's highly relevant to us.

And I want to clarify this unpleasant situation:

I talked directly with the author of the article, Vas3k, and asked for
permission to publish this awesome material. I can attach screenshots of our
conversation.

All the copyrights are reserved. We mentioned you and your blog as the
original source. What's more, we've kept all the links in the article that
refer to your blog.

So, all the accusations are unfounded.

~~~
jacquesm
Yes, I get this all the time too. Vaguely worded emails asking for permission
to rip off my stuff, where any answer will be used as permission granted.
You're in violation of copyright, and the author is right here to dispute your
claim. As you are no doubt aware, as students of copyright law, you've now been
told that your permission to copy has been rescinded, which leaves you with
only one option.

~~~
Chris2048
Call them on their bluff; ask them to "attach the screenshots of our
conversation."

------
jacquesm
Flagging this. Paging dang for a link change and a ban on letsenhance.io.

~~~
PopeDotNinja
What's the problem with letsenhance.io? Honest question. I skimmed the article
and didn't see anything off-putting.

~~~
BurnGpuBurn
Maybe the fact that they copy-pasted someone else's content. Someone worked
hard on this article and that person is not properly credited for that work.

Oh, and stealing ad revenue from the original creator.

~~~
glenneroo
I don't know if this was recently appended but at the end of the article there
is this:

> The article was originally published by Vas3k in
> [https://vas3k.com/](https://vas3k.com/) on 30.06.2019 Let's Enhance team is
> very thankful for given materials.

Granted, I still agree it's a blatant copy/paste, and I also vote that dang
replaces the link with the original.

------
marknadal
This is a truly incredible article.

And even a mind-bending (or optic?) glimpse into the future.

Thank you for writing it.

~~~
rozhok
This is a complete rip-off of
[https://vas3k.com/blog/computational_photography/](https://vas3k.com/blog/computational_photography/)

------
growlist
These hand-drawn-look diagrams are starting to become a bit of a cliche.

------
ryandamm
This article is completely, utterly wrong as soon as it starts talking about
anything plenoptic. Please disregard after that point as there are serious
factual errors, particularly regarding what Google did or didn't do with
Lytro.

(Source: I'm in the VR / plenoptic space, knew a bunch of people at Lytro,
some of whom are now at Google. Timelines and facts do not match this
article's assertions.)

~~~
jacquesm
That comment would be a lot more valuable if you corrected the record. To just
gainsay what is written here, accompanied by a strong appeal to your own
authority, is not how it is done imo, and if you can't or don't want to talk
about it then you also shouldn't comment like this.

~~~
ryandamm
Fair enough. It would take a really long time to make a point by point
rebuttal, but here are a few:

Phase detection is not the same thing as plenoptic imaging; the article implies
that phase detection operates in the same way as a microlens array. It doesn't;
microlens arrays for plenoptic imaging have many pixels underneath each
lenslet, and perform different operations on them. This line:

"With only two pixels in one, there's still enough to calculate a fair optical
depth of field map without having a second camera like everyone else."

This is categorically false. You need dozens of pixels underneath a microlens
array to do digital refocusing; phase detection is not remotely the same
thing.

Later, the author conflates microlens-based refocusing with camera-array based
plenoptic imaging. These are wildly different disciplines; the work done in VR
does not include microlens arrays anywhere. Not any. But they do include large
arrays of cameras (see volumetric work from Microsoft HoloCapture, Intel's
capture studio in Manhattan Beach, and my own company Visby's work — cameras
at Radiant Images in LA... all arrays of cameras, no microlenses anywhere).

"Apparently, if you take only one central pixel from each cluster and build
the image only from them, it won't be any different from one taken with a
standard camera." This is false; taking the central pixel will give you an
aliased image (or an image as though taken with a very, very high f-number
lens, if the pixels correspond to the same zone of the aperture). To make an
image as though you didn't use a microlens array, you sum all the pixels under
the microlens to produce a macropixel. This is why light field cameras have
lower effective resolution than traditional cameras. The remainder of the
speculation about 'sneaky plenoptic JPEGs' has no basis in light field
imaging; refocusing is not achieved by throwing away (or binning) pixels, it's
done by summing different sets of pixels. Lytro founder Ren Ng's PhD thesis
was about a way to do it faster by using the Fourier Slice Theorem. (Beautiful
paper, if you have time to read it.)
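
To illustrate the summation being described, here's a toy sketch (assuming a square n x n block of sensor pixels under each microlens and ignoring Bayer demosaicing): summing each block yields the conventional macropixel image at reduced resolution, while taking only the central sub-pixel gives the aliased, high-f-number-like view.

    import numpy as np

    def macropixel_image(raw, n):
        """Sum each n x n block of sub-aperture samples into one macropixel."""
        h, w = raw.shape
        return raw.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

    def center_subpixel_image(raw, n):
        """Take only the central sample under each microlens."""
        return raw[n // 2::n, n // 2::n]

    raw = np.random.rand(640, 640)            # stand-in sensor behind a 10x10 microlens grid
    full = macropixel_image(raw, 10)          # 64 x 64, all light under each lenslet summed
    center = center_subpixel_image(raw, 10)   # 64 x 64, one aperture zone only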

Google did not "buy and kill Lytro." Lytro's assets were sold and Google
bought some of them, but none of that tech made it into the Pixel line of
cameras, period. The Google portrait mode is done using phase detect pixels as
a cue for segmentation, as discussed in the article this blog links to.
Nothing plenoptic required, and certainly no Lytro tech there. The blog post
implies that the Pixel is using Lytro-like techniques rather than phase detect
pixels to hint at segmentation for synthetic blur.

I have never heard of plenoptic cameras being used for image stabilization. In
my personal experience (5 years of stabilizing cinema cameras for two
different companies), virtually all stabilization issues derive from
rotational blur, not translational movement. Your ability to stabilize
translational movement would be limited to the aperture size, and translations
of that magnitude have vanishingly small effects on the image unless the
objects are very, very close to the lens (like macro). I have no idea where
the blogger came up with this idea but there is no reference provided and I
have never heard of it (nor would it work, for the above reasons).

The section "Fighting the Bayer Filter" miscasts the challenges of a Bayer
pattern under a microlens; naively summing adjacent pixels under other
microlenses would blur an image. You do get a boost to dynamic range from the
synthesized images — you're binning a bunch of pixels to create your image, so
you reduce noise as 1/root(n) in the number of samples. That could lead to
better color in dark, noisy areas, but it will not increase the gamut of your
sensor.

It goes on... but since this article was already off the front page by the
time I posted my comments, this is probably enough.

(As an aside, Jacques — I really enjoy your writing!)

~~~
jacquesm
Thank you for adding this, I only saw it much later.

