
3D Lightning Reconstruction (2013) - nkron
http://calculatedimages.blogspot.com/2013/05/3d-lightning.html
======
kastnerkyle
Eric Bruning gave a good talk on this kind of work at SciPy last year [1].
His primary focus seems to be 3D lightning mapping using VHF antenna
arrays - a slightly different approach from taking simultaneous
pictures from multiple locations, but one that yields many more datapoints,
especially in west Texas [2]. The downside is that you aren't getting a
"true" 3D model of a single strike - rather, a 3D model of a storm built
from the samples generated by the mapping data (IIRC). A rough sketch of
the time-of-arrival idea behind those arrays follows the links below.

[1] https://www.youtube.com/watch?v=0Z17Q22HEMI

[2] http://pogo.tosm.ttu.edu/about/
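
For a feel of how such a mapping array locates sources: each antenna
timestamps the arrival of a VHF pulse, and you solve for the emission point
and time from the arrival-time differences (multilateration). Below is a
minimal sketch of that fit with a made-up station layout and a simulated
pulse - illustrative only, not the actual LMA processing pipeline:

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8  # speed of light, m/s

# Hypothetical station layout (x, y, z in meters) - purely illustrative.
stations = np.array([
    [0.0, 0.0, 0.0],
    [10000.0, 0.0, 0.0],
    [0.0, 10000.0, 0.0],
    [10000.0, 10000.0, 0.0],
    [5000.0, 5000.0, 100.0],
])

# Simulate arrival times of one VHF pulse emitted aloft at time t0.
true_source = np.array([4000.0, 6000.0, 7000.0])
t0 = 0.0
arrivals = t0 + np.linalg.norm(stations - true_source, axis=1) / C

def residuals(params):
    # Unknowns: source position (x, y, z) and emission time t.
    x, y, z, t = params
    predicted = t + np.linalg.norm(stations - np.array([x, y, z]), axis=1) / C
    # Scale time residuals to meters so the solver is well conditioned.
    return (predicted - arrivals) * C

fit = least_squares(residuals, x0=[5000.0, 5000.0, 5000.0, 0.0])
print(fit.x[:3])  # recovered source position: one dot in the 3D storm map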

------
greenyoda
_" It is immediately clear that they are taken from about the same direction
but different heights: the second bolt looks squashed vertically."_

Isn't it also possible that the photos were taken at slightly different times
and the path of the lightning bolt shifted slightly over that period?

For example, if you look at this 9-second video of a high voltage electrical
arc[1], you'll see that its path shifts around quite a bit over time.

[1] https://www.youtube.com/watch?v=euW4NerLAPg

~~~
edmccard
>Isn't it also possible that the photos were taken at slightly different times
and the path of the lightning bolt shifted...

In the time it takes for the light from a lightning strike to fade, the bolt
basically doesn't shift at all.

There's a guy called Tom A. Warner who shoots 7,207 fps video of lightning;
if you watch the first video on his page [1], you'll see that once the bolt
reaches the ground, its path doesn't change.

[1] http://www.ztresearch.com/

------
sytelus
I'm still not sure how this was done. Here's what I've gathered so far from
the post: the author has two images taken from unknown locations. He scaled
them and then marked points on each. Then he matched up the points between
the two images manually. Now he has a dx and dy for each point on one image,
relative to the other. He then asserts that a bigger dx means the point is
nearer to the camera. So I'm thinking he takes some proportionality constant
to get z = c * dx.

But wouldn't that produce a pretty arbitrary shape depending on the value of c?
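
For concreteness, here's that reading of the post as a minimal sketch -
z = c * dx applied to manually matched points. The coordinates and the value
of c are made up; the point is that changing c just stretches the model
along the depth axis, which is exactly the arbitrariness in question:

```python
import numpy as np

# Hypothetical matched points: (x, y) pixel coordinates of the same
# features marked on each photo (the numbers are made up).
points_a = np.array([[310.0, 40.0], [322.0, 180.0], [355.0, 400.0]])
points_b = np.array([[305.0, 42.0], [310.0, 178.0], [330.0, 395.0]])

# Horizontal shift of each matched point between the two photos.
dx = points_a[:, 0] - points_b[:, 0]

# Depth from disparity with an arbitrary proportionality constant c.
c = 10.0
z = c * dx

# 3D model: pixel coordinates from one photo plus the inferred depth.
# Doubling c doubles every z, stretching the model along the view axis,
# so the shape is only recovered up to that unknown scale.
model = np.column_stack([points_a, z])
print(model)
```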

~~~
ddt83
A question I had was how he made the correspondences between points in the two
bolts. I have a hunch that he just traversed the two lightning bolts
separately and said point 1 in A corresponds to point 1 in B, ... Point N in A
corresponds to point N in B, etc. if this is the case, you would expect dx and
dy to grow as from top to bottom and hence his reconstructed depth to become
closer from top to bottom, which is what happens.
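
That hunch as code, for what it's worth - a sketch only, not necessarily
what the post did, and resample_polyline plus the traced coordinates are
made up for illustration: resample each traced bolt to the same number of
points by arc length, then pair them up by index.

```python
import numpy as np

def resample_polyline(points, n):
    """Resample a traced bolt (a list of (x, y) points) to n points
    spaced evenly along its arc length."""
    points = np.asarray(points, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    ti = np.linspace(0.0, t[-1], n)
    x = np.interp(ti, t, points[:, 0])
    y = np.interp(ti, t, points[:, 1])
    return np.column_stack([x, y])

# Pair point i on bolt A with point i on bolt B (index-based matching).
bolt_a = resample_polyline([[310, 40], [322, 180], [355, 400]], 50)
bolt_b = resample_polyline([[305, 42], [310, 178], [330, 395]], 50)
dx = bolt_a[:, 0] - bolt_b[:, 0]  # per-pair horizontal disparity
print(dx[:5])
```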

~~~
sp332
Why would you expect the bottoms of the bolts in the two pictures to be more
different, just because he started labeling points from the top?

------
dllthomas
I like how it has a shadow.

