

Sony Sees Next Big Hit at 1,000 Photos a Second - cm2187
http://www.bloomberg.com/news/articles/2015-09-04/why-sony-sees-its-next-big-hit-at-1-000-photos-a-second

======
herendin
Isn't one of the big challenges of high-speed video how to collect enough
photons in such a short time? Super-bright lights are often used to illuminate
the subject.

'Shooting at a frame rate of 1000fps requires 5.25 times more light than at 24
or 25 fps' [[http://www.lovehighspeed.com/lighting-for-high-speed/](http://www.lovehighspeed.com/lighting-for-high-speed/)]
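For comparison, the raw light ratio works out to about 42x, or roughly 5.4
stops, so the quoted "5.25" reads more naturally in stops than as a
multiplier. A quick sanity check, assuming exposure time per frame scales
inversely with frame rate:

```python
import math

# Going from 24 fps to 1000 fps cuts the available exposure time per
# frame by the same factor the light requirement goes up.
light_ratio = 1000 / 24                  # ~41.7x more light needed
stops = math.log2(light_ratio)           # ~5.4 stops

print(f"{light_ratio:.1f}x, {stops:.2f} stops")
```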

~~~
bitL
AFAIK the problem with the old sensors was that they were manufactured at
100s-of-nm process nodes, which was really bad for low light. As Sony is now
pushing sensor manufacturing down toward CPU-class nodes, the sensitivity
increases significantly. Once they hit the current cutting-edge 14nm process,
we can expect significant low-light gains as well.

~~~
acqq
Isn't it exactly the opposite? The smaller the area, the fewer photons land on
it in a given time, so sensors with bigger pixels can capture more light, and
smaller features measure more noise and less signal.

That's why, if you want good pictures in low light, you need a camera with as
big a sensor as you can get.

[http://www.dpreview.com/articles/5496399487/canon-multi-purp...](http://www.dpreview.com/articles/5496399487/canon-multi-purpose-me20f-sh-camera-reaches-iso-4-million)

"At the core of the ME20F-SH is a 2.26 megapixel CMOS sensor, originally
announced in 2013, which has pixels measuring 19μm - 5.5X larger than what's
found on high-end DSLRs. This allows for 1080/60p/30p/24p (and PAL equivalent)
video capture in light levels as low as 0.0005 lux at a maximum gain setting
of 75 dB, which is equivalent to over ISO 4,000,000."

Note: the pixel is _larger_ and therefore the sensor more sensitive. Canon
made a _full frame_ sensor with only 2 megapixels and obtained the higher
sensitivity.

For comparison with phone cameras: the area of the whole sensor in the iPhone
6 is about 50 times smaller than the area of a full-frame sensor.
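The "50 times" figure checks out from published sensor dimensions (the iPhone
6 numbers below are approximate, for a 1/3"-type sensor):

```python
# Sensor dimensions in mm; the iPhone 6 figures are approximate.
full_frame_area = 36.0 * 24.0        # 864 mm^2
iphone6_area = 4.89 * 3.67           # ~17.9 mm^2 (1/3"-type sensor)

ratio = full_frame_area / iphone6_area
print(f"full frame is ~{ratio:.0f}x the area")   # close to the quoted 50x
```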

~~~
_ph_
Yes, if you make the pixels smaller, the sensor gets less light sensitive.
But for most applications the sensor size is fixed (e.g. 35mm cameras), and
smaller process sizes would mean that more of the per-pixel area goes to
light capture and less to the electronic elements on each pixel.

~~~
acqq
Looking at the sensor under a microscope, the number of photons received is
determined more by the area of the pixel than by the size of the features
under that area:

[http://petapixel.com/2013/02/12/what-a-dslrs-cmos-sensor-loo...](http://petapixel.com/2013/02/12/what-a-dslrs-cmos-sensor-looks-like-under-a-microscope/)

I'd still like to see a link supporting the claim that lithography process
improvements significantly benefit sensor sensitivity, unless it's about the
smallest sensors (as in mobile phones or smaller) and very high pixel counts.

~~~
dzhiurgis
From what I've watched about CMOS sensors (and I could be mistaken), the
comparison could be similar to making your CPU registers smaller but ramping
up the frequency at which you access them (which is akin to an increased
framerate). So the net amount of captured light is the same, but you have
many more frames now.

With sophisticated software (or onboard hardware) this can be reassembled
into the same picture as well. And you're looking at this a bit the wrong
way: on mobile devices your bottleneck is lens size, so you have to have
smaller pixels.

Now if your pixels are large, the register (I think they're called wells on
CMOS sensors) has to pull in much more information at higher power
requirements, which introduces noise that you have to "mask" with even more
photon energy? Again, I could be getting this totally wrong, and of course
it's best to have a big lens with lots of pixels.

------
semi-extrinsic
When a GoPro 4 (which has been on the market for a year now) can do 240 fps
at 720p resolution, I think it's a bit misleading to say the current state of
the art is 60 fps. And it's fair to assume next year's GoPro 5 will improve
significantly on that again.

~~~
speedkills
What I really want from GoPro or Sony is a global shutter. Rolling shutter
needs to die.

~~~
semi-extrinsic
AFAIK, you can have a global shutter on cameras today if you sacrifice
low-light performance and high framerates. You generally need a CCD sensor
instead of a CMOS, like you find on most cameras in the Canon PowerShot line,
which do feature global shutters.

------
rocky1138
Is there a less blogspammy version of this? The article mentions a YouTube
channel but doesn't give us the URL.

Edit: Did a bunch of digging and found it:
[https://www.youtube.com/channel/UCaq5dRB_CnZDC-eQcsv7nZg](https://www.youtube.com/channel/UCaq5dRB_CnZDC-eQcsv7nZg)

------
DannoHung
That's nice; how about 14-bit raw images in your pro series cameras, though?

I mean, the 12+7 images look fine most of the time, but when you're paying
close to $3,000 for a body, there should be no compromises in terms of IQ.

------
dharma1
Sony owns image sensors atm. They are probably 3-4 years ahead of the
competition. They already have a consumer 960fps pocket camera on the market:
[http://www.sony.net/Products/di/en-us/products/f5kd/](http://www.sony.net/Products/di/en-us/products/f5kd/)

~~~
seunosewa
Is 1000fps the next big thing, though? Probably not. (I'd never heard of the
above pocket camera, for example.)

~~~
tommyd
I wonder if there are interesting applications of being able to capture
images at this kind of speed at high resolution. For example, using image
analysis between frames to get some kind of depth map, based on subtle
differences between frames due to slight hand movements.

~~~
blincoln
I was thinking along the same lines. Another specific example would be single-
shot HDR or true post-shot-adjustable exposure. E.g. if the optimum exposure
for a conventional camera is 1/60th of a second, then instead capture 10-20
exposures whose total time adds up to 1/60th of a second, and sum various
numbers of the exposures in software.
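The summing idea can be sketched in a few lines (a toy model, assuming a
perfectly linear sensor and ignoring per-frame read noise, which in practice
is the main cost of stacking many short exposures):

```python
# Toy model: linear sensor response, no read noise.
n_frames = 16
scene = [0.2, 0.5, 0.9, 1.0]   # per-pixel signal for a full 1/60 s exposure

# Each sub-exposure lasts 1/(60 * n_frames) s, so it collects an even
# 1/n_frames share of the signal.
frames = [[p / n_frames for p in scene] for _ in range(n_frames)]

# Summing all the frames reconstructs the conventional 1/60 s exposure...
full = [sum(f[i] for f in frames) for i in range(len(scene))]

# ...while summing only the first k gives a shorter effective exposure,
# chosen after the fact in software.
k = 4
short = [sum(f[i] for f in frames[:k]) for i in range(len(scene))]
```

Summing different subsets of the frames then gives exposure (and, with
alignment, HDR-style tone choices) as a purely post-capture decision.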

Right now that's not practical, because DSLRs don't have the hardware
capacity to pull that much data off the sensor and store it quickly enough.

Seems like the additional data would also be very useful for removing motion
blur, automatically removing moving objects from e.g. architectural/landscape
shots, etc.

