2. What's the size difference compared to contemporary 5000+ fps cameras?
With good dynamic range and affordability, would it be a good fit for independent filmmakers?
I have no expertise in this realm, but some of their bullet videos seemed a little “artificial”, almost CGI. Maybe it's that some parts of a frame had much more detail/precision than other parts of the same frame.
3600 fps at 720p, raw 10-bit DNG, £5,000.
Some Phantom cameras have faster frame rates (https://www.phantomhighspeed.com/products/cameras/ultrahighs...), but you'll pay over $100k.
>Yet what Dye seems most fascinated by is one of the Apple Watch's faces, called Motion, which you can set to show a flower blooming. Each time you raise your wrist, you'll see a different color, a different flower. This is not CGI. It’s photography.
"We shot all this stuff," Dye says, "the butterflies and the jellyfish and the flowers for the motion face, it's all in-camera. And so the flowers were shot blooming over time. I think the longest one took us 285 hours, and over 24,000 shots."
He flips a few pages further into the making-of book, onto the first of several full-page spreads with gorgeous photos of jellyfish. There's no obvious reason to have a jellyfish watch face. Dye just loves the way they look. "We thought that there was something beautiful about jellyfish, in this sort of space-y, alien, abstract sort of way," he says. But they didn't just visit the Monterey Bay Aquarium with an underwater camera. They built a tank in their studio, and shot a variety of species at 300 frames-per-second on incredibly high-end slow-motion Phantom cameras. Then they shrunk the resulting 4096 x 2304 images to fit the Watch's screen, which is less than a tenth the size. Now, "when you look at the Motion face of the jellyfish, no reasonable person can see that level of detail," Dye says. "And yet to us it's really important to get those details right."
Does anyone know if this exists?
 - https://wikipedia.org/wiki/Kinetic_inductance_detector
 - https://web.physics.ucsb.edu/~bmazin/mkids.html
> A modulo camera could theoretically take unbounded radiance levels by keeping only the least significant bits. We show that with limited bit depth, very high radiance levels can be recovered from a single modulus image with our newly proposed unwrapping algorithm for natural images.
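For intuition, here's a minimal 1D sketch of modulus unwrapping, assuming the scene radiance varies smoothly between neighboring samples (analogous to classic phase unwrapping; this is not the paper's actual algorithm, which handles natural 2D images):

```python
import numpy as np

def unwrap_modulo(wrapped, max_val=256):
    """Recover a smooth signal from its values mod max_val, assuming
    adjacent true samples differ by less than max_val / 2."""
    diffs = np.diff(wrapped.astype(np.int64))
    # A jump of more than half the modulus between neighbors is taken
    # as evidence that the sensor wrapped around at that point.
    corrections = np.where(diffs > max_val // 2, -max_val,
                  np.where(diffs < -max_val // 2, max_val, 0))
    offsets = np.concatenate(([0], np.cumsum(corrections)))
    return wrapped + offsets

# A bright ramp that overflows a hypothetical 8-bit sensor twice:
radiance = np.arange(0, 600, 10)   # true radiance values 0..590
wrapped = radiance % 256           # what the modulo sensor records
recovered = unwrap_modulo(wrapped)
print(np.array_equal(recovered, radiance))  # True
```

The same idea is why the quote says "very high radiance levels can be recovered from limited bit depth": the low bits plus a smoothness prior are enough to reconstruct how many times each pixel wrapped.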
I posted my comment yesterday.
Unfortunately not quite at "buy to play around with" prices, but for the right application it's within the realm of the affordable.
I work on underwater computer vision and would love one of these. Limited lighting means long exposure times. Motion blur can be mitigated by moving slowly, but then you run into all sorts of lighting phenomena that make not over- or under-saturating the image a real problem.
It seems to me that an "explosion" would generate a lot of events.
Also, in what way is this different from a codec? That is, isn't this just a much simpler form of MPEG?
And will this produce artifacts, or drop events?