Scientists use camera with human-like vision to capture 5,400 FPS video (petapixel.com)
107 points by bookofjoe 5 days ago | 27 comments





Haha, I worked with a relative of this camera during my bachelor's :-) The DVS, eDVS, and DAVIS are some of the more useful results of neuromorphic engineering research, and they really shine if you combine them with NNs or other noise-tolerant signal processing methods.
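One common pattern is to bin the asynchronous (x, y, t, polarity) event stream into a fixed-size tensor that an ordinary CNN can consume. A rough Python sketch - the function name, shapes, and two-channel layout are my own illustration, not from any particular paper:

    import numpy as np

    def events_to_tensor(events, height, width):
        """Bin DVS events into a 2-channel count image for a CNN.

        events: iterable of (x, y, t, polarity) with polarity in {-1, +1}.
        """
        tensor = np.zeros((2, height, width), dtype=np.float32)
        for x, y, _, p in events:
            channel = 0 if p > 0 else 1        # keep ON and OFF events separate
            tensor[channel, int(y), int(x)] += 1
        return tensor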

Have they been combined with NNs? If so, how?


1. Why didn't they compare it with contemporary 5000+ FPS cameras?

2. What's the size difference compared to contemporary 5000+ FPS cameras?

With good dynamic range and affordability, would it be a good fit for independent filmmakers?


Yeah. That comparison video is annoying. At one point they have three videos side by side with their video in the middle. In the next comparison, theirs is on the right. Aaaaaggh. Little things like that drive me nuts.

I have no expertise in this realm, but some of their bullet videos seemed a little “artificial” or CGI-like. Maybe it's that some parts of a frame had much more detail/precision than other parts of the same frame.


Some "normal" cameras for comparison:

https://www.slomocamco.com/cameras/

3600 fps at 720p, raw 10-bit DNG, £5000.

Some Phantom cameras have faster frame rates (https://www.phantomhighspeed.com/products/cameras/ultrahighs...), but you'll pay over $100k


https://www.wired.com/2015/04/apple-watch-design/

> Yet what Dye seems most fascinated by is one of the Apple Watch's faces, called Motion, which you can set to show a flower blooming. Each time you raise your wrist, you'll see a different color, a different flower. This is not CGI. It’s photography.

> "We shot all this stuff," Dye says, "the butterflies and the jellyfish and the flowers for the motion face, it's all in-camera. And so the flowers were shot blooming over time. I think the longest one took us 285 hours, and over 24,000 shots."

> He flips a few pages further into the making-of book, onto the first of several full-page spreads with gorgeous photos of jellyfish. There's no obvious reason to have a jellyfish watch face. Dye just loves the way they look. "We thought that there was something beautiful about jellyfish, in this sort of space-y, alien, abstract sort of way," he says. But they didn't just visit the Monterey Bay Aquarium with an underwater camera. They built a tank in their studio, and shot a variety of species at 300 frames-per-second on incredibly high-end slow-motion Phantom cameras. Then they shrunk the resulting 4096 x 2304 images to fit the Watch's screen, which is less than a tenth the size. Now, "when you look at the Motion face of the jellyfish, no reasonable person can see that level of detail," Dye says. "And yet to us it's really important to get those details right."


Yeah, the Phantoms get a lot of use in sports and nature documentaries. The level of detail is really a sight to behold. These guys work magic with them in food advertising:

https://foodfilm.fr/


Several years ago I had an idea for a camera made from harmonically oscillating pixels that shift frequency based on intensity. It would have many of the advantages of event cameras, but with extremely accurate absolute brightness measurement as well.

Does anyone know if this exists?


Sounds like MKIDs - microwave kinetic inductance detectors. They're arrays of little LC oscillators where the inductance changes when a photon hits. They can actually resolve frequency and time of arrival of a photon, which is super useful, especially for photon-limited astrometry. [0] [1]

[0] - https://wikipedia.org/wiki/Kinetic_inductance_detector

[1] - https://web.physics.ucsb.edu/~bmazin/mkids.html
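To get a feel for what "the inductance changes when a photon hits" means at readout: the resonance shift the electronics have to pick out is tiny. A toy calculation with invented component values, just to set the scale:

    import numpy as np

    # Toy LC resonator standing in for an MKID pixel; values invented.
    L, C = 10e-9, 0.4e-12                    # 10 nH, 0.4 pF
    f0 = 1 / (2 * np.pi * np.sqrt(L * C))    # resonance ~2.5 GHz

    dL = 1e-13                               # kinetic-inductance kick from one photon
    f1 = 1 / (2 * np.pi * np.sqrt((L + dL) * C))
    print(f0 - f1)                           # ~13 kHz shift to detect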



This isn't really the same thing (and I think compressed sensing might be the direction of research to check out further), but you might find it interesting - check out the modulo camera: https://web.media.mit.edu/~hangzhao/modulo.html

> A modulo camera could theoretically take unbounded radiance levels by keeping only the least significant bits. We show that with limited bit depth, very high radiance levels can be recovered from a single modulus image with our newly proposed unwrapping algorithm for natural images.
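The unwrapping idea is easy to demo in 1D: if the scene varies smoothly, any neighbour-to-neighbour jump bigger than half the modulus must be a wrap rather than real structure. A minimal sketch of that core idea (the paper's actual algorithm for 2D natural images is considerably more involved):

    import numpy as np

    # Smooth "radiance" that exceeds an 8-bit sensor's 0-255 range.
    true = 280 * (1 - np.cos(np.linspace(0, 4 * np.pi, 500)))   # peaks ~560
    wrapped = np.mod(np.round(true), 256)                       # what the sensor stores

    # Treat any step outside (-128, 128] as a wrap and undo it.
    steps = np.diff(wrapped)
    steps -= 256 * np.round(steps / 256)
    recovered = np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(steps)))

    print(np.max(np.abs(recovered - np.round(true))))           # 0.0 for this signal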


You would still need to digitize the signal coming from the pixels, and the bandwidth required for each pixel would be commensurate with the rate at which you wish to capture the information.

That seems like a lot of pixel busses.

Actually, the one nice thing about this architecture is that if each pixel had its own oscillator, you could dump them all onto the same bus until you reach the bandwidth of your ADC.
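Here's a toy version of that shared-bus idea: give every pixel its own carrier frequency with amplitude set by brightness, sum everything into one ADC channel, and pull the amplitudes back out with an FFT. All numbers are invented; a real design would worry about leakage, crosstalk, and resonator Q:

    import numpy as np

    fs, n = 100_000, 10_000                    # ADC rate (Hz), samples per window
    t = np.arange(n) / fs

    # Eight "pixels", each an oscillator whose amplitude encodes brightness.
    carriers = np.array([5_000, 7_000, 11_000, 13_000, 17_000, 19_000, 23_000, 29_000])
    rng = np.random.default_rng(0)
    brightness = rng.uniform(0.1, 1.0, size=carriers.size)

    # Everything dumped onto one shared bus -> a single ADC channel.
    bus = sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(brightness, carriers))

    # Read each pixel back as the amplitude at its carrier's FFT bin.
    spectrum = np.abs(np.fft.rfft(bus)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    readout = np.array([spectrum[np.argmin(np.abs(freqs - f))] for f in carriers])

    print(np.max(np.abs(readout - brightness)))   # ~1e-15: carriers sit on exact bins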

Only in your imagination, it seems

OT, but does anyone else see the OP's post as posted 1 min after his responder's?

Yes

Something weird with timestamps is going on. If I look at my comment stream the correct time is listed.

I posted my comment yesterday.


dang, is this something you need to be aware of?

It's an artifact of the submission re-upping system as described here: https://news.ycombinator.com/item?id=19774614

Righteo, thanks for clearing that up.

https://inivation.com/buy/ ($3000–$6600 USD)

Unfortunately not quite at "buy to play around with" prices, but for the right application it's within the realm of affordable.

I work on underwater computer vision and would love one of these. Limited lighting = long exposure times. Motion blur can be mitigated by moving slowly, but then you have all sorts of lighting phenomena that make not over/under-saturating the image a real problem.
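For a sense of the numbers behind "moving slowly" (all values below are illustrative, not from any real rig):

    # Back-of-envelope: platform speed vs. motion blur at long exposures.
    exposure_s = 1 / 30            # long exposure forced by dim water
    m_per_px = 0.002               # e.g. 2 mm ground sampling distance
    blur_px = lambda speed_mps: speed_mps * exposure_s / m_per_px

    print(blur_px(0.5))    # 0.5 m/s -> ~8.3 px of blur, hopeless for matching
    print(blur_px(0.05))   # 5 cm/s  -> ~0.8 px, which is why crawling helps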


This is revolutionary, even if only from a dynamic range standpoint. It's somewhat disappointing to see the first comment under their article is "Looks like VHS" ... really?

So the actual framerate depends on the number of events that take place in a time interval?

It seems to me that an "explosion" would generate a lot of events.

Also, in what way is this different from a codec, i.e., isn't this just a much simpler form of MPEG?

And will this produce artifacts, or drop events?
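The difference from a codec is where the sparsity happens: each pixel decides on its own, in analog hardware at the sensor, when its log intensity has changed enough to emit an event, so there are never frames to compress. A frame-driven toy of that pixel logic (a simplification; real DVS pixels are asynchronous with microsecond timestamps, not clocked by video frames):

    import numpy as np

    def dvs_events(frames, times, threshold=0.2):
        """Emit (x, y, t, polarity) events wherever a pixel's log intensity
        has moved `threshold` away from its level at that pixel's last event."""
        logs = np.log(frames.astype(np.float64) + 1.0)
        ref = logs[0].copy()                      # per-pixel reference level
        events = []
        for log, t in zip(logs[1:], times[1:]):
            delta = log - ref
            ys, xs = np.nonzero(np.abs(delta) >= threshold)
            for y, x in zip(ys, xs):
                events.append((int(x), int(y), t, 1 if delta[y, x] > 0 else -1))
                ref[y, x] = log[y, x]             # reset reference at this pixel
        return events

A static scene emits nothing at all, while an "explosion" does dump a burst of events; real sensors have a limited event-rate budget on the readout bus, which is where dropped events and artifacts can come from.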



awesome, now just waiting for some manufacturer to embed this in a cellphone


