Camera freezes time at 10 trillion frames per second (inrs.ca) 208 points by craftyguy 10 days ago | 91 comments

 “It’s an achievement in itself,” says Jinyang Liang, the leading author of this work, who was an engineer in COIL when the research was conducted, “but we already see possibilities for increasing the speed to up to one quadrillion (10^15) frames per second!”

 Just getting started... o_O
 Honest question, what can be seen at one quadrillion fps that 10 trillion cannot already see? This question is out of pure ignorance and wonder, like "Why would you ever want to move faster than a horse". But this is at a scale I just cannot think in anymore.
 At that speed you can measure the "echoes" of light. Using femto-photography and some computing, it's possible to image objects around corners, outside of the camera's line of sight.

 More info: http://web.media.mit.edu/~raskar/cornar/

 There are a few videos around of Raskar demonstrating this.
 Wow. Thought this would be used for experimental physics, not another spying device! Cool stuff though.
 Years ago I saw an atomic explosion film taken with nanosecond frames, and the comment was that it wasn't fast enough, since each frame showed an entirely different image.
 Here's a great example with something people have a really hard time wrapping their heads around too. Let's say that you have a train moving at a constant velocity, v. Then a switch is flipped and turns on a light that is in the exact center of the car. Which wall does the light hit first? [0]

 Seen from on the train[1]: might as well be seen as if you were in a stationary room. The light hits both walls at the same time.

 Seen from off the train[2]: The speed of light is constant. Since the train is also moving, the light can't travel at v+c. So it hits the back wall, which is traveling towards the light at v, first.

 This is a famous thought experiment and in practice would be difficult to perform, even with such a camera. But I'm mentioning it because it illustrates that we can actually observe relativistic effects. Things act extremely differently than what we're used to when small or moving fast. Assuming you had a really high resolution, something like length contraction could be observed, and measured, in normal conditions.

 So there are actually a lot of weird things going on that we wouldn't be aware of. These ultra high speed cameras allow us to observe some of these strange phenomena.
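 A quick numerical check of the ground-frame timing; the train speed and car length below are made-up values:

```python
# Ground-frame arrival times for light emitted at the centre of the car.
c = 3e8          # speed of light, m/s
v = 30.0         # train speed, m/s (arbitrary choice)
L = 20.0         # car length, m

t_back = (L / 2) / (c + v)    # back wall moves toward the light
t_front = (L / 2) / (c - v)   # front wall moves away from the light
print(t_front - t_back > 0)   # True: the back wall is hit first
```

 The gap is only a few femtoseconds at everyday train speeds, which is why a camera in this fps range is roughly what it would take to see it.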
 I imagine that at least the techniques and underlying technology would be incredibly illuminating inside particle colliders.
 Imagine experimental physics: being able to watch things like explosions in super minute detail, watching light propagate.
 "Watching light propagate"

 Gave me shivers.
 Umm good luck getting enough photons to see anything. Shot noise
 So shoot at 1 quadrillion fps, and average 1000 frames into one output frame? Light will still be moving less than 1mm per frame.
 Pretty sure the light is integrated over a frame anyway, so that will get you the same results as just shooting at the lower framerate.
 Where is the video? That's what I'm looking for
 There are some cool videos at the end of this article. Not the article posted, but a similar concept, if I understood correctly.

 Observation of laser pulse propagation in optical fibers with a SPAD camera: https://www.nature.com/articles/srep43302

 http://i.giphy.com/3og0IQ5k6PP9KspN9m.gif (gif version of one of the mov files)
 I believe this one is related. 1 trillion fps. https://m.youtube.com/watch?v=EtsXgODHMWk
 The paper is OA, and the supplemental has the videos. S5 and S6 are the best.
 Here’s a video from 2011, but it’s only at 1 trillion FPS using the same laser pulse approach: https://m.youtube.com/watch?v=-fSqFWcb4rE

 Also, the music is pretty cringey, so I’d just leave it muted...
 Watching this confuses me as to the timing of it all... if the conceit is that we're watching the bundle of photons move through the bottle, that's because the photons from the source are hitting the camera, right? So is it the light moving through the bottle + the travel time to the sensor? Should refraction and reflection across the surface cause a lot of weird visual interference (as the bottle's size is no longer insignificant on the time scale of light's travel)? Is that what we're seeing there?
 When the light reaches the camera, it has already moved roughly the same amount forward through the bottle. So my guess is yes, each frame is visibly showing "the past".My guess is also that, if the laser is really focused, and if the bottle wasn't in the way, refracting some light back to the camera, we wouldn't even be able to see anything.
 I had a similar realization when I watched that video from 2011 the first time around (hard to believe it was 7 years ago).

 It also gave me this odd sense of blindness, in that I cannot actually see what's in front of me, only what literally interacts with my retina, almost like it's made of tastebuds for light. Still weirds me out when I think about it.
 It kind of is, but as far as I know they're actually taking the multiple photos separately, of different pulses, and editing them into one video.
 Is there another method of making a video?
 Yes, normally to video a car going past I'd take a series of photos as it goes past once.The approach here would be to take a photo of a car, then drive the car past again from the start and take another photo slightly later, then again, and again driving it past hundreds of times.They split this up even more, but I think the basic analogy holds to show the key difference.
 The team behind the video of the coke bottle has a TED talk that explains it in detail: https://www.youtube.com/watch?v=Y_9vd4HWlVA

 Edit: oops, I misread your comment
 taking one video of one pulse, and not editing it
 I believe the approach in your video relies on a single laser pulse per frame, with many laser pulses combined to create the series of images for the video. The new approach, in contrast, emphasizes that the capture is done using a single laser pulse captured sequentially in real time.
 Only 1 trillion FPS? Yawn
 They're uploading the raw files, it's only 37 exabytes, but it'll be a while.
 It makes you wonder what the memory bandwidth is like in their capture rig
 Reminded me of this TED talk on femto-photography: https://www.ted.com/talks/ramesh_raskar_a_camera_that_takes_...
 "This new camera literally makes it possible to freeze time"It literally freezes time? Come on..
 How close is this to Planck time?
 Not close. https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)

 Femto = 10^-15, Planck = 10^-44
 In orders of magnitude: a frame here lasts about 10^-13 s, and the Planck time is about 10^-44 s, so one frame spans roughly 10^31 Planck times. Not very close. If you stretched the Planck time out to 1 s, a frame of this cam would last about 10^14 times the age of the universe (~10^17 s) in comparison.
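 The comparison can be sketched in a couple of lines; the exact numbers come out a bit lower than the rounded powers of ten above, since the Planck time is closer to 5.4e-44 s:

```python
# Compare the camera's frame interval to the Planck time.
frame_interval = 1 / 10e12        # 10 Tfps -> 1e-13 s per frame
planck_time = 5.39e-44            # seconds, approximate
age_of_universe = 4.35e17         # seconds, ~13.8 billion years

ratio = frame_interval / planck_time   # ~1.9e30 Planck times per frame
print(ratio / age_of_universe)         # "universe ages" per frame, if Planck time were 1 s
```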
 Hijacking this as it’s getting intelligent responses: what’s the greatest resolution we could measure today, and what’s physically possible?
 Well, let's do some quick and dirty math.

 Let's say you have a very bright light source, and that light source results in shining a full 100 Watts of light into the sensor of your camera (I'm using this not as a typical example but because it will make it easy to scale the answer). Photons of visible light have an energy of at minimum 1.5 electron-Volts (800nm red light), which means that 100 Watts of light represents 4.2e20 photons per second.

 And that means that with only 100 Watts of light reaching your sensor you cannot attain an fps higher than 4.2e20, because at that speed you'd only get on average around one photon per frame. More realistically you need tens of thousands to millions of photons per frame to have some meaningful level of dynamic range and spatial resolution, which limits the fps to around a quadrillion fps per 100 Watts of light falling on the sensor.

 Though once you get into that range you also have problems of signalling; we don't really have electronics that work at those speeds.
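 The same estimate in Python, computing the photon energy from an 800nm wavelength (the 4.2e20 figure above comes from rounding the photon energy down to 1.5 eV; the photons-per-frame budget is an assumption):

```python
# Photon-budget estimate: how many photons does 100 W deliver per second,
# and what fps does that allow if each frame needs ~1e5 photons?
h = 6.626e-34          # Planck constant, J*s
c = 3e8                # speed of light, m/s
wavelength = 800e-9    # 800 nm, red end of the visible spectrum

photon_energy = h * c / wavelength        # ~2.5e-19 J (~1.55 eV)
power = 100.0                             # watts reaching the sensor (assumed)
photons_per_second = power / photon_energy

photons_per_frame = 1e5                   # assumed for usable dynamic range
max_fps = photons_per_second / photons_per_frame
print(photons_per_second, max_fps)        # ~4e20 photons/s, ~4e15 fps
```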
 The fastest time resolution achieved is in a niche field, attophysics, where they can get a pulse in the low-hundred-attosecond range, 10^-16 s or so. The signal is extremely weak; the conventional 'shortest pulse' is around 5 femtoseconds.
 No but light only travels .03mm per frame which isn't bad.
 I imagine this would give discoveries in the LHC a boost, or at least some amazing footage of atoms colliding and exploding :) That would be amazing!
 No, for a few reasons:

 First, they are producing too many collisions (probably many quadrillions of collisions, I can't find the exact number). Most of them are boring, so they have a lot of filters to select and save only the slightly interesting collisions, because otherwise it would be impossible to save the data. IIRC they only got a few thousand Higgs bosons; the signal to noise ratio is almost 0.

 Second, to film something using light, the object must be bigger than the wavelength of the light. There are some tricks to reduce this a little, but you can't film elementary particles directly. The old method was to let the particles create small bubbles or drops of water inside a container, and then photograph the chain of dots with visible light. In the new equipment the process is more complicated, but the interesting part of the collision is too fast and too small. They only measure the particles that are flying away after the interesting part happened; they only measure the leftovers and try to reconstruct the actual collision.

 Third, the interesting collisions have strong quantum effects, and the quantum effects "disappear" when you are using light that is strong enough to get an accurate position of the particles. You can imagine that if the light is strong enough, then the light itself will bounce against the particles and modify the direction and the collision. It sounds somewhat like magic, but it's possible to write this correctly and precisely using a lot of math, and it is one of the bases of quantum mechanics. https://en.wikipedia.org/wiki/Uncertainty_principle
 How many frames per second would you need to film at for a photon of light to be in an identical position on 2 frames?
 A photon needs to actually hit the sensor in order to be recorded, so if we're considering the photon to be a particle (rather than a wave) it would have to be in the same position in order to be in two frames at all.
 For 635nm red laser light, you'd need to be sampling somewhere in the order of 9.4 x 10^14 times a second to get two samples per cycle. Based on roughly (3 x 10^8 m/s) / 635nm, times 2. But then the question comes down to how many cycles make up a photon, and does that question even make sense in the first place.
 > 9.4 x 10^14

 Can you explain how you got this number for the red wavelength?

 Also, can we fundamentally "see" a photon?
 I was working from the idea of a Nyquist (sampling) rate needed to "perfectly" reproduce a wave form. I know this works for lower frequencies, but I have no idea how well it translates to something like optical frequencies/light. You can find frequency from speed / wavelength: f = v/l, or in our case f = c/l, since we're dealing with light.

 I was using 635nm as the wavelength (basically a red laser). That gives you:

 3 x 10^8 / 6.35 x 10^-7 = 4.7 x 10^14

 Which should be about 470 terahertz (4.72 x 10^14), give or take a bit. To sample that "perfectly" you'd need to sample at twice the frequency, or about 940 terahertz (9.4 x 10^14).

 As to your second question, I know that there are single photon detectors. Past that, you've got me. I don't know if that can be classified as "seeing" or not. As to size, there's https://briankoberlein.com/2015/04/14/thats-about-the-size-o... but that might or might not make sense.

 That's about the limit of what I'm willing/able to say on the subject.
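 Written out as code:

```python
# Frequency and Nyquist rate for 635 nm light.
c = 3e8                 # speed of light, m/s
wavelength = 635e-9     # 635 nm red laser

frequency = c / wavelength        # ~4.7e14 Hz (~470 THz)
nyquist_rate = 2 * frequency      # ~9.4e14 samples per second
print(frequency, nyquist_rate)
```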
 I apologize for my uninformed question.
 No apology needed. When things get that small and you're dealing with light, it's a legitimate question. See: https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality
 Speed of light ~300 million km/s.

 10 trillion frames per second.

 Light would have travelled 3cm on each frame.
 If you change the km to m: 299,792,458 metres per second
 Δd = 1cm = 0.01m, v = 3e8 m/s, v/Δd = 3e10 frames/s.

 For the camera to pick up 1cm of difference between frames, you have to capture 3x10^10 frames per second. You can work out the limit for whatever you want your delta to be just by dividing the speed of light by it.
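 In general, for any per-frame step size:

```python
# Frame rate at which light advances a given distance between frames.
c = 3e8  # speed of light, m/s

def fps_for_step(step_m: float) -> float:
    """fps needed for light to move `step_m` metres per frame."""
    return c / step_m

print(fps_for_step(0.01))    # 1 cm per frame    -> ~3e10 fps
print(fps_for_step(3e-5))    # 0.03 mm per frame -> ~1e13 fps, i.e. 10 Tfps
```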
 Light travels roughly 300 billion m/s, so at 10 trillion fps, light moves about 3cm.
 Joking? Wikipedia: 299,792,458 metres per second
 Wouldn't it be fantastic if it traveled that fast though! The benefit to computing would be enormous.
 Wonder what the universe would look like, then, through our telescopes.
 This phys.org "article" is simply a copy of the INRS press release you can find here: http://www.inrs.ca/english/actualites/worlds-fastest-camera-...

 Actual paper: https://www.nature.com/articles/s41377-018-0044-7

 Website for the INRS lab's project on this: http://coilab.caltech.edu/research_6.html (includes PDF of the above paper and others)

 As almost always, the phys.org "article" adds nothing of value to the press release it reproduces.
 Thanks! We've updated the link from https://m.phys.org/news/2018-10-world-fastest-camera-trillio....
 Thanks for the links! I see your point, but lots of people wouldn’t know about this if it was just a press release. I know I don’t have enough time to check for them.
 Yes, of course. The person posting the link, however, should do even the tiniest bit of legwork.
 It doesn't really capture 10 trillion frames per second. The laser pulses at a regular interval, and the combination of the regular pulsing + multiple pictures taken at different points along those pulses makes for 10 trillion distinct frames captured within the timeframe of 1 second. There is no actual video captured.I get that being a tech journalist is difficult, you have to juggle between the tech and the layman. But after writing a headline, read it back to yourself and ask: will this put the wrong idea in people's minds? If the answer is yes, rewrite it... even if it sounds less cool.
 I don't quite understand how your original point refutes the headline. They are saying that it captures frames at a rate that, measured in seconds, would be ten trillion frames per second.The headline seems entirely accurate.
 It does not capture events as they occur. You can't do a 10Tfps recording of a lightning strike with it, for example, because every lightning strike is different. It only allows you to synthesize an averaged video of repeatable events.
 Did you read TFA? It seems that you are describing prior techniques for ultrafast image capture, whereas this technique is for capturing single, non-repeatable events.> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones. [...] The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.
 How long can they maintain that frame rate for?
 From what I understand, it is a camera, so it works as long as you can record data fast enough. I didn't read about any limitation of the current setup across time (there is one across space though). So far, only 25 frames have been captured; it is fundamental research, not yet a product people can get value from.

 Note that what we want to observe with those cams are very short, transient phenomena. When I was doing my internship at a particle accelerator called GANIL, we were only recording 0.5 second, which already represented close to 1TB worth of raw data. It takes months to interpret and analyze results.

 EDIT: typo
 That's incredible. How long did it take to write 0.5s of data to disk? I'm guessing there's no way to sustain this as you'd be so far behind after only a single second. I'm pretty sure we can still only store a few gigs per second. Please correct me if I'm wrong. Very interesting though!
 The best way to think of this is that it might take 100 seconds to 'record' those 10 trillion frames that occur in 1 second.

 That doesn't seem to make sense, but imagine this. You want to shoot 100 frames of the first millisecond of an airsoft pellet leaving a gun, but you have a camera that only shoots around 2 frames per second.

 Your airsoft gun shoots 1 ball exactly (1ns accuracy) every second, with exactly the same velocity and direction.

 You have your camera that only captures 2 frames every second, but this camera has an insane shutter speed, 1 microsecond, and has a shutter that you can time to the gun exactly. You can also delay the release of the shutter in 1 microsecond increments.

 So, you start by taking 1 picture, 10 microseconds after you shoot your pellet. Then on the next shot, 20 microseconds after; you do this 100 times. You stitch this all together, and you have a video in super slow motion of an airsoft pellet leaving a gun. It just happens to be 100 different airsoft pellets.
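 A toy simulation of that scheme (often called equivalent-time sampling); the pellet speed here is a made-up number:

```python
# Equivalent-time sampling: rebuild one fast event from many identical
# repetitions, stepping the shutter delay a little further each time.
def pellet_position(t_us: float) -> float:
    """Toy model: pellet position in mm, t_us microseconds after firing (90 m/s)."""
    return 90.0 * t_us / 1000.0   # 90 m/s = 0.09 mm per microsecond

delays_us = [10 * k for k in range(1, 101)]        # 10 us, 20 us, ... 1000 us
frames = [pellet_position(d) for d in delays_us]   # one frame per repetition

# 100 stitched frames covering the first millisecond, from 100 different pellets.
print(len(frames), frames[0], frames[-1])
```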
 I agree that the article isn't very clear on this, but I believe you're describing the previous work.> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones.The new innovation here actually records the frames right after each other of one single event:> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse’s shape, intensity, and angle of inclination.
 I'm more curious about how they can store it. even if each frame could somehow be encoded as a single byte, you're looking at ~10TB/s, which afaik exceeds even the bandwidth between L1 cache and the execution core of a modern cpu.
 It may take 100s of seconds to capture the data.
 The article mentions 25 frames at 400fs intervals...
 How is that relevant?
 Just trying to understand.
 Sorry, I thought you were challenging the notion that it's 10 trillion frames per second because it's not a full second (someone else did that elsethread)Apologies for my tone
 The technique of making "ultrafast" movies of repeating phenomena by imaging "one pixel" at a time and then assembling the raster-scan of all the pixels has been used for a long time. The technique described in the paper is different from that...

 > Thus far, established ultrafast imaging techniques either struggle to reach the desired exposure time or require repeatable measurements. We have developed single-shot 10-trillion-frame-per-second compressed ultrafast photography (T-CUP), which passively captures dynamic events with 100-fs frame intervals in a single camera exposure.
 I feel like many readers of news carry this misunderstanding, so just to clarify: journalists very often do not get to write their own headlines. Many times they aren't even consulted on the headline. They may be just as surprised and dismayed as a reader/critic.It is frequently editors who write the headline, and it might even be editors who weren't involved in the reporting. Editors are always writing headlines with a balance of different motives and goals. Strict accuracy isn't always on the top of that list.
 In this case, the ambiguous statement is repeated 3 times in the article.
 That is not the case, from the article that you criticized:> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.Obviously they're only recording extremely small timeframes with this setup, but it is indeed real time.
 Note that a rate such as "x frames per second" doesn't mean it needs to be able to operate for the full second. That would be like saying you can sprint at "12 miles per hour" means you can run all of 12 miles in an hour.
 Watching a 1 second video shot at 10 trillion fps would make for a very boring lifetime or 15. A 240Hz display would take ~1300 years to display all 10,000,000,000,000 frames.
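 The arithmetic behind that estimate:

```python
# Playback time for 1 s of footage shot at 10 Tfps on a 240 Hz display.
frames = 10e12                 # frames captured in one second
display_hz = 240               # playback rate

seconds = frames / display_hz
years = seconds / (365.25 * 24 * 3600)
print(round(years))            # ~1320 years
```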
 "20 mph" (slower than the fastest 100m sprint, but faster than the fastest half marathon) would be more clear of an example.
 Some people can...
 That's irrelevant to the point.

 "Statement A does not imply statement B" is not the same as "statement A implies that statement B is false".

 Just because the phrase "X can run at 12 miles per hour" does not imply that "X can run 12 miles in an hour", it doesn't mean ummonk was claiming that no one can run 12 miles in an hour.
 "The first time it was used ... the process was recorded in 25 frames taken at an interval of 400 femtoseconds"

 1 / (400 femtoseconds) = 2 500 000 000 000 hertz

 So it's definitely in the "trillion fps" range, for a very short period of time, with multiple frames.
 What is confusing is why they call it 10Tfps when it appears to be (on average 400/25 =) 16fs per frame, or closer to 60Tfps. It's almost like they are underselling it?
 They mentioned that in some of the use cases, the thing they're imaging is so fragile that it can only tolerate one laser pulse. So I don't think you're correct.
 There's even an easy analogy for this! Putting strobe lights on fast, repetitively moving machines allows you to see their motion slowed down.EDIT: I'm responding here to the parent poster's claim, not to the original article. For more information on the technique used in the article, see this other article[0] about "CUP" vs this article about "T-CUP". In CUP they use a random 2D sampling of a scene projected onto a streak camera.
 That's not what they're doing. They're seeing the actual motion of something that doesn't necessarily move repetitively.
 Agreed. We work with traditional high speed cameras (up to 1Mfps) and I thought, wow, someone is going several orders of magnitude faster??Turns out the research is super cool and the paper is well written and interesting but it's not what I thought.
 It would be cool to record single photon splittings along some path, as they happen in real time [0].
