Just getting started... o_O
More info: http://web.media.mit.edu/~raskar/cornar/
There are a few videos around of Raskar demonstrating this.
Let's say you have a train car moving at a constant velocity, v. A switch is flipped and turns on a light at the exact center of the car. Which wall does the light hit first?
Seen from on the train: it might as well be a stationary room. The light hits both walls at the same time.
Seen from off the train: the speed of light is constant, so the light can't travel at v+c just because the train is moving. It hits the back wall, which is traveling towards the light at v, first.
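A quick numerical sanity check of that thought experiment, as seen from the ground frame. The car length and speed here are made-up illustration values, not from the comment, and length contraction is ignored since only the qualitative ordering matters:

```python
# Train thought experiment, ground frame: light still travels at c both ways,
# but the back wall approaches the flash while the front wall recedes.
c = 3.0e8          # speed of light, m/s
v = 0.5 * c        # assumed train speed (illustration only)
half = 10.0        # half the car length, m (illustration only)

t_back = half / (c + v)    # time for light to meet the approaching back wall
t_front = half / (c - v)   # time for light to catch the receding front wall

assert t_back < t_front    # the back wall is hit first, as the comment says
```

With v = 0.5c the front-wall hit takes three times as long as the back-wall hit, since (c+v)/(c-v) = 3.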
This is a famous thought experiment and in practice would be difficult to perform, even with such a camera. I mention it because it illustrates that we can actually observe relativistic effects. Things behave very differently from what we're used to at small scales or high speeds. Assuming you had a really high resolution, something like length contraction could be observed, and measured, in normal conditions.
So there are actually a lot of weird things going on that we wouldn't be aware of. These ultra high speed cameras allow us to observe some of these strange phenomena.
Gave me shivers.
Observation of laser pulse propagation in optical fibers with a SPAD camera
http://i.giphy.com/3og0IQ5k6PP9KspN9m.gif (gif version of one of the mov files)
Also, the music is pretty cringey, so I’d just leave it muted...
My guess is also that, if the laser were really focused, and if the bottle weren't in the way refracting some light back to the camera, we wouldn't be able to see anything at all.
It also gave me this odd sense of blindness, in that I cannot actually see what's in front of me, only what literally interacts with my retina, almost like it's made of tastebuds for light. Still weirds me out when I think about it.
The approach here would be to take a photo of a car, then drive the car past again from the start and take another photo slightly later, then again and again, driving it past hundreds of times.
They split this up even more, but I think the basic analogy holds to show the key difference.
Edit: oops I misread your comment
It literally freezes time? Come on..
Femto = 10^-15,
Planck ≈ 10^-44 (the Planck time is about 5.4 × 10^-44 s)
Let's say you have a very bright light source, and that light source results in shining a full 100 Watts of light into the sensor of your camera (I'm using this not as a typical example but because it will make it easy to scale the answer). Photons of visible light have an energy of at minimum 1.5 electron-Volts (800nm red light), which means that 100 Watts of light represents 4.2e20 photons per second.
And that means that with only 100 Watts of light reaching your sensor you cannot attain an fps higher than 4.2e20, because at that speed you'd only get on average around one photon per frame. More realistically you need tens of thousands to millions of photons per frame to have some meaningful level of dynamic range and spatial resolution, which limits you to around a quadrillion fps per 100 Watts of light falling on the sensor.
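The photon-budget arithmetic above can be sketched like this (the 1e5 photons-per-frame figure stands in for the comment's rough "tens of thousands to millions"):

```python
# Photon budget: how many photons per second does 100 W of red light carry,
# and what frame rate does that support?
power_w = 100.0            # light power reaching the sensor
ev_per_photon = 1.5        # ~800 nm red light, lowest-energy visible photons
joules_per_ev = 1.602e-19  # conversion factor

photons_per_second = power_w / (ev_per_photon * joules_per_ev)  # ~4.2e20

# If each frame needs on the order of 1e5 photons for usable dynamic range,
# the photon supply itself caps the achievable frame rate:
photons_per_frame = 1e5
max_fps = photons_per_second / photons_per_frame  # ~4.2e15, a few quadrillion
```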
Though once you get into that range you also have signalling problems; we don't really have electronics that work at those speeds.
The signal is extremely weak; the conventional "shortest pulse" is around 5 femtoseconds.
First, they are producing too many collisions (probably many quadrillions of collisions, I can't find the exact number). Most of them are boring, so they have a lot of filters to select and save only the slightly interesting collisions, because otherwise it would be impossible to save the data. IIRC they only got a few thousand Higgs bosons; the signal-to-noise ratio is almost 0.
Second, to film something using light, the object must be bigger than the wavelength of the light. There are some tricks to push this a little, but you can't film elementary particles directly. The old method was to let the particles create small bubbles or droplets inside a container, and then photograph the chain of dots with visible light. In the new equipment the process is more complicated, but the interesting part of the collision is too fast and too small. They only measure the particles that fly away after the interesting part has happened; they measure the leftovers and try to reconstruct the actual collision.
Third, the interesting collisions have strong quantum effects, and the quantum effects "disappear" when you use light that is strong enough to get an accurate position of the particles. You can imagine that if the light is strong enough, then the light itself will bounce against the particles and change their direction and the collision. It sounds somewhat like magic, but it's possible to state this correctly and precisely using a lot of math, and it is one of the bases of quantum mechanics. https://en.wikipedia.org/wiki/Uncertainty_principle
Can you explain how you got this number for the red wavelength?
Also can we fundamentally "see" a photon?
I was using 635nm as the wavelength (basically a red laser). That gives you:
3 x 10^8 / (635 x 10^-9) = 4.7 x 10^14
Which should be about 470 terahertz (4.72 x 10^14) give or take a bit. To sample that "perfectly" you'd need to sample at twice the frequency or about 940 terahertz or (9.4 x 10^14).
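The same calculation in code, with 635 nm written out as 635 × 10^-9 m:

```python
# Frequency of 635 nm red laser light, and the Nyquist rate needed to
# sample that waveform "perfectly".
c = 3.0e8              # speed of light, m/s
wavelength = 635e-9    # 635 nm in metres

freq = c / wavelength  # ~4.7e14 Hz, i.e. about 470 THz
nyquist = 2 * freq     # ~9.4e14 Hz, twice the frequency
```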
As to your second question, I know that there are single photon detectors. Past that, you've got me. I don't know if that can be classified as "seeing" or not. As to size, there's https://briankoberlein.com/2015/04/14/thats-about-the-size-o... but that might or might not make sense.
That's about the limit of what I'm willing/able to say on the subject.
10 trillion frames per second.
Light would have travelled about 30 micrometres (0.03 mm) each frame.
fps = c / Δd = (3 x 10^8 m/s) / (0.01 m) = 3 x 10^10 frames/s
For the camera to pick up 1 cm of difference between frames, you have to capture 3 x 10^10 frames per second. You can work out the limit for any Δd you want just by dividing the speed of light by it.
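Sanity-checking the relationship between frame rate and how far light moves per frame:

```python
# Frame rate <-> distance light travels per frame.
c = 3.0e8  # speed of light, m/s

def metres_per_frame(fps):
    """How far light moves between consecutive frames at a given frame rate."""
    return c / fps

def fps_for_resolution(delta_m):
    """Frame rate needed for light to advance delta_m metres per frame."""
    return c / delta_m

d = metres_per_frame(10e12)   # at 10 trillion fps: ~3e-5 m, i.e. ~30 micrometres
f = fps_for_resolution(0.01)  # to see light move 1 cm per frame: ~3e10 fps
```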
Actual paper: https://www.nature.com/articles/s41377-018-0044-7
Website for the INRS lab's project on this: http://coilab.caltech.edu/research_6.html (includes PDF of the above paper and others)
As almost always, the phys.org "article" adds nothing of value to the press release it reproduces.
I get that being a tech journalist is difficult, you have to juggle between the tech and the layman. But after writing a headline, read it back to yourself and ask: will this put the wrong idea in people's minds? If the answer is yes, rewrite it... even if it sounds less cool.
The headline seems entirely accurate.
> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones. [...] The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.
Note that what we want to observe with those cams are very short, transient phenomena. When I was doing my internship at a particle accelerator called GANIL, we were recording only 0.5 seconds, which already represented close to 1 TB of raw data. It takes months to interpret and analyze the results.
That doesn't seem to make sense, but imagine this. You want to shoot 100 frames of the first millisecond of an airsoft pellet leaving a gun, but you have a camera that only shoots around 2 frames per second.
Your airsoft gun shoots 1 pellet exactly (1 ns accuracy) every second, at exactly the same velocity and direction.
You have your camera, which only captures 2 frames every second, but it has an insane shutter speed, 1 microsecond, and a shutter that you can time to the gun exactly.
You can also delay the release of the shutter by 1 microsecond increments.
So, you start by taking 1 picture 10 microseconds after you fire the pellet. Then on the next shot you wait 20 microseconds, and so on; you do this 100 times. You stitch it all together, and you have a super-slow-motion video of an airsoft pellet leaving a gun. It just happens to be 100 different airsoft pellets.
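A toy simulation of this repeated-shot trick (often called equivalent-time sampling); all the numbers here are invented for illustration:

```python
# Equivalent-time sampling: one frame per (repeatable) event, with the
# shutter delay stepped a little further each time.
def pellet_position(t_us, speed=0.1):
    """Pellet position in mm, t_us microseconds after firing (assumed constant speed)."""
    return speed * t_us

frames = []
for shot in range(100):          # 100 identical shots, one real second apart
    delay_us = 10 + 10 * shot    # shutter delayed 10 us, 20 us, ... per shot
    frames.append(pellet_position(delay_us))

# Stitched together, `frames` traces the pellet's motion with 10 us spacing,
# as if filmed at 100,000 fps, even though the camera shot 2 fps.
```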
> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones.
The new innovation here actually records the frames right after each other of one single event:
> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse’s shape, intensity, and angle of inclination.
Apologies for my tone
The technique described in the paper is different from that....
> Thus far, established ultrafast imaging techniques either struggle to reach the desired exposure time or require repeatable measurements. We have developed single-shot 10-trillion-frame-per-second compressed ultrafast photography (T-CUP), which passively captures dynamic events with 100-fs frame intervals in a single camera exposure.
It is frequently editors who write the headline, and it might even be editors who weren't involved in the reporting. Editors are always writing headlines with a balance of different motives and goals. Strict accuracy isn't always on the top of that list.
> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.
Obviously they're only recording extremely small timeframes with this setup, but it is indeed real time.
"Statement A does not imply statement B" does not itself mean "statement A implies that statement B is false".
Just because the phrase "X can run at 12 miles per hour" does not imply that "X can run 12 miles in an hour", it does not mean ummonk was claiming that no one can run 12 miles in an hour.
1 / (400 femtoseconds) = 2 500 000 000 000 hertz
So definitely in the "trillion fps" for a very short period of time with multiple frames.
It's almost like they are underselling it?
EDIT: I'm responding here to the parent poster's claim, not to the original article. For more information on the technique used in the article, see this other article about "CUP" vs this article about "T-CUP". In CUP they use a random 2D sampling of a scene projected onto a streak camera.
Turns out the research is super cool and the paper is well written and interesting but it's not what I thought.