
Eulerian Video Magnification - smusamashah
https://people.csail.mit.edu/mrub/evm/
======
jphoward
So I've actually tried to implement Eulerian video magnification myself,
albeit using Python, because I believe when I first looked into this, ~5
years ago, the code was MATLAB only.

I was really excited about it, because, as a cardiologist, I immediately
thought we could use this to identify irregular heart beats in the patient
waiting room, for example. This was pre-Apple Watch etc. I was thinking people
could briefly sit in the chair in front of the camera if they are willing to
be recorded, as a quick screening whilst they wait for the doc.

Unfortunately, I found it incredibly difficult to reproduce their results,
even using similar data. I remember videoing my foot as I have a relatively
prominent posterior tibial pulse, which in good light I can actually make out
visually (but not in the video I took, as intended). The Eulerian
magnification didn't seem to do anything.

Has anyone else had better luck? I did wonder if maybe I had to use specific
video capture equipment, certain frame rates or filters, but if so that wasn't
clear from the documentation?

It's been 8 years since this paper came out, and yet I still haven't seen it
being used in the 'real world', so I am slightly suspicious that I am not
alone in my experiences.
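For anyone wanting to try reproducing it, the core colour-amplification idea fits in a few lines. Below is a minimal single-scale sketch in NumPy (my own illustration, not the authors' code: it skips the spatial Gaussian pyramid the paper uses, and the band edges and gain are just plausible defaults for a pulse signal):

```python
import numpy as np

def eulerian_color_magnify(frames, fps, lo=0.8, hi=3.0, alpha=50.0):
    """Amplify subtle colour changes in a video given as a (T, H, W) array.

    Single-scale sketch of the Eulerian approach: band-pass each pixel's
    time series in the lo..hi Hz band (roughly 48-180 bpm for a pulse),
    then add the amplified band back onto the original frames. The paper
    additionally works on a spatial pyramid, which this sketch omits.
    """
    T = frames.shape[0]
    flat = frames.reshape(T, -1)
    # Temporal band-pass via an FFT mask on each pixel's time series.
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    spectrum = np.fft.rfft(flat, axis=0)
    keep = (freqs >= lo) & (freqs <= hi)
    spectrum[~keep, :] = 0.0
    band = np.fft.irfft(spectrum, n=T, axis=0)
    # Amplify the filtered signal and add it back.
    return frames + alpha * band.reshape(frames.shape)
```

One practical gotcha this makes visible: the signal you want to amplify is tiny, so exact frame rate, enough frames for frequency resolution, and a band that actually brackets the pulse all matter a lot.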

~~~
somethingsome
A friend reimplemented it in Python and it worked pretty well for analyzing
cardiac-driven liver movements [1]!

I don't think he is willing to share the code, but my guess is that a Python
implementation should work just fine, with very nice results :) (I saw it
applied in his research videos, and the results were impressive). It took
him something like several months to get the code free of major bugs.

[1] Hahn, S., Absil, J., Debeir, O., & Metens, T. (2019). Assessment of
cardiac‐driven liver movements with filtered harmonic phase image
representation, optical flow quantification, and motion amplification.
Magnetic Resonance in Medicine, 81(4), 2788-2798.

~~~
hervature
Doesn't want to share the code because it works or because it doesn't work?
Disgrace to science.

~~~
somethingsome
You are free to contact him! I'm not sure he's actually unwilling to share.

Sharing code publicly in research depends on several factors: it may get
open sourced once the thesis is done; it may depend on who is paying for
the research (e.g. some industrial PhDs are not public); a long-term plan
to publish the code may exist, but there are only 24 hours in a day and a
whole lot of work to do; the author/university may want to build a
business after the thesis; and a lot of other parameters. It gets even
worse when the code is used in medical applications, as a validation
process with all the relevant authorities has to be done. This can take a
lot of time and freeze the code while waiting for certification. Also,
sharing code between researchers is very common; it doesn't mean the code
has to go public. In this case I think it is partly because the original
MATLAB code is patented, research is ongoing, and as research code it is
not necessarily ready for the public.

It doesn't mean he will never release it.

If you desperately need it, the MATLAB code is available and has the same
functionality; you are free to adapt it or use it directly (for research
purposes).

I must say I don't like that the original code is patented, but I see hundreds
of reasons why the authors patented it even if I don't particularly agree :)

------
slavik81
For the unfamiliar, 'Eulerian' and 'Lagrangian' are basically just names for
two different frames of reference.

Eulerian sampling is like dotting the countryside with weather stations. Your
measurement data shows the weather at each of those fixed points over time and
from that you can understand what was happening.

Lagrangian sampling is like sending up weather balloons. Your measurement data
shows the weather around the balloons as they float along with the wind. That
will give you all the same information, but from a different perspective.

The Eulerian perspective for weather makes it easier to understand what's
happening at a specific location, but harder to understand what's happening to
a specific particle of air. The Lagrangian perspective is the opposite.

There is an explanation of this in the paper, but IMO, it's a little more
difficult to follow than the classic weather analogy. Just wanted to demystify
the name a bit.
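The two sampling schemes are easy to see with a toy one-dimensional "weather" field (entirely my own illustration, not from the paper):

```python
import numpy as np

def field(x, t):
    # Toy scalar "weather" field: a warm bump drifting right at unit speed.
    return np.exp(-(x - t) ** 2)

t = np.linspace(0.0, 4.0, 41)

# Eulerian: a fixed weather station at x = 2 watches the bump pass by,
# so its record rises and falls, peaking at t = 2.
eulerian = field(2.0, t)

# Lagrangian: a balloon launched at x = 0 drifts with the wind
# (dx/dt = 1), riding along with the bump, so it records a constant.
balloon_x = 0.0 + 1.0 * t
lagrangian = field(balloon_x, t)
```

Same field, two very different time series, which is exactly the point: the video method watches fixed pixels (stations) rather than tracking features (balloons).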

~~~
bollu
Thanks, this is nice. I presume this comes from fluids? I found this wiki
link:
[https://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specif...](https://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specification_of_the_flow_field).

Is this it?

~~~
slavik81
Yes and yes.

------
CretinDesAlpes
For those interested in the pulse rate extraction, there is actually a branch
of biomedical engineering/computer science called "remote
photoplethysmography" (rPPG).

I worked on it a bit at some point; it does work pretty well assuming you
don't move that much and the light isn't too bad. As you can imagine, it's
more difficult to estimate on non-white faces, particularly very dark skin
tones. All the algorithms (at least up to 2016) differ in how they combine
the RGB signals into a pulse signal, each making different assumptions.

If you are interested in more, Philips in the Netherlands is particularly
active in this research domain, one of the main applications being
non-contact pulse estimation in hospitals, for example.
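The crudest rPPG baseline is just the mean green channel over a face region, band-limited to plausible heart rates. A toy sketch (the function name and parameters are mine, not from any particular paper; real methods such as CHROM or POS combine the RGB channels more cleverly to cancel motion and lighting):

```python
import numpy as np

def pulse_rate_bpm(roi_frames, fps, lo=0.7, hi=4.0):
    """Estimate pulse rate from a stack of face-ROI frames (T, H, W, 3).

    Green-channel baseline: average green over the ROI per frame, remove
    the mean, and take the dominant frequency in the plausible heart-rate
    band lo..hi Hz (42-240 bpm). Returns beats per minute.
    """
    T = roi_frames.shape[0]
    green = roi_frames[..., 1].reshape(T, -1).mean(axis=1)
    green = green - green.mean()
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    power = np.abs(np.fft.rfft(green)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(power[band])]
    return 60.0 * peak
```

This works on clean, still, well-lit clips and fails exactly where the parent describes: motion, poor light, and lower skin reflectance contrast all bury the tiny green-channel ripple in noise.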

------
scg
Related: "Learning-based Video Motion Magnification" in PyTorch.
([https://people.csail.mit.edu/tiam/deepmag/](https://people.csail.mit.edu/tiam/deepmag/))

"Motion magnification" means you pick up and amplify small motions so they're
easier to see. It's like a microscope for motion.

Example videos reproducing these results:

[https://twitter.com/cgst/status/1210691577078636544](https://twitter.com/cgst/status/1210691577078636544)

~~~
drran
> It's like a microscope for motion.

Temposcope

------
tmabraham
William Freeman's group does lots of cool things in computational imaging and
sensing, as well as ML. Fun fact: his group contributed to producing the
first image of a black hole. Dr. Katie Bouman was his PhD student.

The whole field of remote photoplethysmography (rPPG) is quite interesting. I
have implemented a couple algorithms, but they are quite "brittle" if you
will, and only work well with certain videos. Many recent papers have
introduced deep learning techniques. Hopefully these techniques are more
robust to movement, skin color, lighting, and other natural conditions (unlike
the usual computer vision/signal processing algorithms previously used).

------
smusamashah
Someone implemented this in JavaScript to work with your webcam

[https://github.com/antimatter15/evm](https://github.com/antimatter15/evm)

Demo:
[https://raw.githack.com/antimatter15/evm/master/color2.html](https://raw.githack.com/antimatter15/evm/master/color2.html)

------
djmips
I found this other video on the topic that I wanted to share.
[https://www.youtube.com/watch?v=2XBQ_1t8SiQ](https://www.youtube.com/watch?v=2XBQ_1t8SiQ)

and Steve Mould's layman's video
[https://youtu.be/rEoc0YoALt0](https://youtu.be/rEoc0YoALt0)

------
darkstarsys
This is from 2012.

~~~
yorwba
Discussed at the time:
[https://news.ycombinator.com/item?id=4062216](https://news.ycombinator.com/item?id=4062216)

------
jmpman
I’d read that a blow to the chest during a specific phase of the heart
waveform can stop the heart. In theory, this video processing algorithm,
coupled to a computer-fired beanbag-type firearm, could cause heart
stoppage that would look like a heart attack.

------
tim-fan
If you'd like an easy way to try this sort of thing on your own videos, they
have a demo web interface here:
[https://lambda.qrilab.com/site/](https://lambda.qrilab.com/site/)

I've tried it on a couple videos with fairly interesting results

------
phenkdo
Are there simpler ways of magnifying video? For example, converting to HSV
and amplifying the inter-frame differences in the hue channel?

Sorry if that seems a very simplistic approach, but I'm curious what other
techniques have been tried outside of Eulerian/Lagrangian magnification.
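The naive version of that idea can be sketched directly: amplify raw inter-frame differences in whatever colour space you like (pass HSV frames to boost hue changes). A hedged illustration, not the paper's method; the catch is that with no temporal band-pass it boosts sensor noise and large motions just as much as the subtle signal you care about:

```python
import numpy as np

def amplify_frame_diffs(frames, alpha=10.0):
    """Boost raw inter-frame differences in a (T, ...) video array.

    Naive change amplification: new = frames + alpha * diff, where diff
    is the per-frame difference (first frame repeated so shapes match).
    Unlike Eulerian magnification there is no frequency selectivity, so
    noise and gross motion are amplified along with the signal.
    """
    diffs = np.diff(frames, axis=0, prepend=frames[:1])
    return frames + alpha * diffs
```

In that light, the Eulerian method is essentially this plus two crucial refinements: a temporal band-pass so only the frequency band of interest is boosted, and a spatial pyramid so the amplification respects spatial scale.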

------
grugagag
I could not watch the video, but found a better, more detailed explanation
on YouTube:
[https://www.youtube.com/watch?v=DgK1Zl9asIQ](https://www.youtube.com/watch?v=DgK1Zl9asIQ)

------
thinkski
Has anyone tried to implement this as a gstreamer plugin?

------
sorokod
Can techniques like this one be used to spot "deepfake" videos?

------
blobbers
I'm shocked this hasn't surfaced as a lie detector app on the iPhone. I get
that it doesn't work very well, but it seems like it would sell. Maybe offer
some calibration steps.

------
nextaccountic
Is there an implementation on github of this?

------
anukin
Quite interesting

