
Show HN: See what makes presidential candidates' hearts beat faster (Live) - ororm
https://www.hearty.ai
======
tchadwick
I assume they are using the theory/algorithm from the MIT video magnification
work:
[http://people.csail.mit.edu/mrub/vidmag/](http://people.csail.mit.edu/mrub/vidmag/)

~~~
ororm
For this purpose, MIT's Eulerian Video Magnification is next to impossible to
use when you have both color changes and large motions (it is better suited to
magnifying small motions, or subtle color changes in unmoving patches). NASA
Glenn's thearn has developed another approach that has the same limitation
([https://github.com/thearn/webcam-pulse-detector](https://github.com/thearn/webcam-pulse-detector)).
We developed our own solution, involving (among other things) pixel-accurate
facial feature detection and filtering methods not subject to FFT's regularity
assumption.
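
For context, a minimal sketch of the FFT-based baseline being contrasted here
(roughly what thearn's detector does, not hearty.ai's actual method): average a
skin patch's brightness per frame, then take the dominant spectral peak in the
plausible heart-rate band. The function name and parameters are illustrative.

```python
import numpy as np

def estimate_bpm(samples, fps, lo_bpm=40, hi_bpm=180):
    """Estimate pulse from per-frame mean skin brightness via the dominant
    FFT peak. This is the baseline that assumes a regularly sampled,
    roughly stationary signal -- exactly the assumption the comment above
    says breaks down under large motions."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                               # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))              # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)   # bin frequencies in Hz
    band = (freqs >= lo_bpm / 60) & (freqs <= hi_bpm / 60)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60                            # beats per minute

# Synthetic 72 bpm pulse sampled at 30 fps with a little noise:
fps, bpm = 30, 72
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / fps)
signal = np.sin(2 * np.pi * (bpm / 60) * t) + 0.1 * rng.standard_normal(len(t))
print(round(estimate_bpm(signal, fps)))  # recovers roughly 72
```

With only ten seconds of video the FFT bins are 6 bpm apart, which is one
reason a real system needs something better than a raw spectral peak.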

------
ts314
There's now a blog post with some analysis on this:
[https://hearty785.wordpress.com/](https://hearty785.wordpress.com/)

------
fizzbatter
This is awesome. I fear how accurate it might be, though. Even if the
technology is accurate (I believe it is), this, like most things with this
presidential campaign, seems prone to fudging.

------
xchip
How do we know whether this is true or not?

~~~
ororm
You can try the algorithm yourself if you scroll down, then double-check with
your webcam and your own pulse ;-). Note that motion compensation is disabled
for this demo, so it only works well if you do not move for a few seconds (the
reason being that the pixel-accurate facial feature and skin detection
algorithms would kill the server if 2-3 people tried at the same time).

