I bet he isn't using the GPU though.
Also, a crappy webcam actually makes things computationally easier, because there's less data to deal with.
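To put a rough number on the "less data" point, here's a back-of-the-envelope comparison; the resolutions and frame rate are illustrative assumptions, not figures from this thread:

```python
# Rough per-frame data comparison: a cheap low-res webcam vs a 1080p camera.
# Resolutions/fps below are assumed typical values, just for illustration.
def pixels_per_second(width, height, fps):
    return width * height * fps

webcam = pixels_per_second(640, 480, 30)    # assumed cheap webcam
hd_cam = pixels_per_second(1920, 1080, 30)  # assumed 1080p camera

print(hd_cam / webcam)  # the 1080p stream carries 6.75x more pixels
```

So even before any per-pixel processing cost, the higher-resolution stream is several times more data per second.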
Perhaps, but lens distortion, motion blur, and a rolling shutter don't make things easier.
Anyway, the inventor himself claims a phone implementation is feasible.
Also, I completely agree that camera blur would worsen the algorithm's accuracy; I was just trying to point out that it would run faster on a lower-quality camera (with the caveat that it might not work nearly as well).