According to his website, he's running it on an "Intel Core 2 Duo CPU 2.4 GHz, 2 GB RAM".

I bet he isn't using the GPU though.

Also, a crappy webcam actually makes things computationally easier, because there's less data to deal with.
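
For rough intuition: most dense vision pipelines do at least a fixed amount of work per pixel, so per-frame cost scales with resolution. A minimal back-of-envelope sketch in C (the resolutions and the per-pixel op count are illustrative assumptions, not measurements):

    /* Per-frame workload scales with pixel count. The per-pixel op
       count here is an assumed figure for illustration only. */
    #include <stdio.h>

    int main(void) {
        const long ops_per_pixel = 50;           /* assumed per-pixel cost */
        const long vga = 640L * 480L;            /* crappy webcam frame    */
        const long hd  = 1280L * 720L;           /* better camera frame    */
        printf("VGA:  %ld ops/frame\n", vga * ops_per_pixel);
        printf("720p: %ld ops/frame (%.1fx the work)\n",
               hd * ops_per_pixel, (double)hd / (double)vga);
        return 0;
    }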

Perhaps, but lens distortion, motion blur and a rolling shutter don't make things easier.

Anyway, the inventor himself claims a phone implementation is feasible.

Yep, I'm sure he isn't. I don't doubt that you could optimize this algorithm to run on a phone, but that takes an insane amount of effort and expertise, and is a feat in and of itself. The Word Lens guys, for example, spent about a year porting an optimized C implementation from i386 to ARM for the iPhone. They even used the GPU initially, but decided that the overhead of shuffling data between buffers wasn't worth the gain from the iPhone's measly GPU (which I think only had two fragment shaders at the time).
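
The trade-off they hit is easy to model: offloading only wins if the compute time saved exceeds the cost of shuffling the frame over the bus in both directions. A minimal sketch in C with made-up numbers (the bandwidth, frame size, CPU time, and speedup are all assumptions for illustration, not measurements of that hardware):

    /* Break-even model for offloading a per-frame kernel to a GPU:
       GPU path = upload + GPU compute + readback, vs. CPU compute alone.
       Every constant below is an illustrative assumption. */
    #include <stdio.h>

    int main(void) {
        double frame_mb = 0.3;    /* 640x480 grayscale frame, ~0.3 MB */
        double bus_mb_s = 100.0;  /* assumed CPU<->GPU bandwidth      */
        double cpu_ms   = 10.0;   /* assumed CPU time for the kernel  */
        double speedup  = 1.5;    /* assumed GPU compute speedup      */

        double transfer_ms = 2.0 * (frame_mb / bus_mb_s) * 1000.0;
        double gpu_ms      = transfer_ms + cpu_ms / speedup;

        printf("CPU path: %.1f ms/frame\n", cpu_ms);
        printf("GPU path: %.1f ms/frame (%.1f ms of it is transfer)\n",
               gpu_ms, transfer_ms);
        printf("GPU path wins: %s\n", gpu_ms < cpu_ms ? "yes" : "no");
        return 0;
    }

With a modest speedup and a slow bus, the round-trip transfer alone can eat the gain, which matches their conclusion.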

Also, I completely agree that camera blur would worsen the accuracy of said algorithm; I was only trying to point out that it would run faster on a lower-quality camera (with the caveat that it might not work nearly as well).