
It's surprising how much attention this has had over the last few days, without any discussion of the downside: it's slow.

It's true that it is fast for the power it consumes, but it is way (way!) too slow to use for any form of training, which seems to be what many people think they can use it for.

According to Anandtech[1], it will do 10 GoogLeNet inferences per second. By very rough comparison, Inception in TensorFlow on a Raspberry Pi does about 2 inferences per second[2], and I think I saw AlexNet on an i7 doing about 60/second. Any desktop GPU will do orders of magnitude more.

[1] http://www.anandtech.com/show/11649/intel-launches-movidius-...

[2] https://github.com/samjabrahams/tensorflow-on-raspberry-pi/t... ("Running the TensorFlow benchmark tool shows sub-second (~500-600ms) average run times for the Raspberry Pi")
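
For anyone wanting to reproduce that kind of number on their own hardware: a throughput figure like "2 inferences per second" is just repeated timed runs of a frozen graph. Here's a minimal sketch using the TF 1.x Python API; the model path and tensor names are placeholders for whatever Inception export you have, not anything from the linked repo:

  import time
  import numpy as np
  import tensorflow as tf

  # Load a frozen Inception graph ('inception.pb' is a placeholder path).
  graph_def = tf.GraphDef()
  with open('inception.pb', 'rb') as f:
      graph_def.ParseFromString(f.read())
  tf.import_graph_def(graph_def, name='')

  # Dummy input; a real benchmark should feed real images.
  image = np.random.rand(1, 299, 299, 3).astype(np.float32)

  with tf.Session() as sess:
      out = sess.graph.get_tensor_by_name('softmax:0')  # name depends on the export
      sess.run(out, {'Mul:0': image})                   # warm-up, excluded from timing
      runs, start = 50, time.time()
      for _ in range(runs):
          sess.run(out, {'Mul:0': image})
      print('%.1f inferences/sec' % (runs / (time.time() - start)))

The warm-up run matters: the first pass includes graph setup and is much slower than steady state, so including it skews the average.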




But all those other solutions will consume orders of magnitude more power, especially the GPU. It's actually impressive what can be achieved on 1 W of power.


Yep. I think the niche here is battery-powered AI. Train on the desktop, deploy to the field on a USB stick.
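
For the curious, the deploy half of that workflow looks roughly like this with the Movidius NCSDK's Python API (mvnc). This is a sketch from the v1 SDK examples, so treat exact names as approximate: you first compile the trained model into a binary graph file with the SDK's mvNCCompile tool, then:

  import numpy as np
  from mvnc import mvncapi as mvnc

  # Find and open an attached Neural Compute Stick.
  devices = mvnc.EnumerateDevices()
  device = mvnc.Device(devices[0])
  device.OpenDevice()

  # 'graph' is the binary produced by mvNCCompile from your trained model.
  with open('graph', 'rb') as f:
      graph = device.AllocateGraph(f.read())

  # The stick takes fp16 input; shape depends on the network you compiled.
  img = np.random.rand(224, 224, 3).astype(np.float16)
  graph.LoadTensor(img, 'user object')
  output, _ = graph.GetResult()  # blocks until the inference finishes

  graph.DeallocateGraph()
  device.CloseDevice()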


The Raspberry Pi 3 uses around 4 W, so that is a lot less than an order of magnitude more. And you need a host machine to use this stick anyway, so its 1 W isn't the whole system's draw.

Yes, the low power draw is great, though.
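
Back-of-the-envelope with the numbers upthread (and treating the GoogLeNet and Inception-on-Pi benchmarks as only roughly comparable):

  NCS:  10 inferences/s at 1 W  = 10  inferences/J
  Pi 3:  2 inferences/s at 4 W  = 0.5 inferences/J

So the stick does about 20x more work per joule, which is what matters on a battery even if the raw wattage gap is small.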





