It's true that it's fast for the power it consumes, but it's far too slow for any form of training, which seems to be what many people think they can use it for.
According to AnandTech, it will do about 10 GoogLeNet inferences per second. By very rough comparison, Inception in TensorFlow on a Raspberry Pi does about 2 inferences per second, and I think I saw AlexNet on an i7 doing around 60 per second. Any desktop GPU will do orders of magnitude more.
 https://github.com/samjabrahams/tensorflow-on-raspberry-pi/t... ("Running the TensorFlow benchmark tool shows sub-second (~500-600ms) average run times for the Raspberry Pi")
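Throughput figures like these are easy to sanity-check yourself. A minimal sketch of how such a number is typically measured (the matrix multiply here is just a stand-in for a real model's forward pass, not any of the benchmarks cited above):

```python
import time
import numpy as np

def measure_throughput(infer, n_runs=20):
    """Time n_runs calls to infer() and return inferences per second."""
    infer()  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Stand-in "inference": a fixed-size matrix multiply.
a = np.random.rand(256, 256)
rate = measure_throughput(lambda: a @ a)
print(f"{rate:.1f} inferences/sec")
```

The warm-up call matters on real hardware: first-run costs (model load, cache misses, JIT) can easily halve a naive throughput number.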
Yes, the low power draw is great, though.