To access the device, you need to install an SDK which contains Python scripts that let you manipulate it (so it seems like it's a driver bundled with utility programs). Source: https://developer.movidius.com/getting-started
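Roughly, the Python side of the SDK looks something like this -- a sketch based on the mvnc binding that ships with the NCSDK, so exact function names may differ between SDK versions, and the 'graph' file is assumed to be a network already compiled for the stick:

    import numpy as np
    from mvnc import mvncapi as mvnc  # Python binding shipped with the SDK

    # Find and open the first attached stick
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a network that was compiled for the stick offline
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    # One inference: the stick expects fp16 input
    image = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder input
    graph.LoadTensor(image, 'user object')
    output, _ = graph.GetResult()

    graph.DeallocateGraph()
    device.CloseDevice()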
This is sure to save me money on my power bill after marathon sessions of "Not Hotdog."
Interesting that you could use this to accelerate systems like the Raspberry Pi. The Jetson is a pain in the backside to deploy (at a production level) because you need to make your own breakout board, or buy an overpriced carrier.
EDIT: I use the Pi as an example because it's readily available and cheap. There are lots of other embedded platforms, but the Pi wins on ecosystem.
12 years ago you could have gotten a stack of 5-8 7800 GTX cards and had 1.5TFLOPS of single precision. 11 years ago you could have had a stack of 5 cards with unified shaders. It's not fair to compare against the significantly more complicated route of getting 100 CPU cores working together with only 1-4 per chip.
EDIT: Looks like the explanation is in a linked article: https://techcrunch.com/2016/04/28/plug-the-fathom-neural-com...
> How the Fathom Neural Compute Stick figures into this is that the algorithmic computing power of the learning system can be optimized and output (using the Fathom software framework) into a binary that can run on the Fathom stick itself. In this way, any device that the Fathom is plugged into can have instant access to complete neural network because a version of that network is running locally on the Fathom and thus the device.
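That compile-to-binary step corresponds to the SDK's offline compiler. As a heavily hedged sketch (the mvNCCompile tool and its flags are from the later NCSDK rather than the original Fathom framework, the model filename is a placeholder, and the accepted model formats vary by SDK version):

    import subprocess

    # Compile a trained model into the binary 'graph' blob the stick executes.
    # Flags and supported input formats differ between SDK releases.
    subprocess.run([
        'mvNCCompile', 'trained_model.pb',  # placeholder trained-network file
        '-in', 'input', '-on', 'output',    # names of the input/output nodes
        '-o', 'graph',                      # blob later passed to AllocateGraph
    ], check=True)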
This reminds me of physics co-processors. Anyone remember AGEIA? They were touting "physics cards" similar to video cards. Had they not been acquired by Nvidia, they would've been steamrolled by consumer GPUs / CPUs, since they were essentially designing their own specialized chips.
The $79 price point is attractive. I wonder how much power can be packed into such a small form factor? It's surprising that a lot of power isn't necessary for deep learning applications.
It runs pretrained NNs, which is the cheap part. So this is a chip optimized to perform floating-point multiplication, and that's it.
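Concretely, a forward pass through an already-trained layer is just multiply-accumulate plus a cheap nonlinearity -- a minimal NumPy sketch, with made-up shapes and values:

    import numpy as np

    rng = np.random.default_rng(0)
    # Weights and biases come from training done elsewhere (desktop GPU, cloud);
    # the stick only ever has to evaluate the already-trained function.
    W = rng.standard_normal((1024, 1000)).astype(np.float16)
    b = np.zeros(1000, dtype=np.float16)

    def forward(x):
        # Inference for one dense layer: a matrix multiply, a bias add, a ReLU.
        return np.maximum(x @ W + b, 0)

    x = rng.standard_normal((1, 1024)).astype(np.float16)
    print(forward(x).shape)  # (1, 1000)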
It's true that it is fast for the power it consumes, but it is way (way!) too slow to use for any form of training, which seems to be what many people think they can use it for.
According to Anandtech, it will do 10 GoogLeNet inferences per second. By very rough comparison, Inception in TensorFlow on a Raspberry Pi does about 2 inferences per second, and I think I saw AlexNet on an i7 doing about 60/second. Any desktop GPU will do orders of magnitude more.
 https://github.com/samjabrahams/tensorflow-on-raspberry-pi/t... ("Running the TensorFlow benchmark tool shows sub-second (~500-600ms) average run times for the Raspberry Pi")
Yes, the low power draw is great, though.
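For what it's worth, inferences-per-second figures like the ones above usually come from a plain timing loop; a sketch, where run_inference is a hypothetical stand-in for the real framework call (graph.LoadTensor/GetResult on the stick, sess.run in TensorFlow, etc.):

    import time
    import numpy as np

    def run_inference(image):
        # Hypothetical placeholder for the real forward pass.
        time.sleep(0.1)  # pretend one pass takes ~100 ms
        return np.zeros(1000)

    image = np.random.rand(224, 224, 3).astype(np.float32)
    n = 50
    start = time.perf_counter()
    for _ in range(n):
        run_inference(image)
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:.1f} inferences/second")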
I think you're referring to USB 3.1 gen2, which would double the theoretical bandwidth to 10Gbps.
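Back-of-the-envelope, using the standard line-encoding overheads and ignoring protocol overhead:

    # USB 3.0 / 3.1 gen1: 5 Gbps raw, 8b/10b encoding
    # USB 3.1 gen2:       10 Gbps raw, 128b/132b encoding
    usb3_gen1 = 5e9 * (8 / 10)
    usb3_gen2 = 10e9 * (128 / 132)
    print(f"gen1: {usb3_gen1 / 8 / 1e6:.0f} MB/s")  # ~500 MB/s
    print(f"gen2: {usb3_gen2 / 8 / 1e6:.0f} MB/s")  # ~1212 MB/s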
I went back and looked, and yes, it does support USB 3.0. Given that the chip itself also apparently supports GigE, it's a shame there isn't an option with that brought out.
I still wish USB connectors had some kind of rubber padding around them, though. Like computer IEC power connectors -- no matter what you do, it's virtually impossible to break the cable or socket.
Even DB9 connectors were better. I never broke one in the many years I used them. Rock solid and you could even screw them in.
I initially had the same concern, but after using USB-C heavily for over a year now, not had one instance of a connector failing.
> I break a USB cable every few days
Either you're doing it wrong (being really careless / buying really cheap cables), or you're doing something highly specialized, in which case the feedback should be caveated with "I'm in xyz field, which means I break far more USB cables than most people ever will".
I never had this problem before USB.
I also break a lot of USB cables in the field, e.g. hiking, and that never used to be a problem with barrel connectors. USB connectors just are not designed for people who don't sit in an office all day.