
XNOR.ai frees AI from the prison of the supercomputer - dwynings
https://techcrunch.com/2017/01/19/xnor-ai-frees-ai-from-the-prison-of-the-supercomputer/
======
saycheese
Thought that Google's TensorFlow already does this and has since shortly after
it was released - what am I missing?

Example:
[https://www.tensorflow.org/mobile/](https://www.tensorflow.org/mobile/)

~~~
emcq
XNOR-Net is not currently part of TensorFlow, although there is a feature
request to support it. [0]

The main difference is that XNOR-Net uses 1 bit quantized weights and
XNOR+popcount to approximate the dot product of convolution, which can be
implemented very efficiently by using 64-bit arithmetic instructions on a CPU
to operate over 64 1-bit components in parallel. [1]

TensorFlow's quantization also uses discretized weights, but AFAIK it targets
8-bit quantization rather than 1-bit. [2]

[0]
[https://github.com/tensorflow/tensorflow/issues/1592](https://github.com/tensorflow/tensorflow/issues/1592)

[1]
[https://arxiv.org/pdf/1603.05279v4.pdf](https://arxiv.org/pdf/1603.05279v4.pdf)

[2] [https://petewarden.com/2015/05/23/why-are-eight-bits-enough-for-deep-neural-networks/](https://petewarden.com/2015/05/23/why-are-eight-bits-enough-for-deep-neural-networks/)
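To make the trick concrete, here is a minimal Python sketch of the XNOR+popcount dot product from the XNOR-Net paper [1]. This is an illustrative toy, not the paper's actual implementation: `pack_bits` and `binary_dot` are names I made up, and for vectors already binarized to {-1, +1} the result is exact (the approximation in XNOR-Net comes from binarizing real-valued weights, plus a scaling factor omitted here).

```python
# Sketch of the XNOR+popcount dot-product trick (Rastegari et al.,
# arXiv:1603.05279). Values in {-1, +1} are packed one bit per
# component, so a single XNOR plus a popcount replaces up to 64
# multiply-accumulates on a 64-bit CPU.

def pack_bits(values):
    """Pack a list of +1/-1 values into an int, one bit per value (+1 -> 1)."""
    word = 0
    for i, v in enumerate(values):
        if v > 0:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed {-1, +1} vectors of length n.

    XNOR yields 1 exactly where the signs agree, so
    dot = agreements - disagreements = 2 * popcount(xnor) - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask    # 1 where bits match
    agreements = bin(xnor).count("1")   # popcount
    return 2 * agreements - n

a = [1, -1, 1, 1, -1, 1, -1, -1]
b = [1, 1, 1, -1, -1, -1, -1, 1]
exact = sum(x * y for x, y in zip(a, b))
fast = binary_dot(pack_bits(a), pack_bits(b), len(a))
print(exact, fast)  # both 0: the two methods agree
```

In real implementations the popcount is a single hardware instruction (e.g. `POPCNT` on x86), which is where the claimed speedups come from.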

------
franciscop
The license specifies: The following license governs your NON-COMMERCIAL use
of the Software. Commercial use is strictly prohibited.

That's a goodbye for anyone wanting to build anything interesting with it in
the future. Such a pity. I couldn't find any kind of "commercial license"
anywhere.

~~~
fifnir
Why is it that only commercial applications are interesting ?

~~~
franciscop
TL;DR: it's not that only commercial use is interesting; it's that a
non-commercial-only license is really restrictive.

I do a lot of Open Source, so I know the troubles associated with GPL and the
like. This is a lot more extreme than that.

If you were to build a tool based on this, a large part of your potential
userbase disappears. Most people, in fact.

Why? Because even though I mostly use Open Source, learning and mastering a
tool only to discard it and then learn a different tool that does the same
thing for a commercial application is a HUGE waste of time.

If there was a way (2 licenses) to use this as commercial software then it'd
be a different story.

PS: I'm the author of _legally_ and I made it for this reason:
[https://npmjs.com/package/legally](https://npmjs.com/package/legally)

------
adultSwim
Disgusting appropriation of language to punch up a business headline

~~~
Hydraulix989
It's tech journalism, what did you expect?

------
gtani
Recent discussion of running TensorFlow/gradient descent on DSP chips:
[https://news.ycombinator.com/item?id=13379847](https://news.ycombinator.com/item?id=13379847)

This is a good writeup on mobile/low-power devices, lower-precision
arithmetic, compression, sparsity, Hogwild-style "naive" assumptions, etc.:
[https://arxiv.org/abs/1612.07625](https://arxiv.org/abs/1612.07625)

------
swatkat
Uncanny Vision [1] has similar computer vision and deep learning libraries
for ARM chipsets.

[1] [http://www.uncannyvision.com/](http://www.uncannyvision.com/)

------
shpx
Sending data to a server isn't that "comically clumsy"; light travels very
fast. [https://superuser.com/questions/419070/transatlantic-ping-faster-than-sending-a-pixel-to-the-screen](https://superuser.com/questions/419070/transatlantic-ping-faster-than-sending-a-pixel-to-the-screen)

Unless you're talking about privacy, there I agree, I would much rather my
phone do voice recognition locally than send it to google servers (where it'll
be stored [0] so they can train better models later). That's a bit of a
catch-22 though, if people don't need to send you their data to use your model
then you won't get data for training your model.

[0]
[https://myactivity.google.com/myactivity?restrict=vaa](https://myactivity.google.com/myactivity?restrict=vaa)

------
Dzugaru
Inference (the forward pass) in convnets is really cheap, even in floats.
There is some awesome hardware like the NVIDIA Jetson, and most deep learning
frameworks (e.g. Caffe) support it right out of the box. People have been
doing realtime inference on embedded devices for years now; what am I missing?

~~~
general_ai
You're missing the fact that Jetson is $435, it draws a lot of power, and
requires cooling.

~~~
llukas
So where do I buy this XNOR.ai product, how much does it cost, and when does
it ship?

~~~
general_ai
Well, if you look at their website, they have an email there, which you could
use to ask them directly.

------
vavav123
There are lots of startups like XNOR.AI, for example:
[http://aipoly.com/](http://aipoly.com/) and
[http://scorch.ai/](http://scorch.ai/); they also work with devices.

------
curuinor
Rumelhart was the first to note this, I think, although of course he didn't
have mobile phones. The implementation looks spiffy, but not spiffier than
Han et al.'s work.

------
qeternity
Backprop is expensive. Not sure what the revolutionary aspect is here. Smells
a bit like the Theranos of AI.

~~~
hidden-markov
I am pretty sure this is about optimizing the graphs of trained models. That
is, training is not freed from "the prison of supercomputers".

~~~
qeternity
Exactly my point. Feed forward can run on much lower specs just fine. So their
whole "prison of supercomputers" analogy is for the wrong side of the
equation.

------
krosaen
Isn't this article conflating feed forward applications of trained conv nets
with the training of conv nets? The latter is what is so expensive, yet it
seems like the demo of object recognition is the former, and not a
breakthrough to run on a phone AFAIK.

