
NLP Architect by Intel AI Lab - tsaprailis
http://nlp_architect.nervanasys.com/
======
jph
This AI toolkit runs on popular Intel CPUs, and it is a big step forward for
the new Intel Nervana Neural Network Processor (NNP-I), a dedicated
accelerator chip akin to a GPU.

The Intel AI Lab has an introduction to NLP
([https://ai.intel.com/deep-learning-foundations-to-enable-natural-language-processing-solutions](https://ai.intel.com/deep-learning-foundations-to-enable-natural-language-processing-solutions))
and optimized TensorFlow
([https://ai.intel.com/tensorflow/](https://ai.intel.com/tensorflow/))

One surprising research result in NLP is that a simple convolutional
architecture often outperforms canonical recurrent networks. See the CMU lab's
Sequence Modeling Benchmarks and Temporal Convolutional Networks (TCN):
[https://github.com/locuslab/TCN](https://github.com/locuslab/TCN)
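The core building block of a TCN is the dilated causal convolution: the output at time t depends only on inputs at time t and earlier, so the network can model sequences without recurrence. A minimal NumPy sketch of the idea (not the TCN repo's implementation, which is in PyTorch):

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """1-D dilated causal convolution.

    Output at time t uses only x[t], x[t - dilation], x[t - 2*dilation], ...
    The input is left-padded with zeros so the output matches len(x).
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)
# Kernel [1, 1] with dilation 2 computes y[t] = x[t] + x[t-2].
y = causal_conv1d(x, kernel=np.array([1.0, 1.0]), dilation=2)
```

Stacking such layers with exponentially growing dilations gives a receptive field that covers long histories while staying strictly causal.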

If you're interested in Nervana, here are some specifics: the chip is a
hardware neural network accelerator aimed at inference workloads. Notable
features include fixed-point math, Ice Lake cores, a 10-nanometer process,
direct software management of on-chip memory, and hardware-optimized
inter-chip parallelism.
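Fixed-point math matters for inference because low-precision integer arithmetic is much cheaper in silicon than float32. A toy sketch of symmetric int8 quantization to illustrate the idea (purely illustrative, not the NNP-I's actual numeric scheme):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats to int8 plus a scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

The hardware then runs the heavy matrix multiplies on the int8 codes and applies the scale factors afterward, trading a small accuracy loss for large throughput and power gains.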

I've worked for Intel, and I'm stoked to see the AI NLP progress.

~~~
azinman2
So does this not work if you don’t have fancy new Intel hardware?

~~~
peter1983
Hi. I’m one of the authors of the library. The models work on every popular
Intel CPU. We also support installing Intel-optimized TensorFlow, and in the
future we plan to add HW optimizations to the models. Stay tuned :)

~~~
stochastic_monk
So this is aimed at inference rather than training. Does Intel have any plans
to produce chips which can scale training as well, or is that largely going to
be outsourced to GPUs for models of considerable size for the time being?

I imagine models need to be deployed more often than built, but I thought that
the pain point was usually the latter.

------
continuations
How does this compare to word2vec or fasttext?

~~~
yorwba
word2vec and fasttext are specialized tools for creating word embeddings;
this is a more general-purpose library. It's more comparable to PyText,
AllenNLP, or Flair, the main difference appearing to be that those three use
PyTorch, not TensorFlow.

~~~
___cs____
With the recent changes in TF 2.0: if you were designing something similar,
would you use TF or PyTorch? What I'm trying to ask is, is TF 2.0 comparable
to PyTorch when it comes to ease of use?

~~~
m0zg
Probably not. It has "imperative" mode, but it also drags in quite a bit of
API baggage that PyTorch just doesn't have. It's not that PyTorch is ideal,
but its main advantage is it "feels like NumPy". Google also has a library
that "feels like NumPy". In fact it kind of _is_ NumPy with hardware
acceleration, but it seems to be in very early stages. I heard from insiders
that there are only 2 people working on the project, if that. The name of the
project is JAX:
[https://github.com/google/jax](https://github.com/google/jax). It's arguably
a lower level framework on top of which something like PyTorch could be built.
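The "feels like NumPy with hardware acceleration" claim is easy to see in a tiny example: JAX exposes a NumPy-style array API plus function transformations like automatic differentiation (a minimal sketch, not a full training loop):

```python
import jax
import jax.numpy as jnp

def loss(w):
    # Ordinary NumPy-style array code: squared error of w * x against 1.
    x = jnp.array([1.0, 2.0, 3.0])
    return jnp.sum((w * x - 1.0) ** 2)

# jax.grad transforms the function into one computing its gradient.
grad_loss = jax.grad(loss)
g = grad_loss(jnp.zeros(3))  # gradient of the loss at w = 0
```

The same code can be compiled with `jax.jit` to run on CPU, GPU, or TPU, which is why it reads like a lower-level substrate that a PyTorch-like framework could sit on.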

------
___cs____
Yet another interface on top of Pytorch/TF/Gensim.

