
Intel AI open-sources library for deep learning-driven NLP - rcarndrums
https://venturebeat.com/2018/05/24/intel-ai-lab-open-sources-library-for-deep-learning-driven-nlp/
======
siscia
A while ago there was an article about creating a "wasteland" of
unprofitability _around_ your core business, so as to become the monopolist
and extract the highest margin out of the industry.

The position of Intel/Nvidia is then quite simple, they will open source any
model, dataset, toolkit, library, etc that makes use of their hardware.
Training new AI will become simpler and simpler and they will extract high
margin from selling the hardware.

What about Google instead? They have the data and the engineering knowledge to
make complex AI work, but it seems quite unlikely that they will be able to
drive the price of AI hardware down. Moreover, they are charging quite a lot
for the use of their custom TPUs.

From this analysis it seems like Google is bound to fail in the long term in
the AI race.

Am I wrong? Why?

~~~
bitL
Google can compete for the most influential/promising ML researchers with
crazy salaries, and it can stay at the forefront of building the best models
while immediately putting them into its cloud offering for others to use.
Their DNA revolves around ML, whereas Intel's and NVidia's doesn't. Looking at
the past 5 years at Intel, they still don't get it. NVidia seems much more
qualified and sensed the opportunity right away in a developer-friendly way,
but it doesn't have the muscle Google does, and its ML business model is right
now under threat of quick commoditization - I'd be more worried about them
than about Google, to be honest. Google's only threat is that they will
self-implode due to their inner culture and arrogance, but that's a long shot
and they manage it better than Sun did.

~~~
shaklee3
The question is really whether those high salaries are achieving long-term
payoffs. For certain things like image and audio processing, they've made some
remarkable strides. However, many of those strides aren't remarkable to people
outside the HN/AI community. For example, typing "dog" into Google Photos and
having it find pictures of your dog is great. But when it misses a few and
people notice, they wonder why it's so bad, as if Google should correctly
identify 100% of them. Someone needs to sink the money into doing the grunt
work of getting close to 100%, but the last few percent of improvement costs
a lot of money. And productizing that for a user base of hundreds of millions
of people is not cheap on the infrastructure side, either.

~~~
bitL
I think one of the hiring strategies is to keep those very talented people
inside, even if inefficient/underutilized, rather than have them outside doing
damage. They just need to be better than their competition, and frankly,
looking at what Amazon is offering this week, they can continue in that mode
for quite some time.

The general public more likely notices improvements in apps with immediate
feedback, like what Snapchat/Messenger are doing.

There are already models surpassing human performance on many tasks; maybe the
inference costs just aren't economical yet?

~~~
glangdale
I remember people ascribing this hiring strategy to Microsoft (back in the
day): hiring up all the researchers to ensure there never was another
Microsoft.

------
syllogism
Looking forward to seeing the evaluation numbers!

I'm mostly curious about how their NER and parser compare against what I've
implemented for [https://spaCy.io](https://spaCy.io) . I've tried the
architectures they're using, and I've found they need very wide (and therefore
slow) hidden layers to get competitive accuracy.

I'm sure they have _some_ evaluations, right? I mean you can't really develop
these things without running experiments...

~~~
iloveluce
Also it seems like their reported NER is just serving up the spaCy NER.

"spacy_ner service which provides Spacy NER annotations." [0]

[0] [http://nlp_architect.nervanasys.com/service.html](http://nlp_architect.nervanasys.com/service.html)

------
dmichulke
That title will make someone a proud winner of bingo.

------
s4chin
GitHub link - [https://github.com/NervanaSystems/nlp-architect](https://github.com/NervanaSystems/nlp-architect)

------
sanxiyn
The more interesting announcement is that Intel Nervana, already postponed
multiple times, is again postponed to "late 2019".

One theory is that Intel Nervana outperformed Nvidia Pascal, but didn't
outperform Nvidia Volta, so it couldn't be released.

------
christophclarke
GitHub repos here:
[https://github.com/NervanaSystems](https://github.com/NervanaSystems)

------
make3
so just another deep learning wrapper, or does it do anything significant not
already done by 10 other more established libs?

------
jl2718
Link to repo?

~~~
Jorslu
[https://github.com/NervanaSystems/nlp-architect](https://github.com/NervanaSystems/nlp-architect)

It's not linked directly in the article, but the article does mention that the
software is named "nlp-architect".

------
regnarg
An AI decided to open source something? Hmm... Perhaps it's trying to use
humans to further advance itself...

