
Intel Unveils Strategy for State-Of-the-Art Artificial Intelligence - NurAzhar
https://newsroom.intel.com/news-releases/intel-ai-day-news-release/
======
grabcocque
For a millisecond I thought Intel were doing something new and interesting.
Neuromorphic or neuromeristive architectures maybe.

Nope it's just some rebranded x86 cores.

~~~
lern_too_spel
Nervana TPUs aren't rebranded x86 cores. They aren't neuromorphic either, and
I don't know what neuromeristive is.

~~~
deepnotderp
Neuromorphic means brain like, so basically, super parallel.

Neuromemristive, although I haven't heard the term before, means using
memristors to compute artificial neural networks.

~~~
andrepd
>using memristors to compute artificial neural networks

What does that mean?

~~~
p1esk
That means memristors are used to store neural network weights, and a signal
passing through a memristor would be effectively multiplied (scaled) by the
weight value.
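
A minimal sketch of that idea in NumPy (just simulating the math, not any real device): the stored conductances act as weights, each input voltage is scaled by a junction's conductance (Ohm's law), and the currents sum along each output line (Kirchhoff's law), so the whole crossbar performs a matrix-vector multiply in one step. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductances stored at each crossbar junction: these play the
# role of the neural network weights.
weights = rng.uniform(0.1, 1.0, size=(3, 4))

# Input signals applied as voltages on the crossbar rows.
voltages = rng.uniform(0.0, 1.0, size=4)

# Each output current is the conductance-weighted sum of the input
# voltages: exactly a matrix-vector product.
currents = weights @ voltages

# The same result computed junction by junction, mirroring the physics:
# every signal passing through a memristor is scaled by its weight,
# and the scaled signals add up on each output line.
manual = np.array([sum(weights[i, j] * voltages[j] for j in range(4))
                   for i in range(3)])

assert np.allclose(currents, manual)
```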

~~~
andrepd
Interesting. I wonder if the technology will ever be economically viable to
build chips out of.

------
arcanus
The hardware battle over AI platforms is heating up. It will be interesting to
see whether Intel can catch Nvidia, or GPUs in general.

~~~
zump
As someone who works in the semiconductor industry, I'm worried that joining
an AI hardware company is a risky move that will falter just like General
Magic and other hyped hardware startups.

~~~
astrodust
Intel itself was once a risky company to join, as the whole concept of a
microprocessor seemed both stupid and crazy at the same time. Their 4004
series calculator controller was a useless toy compared to more serious
computer hardware at the time and their 8008 was marginally less toy-like but
equally useless for enterprise computing.

We're in the 8088 stage of AI right now.

~~~
zump
Good point. I have a cognitive bias against startups because I assume what
they're doing isn't complex, but that reasoning is fallacious.

~~~
barrkel
You shouldn't chase complexity; chase opportunity. For the same opportunity, a
non-complex approach will normally be superior to a complex approach.

~~~
ethbro
Also, complexity is _hard_. And even harder to do substantially better than
competitors.

Maybe I'm biased, but I can count far more company fortunes that were made
doing a simple thing (at the right time) than a complex thing.

------
mistermann
I would love to hear any opinions on how one might invest in this. The no-
brainer, Nvidia, has had a huge run already, although I wouldn't be surprised
if it doubled from here in a year's time.

~~~
pjreddie
If Intel is smart they will invest heavily into library development.
Researchers use whatever is fast, and right now CUDA is fast, not just because
Nvidia has the best GPUs (it's about even) but because the primitives in CUDA
are so much faster. Matrix multiplication on the same hardware is like 3x
faster in CUDA than in OpenCL or competing libraries, and using the neural
network primitives is even faster. Intel needs to invest in good, low-level
libraries so researchers can hack on their platform, build new things, etc.
Ultimately I think researchers drive what platform gets widely used since
training takes so much longer than inference.

~~~
deepnotderp
No, researchers use whatever is fastest to develop with and most flexible.
Otherwise we'd all be using neon.

~~~
auvi
I agree, development speed is more important. Is there any benchmark that
compares neon to other systems?

~~~
deepnotderp
IIRC, because of Winograd kernels, 3x3 convolutions are amazingly fast, and
almost all nets have now switched to 3x3 convolutions.
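
A rough sketch of the trick behind those kernels, assuming the 1D Winograd F(2,3) transform (the 2D 3x3 case is built from the same idea); this is plain NumPy for illustration, not the actual GPU kernels. It computes two outputs of a 3-tap convolution with 4 multiplications instead of the 6 a direct computation needs.

```python
import numpy as np

def conv3_direct(d, g):
    # Direct 3-tap 1D convolution, two outputs: 6 multiplies.
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

def conv3_winograd(d, g):
    # Winograd F(2,3): the same two outputs with only 4 multiplies.
    # (The filter-side terms can be precomputed once per filter.)
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
g = np.array([0.5, 1.0, -0.5])       # 3-tap filter
assert np.allclose(conv3_direct(d, g), conv3_winograd(d, g))
```

The multiply savings grow when this is tiled across a full 3x3 convolution, which is why small fixed filter sizes benefit so much.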

------
jordz
They should call it "State-of-the-ARTificial Intelligence". Teehee.

