
The rise of AI is creating new variety in the chip market, and trouble for Intel - hackathonguy
http://www.economist.com/news/business/21717430-success-nvidia-and-its-new-computing-chip-signals-rapid-change-it-architecture
======
friedman23
> Instead of making ASICS or FPGAs, Intel focused in recent years on making
> its CPU processors ever more powerful

If only. Intel has been abusing their market position and pushing out
"upgrades" that barely improve performance over the previous generation.

AI is not going to eat Intel's lunch; all those computers still require CPUs.
AMD, on the other hand, may eat Intel's lunch by releasing powerful multicore
processors for half the price, all because they don't waste die space on
things like integrated graphics.

~~~
coldtea
> _If only. Intel has been abusing their market position and pushing out
> "upgrades" that barely improve performance over the previous
> generation._

And why is that down to "abusing their market position", as opposed to it
plainly and clearly being more difficult to make faster processors at 14 nm
and smaller process nodes (and with today's low power requirements)?

Besides, the trend the article points to is a BS fad as I see it (and I've
seen 5-6 of those play out in the last 30 years). ASICs and FPGAs won't even
come close to bringing in as much cash as general-purpose CPUs do for Intel.

~~~
friedman23
>And why is that down to "abusing their market position", as opposed to it
plainly and clearly being more difficult to make faster processors at 14 nm
and smaller process nodes (and with today's low power requirements)?

Okay, let's say that Intel hasn't been under-investing in improving the
performance of their CPUs. They still have 63% margins across their entire
product line, they still don't have 8-core CPU options below $1000, and they
still don't have consumer CPU options that ditch the integrated graphics.

All of which is only possible because they have had a monopoly.

~~~
coldtea
> _All of which is only possible because they have had a monopoly._

You got it backwards: them having a monopoly is only possible because it
hasn't been possible for anyone else to produce more efficient desktop/laptop
CPUs. On mobile, where there are competent competitors, it's a different
story.

It's not as if sacrificing their margins would magically let them produce
faster processors. Just cheaper ones.

And why would they have "consumer CPU options that ditch the integrated
graphics"? What would that accomplish, when there's a ton of CPU being wasted
otherwise?

~~~
mee_too
>> You got it backwards: them having a monopoly is only possible because it
hasn't been possible for anyone else to produce more efficient desktop/laptop
CPUs.

Intel has patents over the x86 instruction set.

>> It's not as if if they sacrificed their margins they'd magically be able to
produce faster processors. Just cheaper ones.

They have produced slower processors on purpose. Instead of soldering the chip
to the heat spreader, they started using a very inefficient thermal paste.
Replacing it leads to temperatures around 20 K better. It hurts both the speed
and the expected lifetime of the processor. But what's wrong with the customer
needing a new CPU in 3 years ...

>> And why would they have "consumer cpu options that ditch the integrated
graphics"? What that would accomplish, when there's a ton of cpu being wasted
otherwise?

The graphics unit occupies more than 50% of the die on recent Intel mainstream
CPUs. Instead of the GPU they could put 8 cores in chips like the 7700K and
save money.

~~~
inteld
> Intel has patents over the x86 instruction set.

Not the x86 instruction set - that went out of patent protection a while ago
(patents last at most 20 years).

Maybe amd64 (which is more AMD's patents).

~~~
mee_too
Yes, one can legally build and sell an 8086 clone. But there are tons of
patented extensions in newer CPUs, e.g.
[http://www.patents.com/us-7499962.html](http://www.patents.com/us-7499962.html).

------
ghaff
AI/ML/etc. may be part of it. But the other factor is that you can no longer
just wait 18 months for a new generation of x86 to be a lot faster. That was
the big problem with specialized architectures historically: the volume
architecture would catch up soon enough without you having to rewrite software
to optimize for some different processor design.

That's no longer the case so specialized designs for the compute-hungry
workload du jour (which happens to be AI at the moment) are starting to look a
lot more attractive.

------
Zenst
It is more a case of the most CPU-intensive demands being addressed by
dedicated chips, as we have always had. Many areas of information technology
move from general CPUs towards dedicated silicon. Even CPUs adapt and add
instructions, and with them small areas of silicon for some dedicated demands
(think MMX, AVX, AES, ...).

This is no change at all from what we already have. It is only when we finally
push all tasks down to dedicated silicon that the CPU's role as processing
glue will diminish.

But then the CPUs of today are constantly adapting, and I'd say the C in CPU
is better defined as Centralised rather than Central.

For me, I'm looking forward to an AI grammar and contextual spelling checker
that will make all grammar nazis obsolete.

So my perspective on this is that I foresee no trouble for Intel, who already
adapt to change and are not to be dismissed any time soon.

~~~
jdjebc82747
>For me, I'm looking forward to an AI grammar and contextual spelling checker
that will make all grammar nazis obsolete.

I'm not. I feel like human language is meant to be fluid and to evolve as we
do. This would potentially lead to more of a global monoculture than we are
already starting to get.

~~~
Zenst
That is a very fair observation and one I had not considered. I was somewhat
biased towards thinking it would bring universal translation closer to
reality. So it is, I suppose, the element of chaos in languages that allows a
language to evolve.

With that, voice translation adds yet another aspect to this, and perhaps
keyboard input on a layout designed to be the worst possible ever is an area
due to die off sooner than we think. Though the prospect of forcing future
generations to endure it, even if they do not know about typewriters, does
bemuse me.

~~~
jcranmer
In the Classical era, grammar study was focused on a few idealized languages
(Latin, Greek, Sanskrit); when people in the Early Modern period decided to apply
these rules to vernaculars, they ran into the problem that many modern
languages don't follow such clean rules. The most common response was to try
to insist that things that didn't look Latin shouldn't be considered
"grammatical." Only quite recently did people begin working out how to
describe the grammar of languages like English.

One consequence is that almost everything you're taught about English in
school is completely and totally wrong. There are the completely bogus
prescriptivisms that have no grounding (e.g., thou shalt not end a sentence
with a preposition). But even basic things like "what are the parts of speech"
are pretty much wrong, being derived primarily from "this was what this Latin
scholar said 2 millennia ago about a language that has a distant linguistic
relationship to English."

It's also worth pointing out that the trend to conserve spelling and the
written form probably obscures the underlying grammar as it changes. The
French clitic pronouns seem rather more like recently-introduced inflections
to the verbs rather than clitics, and the 's of English acts rather more like
a particle than a genitive case marker.

------
jhj
The "3,854 cores" versus "28 cores" is dubious as always. 3,854 I think counts
just the individual fp32 ALUs; a true similar comparison would be number of
warp schedulers or maximum number of warps resident at once, or even just SM
count (which share a cache).

Apples to oranges (a heavily hyperthreaded 1024/2048-bit-wide vector machine
with minimal cache versus a minimally hyperthreaded 128/256-bit-wide vector
machine with lots of cache).
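
A rough back-of-the-envelope to make the mismatch concrete (the figures below
are my own illustrative assumptions, roughly a GP100-class GPU against a
28-core AVX-512 Xeon, not exact product specs):

    # Illustrative only: decompose marketing "core" counts into comparable units.
    gpu_sms = 56                  # streaming multiprocessors on a GP100-class part
    gpu_fp32_per_sm = 64          # fp32 ALUs ("CUDA cores") per SM
    gpu_marketed_cores = gpu_sms * gpu_fp32_per_sm            # 3584 "cores"

    cpu_cores = 28
    cpu_fp32_lanes_per_core = 2 * 16    # two 512-bit FMA units, 16 fp32 lanes each

    print("GPU fp32 lanes:", gpu_marketed_cores)                     # 3584
    print("CPU fp32 lanes:", cpu_cores * cpu_fp32_lanes_per_core)    # 896
    print("GPU SMs vs CPU cores:", gpu_sms, "vs", cpu_cores)         # 56 vs 28

Counted as fp32 lanes the gap is about 4x, not over 100x; counted as
independently scheduled units (SMs versus cores) it's about 2x.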

~~~
astrodust
Apples to oranges? It's more like how a hundred thousand squirrels can't write
a novel no matter how long they're given, but one person can, given a few
months.

Not all compute devices are equivalent and "core" vs. "core" is a totally
absurd comparison.

------
varelse
Crazy idea: buy the rights to sell AMD's Vega GPUs fabbed at Intel, and use
Intel's resources to build top-notch math and AI libraries for them.

Stupid idea: Keep insisting that x86-compatibility is the killer feature for
winning the parallel processor wars.

Stupidest idea: CS professors continuing to tell their students that learning
concurrent programming is too hard.

~~~
kogepathic
_> Crazy idea: buy the rights to sell AMD's Vega GPUs fabbed at Intel_

1) I don't think Intel has a lot of spare fab capacity. Certainly not on the
nodes AMD is looking to produce Vega on.

2) Intel only just announced a deal to start manufacturing ARM chips on their
fabs. [0]

Honestly, I can't believe it took Intel so long to wake up and realize that
their x86 business is fine for now, but that if they want to survive long term
they need another business segment to bring in money once x86 stops being as
relevant as it is today.

Just look at TSMC [1] if you want an example of why Intel is foolish to think
they can keep being top dog with only x86. TSMC was nobody in the '90s, and
now their market cap is within ~10% of Intel's [2] (TSMC at ~$160B versus
Intel at ~$175B).

TSMC doesn't even design their own chips. I'm not saying building
semiconductors is easy, or that TSMC has no R&D costs, but you're talking
about a company which specializes only in manufacturing some of the most
advanced chips on the planet, and doing it at volumes I doubt Intel can match.
I predict unless Intel does something major in the near future (<24 months),
TSMC will surpass Intel's market cap.

Intel's former CEO, Paul Otellini, captured it best himself:

 _" It wasn't one of these things you can make up on volume. And in hindsight,
the forecasted cost was wrong and the volume was 100x what anyone thought."_
[3]

Intel still thinks they can kill it by selling expensive CPUs. TSMC is proving
that that thinking is outdated. You don't have to have a 60%+ margin on your
chips; you just have to make it up in volume.

Where do you think the next billion chips are going to be sold? It's not going
to be $500 x86 CPUs. It's going to be <$5 ARM chips in embedded devices, and
that's exactly the market segment TSMC is appealing to.

[0] [http://www.theverge.com/2016/8/16/12507568/intel-arm-mobile-...](http://www.theverge.com/2016/8/16/12507568/intel-arm-mobile-chips-licensing-deal-idf-2016)

[1]
[http://www.google.com/finance?q=NYSE%3ATSM](http://www.google.com/finance?q=NYSE%3ATSM)

[2]
[http://www.google.com/finance?q=NASDAQ%3AINTC](http://www.google.com/finance?q=NASDAQ%3AINTC)

[3] [http://www.theinquirer.net/inquirer/news/2268985/outgoing-in...](http://www.theinquirer.net/inquirer/news/2268985/outgoing-intel-ceo-paul-otellini-says-he-turned-down-apples-iphone-business)

------
deepnotderp
Founder of a similar startup here.

The strategy that Nervana is taking is to reduce precision to 16-bit fixed
point and then accumulate in 48 bits (which appears to be unnecessary; 24 bits
should be sufficient).
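
Roughly, the arithmetic looks like this (an illustrative toy sketch of a
low-precision multiply feeding a wide accumulator, not Nervana's actual
datapath; the Q1.15 format and the helper names are my own assumptions):

    FRAC_BITS = 15                      # assume a Q1.15 16-bit fixed-point format

    def to_fixed(x):
        # float -> 16-bit fixed point
        return int(round(x * (1 << FRAC_BITS)))

    def dot_fixed(a, b):
        acc = 0                         # wide integer accumulator (48-bit in hardware)
        for x, y in zip(a, b):
            acc += to_fixed(x) * to_fixed(y)    # 16x16-bit products summed without rounding
        return acc / float(1 << (2 * FRAC_BITS))

    print(dot_fixed([0.5, -0.25, 0.125], [0.5, 0.5, 0.5]))   # ~0.1875

The wide accumulator is what keeps long dot products from overflowing or
losing their low-order bits; the debate above is about how wide it actually
needs to be.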

I can answer any questions if anyone has any.

~~~
lowglow
Yeah, what's a good intro on understanding all of this? I've got an
EE/Chem/Math background.

~~~
deepnotderp
For deep learning or chips for deep learning? With an EE/Chem/Math background
you should be set to go :)

For deep learning, I highly recommend the cs231n course materials (available
for free online) and the Deep Learning Book by Goodfellow et al.

For chip design for deep learning specifically, it's a fairly new field with a
lot of commercial interest, which is why much of it isn't available in the
form of an "Intro to Deep Learning Chip Design" course. But the basic point is
that deep learning can both train and perform inference at astonishingly low
precision.

For training:
[https://arxiv.org/abs/1502.02551](https://arxiv.org/abs/1502.02551)

For inference, there are so many papers confirming this fact that there really
isn't one key paper to point to. This article is a good introduction however:
[https://petewarden.com/2015/05/23/why-are-eight-bits-enough-...](https://petewarden.com/2015/05/23/why-are-eight-bits-enough-for-deep-neural-networks/)
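
To give a flavour of what "eight bits is enough" means in practice, here is a
purely illustrative linear-quantization toy (my own sketch, not code from any
of the papers above; real frameworks calibrate scales per layer or per channel
and quantize activations too):

    import numpy as np

    def quantize(w, bits=8):
        # map the largest |weight| onto the top of the signed int8 range
        scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())

The surprising empirical result is that trained networks tolerate this kind of
rounding with little to no accuracy loss at inference time.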

Other than that, stripping out the cache hierarchy, using HBM memory, etc. are
the obvious steps to take in targeting deep learning. Note that it's my
opinion that deep learning chip startups (such as ours) need another "secret
sauce" beyond simply lowering precision; we're not like Intel, which can just
say "okay, here's $100mil, make me an 8-bit GPU. Go."

~~~
lowglow
Thanks! Mind if I connect with you over email?

~~~
deepnotderp
Sure, my personal email is sixsamuraisoldier [at] g(mail) (dot) [com]

------
Filligree
The article is visible at first, but once the page loads entirely it
disappears.

~~~
SideburnsOfDoom
It happens to me too. I think it's The Economist's paywall in action. It's
pretty annoying that this article is linkable but not really readable on the
web.
~~~
Filligree
That's a broken paywall, then. I'm on mobile, so not using any ad blocker, but
there was no hint of an explanation.

~~~
Baeocystin
I had the same thing happen, but it stayed open fine once I tried the link in
incognito mode. Probably just a mis-coded "you've reached your article limit
for the month" deal.

------
phkahler
Do most of these applications (machine learning, vision, etc.) rely on OpenCL?
It seems to me that GPUs are better suited to OpenCL than a regular CPU is,
but if that's what all the excitement is about, I suggest reading up on some
of the work on adding vector extensions to RISC-V and the corresponding
flops/watt they may achieve. They are basing some of the work on results from
here: [http://hwacha.org](http://hwacha.org), although they make it clear that
Hwacha will not be the standard vector instruction set.

------
TazeTSchnitzel
Intel must be regretting dropping their dedicated GPU project.

~~~
modeless
They didn't exactly drop it. It's the Knights series (Xeon Phi). They just
removed the graphics bits.

------
abhianet
> But the GPUs also have new destinations: notably data centres where
> artificial-intelligence (AI) programmes gobble up the vast quantities of
> computing power that they generate.

Should not it be "programs"? Or is "programmes" used in some dialect of
English I am not aware of?

~~~
tomatsu
"Programme" is used in British English for that broadcasting stuff etc. As far
as I can tell, they do use "program" for computer programs, though.

~~~
Wildgoose
That's correct. British English uses "programme" for everything except
computer programs.

