
Intel Discontinues the Intel Developer Forum; IDF17 Cancelled - randta
http://www.anandtech.com/show/11279/intel-discontinues-the-intel-developer-forum-idf17-cancelled
======
matt_wulfeck
From what I gather online, Intel is entering an existential crisis as
homegrown chips and GPUs replace CPUs in the battle for tomorrow's data
center. All of the interesting research from Google and others has one
thing in common: a custom ASIC.

~~~
eklitzke
The ASIC processors Google use are for _inference_, not training. Having
efficient inference is great, but training is what uses up most of the compute
power.

The battle between GPUs and CPUs in the datacenter is real, but I wouldn't
count out regular CPUs yet. Even with GPU-ready frameworks like TensorFlow,
Theano, and PyTorch, getting good performance out of GPUs still takes a fair
amount of domain expertise. These frameworks can automatically offload
compute kernels to the GPU, but you still have to understand the memory
model, what data to pin in GPU memory, how to use the limited GPU memory
effectively, etc.
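
As a concrete illustration (a minimal sketch, assuming a recent PyTorch and a
CUDA-capable GPU; the model and sizes here are invented for illustration),
this is the kind of manual memory management you end up doing by hand:

```python
import torch

# Hypothetical toy model; sizes are illustrative only.
model = torch.nn.Linear(1024, 10).cuda()

# Pin host memory so host-to-device copies can overlap with compute.
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(10_000, 1024),
                                   torch.randint(0, 10, (10_000,))),
    batch_size=256,
    pin_memory=True,   # page-locked host buffers
)

for x, y in loader:
    # Move each batch to the GPU explicitly; non_blocking only helps
    # because the source buffers are pinned.
    x = x.cuda(non_blocking=True)
    y = y.cuda(non_blocking=True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()

# GPU memory is small relative to host RAM, so you end up watching it.
print(torch.cuda.memory_allocated())
```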

The new instruction sets Intel has been adding (e.g. all of the recent
vectorized FMA instructions) go a long way toward making Intel CPUs
competitive out of the box for a lot of this ASIC/GPU work, such as neural
networks. Nvidia is off to a really great start in this area and has carved
out a very respectable lead, but I think it's premature to say that Intel (or
AMD) is having an existential crisis.

~~~
puzzle
Correction: the ASIC processors that Google used in 2015 and only just talked
about, two years later, were for inference. We don't know what they use now,
or how. The paper hints that training might happen in a later revision.

You have to wonder why they started with inference and not the other way
round. Perhaps it's the case that you train a model once, then use it for
predictions many, many, many times, thus using more aggregate compute power.
(I used to work at Google, but the above is based on public information and
speculation alone.)
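
A back-of-envelope version of that reasoning (all numbers below are made up
for illustration, not taken from the paper): even if a single prediction is
many orders of magnitude cheaper than training, a large serving fleet catches
up on aggregate compute surprisingly quickly.

```python
# Purely hypothetical numbers, for illustration only.
training_cost_flops = 1e19        # one-off cost to train the model
per_query_flops     = 1e9         # cost of a single prediction
queries_per_second  = 100_000     # fleet-wide serving rate

seconds_per_day = 86_400
daily_inference_flops = per_query_flops * queries_per_second * seconds_per_day

# Days of serving until inference has consumed as much compute as training did.
breakeven_days = training_cost_flops / daily_inference_flops
print(f"inference catches up with training after ~{breakeven_days:.1f} days")
```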

The paper does confirm that CPUs are closer to GPUs for smaller or latency-
sensitive workloads than most people assumed. At least on 2015-vintage
hardware, that is. And they're a lot less trouble to set up and provision.

~~~
jtmcmc
Training can happen asynchronously and in batch; inference needs to be low
latency and massively parallel because it's powering, for instance,
voice-to-text recognition on millions of Android devices.

------
drawkbox
Intel has always been a great research company. They are responsible for
OpenCV[1], and the entire augmented reality craze largely grew out of that
sponsored work. They have been an important part of technological progress
over the last couple of decades; they even made Apple sales pick up with
Intel chips (creating a killer *nix dev machine for a time, around '06)
before the iPhone came along.

It would be a shame if the bean counters got them into this by pulling back
R&D and are now taking R&D away entirely. The engineers are definitely not in
control at Intel today; research projects and developer/engineer outreach are
hard to quantify for non-engineers. These kinds of moves have been known to
kill off companies.

[1]
[https://en.wikipedia.org/wiki/OpenCV](https://en.wikipedia.org/wiki/OpenCV)

~~~
doublerebel
My experience is that Willow Garage is also listed as the main author on much
of the (still current) AR tooling, but I don't know the story between them and
Intel. It's an incredible amount of open source work that enables the startup
I'm doing now; I can only hope to contribute as much back/forward from our
company.

------
pkaye
I'm guessing these yearly conferences will fade away as Moore's Law falls
apart and Intel and others will have a harder time coming up with something
new to showcase every year. No longer can you rely on process shrinks to
innovate.

~~~
komali2
People always talk about Moore's Law falling apart, and it seems like people
didn't see it coming. I mean, I didn't, I'm just a guy that reads a lot of
scifi. But I always assumed whiz-bang mathematicians had already figured out,
I dunno, the threshold at which a processor can be nanosized, or how big a
processor could become before the distances got so great that, given the speed
of light, further computational advancement wasn't possible. I mean, I
know nothing about this stuff, like I said, I just read a lot of scifi and
have a high opinion of all the smart folk researching this stuff. Would love
to learn more.

~~~
akuma73
Have a look at this photo:
[http://m.eet.com/media/1169843/120906_intel_22_3.jpg](http://m.eet.com/media/1169843/120906_intel_22_3.jpg)

That's a fin of a modern transistor (Intel 22nm). The dots in the photo are
_atoms_. It's fair to say that Moore's law is nearing an end or has already
ended.

~~~
Animats
Atoms are too big, photons are too big, electrons are too big, and the speed
of light is too slow. There's no longer plenty of room at the bottom.

Still, it's not like the physical size of semiconductors is the problem. A CPU
today is maybe 100mm^2 of silicon. 4U of rack space can easily hold tens of
thousands of CPUs, if you can power them, cool them, and connect them up in a
useful way.

~~~
akuma73
_if you can power them, cool them, and connect them up in a useful way._

Good luck with that. Power costs go up linearly at best with cores if you
don't have any more transistor scaling.

~~~
sbierwagen
Power will always scale linearly with cores, of course, but you could make
cores less power-hungry with reversible computing:
[https://en.wikipedia.org/wiki/Reversible_computing](https://en.wikipedia.org/wiki/Reversible_computing)

Actually useful reversible logic is, as they say, an open problem.
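
For a flavor of what reversible logic means (a toy sketch only, not tied to
any real hardware): a reversible gate like the Toffoli gate maps inputs to
outputs one-to-one, so no information is erased along the way, and it is its
own inverse.

```python
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple:
    """Controlled-controlled-NOT: flips c iff both a and b are 1."""
    return a, b, c ^ (a & b)

# Applying the gate twice returns the original input (it is its own inverse),
# and no two inputs map to the same output (the mapping is a bijection).
states = list(product((0, 1), repeat=3))
assert all(toffoli(*toffoli(*s)) == s for s in states)
assert len({toffoli(*s) for s in states}) == len(states)
```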

~~~
akuma73
My point was that a new transistor shrink would give you lower power vs. the
same transistor in the previous, larger node. You get more compute-per-watt
with smaller transistors. If scaling stops, then all kinds of things get more
difficult.
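
A rough sketch of the compute-per-watt point (idealized, Dennard-style
assumptions; the numbers are purely illustrative and real processes stopped
scaling this cleanly long ago): dynamic power goes roughly as C·V²·f, and a
shrink lowers both C and V for the same logic.

```python
def dynamic_power(capacitance, voltage, frequency, activity=1.0):
    """Classic switching-power approximation: P ~ a * C * V^2 * f."""
    return activity * capacitance * voltage ** 2 * frequency

# Same circuit before and after an idealized 0.7x linear shrink,
# assuming capacitance and supply voltage both scale with feature size.
s = 0.7
p_old = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)
p_new = dynamic_power(capacitance=s,   voltage=s,   frequency=1.0)

print(f"power per transistor at the same clock: {p_new / p_old:.2f}x")  # ~0.34x
# Without a shrink like this, adding cores just multiplies power linearly.
```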

------
m-p-3
I'm wondering why Intel is withdrawing so much of their money from different
projects like this and OpenStack.

Maybe AMD is putting a lot of financial pressure on them, and they have to put
more funds in R&D?

~~~
ksec
But this money is tiny compared to the impact they will/should get from AMD
Ryzen.

To remain price competitive with AMD Ryzen, Intel's gross margin will likely
take a dive from the current 60% to below 50% over the next few years. The
Data Center Group represents 30% of revenue, and my guess is it represents
60%+ of the profits.

OpEx is increasing, and the R&D required for each node is increasing. They are
not in smartphones, and they are not opening up their fabs. Basically their
future looks pretty grim to me. They won't disappear all of a sudden; they
likely have 3 to 5 more years to figure it out.

But again, none of this has to do with Intel withdrawing from so many
projects. I can't believe saving money on these events/projects is the motive.

~~~
joezydeco
Why can't that be the motive? Panic makes companies like Intel execute lots of
quick short-sighted changes in the hopes of turning the boat.

Something like cancelling a developer's conference would _totally_ fit this
scenario.

------
brendangregg
That's disappointing. It was a useful technical and marketing event for
learning about a variety of Intel technologies, some of which I wouldn't
otherwise have been exposed to, and whose engineers I certainly wouldn't have
had a chance to talk to. I'd rather they had moved it somewhere cheaper than
cancelled it.

------
pmarreck
I for one am not terribly upset that the Intel hegemony's time has run out.

~~~
coldtea
Who said their hegemony has run out? The fact that AMD put out some also-ran
processors?

~~~
pmarreck
The writing is on the wall. From ARM to incredibly powerful emerging GPUs to
interest in custom ASICs, the Intel instruction set's days are numbered.

~~~
coldtea
I don't care what GPU my mobile phone runs. It's fast enough, I'm ok with it.

I also don't care about servers. I'm not a dev-ops/admin person.

On my laptop/desktop, where I DO care, I don't see Intel going away anytime
soon.

Even if we switched to some ARM devices, it would be a regression (if for
nothing else, for having to run most Windows/macOS stuff under some kind of
Rosetta-like interpreter for many years).

Besides, as soon as ARM CPUs approach the limits of various manufacturing
processes, they will face the same issues Intel does. Moore's law is not
coming back.

------
Animats
Independent developers have fulfilled their function and are no longer needed.

~~~
CalChris
Dunno if you've been to an IDF, but I'd say that IDFs have fulfilled their
function and are no longer needed. I went to the Skylake IDF and after a very
predictable presentation there was an opportunity to ask questions. It went
roughly like this:

 _What time is it?_

 _The question was what time is it. Intel is an industry leading leader
committed to developing customer solutions. We are not disclosing what time it
is now. We may be making announcements related to time in the future._

This cost me a day. On the plus side of the ledger, I like Intel
documentation, especially their _Optimization Reference Manual_. But I don't
need to go to IDFs anymore and now I don't have to.

~~~
ghaff
An industry analyst of my acquaintance made a similar comment to me. He said
that Intel has become significantly less willing to disclose its future plans
to developers and others, and that there seems to be diminished interest among
developers in going to an Intel-centric show.

A show like IDF clearly makes less sense if you're increasingly unwilling to
say much about future directions. It's also entirely possible that the
projections for event sponsorships, etc. weren't where Intel wanted them to
be.

------
kensai
I think it's better for them. The IDF was used to showcase new products. If
they have no significant new products to show, it's better not to summon
anyone than to get flak for introducing only minor evolutions, as has happened
in the past few years with the (dying?) tick-tock model.

------
residentx
Don't try to make too much of this. I had been working with Intel since
December on the issues with IDF. The vendor was not performing: they had no
website up, and emails and calls were not returned. When I finally did reach
the vendor, I asked them how they were going to handle the visa logistics of
China/Asia IDF attendees coming to the US, and whether 2-3 months would be
enough to get visas approved under the Trump administration. A few days later,
here we are with the cancellation. This mess is purely Intel's fault. I even
emailed Krzanich about putting more eyes on this. But let's move on. I'm
heading to Ignite now....

------
wfunction
Anybody know if and when desktop CPUs might start getting FPGAs? Been waiting
for that for a while now...

~~~
rbanffy
Not before mainstream software supports it. We had 64-bit with Alpha, MIPS,
SPARC, but 64-bit x86 only happened when it could run 32-bit x86 code
flawlessly and it only got mass adoption when Windows went 64-bit.

~~~
bhouston
We only got Intel 64-bit CPUs once AMD got them first and people started
buying them.

~~~
wolfgke
> We only got Intel 64-bit CPUs once AMD got them first and people started
> buying them.

Intel came first with Itanium (June 2001:
[https://en.wikipedia.org/w/index.php?title=Itanium&oldid=775...](https://en.wikipedia.org/w/index.php?title=Itanium&oldid=775602971#Itanium_.28Merced.29:_2001)),
which clearly _is_ a 64-bit CPU. The first x86-64 CPU by AMD was released in
April 2003
([https://en.wikipedia.org/w/index.php?title=X86-64&oldid=7754...](https://en.wikipedia.org/w/index.php?title=X86-64&oldid=775455698#History))
and Intel's first x86-64 CPU came out in June 2004
([https://en.wikipedia.org/w/index.php?title=X86-64&oldid=7754...](https://en.wikipedia.org/w/index.php?title=X86-64&oldid=775455698#History_2)).

~~~
rbanffy
Alpha, SPARC and MIPS had 64-bit offerings before that. I was reading e-mail
on a 64-bit machine when Itanium was called Merced.

~~~
wolfgke
My parent was talking about _Intel 64-bit CPUs_.

~~~
rbanffy
Sorry. I was comparing the two as an analogy for why FPGAs won't be available
on mass-market CPUs anytime soon.

The reason Itanium never achieved the mass acceptance Intel expected is that,
even with Windows running on it, it never ran x86 software at a reasonable
price/performance point. Java was Sun's attempt to make SPARC viable by
sidestepping the ISA compatibility problem.

Today, with Linux, JVM, .NET Core and interpreted languages running a lot of
the server workloads we have, it'd probably be a very different story.

Compared with the other 64-bit architectures we had (I forgot IBM's
z/Architecture), amd64 is still an awful, register-starved, overcomplicated
architecture.

~~~
wolfgke
> The reason Itanium never achieved the mass acceptance Intel expected is
> that, even with Windows running on it, it never ran x86 software at a
> reasonable price/performance point.

That was _one_ (and surely an important) reason. There were lots of other
reasons:

>
> [http://courses.cs.washington.edu/courses/csep590/06au/projec...](http://courses.cs.washington.edu/courses/csep590/06au/projects/history-64-bit.pdf)

(seriously: everybody interested in the history of Itanium should read this
text), for example:

"Davidson also pointed out two areas where academic research could create a
blind spot for architecture developers. First, most contemporary academic
research ignored CISC architectures, in part due to the appeal of RISC as an
architecture that could be taught in a semester-long course. Since graduate
students feed the research pipeline, their initial areas of learning
frequently define the future research agenda, which remained focused on RISC.
Second, VLIW research tended to be driven by instruction traces generated from
scientific or numerical applications. These traces are different in two key
ways from the average systemwide non-scientific trace: the numerical traces
often have more consistent sequential memory access patterns, and the
numerical traces often reflect a greater degree of instruction-level
parallelism (ILP). Assuming these traces were typical could lead architecture
designers to optimize for cases found more rarely in commercial computing
workloads. Fred Weber echoed this latter point in a phone interview.
Bhandarkar also speculated that the decision to pursue VLIW was driven by the
prejudices of a few researchers, rather than by sound technical analysis."

~~~
bhouston
There seem to be a lot of ideas from VLIW that are used by GPUs to get high
degrees of parallelism and to hide the cost of conditionals.

~~~
rbanffy
Didn't Transmeta have a processor that used software decoding from x86 to an
internal VLIW?

------
SadWebDeveloper
Interesting move after AMD Ryzen was launched; I wonder how this is going to
be viewed on the stock market.

~~~
alayne
AMD has had cheaper/slower processors for many years. Is something different
about Ryzen?

~~~
my123
Yes, the Naples 32-core/64-thread powerhouse is what's dangerous to Intel,
especially at its performance level. (The datacenter is where Intel probably
makes most of its profits, and Naples is eating right into that market.)

~~~
alayne
According to articles on Forbes and ExtremeTech, Intel currently has 99% of
the server market. This is a new product that may affect Intel, but it hasn't
done anything yet. So far, then, this is just speculation.

~~~
snovv_crash
It hasn't done anything yet because it hasn't been released yet. So of course
it is speculation. However, based on the downclocked Ryzen benchmarks I have
seen, it should significantly outperform anything Intel has in Perf/W at the
5W/core level. And Perf/W is what datacenters care about.

~~~
alayne
I'm just trying to relate OP's comment to changes in the conference or even
claims about a market effect. It seems like wishful thinking.

~~~
my123
AMD will also have to get the chipsets right for the platform to have a
chance; they haven't really proven that yet.

------
rdslw
For me this is a clear sign of the steady growth (and probable dominance) of
mobile and its processors, and of (some) decline at Intel.

Android as an OS surpassing everything else in web usage this March tells the
same story in different words
([http://www.wired.co.uk/article/android-overtakes-windows](http://www.wired.co.uk/article/android-overtakes-windows)).

