
Intel Completes $16.7B Altera Deal - spacelizard
http://www.eweek.com/servers/intel-completes-16.7-billion-altera-deal.html
======
nickpsecurity
Hell yes! Intel chips are about to get exciting again. SGI put FPGAs on nodes
connected to its NUMA interconnect with great results. Intel will likely put
them on its network-on-chip with more bandwidth and integration while pushing
latency down further. '90s-era tools that automatically partitioned an app
between a CPU and FPGA can be revived once Intel knocks out the obstacles that
held them back.

Combine that with OSS synthesis developments by Clifford Wolf and Synflow,
which can be connected to OSS FPGA tools, and there's even more potential
here. Exciting time in the HW field.

~~~
iheartmemcache
Even more exciting is the OmniPath[1] stuff that came out as a result of the
Infiniband acquisition. Take RDMA + Xeon Phi + the insane number of PCIe
lanes[2] available for those new M.2 SSDs (which post just absolutely insane
numbers[3]), all of it supported by ICC[4], and you've got a really
budget-friendly HPC setup. I'm really hoping for IBM's OpenPOWER to gain
traction, because Intel is poised to capture the mid-market in dramatic
fashion.

[1] See: IntelOmniPath-WhitePaper_2015-08-26-Intel-OPA-FINAL.pdf (my copy is
paywalled, sorry)

[2] http://www.anandtech.com/show/9802/supercomputing-15-intels-knights-landing-xeon-phi-silicon-on-display

[3] http://www.anandtech.com/show/9702/samsung-950-pro-ssd-review-256gb-512gb

[4] https://software.intel.com/en-us/articles/distributed-memory-coarray-fortran-with-the-intel-fortran-compiler-for-linux-essential
(This is for Fortran, but the same Remote Direct Memory Access concepts extend
to the new Xeon architecture.)

~~~
scurvy
Not to nitpick, but M.2 is just a form factor. The big gains come from being
NVMe PCIe, not the form factor. You get the same gains with NVMe PCIe in 2.5"
drive form factor.

~~~
nl
M.2[1] defines both the form factor and (importantly) the interface. While it
is true that NVMe PCIe is the interface that makes the difference here, the
standardization of both the interface and the form factor seems pretty
important.

[1] https://en.wikipedia.org/wiki/M.2

------
ChuckMcM
So I have this vague recollection that Intel _had_ an FPGA division in the
early '90s that they spun off. Was that what became Lattice? Sad that the
Interwebs get really murky pre-1995.

~~~
rgbrenner
Good memory. It was Intel's Programmable Logic Devices unit. It shipped its
first FPGA in '92 and was sold to Altera in 1994 for $50M[0].

The parts were the FLEXlogic line. They only released a few (looks like 4
total[1]). Here's an announcement for one:
https://groups.google.com/forum/#!topic/comp.sys.intel/YBUtOwHXv08

0. http://www.embedded.com/electronics-blogs/max-unleashed-and-unfettered/4439610/How-will-Intel-s-purchase-of-Altera-affect-embedded-space-

1. http://www.intel-vintage.info/timeline19901995.htm

------
Cieplak
I'm hoping this will lead to improvements in their FPGA development
environments.

~~~
0xcde4c3db
I'm not holding my breath. Even if they decide to do it, I imagine it would
take a solid 5 years to flush all the crap out of the pipeline.

------
mozumder
OK now how quickly can FPGAs be adapted to search through Postgres indexes?

~~~
pjc50
I can't see how it would help - this kind of search involves almost no
computation and a lot of memory/disk bandwidth.

People need to remember that FPGAs are not a magic bullet, especially not for
throughput; they're better used for low-latency hardware interaction and
things where you need cycle-deterministic behaviour.

Crypto is a far more interesting potential case.

~~~
visarga
And low-power computation.

~~~
pjc50
Are you sure about that? FPGA MIPS/Watt tends to compare rather badly.

~~~
ajdlinux
What precisely is an "instruction" in the MIPS/watt rate here, given the FPGA
context?

~~~
moftz
He's right in the sense that an FPGA will use more power than a dedicated
chip. The logic elements are fairly large compared to the ones in an Intel
CPU, for example. FPGAs are good for when you know you'll need to change a
design regularly, like prototyping, or when the design will get many updates.
If you need it to be faster, or you need more than a few hundred units, going
with an MPGA (depending on the use) might be cheaper. Those don't allow
changes to the design, as it's baked into the chip, but they use the same type
of logic as FPGAs and require less power because the logic elements are
smaller.

------
cornholio
Intel CEO Brian Krzanich: "We will apply Moore's Law to grow today's FPGA
business, and we'll invent new products that make amazing experiences of the
future possible"

PHB, how you've grown!

------
vvanders
Not sure how they think FPGAs are going to reduce their "cloud workload".
FPGAs are pretty power hungry (aside from Lattice) and only work well if you
have some unique requirements.

~~~
petke
Fast cores take exponentially more energy than slow ones, so the solution is
to use more slow, simple cores instead; we get more performance per watt that
way. On PCs we can use GPUs to do computations in parallel. I guess this is
like that, but for servers.
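That trade-off can be sketched with rough numbers. A common rule of thumb is
that dynamic power grows roughly as the cube of clock frequency (since voltage
must rise along with frequency); the figures below are illustrative only, not
measurements of any real chip:

```python
# Rough sketch of the fast-core vs. slow-cores trade-off.
# Assumption (illustrative): dynamic power P ~ C * V^2 * f, and V must rise
# roughly in proportion to f, so P grows roughly as f^3.

def relative_power(freq):
    """Relative dynamic power of one core running at relative frequency `freq`."""
    return freq ** 3

# One fast core at 2x clock vs. two slow cores at 1x clock,
# both delivering roughly 2x aggregate throughput:
fast_core = relative_power(2.0)           # ~8x power for ~2x throughput
two_slow_cores = 2 * relative_power(1.0)  # ~2x power for ~2x throughput

print(fast_core, two_slow_cores)  # 8.0 2.0
```

Under this simplified model the two slow cores deliver the same aggregate
throughput for roughly a quarter of the power, which is the same logic behind
GPUs and many-core server parts.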

~~~
PascLeRasc
>Fast cores take exponentially more energy than slow ones

What's your source for this/why does this happen?

~~~
petke
From my fuzzy memory:

To make a CPU fast you need to shrink it, to the point that the "wires" in
the core are so close together that electrons jump from one wire to another,
causing lots of electrical interference. To overcome this, the voltage needs
to be increased, which takes more power and makes the CPU hotter (which
requires cooling).

But don't take my word for it. I did some quick Googling; maybe you can find
a better source and explanation:

https://en.wikipedia.org/wiki/Multi-core_processor#Technical_factors

"For general-purpose processors, much of the motivation for multi-core
processors comes from greatly diminished gains in processor performance from
increasing the operating frequency. This is due to three primary factors:

\- The memory wall; [...]

\- The ILP wall; [...]

\- The power wall; the trend of consuming exponentially increasing power with
each factorial increase of operating frequency. This increase can be mitigated
by "shrinking" the processor by using smaller traces for the same logic. The
power wall poses manufacturing, system design and deployment problems that
have not been justified in the face of the diminished gains in performance due
to the memory wall and ILP wall."
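The "power wall" item in that quote can be made concrete with the standard
dynamic-power approximation (a textbook rule of thumb, not anything specific
to Intel parts):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^2 \, f
```

where \(\alpha\) is the switching activity factor, \(C\) the switched
capacitance, \(V\) the supply voltage, and \(f\) the clock frequency. Because
running stably at a higher \(f\) generally requires a higher \(V\) (roughly
\(V \propto f\) in the classic scaling regime), dynamic power grows roughly as
\(f^3\), which is why raising clock speed stopped being worth the watts.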

------
comboy
OTA CPU upgrades? ;)

------
fizixer
Two words:

\- Neuromorphic.

\- Bye bye Xilinx.

~~~
PeCaN
Pretty sure IBM and AMD are both partnering with Xilinx; they're not going
anywhere any time soon. (Plus they have more enterprise contracts than Altera
does.)

------
belleandsebasti
FPGAs really only accelerate parallel workloads; sequential computation is
easier, and just as fast, on a CPU.

The problem with massive parallelism becomes communication cost and spatial
routing. Nothing is free.

I'm more excited about commodity chips with hundreds of cores. I'd rather have
something that's easier to program, with a faster dev cycle, if I'm going to
tackle parallelism.

