
Intel Leverages Chip Might to Etch Photonics Future - jonbaer
http://www.nextplatform.com/2016/08/17/intel-leverages-chip-might-etch-photonics-future/
======
Animats
Caution, excessive buzzword density.

Summary: data centers expected to be using more optical interconnects within
the data center. Optical interconnects at chip/board level a possibility.
Problems of board assembly with fiber interconnects glossed over.

------
williamscales
> "The other commercially silicon photonics products out there use externally
> created lasers based on the 3/5 material system that are then completely
> processed off chip and are placed either down or next to a silicon photonics
> integrated circuit, and these have to be meticulously aligned with high
> precision instruments to couple the photonics to the circuit. It is not
> really a full silicon platform"

The fact that Intel can apparently fab the laser diodes in silicon on the same
die as the chip seems really huge. It was all the way down at the bottom of
the article, but I think that's the really interesting part.

~~~
greglindahl
Intel's been making announcements about mastering silicon photonics for a long
time - but they've yet to ship anything. I suspect that Omni-Path & associated
switches are intended to be an implementation target this time around.

------
jostmey
It seems like Intel is working on everything except a 10nm transistor. I'd
invest in TSMC if I had money.

~~~
eloff
"TSMC’s 10nm shrink would retain a 20nm minimum feature size, while its 7nm
would deliver a 14nm minimum feature size (10/20 and 7/14, respectively)."[1]

Intel is the only company with a "true" 14nm and 10nm process.

Considering the number of people who don't seem to realize that TSMC 10nm !=
Intel 10nm, Intel should change their marketing.

[1] http://www.extremetech.com/computing/228806-arm-announces-new-artemis-cpu-core-first-10nm-test-chip-built-at-tsmc

~~~
DiabloD3
Intel has similar issues.

When 22nm came out (for Ivy Bridge and Haswell), replacing 28/32nm, the
density of the chip did not go up by 45%, but the feature size dropped,
allowing better power efficiency and less waste heat.

When 14nm came out (for Broadwell, Skylake, and now the upcoming Kaby Lake),
again, the density of the chip did not go up by 57%, but, again, better power
efficiency and lower waste heat were a noticeable improvement.

This is why, realistically, chips are not getting faster.
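To make the density point concrete, here's the back-of-the-envelope arithmetic (an illustrative sketch, not actual process data): if feature sizes really shrank linearly with the node name, transistor density would scale with the square of the ratio of the old node to the new one, which is far more than either generation above actually delivered.

```python
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density multiplier for a full linear shrink.

    Area per transistor scales with the square of the linear feature
    size, so density scales as (old_nm / new_nm) ** 2. Real nodes fall
    well short of this because not every pitch shrinks by the name.
    """
    return (old_nm / new_nm) ** 2

# 32nm -> 22nm: ~2.1x density if the shrink were fully linear
print(round(ideal_density_gain(32, 22), 2))  # -> 2.12

# 20nm -> 14nm: ~2.0x ideal
print(round(ideal_density_gain(20, 14), 2))  # -> 2.04
```

By this yardstick, a "10nm" process that keeps a 20nm minimum feature size (as quoted above for TSMC) captures essentially none of the ideal shrink its name implies.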

The largest problem people should be focusing on is not the difference
between fab techs (since GloFo and TSMC != Intel anyway), but the fact that,
until last year, the largest power user in desktops, workstations, game
machines, and some laptops, the GPU itself, was still fabbed at 28nm by both
Nvidia and AMD.

Also, on the server side, the SoCs for HBAs, NICs, and drives are still being
fabbed at 28nm and even 45nm.

It isn't that TSMC and GloFo are slow to bring new fab sizes to market; it is
that new nodes are prohibitively expensive early on due to low volume. As new
fabs are constructed, only when the "important" chips stop using the old fabs
do NICs, HBAs, drive controllers, and other tertiary chips finally move down
to the next size.

And to play devil's advocate: unless you're doing insanity-tier stuff (four
or six 40/100Gbit Ethernet ports; three or four many-port HBAs; several dozen
drives; "zero"-power wireless SoCs running off button cells; high-heat-tolerance
industrial hardware), you don't NEED the improvement in thermals and power
usage; only CPUs, GPUs, and other math-/parallel-heavy chips need to
continually chase that.

