
Laser may replace copper in chips for high-speed, low-energy data transmission - jonbaer
http://www.kurzweilai.net/laser-may-replace-copper-in-computer-chips-for-high-speed-low-energy-data-transmission
======
dogber1
No, it won't: on the length scale of a chip, the power efficiency of an
electrical-optical-electrical transmission line is an order of magnitude worse
than direct electrical transmission, regardless of the actual technology.
Optical transmission lines are also far bulkier than electrical lines.

~~~
thomasjames
The article makes it seem like this technology will completely replace copper,
which is obviously untrue. Silicon photonics for use in interconnects in
multicores, though, is very promising purely from the perspective of
bandwidth. HP and Intel have expressed interest, and there is a lot of
academic research at Northwestern and UCSB ECE, for instance. I do not know
what energy efficiency figures you were referring to, but emitter efficiency
is dependent on the band gap and other properties of the semiconductor and
lasing material, so the specific technology used does matter.

------
quarterwave
Processor clock frequency scaling is limited by two factors: (i) logic signal
delays, and (ii) processor power dissipation (heat).

In broad brush strokes, a processor core needs to shunt buses of signals
across several timing domains. Due to process, voltage, and temperature (PVT)
variations, the wire delays vary at several levels - across the chip,
chip-to-chip, with supply voltage, etc. Think of football teams from different
cities taking multiple airline flights to get to the same ballpark, with the
possibility of bad weather causing a flight to be cancelled. To ensure
that signal buses from different domains rendezvous 'on time', the season
schedule has to provide for 'timing slack'. The process of adjusting the
design - sizing gates, inserting small delays here and there - until every
path across a large chip meets its slack requirements is known as _timing
closure_ , and is achieved using sophisticated design algorithms. As clock
frequency increases, timing closure hits a brick wall - beyond some GHz number
the slack rapidly plummets to zero and then goes negative.
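
The slack arithmetic above can be sketched in a few lines. This is a toy
setup-timing model with made-up numbers (the 2 GHz/4 GHz periods, path delay,
and setup time are all illustrative, not from the article):

```python
def timing_slack(clock_period_ps, path_delay_ps, setup_ps, clock_skew_ps=0.0):
    """Setup slack for one register-to-register path, in picoseconds.
    Positive slack means the path meets timing; negative means it fails."""
    return clock_period_ps - clock_skew_ps - (path_delay_ps + setup_ps)

# At 2 GHz the period is 500 ps; a 420 ps path with 30 ps setup still has margin.
print(timing_slack(500, 420, 30))   # 50.0
# At 4 GHz (250 ps period) the same unchanged path misses by 200 ps.
print(timing_slack(250, 420, 30))   # -200.0
```

Doubling the clock didn't just shrink the slack, it flipped its sign - which
is the brick wall: every failing path like this one has to be re-engineered
before the chip can ship at the higher frequency.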

Increasing the supply voltage makes the transistors switch faster. Think of
aircraft flying faster by burning more fuel and you can see how that improves
the timing slack, but also increases the power dissipation. As heat builds up,
a vicious cycle sets in - the leakage current of transistors in the OFF state
rises with temperature - which doesn't cause a core meltdown, but makes the
power dissipation climb steeply as voltage goes up. Transistor reliability is
also degraded, which is why data centers can't afford to overclock the way a
gaming enthusiast might.
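
A rough sketch of the two power terms involved. The switching-power formula
(P ~ C·V²·f) is standard; the leakage model below is a deliberately crude
rule of thumb (leakage current doubling roughly every 10 °C), and all the
numeric constants are illustrative assumptions:

```python
def dynamic_power(c_eff, vdd, freq):
    """Switching power: P = C * V^2 * f (activity factor folded into c_eff)."""
    return c_eff * vdd ** 2 * freq

def leakage_power(i_leak_ref, vdd, temp_c, temp_ref_c=25.0, doubling_c=10.0):
    """Crude model: OFF-state leakage current doubles every ~10 C (assumed)."""
    return vdd * i_leak_ref * 2 ** ((temp_c - temp_ref_c) / doubling_c)

# Raising Vdd from 1.0 V to 1.2 V raises switching power by 1.2^2 = 1.44x
# even at the same frequency; the extra heat then inflates leakage on top,
# which is the feedback loop described above.
ratio = dynamic_power(1.0, 1.2, 1.0) / dynamic_power(1.0, 1.0, 1.0)
print(ratio)
```

The V² term is why a modest voltage bump for timing slack is so expensive
thermally, before the leakage feedback even starts.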

Net is that in a synchronous design paradigm (multiple clock domains, but not
asynchronous logic), any easing of the timing closure brick wall is useful for
designs chasing frequency. Photonic buses, even at high baud rates, must still
be wide enough to avoid a serialization latency tax.
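
The serialization tax is just bits divided by lanes and line rate. A small
sketch with assumed numbers (512-bit cache line, 25 Gbaud lanes - illustrative
figures, not from the article):

```python
def serialization_latency_ns(payload_bits, lanes, baud_gbps):
    """Time to clock a payload onto the link: bits per lane / line rate.
    baud_gbps is in Gbit/s per lane, so the result comes out in ns."""
    bits_per_lane = payload_bits / lanes
    return bits_per_lane / baud_gbps

# A 512-bit cache line on a single 25 Gbaud lane takes ~20.5 ns to serialize;
# spreading it across 8 lanes cuts that to ~2.6 ns.
print(serialization_latency_ns(512, 1, 25))  # 20.48
print(serialization_latency_ns(512, 8, 25))  # 2.56
```

Hence the point above: raw baud rate alone doesn't save you - a narrow
photonic link still pays tens of nanoseconds per transfer, which is an
eternity next to an on-chip clock period.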

However, if the architecture prefers to keep the clock frequency lower and
increase core count within the same chip thermal budget (TDP), then photonics
may be limited to inter-core buses (which are not sensitive to timing on the
scale of a clock period).
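
The frequency-versus-core-count trade inside a fixed TDP can be sketched with
the common rule of thumb that per-core power scales roughly as f³ (since
voltage tracks frequency and P ~ V²·f). The budget and per-core wattage below
are illustrative assumptions:

```python
def cores_in_budget(tdp_w, base_core_w, freq_scale):
    """Cores that fit in a TDP budget, assuming per-core power ~ f^3
    (a rough rule of thumb, since Vdd roughly tracks frequency)."""
    per_core_w = base_core_w * freq_scale ** 3
    return int(tdp_w // per_core_w)

# 100 W budget, 20 W per core at full clock: 5 cores fit.
print(cores_in_budget(100, 20, 1.0))  # 5
# At half the clock each core draws ~2.5 W, so 40 cores fit -
# roughly 4x the aggregate core-GHz, if the workload parallelizes.
print(cores_in_budget(100, 20, 0.5))  # 40
```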

I don't have much idea about how all this works with asynchronous logic,
perhaps someone else can comment.

