
Intel, TSMC and other chipmakers weigh extreme ultraviolet lithography - Lind5
http://spectrum.ieee.org/semiconductors/devices/leading-chipmakers-eye-euv-lithography-to-save-moores-law
======
skrebbel
ASML has been saying "we'll have EUV by next year!" for a whole bunch of years
now. Most people around the company believe they'll pull it off, but maybe
they won't. ASML has a culture of selling machines they can't produce (and
then working like _maniacs_, 2000 engineers at the same time, to pull it
off anyway before the delivery date six months from the sale date, and then
shipping a
half-working machine along with 5 engineers plus a remote force of 1000
engineers trying to get it to actually meet its requirements while it's being
built up in the customer's fab).

The sales folks don't care - they just sell. History has shown that time and
time again, if they promise to deliver, they do - through sheer force of will
and money (turns out the Mythical Man-Month isn't so mythical if you're
willing to put 30x the cost and people on it that are technically
necessary). So when the researchers said they thought they could pull EUV off,
the sales guys went ahead and sold it.

And the customers bought it, as you can read in this article. But if ASML
_can't_ pull it off, then we'll see a bunch of really interesting changes in
the semicon landscape, I think. ASML isn't the only company that made the full-on
bet on EUV. If ASML can't deliver, then Intel may have an even bigger problem
than ASML. I'm _really_ curious what would happen then, almost to the point of
hoping EUV will catastrophically fail.

ASML has no credible competition in this space.

~~~
agumonkey
The first paragraph reminded me of most software project stories from the
old days. Fun.

There are a bunch of other approaches that have been popping up recently,
IIUC.

------
Animats
The "light source" is a very hard problem. "Extreme ultraviolet" is really
"soft X-rays." Until recently, it took a synchrotron to generate those. Now
there's a complicated scheme where a laser vaporizes droplets of tin (at
200,000 °C) and the plasma emits soft X-rays.[1] It's amazing that's usable in
a manufacturing process at all. It's more like a physics experiment intended
to run for short periods.
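
A quick sanity check on the "soft X-rays" claim: production EUV tools work
at 13.5 nm, where a photon carries roughly 92 eV, over an order of magnitude
more than the 193 nm deep-UV light used today and right at the conventional
X-ray boundary. A minimal back-of-the-envelope sketch in Python:

    # Photon energy E = h*c / lambda, converted to electronvolts.
    H = 6.626e-34   # Planck's constant, J*s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electronvolt

    def photon_energy_ev(wavelength_m):
        return H * C / wavelength_m / EV

    for label, nm in [("EUV lithography", 13.5),
                      ("typical X-ray cutoff", 10.0),
                      ("deep UV (ArF)", 193.0)]:
        print("%s: %.1f nm -> %.0f eV" % (label, nm, photon_energy_ev(nm * 1e-9)))

    # EUV lithography: 13.5 nm -> 92 eV
    # typical X-ray cutoff: 10.0 nm -> 124 eV
    # deep UV (ArF): 193.0 nm -> 6 eV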

I had hope for e-beam lithography, which works fine and has been able to get
down to similar resolutions for years, but is just too slow for production. No
masks, just writing the wafer with a scanning beam under computer control.
Writing is one pixel at a time, which is why it's slow.
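
To put "too slow" in perspective, here is a rough estimate with assumed
illustrative numbers (10 nm pixels, a 300 mm wafer, and an optimistic 1 GHz
single-beam write rate):

    import math

    # Rough e-beam direct-write time per wafer (illustrative numbers only).
    WAFER_RADIUS_M = 0.15   # 300 mm wafer
    PIXEL_M = 10e-9         # assumed 10 nm pixel pitch
    PIXEL_RATE_HZ = 1e9     # assumed 1 GHz single-beam write rate

    pixels = math.pi * WAFER_RADIUS_M ** 2 / PIXEL_M ** 2
    hours = pixels / PIXEL_RATE_HZ / 3600

    print("%.1e pixels -> %.0f hours per wafer" % (pixels, hours))
    # ~7.1e14 pixels -> ~196 hours per wafer, versus the 100+
    # wafers/hour a production optical scanner manages.

Multi-beam e-beam tools attack exactly this bottleneck, but the gap to close
is several orders of magnitude.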

[1] [http://spie.org/newsroom/4493-making-extreme-uv-light-sources-a-reality](http://spie.org/newsroom/4493-making-extreme-uv-light-sources-a-reality)

~~~
Baeocystin
individually tracked, double-pulsed droplets of tin...

I know the article you linked lists the many issues that arise, but that
they would even try such a technique demonstrates just how hard the problem
of further photolithography improvement is.

~~~
jacquesm
It also illustrates how desperate the industry is for another tick on the
clock and how much money stands to be made for the company that cracks this in
an economically viable way.

------
cushychicken
Man, this is a great follow-on to the article about Intel's business model
that floated up to the front page a few weeks ago. It really illustrates how
Intel is betting the farm, year after year, on reliable performance
increases. Even with how many semiconductors I build into products, I can't
help but boggle at the sheer scale of the semiconductor industry's
investments. $1.38 _billion_ committed to a research project that isn't set
to pay off for three years is a rare thing in the private sector.

On a less fanboy note - I'd have loved to see a more detailed description of
how they ship the EUV tools. I can only imagine the logistical headache
involved for Intel - they spec out all of their fabs to be identical for
quality reasons. Shipping nine school-bus-sized, vacuum-sealed containers to
some far corner of the globe has to be a _shitload_ of work. And then you have
to set it up when it finally gets there! In _vacuum_! And then when you're
done, you have to do it again for _each Intel fab!_

(Fanboy rant over now, I swear, I just get really excited about making all
these minuscule things for some reason.)

~~~
mjevans
Intel can afford to, and also has to, make that 'bet' because that is their
competitive advantage.

Intel has 'the best' chips because they literally pay the price for being,
at least on their headline-grabbing products, a full process generation
ahead of more or less the entire rest of the semiconductor industry.

~~~
IBM
Does Intel still have a competitive advantage? It seems like they've hit a
wall and competitors like TSMC and Samsung have basically caught up.

I think they've acknowledged that they're slowing down with their new strategy
[1].

[1] [http://arstechnica.com/information-technology/2016/03/intel-retires-tick-tock-development-model-extending-the-life-of-each-process/](http://arstechnica.com/information-technology/2016/03/intel-retires-tick-tock-development-model-extending-the-life-of-each-process/)

~~~
deelowe
Won't those competitors hit a wall as well? Seems like Intel doesn't need to
be fast, just faster than the competition. I imagine the rate of change will
slow for the whole industry.

~~~
IBM
If they aren't a generation ahead as they have been then I think it ceases to
be a competitive advantage. Then they would have to start asking questions
like "Why not fab Intel chips at TSMC or Samsung if they offer better
pricing?"

They've already decided they're going to fab ARM chips, so I think they've
already come to this conclusion.

~~~
baq
Here's the trick: they already do. Only the highest-margin products get
fabbed at Intel's own fabs.

~~~
petra
Is there some x86 fabbed outside Intel?

------
narrator
What if computer technology now is like space technology in the late 70s?
After 40 years of continuous progress, suddenly the rate of improvement in
technology completely flatlines and all the future predictions of going to the
outer planets and such don't come true.

How will the future be different?

~~~
peller
The semiconductor manufacturing processes might be flatlining a bit, but
that's not entirely synonymous with "computer technology". One recent example
we've already witnessed was in the GPU sector. They (nVidia and AMD) were
stuck on 28nm for at least 5 years, and instead of just being able to rely on
the "standard improvements" (by way of Moore's law) to get better performance
at lower power, they had to rely on architectural improvements. nVidia fared
significantly better, managing to _double_ performance-per-watt _on the same
manufacturing node_ with Maxwell (GTX 980). Is such a feat easily repeatable?
Very unlikely. But it also goes to show that when these companies can rely on
improvements in the manufacturing process for their speed/efficiency boosts,
they don't necessarily put as much effort into architectural improvements as
they otherwise might.

If you haven't heard of the Mill CPU, it's one example of completely
rethinking things:
[http://millcomputing.com/docs/belt/](http://millcomputing.com/docs/belt/) Of
course, the problem there is the missing software/OS toolchain. But it also
points towards there being some huge inefficiencies present in "classic" von
Neumann architectures, and to me, the possibility that there's still room for
things to improve.

~~~
paulmd
The next "total rethink" of GPUs is multiple dies on a single package. Not
just for HBM, there's no reason you can't have the dies "hanging off" the
interposer (supported by an inactive substrate), only overlapping the
interposer on the regions necessary for chip bumps. Essentially each die
becomes a large SMX Engine or other hierarchal unit that is marshalled by some
central memory controller or other controller that presents the illusion of a
single GPU rather than SLI/Crossfire.

This is actually the direction AMD is going with Navi, not because of the
performance gains per se but because it helps yields big time. You can pre-bin
your chips, then stitch a bunch of small chips into medium chips while keeping
your yields high.
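
The yield win is easy to see with the standard Poisson defect model (die
yield ≈ exp(-area × defect density)); the defect density below is just an
assumed illustrative figure:

    import math

    # Poisson yield model: fraction of defect-free dies = exp(-A * D0).
    D0 = 0.1  # assumed defect density, defects per cm^2 (illustrative)

    def die_yield(area_cm2):
        return math.exp(-area_cm2 * D0)

    print("600 mm^2 monolithic die: %.0f%% yield" % (100 * die_yield(6.0)))
    print("150 mm^2 chiplet:        %.0f%% yield" % (100 * die_yield(1.5)))
    # ~55% vs. ~86%; and since you pre-bin and only package known-good
    # chiplets, the small-die advantage compounds across the package.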

In theory you can scale up for quite a while, though in the long term you
will eventually be limited by clock degradation/signal propagation time.

The short-term problem is heat and power. This doesn't help efficiency gains
per se. If you are stitching together four 600mm^2 dies then you are going to
be pulling 1000-1200W and dumping that back out into your cooling system. For
US consumers, their circuit breakers are a much more immediate limit. Most
household circuits are 15A @ 120V, and that's an _instantaneous_ limit. You
are not supposed to continuously pull more than 80% of a circuit's
instantaneous rating, so that's 12A (1440W at the wall). Factor in the losses
from the PSU's 80% efficiency and you are now talking 1152W continuously.
Again, that works out to about four 600mm^2 GPU dies, plus some power budget
for the CPU and so on. And you'd probably need to take drastic measures to
keep that cool - that's a lot of heat in a small surface area.
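
The breaker arithmetic checks out; here is the same budget as a sketch (the
80% PSU efficiency and the four-die split are the assumed figures from
above):

    # Continuous power budget on a 15 A / 120 V household circuit.
    BREAKER_AMPS = 15
    VOLTS = 120
    CONTINUOUS_DERATE = 0.8  # the 80% rule for continuous loads
    PSU_EFFICIENCY = 0.8     # assumed, per the comment above

    wall_watts = BREAKER_AMPS * VOLTS * CONTINUOUS_DERATE  # 1440 W at the wall
    component_watts = wall_watts * PSU_EFFICIENCY          # 1152 W delivered

    print("continuous wall draw: %d W" % wall_watts)
    print("delivered to system:  %d W" % component_watts)
    print("per die, 4 dies:      %d W (before CPU etc.)" % (component_watts / 4))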

------
forbin_meet_hal
"Extreme Ultraviolet Lithography..."

Because saying "We're using x-rays" just sounds too scary right now...

~~~
mikeyouse
I doubt that has anything to do with it. It's probably due to the 13.5 nm
wavelength they're using, with a <10 nm wavelength being the typical cutoff
for X-rays.

------
ant6n
So one thing I wonder: if you lose 30% of the energy with every mirror, is
it really necessary to use a dozen mirrors between the laser and the wafer?

Getting rid of two mirrors would double the energy at the wafer; removing
six mirrors would increase it by almost a factor of ten.
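
The numbers follow from compounding the ~70% reflectivity per mirror (the
30% loss figure comes from the comment above):

    # Energy surviving a chain of EUV mirrors at ~70% reflectivity each.
    R = 0.7

    for n in (12, 10, 6):
        print("%2d mirrors: %.1f%% of source energy reaches the wafer"
              % (n, 100 * R ** n))
    # 12 mirrors: 1.4%, 10 mirrors: 2.8%, 6 mirrors: 11.8%

    print("gain from dropping 2 mirrors: %.2fx" % (1 / R ** 2))  # 2.04x
    print("gain from dropping 6 mirrors: %.1fx" % (1 / R ** 6))  # 8.5x

The catch is that nothing usefully transmits or refracts at 13.5 nm, so
every optical element has to be a multilayer mirror, and the mirror count is
dictated by the imaging requirements rather than by choice.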

------
aceperry
EUV has been talked about for so long (I first heard about it in college,
in the 1990s) that I'm just amazed it's finally coming to fruition. Good to
see that the technology has made such great strides.

