
World’s biggest chip created to meet demands of AI - banjo_milkman
https://www.ft.com/content/3ab2fe9c-c242-11e9-a8e9-296ca66511c9
======
pjc50
The general approach is called "wafer scale", and it's not new:
[https://www.extremetech.com/extreme/286073-building-gpus-out...](https://www.extremetech.com/extreme/286073-building-gpus-out-of-entire-wafers-could-turbocharge-performance-efficiency)

However, one of the longstanding problems is yield. A whole wafer will inevitably contain a number of defects, so a wafer-scale system must be able to disable or disconnect faulty subsystems.
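To put rough numbers on this: under a simple Poisson yield model, the probability that any one small core is defect-free is exp(-A·D0). All the figures below (defect density, core area, core count) are purely illustrative assumptions, not Cerebras data:

```python
import math

# Illustrative assumptions, not vendor figures.
defect_density = 0.1   # defects per cm^2
core_area_cm2 = 0.05   # area of one small core, cm^2
num_cores = 400_000    # cores on the wafer

# Poisson yield model: P(a given core has zero defects) = exp(-A * D0).
core_yield = math.exp(-defect_density * core_area_cm2)

# Expected number of cores that must be disabled or routed around.
expected_bad = num_cores * (1 - core_yield)

print(f"per-core yield: {core_yield:.4%}")
print(f"expected faulty cores to disable: {expected_bad:.0f}")
```

With these made-up numbers only a fraction of a percent of cores are bad, which is why disabling small units plus redundant routing can rescue an otherwise unsellable wafer.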

The use of this for AI raises the interesting possibility of "learning around"
some kinds of defect, although it will still be necessary to disconnect bits
with short circuits in them.

It's also quite expensive simply to buy all that area, at least $10k per
wafer. You save a bit on packaging and building a carrier PCB for it, but not
a great deal.

------
AdamJacobMuller
> It also eats up as much electricity as all the servers contained in one and
> a half racks

Seems like such a large chip is going to pose cooling issues? 1.5 racks, let's
generously say they mean lower-power racks and are perhaps talking about
7.5kW, in a single chip? Seems like it would require some kind of water block
with sub-zero cooling...
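One mitigating factor worth sketching: spread over a full wafer, even 7.5 kW is a modest power density per unit area. Assuming a standard 300 mm wafer (an assumption; the article doesn't state the die size) and the comment's 7.5 kW estimate:

```python
import math

power_w = 7500.0              # the "1.5 racks, lower-power" estimate above
wafer_diameter_mm = 300.0     # standard wafer size; assumed, not from the article

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2  # ~70,700 mm^2
density = power_w / wafer_area_mm2                        # W per mm^2

# Ballpark comparison: a ~200 W desktop CPU on a ~200 mm^2 die
# is around 1 W/mm^2, roughly 10x denser than this.
print(f"power density: {density:.3f} W/mm^2")
```

So the per-area heat flux is actually lower than a typical desktop CPU; the hard part is extracting the *total* heat from one package, which is why exotic cold plates get discussed rather than sub-zero temperatures per se.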

------
rwmj
Does anyone remember when "wafer-scale integration" was big - in the 1980s?
[https://en.wikipedia.org/wiki/Wafer-scale_integration](https://en.wikipedia.org/wiki/Wafer-scale_integration)

------
givinguflac
This title would also make a good scifi movie subtitle.

------
bufferoverflow
Paywalled.

Try this:

[http://archive.li/2SOos](http://archive.li/2SOos)

