
Tearing Apart Google’s TPU 3.0 AI Coprocessor - tristanj
https://www.nextplatform.com/2018/05/10/tearing-apart-googles-tpu-3-0-ai-coprocessor/
======
jacksmith21006
The TPU 2 was about half the cost of using Nvidia GPUs for the same work.

[https://medium.com/@8fee9a760280/c2bbb6a51e5e](https://medium.com/@8fee9a760280/c2bbb6a51e5e)

It will be interesting to see how much further ahead Google is now with the 3.

But most impressive is that they are able to offer WaveNet at a cost
competitive with the older TTS techniques used by everyone else.

Generating 16k samples per second through a neural network in real time is
just hard to believe possible. Nvidia has their work cut out for them.

------
tehsauce
Their version of fp16 (bfloat16 as they call it) is very interesting. 7 bits
of mantissa gives only about 2 decimal digits of precision!
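You can see this with a rough simulation: a bfloat16 keeps the float32 sign and
8-bit exponent but only the top 7 mantissa bits, so truncating a float32 to its
upper 16 bits approximates the format (real hardware typically rounds to
nearest rather than truncating; this sketch is an illustration, not Google's
implementation):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Approximate bfloat16 by keeping only the top 16 bits of the
    float32 representation: sign (1) + exponent (8) + mantissa (7).
    Simple truncation, not round-to-nearest."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits & 0xFFFF0000))[0]

# With 7 mantissa bits, relative spacing near 1.0 is 2**-7 ~= 0.008,
# i.e. roughly 2 significant decimal digits.
print(to_bfloat16(1.004))    # a 0.4% change below 2**-7 is lost -> 1.0
print(to_bfloat16(3.14159))  # -> 3.140625
```

The dynamic range, however, matches float32 (8-bit exponent), which is why it
works well for deep learning gradients despite the coarse precision.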

------
godelmachine
This is a very interesting read. I wonder if Google has an in-house
fabrication facility for all of its TPUs.

~~~
wmf
They don't; they probably use TSMC.

