
Compiling Julia to TPUs [pdf] - KenoFischer
https://arxiv.org/abs/1810.09868
======
kwertzzz
This is very impressive work. I am wondering why it was compared against
PyTorch (just on the CPU) and not against TensorFlow (on CPU and TPU), or am I
missing something?

~~~
chrispeel
Good question.

I suspect the point of this article is that tools other than the nominal
TensorFlow-based workflow can be used to access TPUs. I.e., the article shows
that another workflow exists to access the capabilities of the TPU, which are
(to some degree at least) already known. Keno Fischer says [1] they'll have
other benchmarks out soon.

[1]
[https://twitter.com/KenoFischer/status/1054932277824774145](https://twitter.com/KenoFischer/status/1054932277824774145)

~~~
kwertzzz
OK, I see. In any case, even if the performance does not match TensorFlow on
TPU, this is a major achievement. After all, Google made the TPUs specifically
for TensorFlow. I would not have imagined that it was so "easy" to replace
LLVM with XLA.

------
balsam
Is it still using LLVM as the backend (ultimately)?

~~~
KenoFischer
The XLA:TPU compiler is not open source, so it's anyone's guess (unless you
work at Google), but my understanding is that it does NOT use LLVM under the
hood.

