There are tons of interesting things going on in TensorFlow from a programming language perspective. It has optimization at the low level, like CUDA and SIMD back ends via the Eigen library [1] (which is pretty crazy C++ metaprogramming in its own right).
But it also has optimization at the high level / cluster level, e.g. deciding which nodes to put computations on, to minimize data movement across various networks, etc.
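To make the placement point concrete, here's a minimal sketch using the TF1-style graph API (the job/task device names and the gRPC address are hypothetical; a real setup needs a running cluster):

    import tensorflow as tf  # TF1-style API (tf.compat.v1 in TF2)

    # Pin the matrix multiply to one task's GPU and the reduction
    # to another task's CPU; device names here are illustrative.
    with tf.device('/job:worker/task:0/device:GPU:0'):
        a = tf.random_normal([1000, 1000])
        b = tf.matmul(a, a)

    with tf.device('/job:worker/task:1/device:CPU:0'):
        c = tf.reduce_sum(b)

    # The runtime partitions the graph across the two tasks and
    # inserts Send/Recv nodes for the cross-network tensor `b`.
    with tf.Session('grpc://localhost:2222') as sess:  # hypothetical address
        print(sess.run(c))

Where those Send/Recv edges end up is exactly the kind of data-movement decision the runtime gets to optimize.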
It also has multiple front ends. Python is the main one, but IIRC people were developing others (maybe not at Google).
I worked adjacent to the TensorFlow team, and A LOT of people had BOTH ML and PL skills [2]. It's not an either-or thing. It's best when you have the same person with both sets of expertise.
I think a big problem with large parts of the academic PL community is that they're not exposed enough to real applications. Difficulty isn't proportional to real-world benefit: it's certainly difficult to invent type systems that statically detect minor problems, but that doesn't mean doing so is important for creating and maintaining software. (Sorry, had to rant about that.)
Machine learning is a domain rife with programming language problems, but of course it takes a long time to develop that expertise. I'm sure Lattner would be a good person to synthesize knowledge in the different domains.
[1] http://eigen.tuxfamily.org/index.php?title=Main_Page

[2] edit: I should really say ML and distributed computing skills. But most people with distributed computing skills know a decent amount about programming languages; they overlap in MapReduce-type big data frameworks too.
Jeff Dean got his PhD under Craig Chambers doing PL work; Sanjay Ghemawat was also a compiler guy. They went on to MapReduce, and Jeff Dean at least is heavily involved in TensorFlow.
Google hires a lot of PL PhDs to do work that isn't very related to PL (we joke about PL PhDs working on ads a lot). One could guess that they just make good developers because of their experience, but the specific knowledge might not be much in demand.
Right, and Craig Chambers later joined Google. He led/developed Flume [1] (a Java framework on top of MapReduce) and the unfortunately named Cloud Dataflow [2] (which aims to unify streaming and batch computation).
I say unfortunate because pretty much all the big data frameworks could be called "cloud dataflow", including TensorFlow itself.
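For what it's worth, the Dataflow programming model was later open-sourced as Apache Beam, where the same pipeline definition can run on a batch or a streaming runner. A minimal word-count sketch in the Beam Python SDK (the in-memory Create source is a stand-in for a real batch or streaming source):

    import apache_beam as beam

    # The same pipeline runs on a batch runner or a streaming
    # runner; only the source/sink and the runner change.
    with beam.Pipeline() as p:
        (p
         | 'Read' >> beam.Create(['to be or not to be'])
         | 'Split' >> beam.FlatMap(lambda line: line.split())
         | 'Pair' >> beam.Map(lambda word: (word, 1))
         | 'Count' >> beam.CombinePerKey(sum)
         | 'Print' >> beam.Map(print))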
People using Google Cloud might recognize the name of VP Urs Hölzle, who worked on the OO language Self in the '90s with a lot of the same people [3].
And AFAIK that's where V8 came from... Urs hired his former colleague Lars Bak and told him to write a fast JavaScript engine for Chrome.
Wasn't V8 developed by Bak & co in a separate company, which was bought by Google? (Didn't find any source, but based on what I thought I read in the news at that time ...)
V8 wasn't developed by a separate company, but Bak did bring at least one of his colleagues along to Google (I'm pretty sure just as new employees to Google). And he had another company that worked on a Smalltalk VM called OOVM, targeted at embedded devices, right before he joined Google.
The question was why a PL developer has seemingly turned away from the domain in which he's prominent, to work on machine learning, where he has no prominence to my knowledge. His resume does not explain it very well, or at all.
That's what TensorFlow is. It happens to use the Python interpreter as the front end, and for metaprogramming, but the language has its own semantics (see the paper above).
You could probably invent your own syntax for it (and I'm sure someone has), but that's a small part of the picture.
For example, see "A Computational Model for TensorFlow" https://research.google.com/pubs/pub46196.html
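To see the metaprogramming angle concretely, here's a minimal TF1-style sketch: the Python loop below runs at graph-construction time and just emits nodes, like a macro expanding; TensorFlow's runtime then evaluates the resulting graph with its own semantics:

    import tensorflow as tf  # TF1-style API (tf.compat.v1 in TF2)

    x = tf.placeholder(tf.float32, shape=[None])

    # This Python loop runs at graph-construction time: it emits
    # three `mul` nodes into the graph.
    y = x
    for _ in range(3):
        y = y * 2.0  # operator overloading builds graph nodes

    print(y)  # a symbolic Tensor, not a value

    # Only the runtime actually evaluates the graph.
    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [8.0, 16.0]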