The biggest barrier to deploying this kind of thing is probably ML skepticism in the compiler world.
* achieves a 22.6% higher average reduction in cost than the LLVM compiler (as measured with its own cost model); and
* achieves a geometric mean runtime speedup of 1.015× on the NAS benchmark suite when compared to LLVM’s SLP vectorizer.
In short, the DNN outperforms human-tuned heuristics and human-designed algorithms on an NP-hard problem.
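For context on the 1.015× figure above: a geometric mean is the standard way to aggregate per-benchmark speedup ratios, because it treats a 2× win and a 0.5× loss as canceling out. A minimal sketch (the per-benchmark ratios below are hypothetical, not the paper's data):

```python
import math

def geometric_mean_speedup(speedups):
    """Geometric mean of speedup ratios: exp of the mean of the logs."""
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Hypothetical per-benchmark speedups, NOT the paper's actual numbers:
ratios = [1.05, 0.99, 1.02, 1.00]
print(round(geometric_mean_speedup(ratios), 4))  # → 1.0147
```

Note that a geomean of 1.015 is consistent with some individual benchmarks regressing below 1×.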
I find that the big problem with optimizers these days is no longer performance. I would gladly trade even 2-3% performance for assurance that the optimizer will actually keep working when I move bits of code around a little, that it won't slow things down by a factor of 2-3x.
It would be illuminating to see approximate compile times. The 1.015× number comes from 10 rollouts; speedups are below 1× with a single rollout. How long does goSLP take compared to LLVM?
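The rollout trade-off here is just best-of-N sampling: sample N candidate vectorization plans from the stochastic policy and keep the one the cost model scores cheapest, so more rollouts can only improve (never hurt) the chosen plan, at the price of N× the search time. A toy sketch, assuming hypothetical stand-ins for the learned policy (`sample_plan`) and the compiler's static cost model (`cost`):

```python
import random

def sample_plan(rng):
    """Hypothetical stochastic rollout: a sequence of pack/no-pack
    decisions (stand-in for sampling the learned policy)."""
    return [rng.random() < 0.5 for _ in range(8)]

def cost(plan):
    """Toy cost model: pretend each unpacked statement costs 1."""
    return plan.count(False)

def best_of_n(n, seed=0):
    """Sample n rollouts and keep the cheapest plan."""
    rng = random.Random(seed)
    plans = [sample_plan(rng) for _ in range(n)]
    return min(plans, key=cost)

# The single rollout is among the 10, so best-of-10 is never worse,
# but compile time grows roughly linearly in the rollout count.
print(cost(best_of_n(1)), cost(best_of_n(10)))
```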
The actual title is "Compiler Auto-Vectorization with Imitation Learning". It is currently titled "MIT researchers develop new technique to generate compiler optimizations [pdf]". Please don't do this.