
Polytope Model - rayxi271828
https://en.wikipedia.org/wiki/Polytope_model
======
See also: [https://polyhedral.info](https://polyhedral.info)

The ideas here go back a long way. As mentioned on Wikipedia, the Polly
project for LLVM uses this approach.

There is also work for MLIR to use the polyhedral model
([https://github.com/tensorflow/mlir/blob/master/g3doc/Rationa...](https://github.com/tensorflow/mlir/blob/master/g3doc/RationaleSimplifiedPolyhedralForm.md)).

~~~
macawfish
This is really cool and good to know about. I hadn't heard of this technique
before, but I've caught some interest in SPIR-V lately while learning about
WebGPU. Well, I just noticed that Polly is used in the LLVM SPIR-V backend...
Makes sense! [https://github.com/KhronosGroup/LLVM-SPIRV-Backend/blob/mast...](https://github.com/KhronosGroup/LLVM-SPIRV-Backend/blob/master/polly/lib/Analysis/PolyhedralInfo.cpp)

I'm very excited about the potential for writing GPU code from whatever high
level language is convenient. (Which will not be WSL, by the way).

~~~
jdoerfert
It would be news to me that the PolyhedralInfo interface is actually used
anywhere. As the file comment says, it was "work in progress" a few years
ago, and it shares the same drawbacks as Polly (unfortunately).

Polyhedral development in LLVM stagnated a while ago; maybe we'll find the
people and time to actually land PolyhedralValueInfo (see, for example, this talk:
[https://www.youtube.com/watch?v=xSA0XLYJ-G0](https://www.youtube.com/watch?v=xSA0XLYJ-G0)).

~~~
Cladode
Albert Cohen from Google gave a really good lecture on polyhedral compilation
in the real world at PLISS [1] this year: slides [2], lecture videos [3, 4].

A core problem of the polyhedral approach is that the very thing that makes it
so appealing (being expressive enough to enable many interesting program
transformations) is costly for large programs, making the overall optimisation
process, and hence the compiler, hard to scale. Naturally, this difficulty
makes for interesting research problems.

[1] [https://pliss.org/](https://pliss.org/)

[2]
[https://pliss2019.github.io/albert_cohen_slides.pdf](https://pliss2019.github.io/albert_cohen_slides.pdf)

[3]
[https://www.youtube.com/watch?v=mt6pIpt5Wk0](https://www.youtube.com/watch?v=mt6pIpt5Wk0)

[4]
[https://www.youtube.com/watch?v=3TNT5rFVTUY](https://www.youtube.com/watch?v=3TNT5rFVTUY)

------
aasasd
> _no iteration of the inner loop depends on the previous iteration's
> results_

> _a[i - j][j] = ... + a[i - j][j - 1]_

Not seeing how the iteration for _j_ doesn't depend on the result for _j - 1_
if it's invoked right there in the expression.

~~~
rustybolt
This is hard to grasp without a diagram. The detailed example is probably
easier to understand.

The result computed in the current iteration is a[i - j][j]. The result
computed in the iteration before that is a[i - (j - 1)][j - 1], i.e.
a[i - j + 1][j - 1], which is a different element than the a[i - j][j - 1]
being read. The value being read was finished in an earlier iteration of the
_outer_ loop, so the inner iterations don't depend on each other.
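A toy sketch of the kind of loop skew being discussed (the concrete loop body,
bounds, and initial values here are my own assumptions for illustration, not
the article's exact example):

```python
n = 6

# Original nest: the inner loop reads the value the previous inner
# iteration just wrote (a[i][j - 1]), so j must run sequentially.
a = [[1] * n for _ in range(n)]
for i in range(n):
    for j in range(1, n):
        a[i][j] = a[i][j] + a[i][j - 1]

# Skewed nest (new outer index i' = old_i + j): the body now writes
# a[i - j][j] and reads a[i - j][j - 1].  That value was finished in
# outer iteration (i - j) + (j - 1) = i - 1, so the inner iterations
# of any single i are mutually independent and could run in parallel.
b = [[1] * n for _ in range(n)]
for i in range(1, 2 * n - 1):
    for j in range(max(1, i - n + 1), min(n - 1, i) + 1):
        b[i - j][j] = b[i - j][j] + b[i - j][j - 1]

assert a == b  # both iteration orders compute the same result
```

The skew doesn't change which computations happen, only the order they are
grouped in: each diagonal i' = i + j of the iteration space becomes one outer
step whose inner iterations carry no dependence on each other.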

~~~
aasasd
I guess the page would benefit from listing the entire thing in the
replacement code, instead of just the inner line. Then non-mathematical people
like me would maybe have a chance at getting it without ramping up caffeine
dosage to dangerous levels, so as to see the missing parts in the fabric of
mathematical possibilities.

------
Reventlov
That's fun: a few years ago, in one of my courses, our teacher explained this
model to us. He then basically told us, « well, there are about 50 of us
working on this in the whole world, so it's a really small community and
everyone knows everyone » (as everywhere, I guess).

And, indeed, he is on the community page (Christophe Alias).

------
etaioinshrdlu
This might be the most interesting compiler optimization technique I've ever
heard of.

