
Spsc - A Small Positive Supercompiler in Scala, Haskell, Python & Ruby - thesz
http://code.google.com/p/spsc/
======
gte910h
Can someone explain the use case for this?

~~~
rwar
The "simplifications" in a program that can be achieved by a supercompiler:

Redundant code elimination. Performing operations on data that are known at
supercompilation time. Removing intermediate data structures. Transforming
multi-pass algorithms into one-pass algorithms.

[http://sites.google.com/site/keldyshscp/Home/supercompilerco...](http://sites.google.com/site/keldyshscp/Home/supercompilerconcept)
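To make the list concrete, here's a tiny sketch of what "removing intermediate data structures" means. (This is my own illustration, not SPSC output -- SPSC works on its own small first-order functional language, not Python. The naive version is what you'd write; the fused version is the shape a supercompiler could derive.)

```python
def sum_of_squares_naive(xs):
    squares = [x * x for x in xs]   # intermediate list is built here...
    return sum(squares)             # ...then consumed in a second pass

def sum_of_squares_fused(xs):
    # one pass, no intermediate list -- same result
    total = 0
    for x in xs:
        total += x * x
    return total
```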

~~~
gte910h
So basically, complicated optimizations that will often be opaque to a human
(not trying to be adversarial, just understand it)?

~~~
scott_s
Quite the opposite, if I understand the concept. It's more trying to apply the
kinds of optimizations that a human would make when reasoning about how a
program actually runs. When we read code, we often think "Oh, I get it - it's
just trying to do _X_." A good supercompiler would just emit the code to do
_X_.

I can see this coming up when you, say, make library calls in certain ways
that don't need the full functionality of the algorithms. Maybe one parameter
is fixed, or the sequence of the calls is really to achieve something else.

The original paper is freely available off of citeseer, "The Concept of a
Supercompiler":
[http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=B373...](http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=B3735067C269499893CDA1F61B98F942?doi=10.1.1.128.6414&rep=rep1&type=pdf)

I'm currently reading through it. I find it useful to forget the term
"supercompiler," since that implies to me a "meta-compiler," which I don't
think is an accurate way of describing the technique. This could be
incorporated into a regular compiler, you'd just want this as the first
optimization phase. (I think.)

~~~
gte910h
Thanks for going through it; I'd honestly like a less researchery article than
that one to read. That wasn't my field when I was a researcher, and I'm not
particularly interested in wading through articles written for an academic
audience which may or may not allow me to draw the correct conclusions from
trying to read through it.

"Yet another optimizer phase" is what I'm reading here.

~~~
scott_s
Except I've only made it through the introduction, and I'm still not clear on
whether or not their technique implies a compile, execute, profile, re-compile
cycle.

One major difficulty with academic articles is that they have to "sell" their
contribution in order to get the paper accepted. This entails taking the
approach to its logical extension, and talking up the impact it can
potentially have. Otherwise, reviewers may reject it for not being novel
enough. But, this sort of approach is often detrimental to understanding what,
exactly, the researchers actually did. I know that when I read a research
paper, there's often an "aha" moment when I realize "Oh, they just took XYZ
and did ABC to it" which can either be followed up with "that's lame" or
"that's neat!"

~~~
crux_
I'm pretty certain its role is that of a transformer from programs in an
intermediate language to optimized programs in that same intermediate
language. Hence, a pipeline internal to a compiler.

The use of the word "traces" may be the source of your confusion, in light of
the rise of tracing JITs (those actually execute the program). I think a
better term would be "abstract interpretation".

~~~
scott_s
These are the passages from the intro that made me say that:

"So a supercompiler does not transform the program by steps; it controls and
observes (SUPERvises) the running of the machine that is represented by the
program; let us call this machine M1. In observing the operation of M1, the
supercompiler COMPILES a program which describes the activities of M1, but it
makes shortcuts and whatever clever tricks it knows in order to produce the
same effect as M1, but faster."

"A supercompiler would run M1 in a general form, with unknown values of
variables, and create a graph of states and transitions between possible
configurations of the computing system. However, this process (called driving)
can usually go on infinitely."

This implies, to me, a compile, execute, profile, re-compile cycle. But, that
conflicts with the understanding of the technique that I described above. If
that does _not_ describe a compile, execute, profile, re-compile cycle, then
I'd like to know both what, exactly, the author meant by the above, and why he
chose to phrase it as such.

~~~
crux_
Keep in mind the date (and origin) of the paper -- expect some vocabulary
mismatches.

In particular: "run M, in a general form, with unknown values of variables"
would _probably_ be called abstract interpretation today.

(I'm only fudging because I haven't read the rest yet myself. :)
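Here's a rough sketch (my own, not from the paper) of what running "with unknown values of variables" can look like: expressions over a mix of known constants and unknown variables get simplified as far as the known parts allow, without ever executing the program on real input.

```python
def simplify(expr):
    # expr is a nested tuple ("add", l, r) or ("mul", l, r),
    # an int constant, or a string naming an unknown variable
    if isinstance(expr, (int, str)):
        return expr
    op, l, r = expr
    l, r = simplify(l), simplify(r)
    if isinstance(l, int) and isinstance(r, int):
        # both operands known: fold at "compile" time
        return l + r if op == "add" else l * r
    if op == "mul" and (l == 0 or r == 0):
        return 0            # shortcut valid for any unknown value
    if op == "add":
        if l == 0:
            return r
        if r == 0:
            return l
    return (op, l, r)       # residual computation, left for runtime
```

So (mul 2 3) folds to 6, and (add x (mul 0 y)) collapses to just x, even though x and y are never known. A real supercompiler's driving does this over function calls and pattern matches, not just arithmetic, which is where the possibly-infinite graph of configurations comes from.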

Edit to add something I forgot to mention, but should: There's a(t least one)
useful PDF of slides in the repository from the originally linked project.

I did go through that, in something of a hurry -- it walked through the
rewriting of a naive function for appending 3 lists [ app3(a,b,c) =
app2(app2(a,b),c) ] into one that executes in a single pass.
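For concreteness, that transformation looks roughly like this in Python. (Again an illustration of the shape, not SPSC output: the naive app3 builds the intermediate list a++b and then traverses it again, while the fused version touches each element exactly once.)

```python
def app2(a, b):
    # naive list append: one pass over a
    return b if not a else [a[0]] + app2(a[1:], b)

def app3_naive(a, b, c):
    # app2(a, b) builds an intermediate list that the
    # outer app2 immediately traverses again
    return app2(app2(a, b), c)

def app3_fused(a, b, c):
    # walk a once, then b once, then return c --
    # the intermediate a++b list is never built
    if a:
        return [a[0]] + app3_fused(a[1:], b, c)
    if b:
        return [b[0]] + app3_fused([], b[1:], c)
    return c
```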

