

An LLVM backend for GHC: implementation and benchmarks: PDF  - dons
http://www.cse.unsw.edu.au/~pls/thesis/davidt-thesis.pdf

======
yan
That paper is a BS thesis. Some MS students at my old school would
struggle even to understand it. There were always pockets of us doing
interesting things on the side, but this goes far beyond the
expectations for the average student.

~~~
Locke1689
Interesting, because I initially thought it was an MS thesis and was a little
less than impressed. I will say that that was a lot of work for a BS thesis.
Porting GHC is not a simple task!

------
jmah
Registered Nofib:

"The native code generator offers the best performance of the three but it's a
fairly close race, with the LLVM back-end less than 4% behind and the
C code generator less than 7%. While this is slightly disappointing that LLVM
doesn’t provide a performance advantage over the NCG back-end, it is very
promising given its early stage of development that it is able to produce code
of the same level of quality as the NCG and C back-end. Both of these back-
ends benefit from years of work and optimisation, while the new LLVM back-end
has only been in development for around 3 months and only just recently
reached a usable state."

Data Parallel Haskell:

"[...] the LLVM back-end shows a very impressive performance gain for all the
tests, bringing an average reduction of 25% in the runtime of the tests. The
reasons for this are two fold. Firstly the LLVM optimiser and register
allocator simply outperform the NCG. Secondly and of far greater impact is the
use of the custom calling convention used to implement registered mode, as
outlined in section 3.3.4."

Also interesting:

"As can be seen from the table, the LLVM optimiser produces no noticeable
effect on the runtime of the program, indeed for the nofib benchmark suite it
actually slowed it down."

------
pieter
The article uses an interesting way to note speed differences: relative
changes (-20%) rather than multiplicative factors (0.8x). I've not seen that
before and I'm not sure I like it.

For example, something at -99% is 100x as slow as the original, while something
at +100% is only twice as fast. I think the average in particular is
misleading: with the above example, the average would be roughly 0%, or 'no
change', which IMHO isn't fair.

~~~
Smirlouf
He uses the geometric mean (the Nth root of the product of N numbers), which
is mathematically correct in this case.
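
To see why that resolves the objection above, here is a small sketch (in Python, purely for illustration; the thesis itself uses nofib, not this script) comparing the arithmetic mean of percentage changes against the geometric mean of the underlying ratios, using pieter's -99% / +100% example:

```python
import math

def geometric_mean(ratios):
    # Nth root of the product of N ratios (e.g. new_speed / old_speed).
    return math.prod(ratios) ** (1 / len(ratios))

# pieter's example: one result at -99% (ratio 0.01), one at +100% (ratio 2.0)
ratios = [0.01, 2.0]

# Naive arithmetic mean of the percentage changes: looks like ~no change.
naive = sum(r - 1 for r in ratios) / len(ratios)

# Geometric mean of the ratios: reflects the true combined effect.
geo = geometric_mean(ratios)

print(f"arithmetic mean of % changes: {naive:+.1%}")  # about +0.5%
print(f"geometric mean of ratios:     {geo:.3f}")     # about 0.141, a large net slowdown
```

The geometric mean is the standard way to average ratios because it composes multiplicatively: two changes of -50% and +100% correctly average out to exactly 1.0 (no change), whereas the arithmetic mean of the percentages would claim +25%.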

