An LLVM backend for GHC: implementation and benchmarks [pdf] (unsw.edu.au)
44 points by dons on Nov 9, 2009 | 8 comments



That paper is for a BS thesis. Some MS students at my old school would struggle to even understand it. There were always pockets of us who did interesting things on the side, but this would go far above and beyond the expectations for the average student.


Interesting, because I initially thought it was an MS thesis and was a little less than impressed. I will say it was a lot of work for a BS thesis. Porting GHC is not a simple task!


Interestingly, here in .au, MS degrees are generally quite "soft": seemingly anyone can get one part time. The bachelor's degrees require a lot more work. It used to be that a master's was a step towards a doctorate, but now it's more of a business qualification (think MBA) for those who didn't do the right undergraduate degree the first time and don't have 3 or 4 years to spare.


Really, some MS students would struggle to understand it? I'm an undergraduate myself with no real knowledge of Haskell, and I found it a pretty easy read. Major congrats to the student who worked on it, though.


Registered Nofib:

"The native code generator offers the best performance of the three but its a fairly close race, with the LLVM back-end less then 4% percent behind and the C code generator less then 7%. While this is slightly disappointing that LLVM doesn’t provide a performance advantage over the NCG back-end, it is very promising given its early stage of development that it is able to produce code of the same level of quality as the NCG and C back-end. Both of these back-ends benefit from years of work and optimisation, while the new LLVM back-end has only been in development for around 3 months and only just recently reached a usable state."

Data Parallel Haskell:

"[...] the LLVM back-end shows a very impressive performance gain for all the tests, bringing an average reduction of 25% in the runtime of the tests. The reasons for this are two fold. Firstly the LLVM optimiser and register allocator simply outperform the NCG. Secondly and of far greater impact is the use of the custom calling convention used to implement registered mode, as outlined in section 3.3.4."

Also interesting:

"As can be seen from the table, the LLVM optimiser produces no noticeable effect on the runtime of the program, indeed for the nofib benchmark suite it actually slowed it down."


The article notes speed differences in an interesting way, as relative changes (-20%) rather than absolute ratios (0.8x). I've not seen that before, and I'm not sure I like it.

For example, something at -99% is 100x as slow as the original, while something at +100% is only twice as fast. I think the average, especially, is wrong: with the above example, the arithmetic average would be roughly 0%, or 'no change', which IMHO isn't fair.


He uses the geometric mean (the Nth root of the product of N numbers), which is mathematically correct in this case.
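
A minimal Haskell sketch of that definition (geomMean is my name for it, not the paper's):

    -- Geometric mean: the Nth root of the product of N numbers.
    -- Summing logarithms rather than multiplying directly avoids
    -- overflow and underflow when averaging many ratios.
    geomMean :: [Double] -> Double
    geomMean xs = exp (sum (map log xs) / fromIntegral (length xs))

    -- e.g. geomMean [0.5, 2.0] ~= 1.0: a 2x speedup and a 2x slowdown
    -- cancel out, as you'd want from an average of ratios.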


Relative is better than absolute. If you have a benchmark like this:

    for (i = 0; i < 100; i++) { /* do something */ }
Then it could just as well have been this:

    for (i = 0; i < 10000; i++) { /* do something */ }
If you average absolute times, this change makes that benchmark 100x more important relative to the other benchmarks.

IOW, the relative importance of benchmarks in an average is arbitrary. If you measure speedups, you give every benchmark the same weight.
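
To illustrate (a small Haskell sketch; the benchmarks and timings are invented for the example, not taken from the paper):

    -- Two hypothetical benchmarks, timed under an old and a new backend.
    -- The new backend is 2x faster on the first and 2x slower on the second.
    oldTimes, newTimes :: [Double]           -- runtimes in seconds
    oldTimes = [1.0, 5.0]
    newTimes = [0.5, 10.0]

    -- Averaging absolute times lets the longer benchmark dominate:
    absDiff :: Double
    absDiff = sum newTimes - sum oldTimes    -- +4.5 s: new backend looks worse

    -- Per-benchmark speedups don't care how many loop iterations each
    -- benchmark happens to run; every benchmark counts exactly once.
    speedups :: [Double]
    speedups = zipWith (/) oldTimes newTimes -- [2.0, 0.5]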

The "average" he uses, the geometric mean, doesn't have the problem you describe. The geometric mean of a 99% slowdown and a 100% speedup is computed as sqrt(1/100 * 2) ~= 0.141421356. This means that the "average" slowdown is 1/0.141421356 ~= 7.07x.



