
If you look through that thread, you'll find much of the community is pretty skeptical too (including me, same name). Even my citing of 1.5x - 2x was intended to be quite conservative; I'd be happy with getting within a rough order of magnitude. Being willing to add primops is a bit of a wildcard, though; since that means changing what the compiler outputs, with enough work you could just theoretically convince the compiler to emit the same code an optimized library does. It's not just "Haskell of today" that you're looking at (which I think we'd all trivially agree could not come close in performance on this particular task), it's "Haskell of tomorrow with newly-added features for this task", and it's harder to guess what that version's performance could be. (That's nothing to do with Haskell specifically, of course; any language implementation that went that route would see hard-to-predict performance jumps.)


Yes, I understand. I did a quick benchmark to get a ballpark figure for his 10 large integer multiplications, and using GMP's basecase code (which is much slower than the asymptotically faster algorithms it provides) I get about 2 ms on my machine. Assuming his machine is about the same speed as mine, that puts him only a factor of about 4 behind GMP. That's actually pretty good; you need a pretty modern GCC to do that in C. So I will be interested in seeing what he finally releases.
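
In case anyone wants to reproduce a ballpark figure like this, here's a minimal C sketch. GMP's basecase routine (mpn_mul_basecase) isn't part of the public API, so the trick from user code is to keep the operands below the Karatsuba threshold, where mpz_mul dispatches to basecase anyway; the 1024-bit size and repeat count are my own assumptions, since the thread never says how large his integers are.

    /* build: gcc -O2 bench_mul.c -lgmp */
    #include <stdio.h>
    #include <time.h>
    #include <gmp.h>

    int main(void)
    {
        /* Assumed size: 1024 bits (16 limbs on x86-64) stays below
           GMP's Karatsuba threshold, so mpz_mul uses the basecase
           (schoolbook) code.  Treat the output as illustrative only. */
        const mp_bitcnt_t bits = 1024;
        const long reps = 1000000;  /* many reps so clock() can resolve it */

        gmp_randstate_t rng;
        gmp_randinit_default(rng);

        mpz_t a, b, r;
        mpz_inits(a, b, r, NULL);
        mpz_urandomb(a, rng, bits);
        mpz_urandomb(b, rng, bits);

        clock_t t0 = clock();
        for (long i = 0; i < reps; i++)
            mpz_mul(r, a, b);
        clock_t t1 = clock();

        double total_ms = 1000.0 * (t1 - t0) / CLOCKS_PER_SEC;
        printf("%ld mults of %lu-bit operands: %.1f ms total, %.3f ms per 10\n",
               reps, (unsigned long) bits, total_ms, total_ms * 10.0 / reps);

        mpz_clears(a, b, r, NULL);
        gmp_randclear(rng);
        return 0;
    }

Note this won't reproduce the 2 ms figure above: that implies much larger operands run through the internal basecase code, whereas mpz_mul switches to the faster algorithms above the threshold.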

The addition timings don't seem realistic. GMP can add 1000 small integers in 10 µs without even reaching for its assembly primitives.
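
That claim is easy to sanity-check. A minimal sketch, with the addends (1..1000, all word-sized) my own choice, using mpz_add_ui, GMP's cheapest public path for small addends:

    /* build: gcc -O2 bench_add.c -lgmp */
    #include <stdio.h>
    #include <time.h>
    #include <gmp.h>

    int main(void)
    {
        const int n = 1000;
        const long reps = 100000;  /* repeat the whole sum so clock() can resolve it */
        mpz_t acc;
        mpz_init(acc);

        clock_t t0 = clock();
        for (long r = 0; r < reps; r++) {
            mpz_set_ui(acc, 0);
            /* "Small" here means word-sized values, so each addition
               touches only one or two limbs. */
            for (int i = 1; i <= n; i++)
                mpz_add_ui(acc, acc, (unsigned long) i);
        }
        clock_t t1 = clock();

        gmp_printf("sum = %Zd\n", acc);
        printf("%d additions: %.2f us per pass\n",
               n, 1e6 * (t1 - t0) / CLOCKS_PER_SEC / reps);

        mpz_clear(acc);
        return 0;
    }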



