Hacker News

Are there benchmarks of SBCL against LispWorks and Allegro?



Generally they are in the same league. I would expect that SBCL is faster in some benchmarks and applications, but slightly slower in real-life Lisp applications. Allegro CL and LispWorks have been used for some very large commercial applications, where the application developers demanded special optimizations over the years. The implementors actually don't pay much attention to benchmark performance - but application developers pay for real-life performance.

I would expect that SBCL might have an advantage in some areas where it's easier to write fast code, because of its type inference and compiler hints. That's very useful.
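A minimal sketch of the kind of code that point is about: with type declarations, SBCL's type inference can compile the loop down to tight machine code and warn at compile time when inference detects a type mismatch. (The function name and the exact optimize settings here are illustrative, not from the thread.)

```lisp
(defun sum-of-squares (v)
  ;; tell the compiler the element type, so arithmetic can be unboxed
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (loop for x across v do (incf acc (* x x)))
    acc))
```

Compiling this in SBCL with (speed 3) also makes the compiler report where it could *not* optimize, which is a big part of why writing fast code in it is comparatively easy.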

There are some benchmarks on smaller machines like Macs and ARM boards: http://lispm.de/lisp/benchmarks.html


Thanks for the insight.


Not that I know of, but LW 64-bit and SBCL are comparable in my experience. Allegro, I have heard, is slightly slower but with better memory usage (so maybe like CCL?)


I see.

I just mentioned them because many people ignore them as commercial products, yet I would consider them the survivors of the Lisp Machine-era developer workflows.

So I would assume that, in parallel with the graphical developer tooling, they also offer quite good compilers.


The compiler is only 1/3 of the equation. One also needs a fast runtime (memory management, signal handling, implementation of language primitives like bignum arithmetic, ...) and a fast implementation of the core language library - since that is also mostly written in Lisp.
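To illustrate the runtime point: bignum arithmetic is a runtime primitive, not something the compiler can optimize away. Any integer result that no longer fits in a fixnum transparently falls through to the runtime's bignum code, so its speed matters regardless of compiler quality.

```lisp
;; (expt 2 200) does not fit in a fixnum on any platform;
;; the multiplication is carried out by the runtime's bignum routines.
(expt 2 200)
;; => 1606938044258990275541962092341162602522202993782792835301376
```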

Additionally there are roughly three usage modes for the larger CL implementations:

1) interpreter, often used for development/debugging

2) safe compiled code with full error checking and full generics, often with lots of debug info

3) optimized code, with various degrees of unsafeness, with no debug information, often with limited or no generics

Often applications are a mix of 2) and 3), where 3) is usually limited to the portion of the code that actually needs to be VERY fast. This means that the large majority of the code is fully safe and fully generic code -> thus it has a lot of influence on how fast the language/application feels.
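The 2)/3) split above is usually expressed through OPTIMIZE declarations: a safe project-wide default, with aggressive settings applied locally to the hot spots only. (The function below is a made-up example of such a hot spot.)

```lisp
;; project-wide default: mode 2 - safe, checked, debuggable
(declaim (optimize (speed 1) (safety 3) (debug 3)))

(defun hot-inner-loop (a b)
  ;; mode 3, for this function only: fast, unchecked, little debug info
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type fixnum a b))
  (the fixnum (+ a b)))
```

With (safety 0) the fixnum declarations become promises the compiler trusts rather than checks - which is exactly why this mode is confined to small, well-tested regions of the code.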

For example, CLOS provides a lot of generic and extensible machinery, and a CLOS implementation will use a lot of caching to make it run fast -> caches cost memory. When one starts a CLOS-based application, these caches first need to be computed and filled - which might make the application feel a bit slow or sluggish at startup.

So when generating an application, it makes sense to save it with the caches pre-filled - then at application start the caches are simply already loaded and there is no performance hit. That's not something one can see in a simple micro-benchmark, nor does it depend on the 'compiler' (the part which compiles code and generates the machine code) - that's performance from the wider CL system architecture -> one needs to be able to provide that to CLOS applications to improve user acceptance.
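A hedged sketch of that "pre-fill the caches, then save" idea, using SBCL's image-saving facility as a stand-in (LispWorks and Allegro CL have their own delivery/image-dumping tools; the class, the generic function, and warm-caches are all hypothetical names):

```lisp
(defclass point ()
  ((x :initarg :x)
   (y :initarg :y)))

(defgeneric magnitude (p))

(defmethod magnitude ((p point))
  (with-slots (x y) p
    (sqrt (+ (* x x) (* y y)))))

(defun warm-caches ()
  ;; exercise the generic functions the application will actually call,
  ;; filling the dispatch caches before the image is dumped
  (magnitude (make-instance 'point :x 3 :y 4)))

;; (warm-caches)
;; #+sbcl (sb-ext:save-lisp-and-die "app.core" :executable t)
```

When the saved image starts, the dispatch caches are already populated, so first calls to the warmed generic functions skip the cache-fill work described above.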


Which is one of the reasons why CLOS is better than any of the bolted-on object systems of Scheme (COOPS, GOOPS, etc.).

I really want to prefer CL, but I always end up trying to write Scheme in it, which kind of works but quickly becomes weird.




