More precisely, if you tell the SBCL compiler to trust that all data types are as declared, and to omit type checks, it gives you code that's faster than gcc's output with the options at the bottom of this page: http://shootout.alioth.debian.org/u32/benchmark.php?test=spe...
These are "inner loop only" compiler settings, at least the way I'd use them --- but it's still nice to see a concrete demonstration that you don't have to drop down to C code to get maximum performance.
EDIT: (declaim (optimize (safety 0))) also omits array bounds checks, and checks for unbound variables.
1) Here are the timings for the C program you linked (x86 Ubuntu, one core):
spectral-norm C GNU gcc #4
N CPU Elapsed
500 0.09 0.10
3,000 3.31 3.32
5,500 11.13 11.14
2) Here are the corresponding timings "if you tell the SBCL compiler..."
spectral-norm Lisp SBCL #2
500 0.06 0.16
3,000 4.64 4.70
5,500 15.69 15.72
3) Is the program on the page you linked to faster or slower than the SBCL program?
From the original post:
gcc 0.15u 0.00s 0.17r
sbcl 0.08u 0.02s 0.21r
gcc 5.60u 0.00s 5.69r
sbcl 5.18u 0.01s 5.41r
gcc 18.81u 0.01s 19.12r
sbcl 17.42u 0.02s 17.76r
No he doesn't.
The spectral-norm Lisp SBCL #2 program is Lorenzo Bolla's program --- look at the program source code.
>>he does in fact get different timing with the optimization in place<<
He gets different timings, but the only explanation he provides is "So, different numbers on different boxes, which is not at all unexpected."
Maybe you should ask Lorenzo Bolla if he was trying to create misunderstanding by posting one of his old (December 5th, 2010) blog entries to HN ;-)
The benchmarks game website has been showing Lorenzo Bolla's spectral-norm Lisp SBCL #2 since December 8th 2010.
Re Clojure: "This is a 'babel' plot to destroy lisp."
"Pocket Forth is a free Forth interactive-interpretor that runs fine on my Macintosh "Performa 600" (68030-CPU) System 7.5.5."
"The Mac is a desktop-publishing 'appliance' --- considering that you don't have a laser-printer, a Mac is about as useful to you as a bicycle is to a fish. Besides that, you don't seem like the desktop-publishing type of guy --- that is mostly a marketing-department girl thing."
"I really foresee the collapse of civilization. The majority of people in America are motivated entirely by hate, fear, greed and envy, and this situation can't continue indefinitely. This is what I describe in my book, 'After the Obamacalypse,' which is included in the slide-rule package on my web-page."
Another time I was sitting in my van in a parking lot. A skinny Jew walked up to the van, peered inside, then tried to open the door but discovered that it was locked, so he walked away. I got out and walked over to him, and I said: "What the hell do you think you're doing?" He said that he thought it was his friend's van, but he didn't apologize at all, but became prideful and belligerent. When I said, "I think you're a thief," he said: "Look at the way you're dressed; you're the thief!" (I was wearing a hoodie). He told me that if I continued bothering him, he was going to call the police, and he got out his cell-phone. When I said, "I think you were looking for something to steal," he said: "There is nothing in your van worth stealing!" I beat him thoroughly with my fists and left him face down on the sidewalk in his own blood. Somewhat belatedly, he began to cry: "I'm sorry! I'm sorry!"
It seemed like for every Peter Seibel or Kenny Tilton, there were 8 people who had 10% of 100 projects done, were happy to tell you about the anti-lisp conspiracy, and also had alternative health advice.
Second, someone upvoted this??
2. Eh, cataloging the type of wackos that hang out on comp.lang.lisp is something a lot of people enjoy.
2: I just did not expect to be reading this sort of thing on a Tuesday morning on Hacker News. It caught me completely off guard.
Incidentally, this is probably the largest reason why so many people still use -O3 -- it wins in exactly the kind of programs that are used as simple and common benchmarks. It solidly loses on almost everything else.
Do you have any data on that? Most CPU bound programs should have pretty good instruction locality, negating the effects of smaller code. But without some numbers this is pointless guesswork.
The only justification for using -Os in a speed benchmark is "I tried it both with and without the flag, and it was faster this way". I don't see any such assertion.
Really? It seems to me that this is quite enough:
> I’ve just re-run the C benchmark without -Os (only -O3) but the results are the same.
As we know, however, benchmarking can often come down to tuning. If this most basic of compiler options has not been set to the obvious choice for speed, how can we have any confidence that the C code as written is written in an efficient way?
Are we comparing language against language here, or somebody's implementation in one language against somebody's implementation in another?
I note that there appear to be hand optimisations in the C code. Were these done well, or would the compiler have done a better job?
$ gcc -c -Q -Os -O3 --help=optimizers > Os-O3-opts
$ gcc -c -Q -O3 --help=optimizers > O3-opts
$ diff Os-O3-opts O3-opts
The decent ones posted at least bother to do a comparison with several pseudo-representative tasks. This one just goes "hey, I played around with this ONE SPECIFIC TASK NOBODY GIVES A CRAP ABOUT and IT RAN 0.006 MILLISECONDS FASTER THAN IN C! WOOOOOOOOOOOO!"
That said, yes, restrict was added for this kind of thing.
I'm sure he cackled into his pocket protector at the recent PHP/Java/gcc floating point parsing fiasco.
Unfortunately, "Am I FORTRAN?" is the question that goes hand in hand with "Am I realistic for scientific computing?"
c'est la vie.
500 0.07 0.22
3,000 2.34 2.41
5,500 7.86 8.01
I mean, if I could really get C performance out of SBCL (and for my purposes, I can't), I'd sure as hell want to know.
Think cold fusion. Sure, "wolf" has been cried a lot of times, but you're still going to want to know as soon as it happens "for real".
If someone could show me that "yes, your Python programs are now AS FAST AS C!" then of course I'd be ecstatic to hear that; but the posts letting me know that "Python is as fast as C when approximating solutions to problem X, for some X you've never heard of and never will" get kind of old after the 137th time I read them.
For me this is comparable to someone posting about yet another problem in NP that is REALLY FRICKIN' HARD, so probably P=/=NP. I know many problems in NP are hard - you're not adding anything to the discussion by showing me yet another one. Let me know when you have an actual proof that P=/=NP.
If I can write my entire program in LANGUAGEX and just compile the inner loop a magic way and voila, the program runs at 85% of C speed, we have a winner. We can use it in long-running programs that are heavily compute-bound.
This is an article explaining the magic way for a flavor of lisp.
And no criticism implied on the Shootout maintainers, I'm sure that dealing with the submitters is like herding cats :-)
(The project name changed nearly 4 years ago: http://groups.google.com/group/haskell-cafe/msg/61e427146c8d...)
The last program Alexey Voznyuk contributed seemed mostly to be inline assembler with a hint of Lisp - off the deep end :-)