
Let's be realistic here:

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

I would characterize the differences vs. C++:

1.) substantially slower in all but one case
2.) almost always uses more memory, sometimes drastically more
3.) marginally more compact




Let's be realistic here: Code wrapping graphics toolkits will not be a hotspot. Any heavy graphical computation is likely to be done in the graphics toolkit (or rendering engine, etc.) and called from Racket. That's the whole point.
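
As a rough sketch of what that looks like in practice (my example, not anything from this thread), the Racket side of a racket/gui program mostly just wires things up; the drawing below is a single send, and the actual rasterization happens in the underlying native toolkit:

    #lang racket/gui

    ;; Set up a window with a canvas; all of this is thin wrapper code.
    (define frame (new frame% [label "Hello"] [width 300] [height 200]))

    (new canvas%
         [parent frame]
         [paint-callback
          (lambda (canvas dc)
            ;; One call from Racket; the fill/antialiasing work is done
            ;; by the toolkit's native rendering code, not by Racket.
            (send dc draw-ellipse 20 20 200 120))])

    (send frame show #t)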


I think your advice holds in general, but one interesting exception can be implementing a model, in the MVC sense. This can require a lot of communication across the language wrapping, and can definitely become a hotspot. This is less of a problem in toolkits that retain state, but then you pay a cost in maintaining duplicate representations.
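
To make that concrete (a toy sketch of my own, not the parent's code): if the model lives in Racket and the view repaints by asking for every cell, each repaint crosses the wrapper boundary once per cell, and that is where the hotspot can show up:

    #lang racket/gui

    (define rows 1000)
    (define cols 20)
    (define (cell-text r c) (format "~a,~a" r c))  ; stand-in for a real model

    (define frame (new frame% [label "Model/view sketch"] [width 600] [height 400]))
    (new canvas%
         [parent frame]
         [paint-callback
          (lambda (cv dc)
            ;; One draw-text call per cell: thousands of small calls through
            ;; the wrapping layer on every repaint.
            (for* ([r (in-range rows)]
                   [c (in-range cols)])
              (send dc draw-text (cell-text r c) (* c 30) (* r 15))))])
    (send frame show #t)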


You linked to the Alioth Programming Language Game — even they admit it's very flawed for determining much beyond how certain programs implementing certain algorithms perform. The game's highly contrived circumstances are considerably less realistic than the parent's anecdotes or the OP's report of huge space savings with little slowdown. In particular, many of Lisp's code size benefits lie in scaling better than other languages. The Alioth game is possibly a pathological case for the things being measured here.


I don't want to get into a language debate here, but... if you put random people's anecdotes above objective measures, then you can pretty much make any point you want. Alioth may be flawed, but it isn't purely synthetic: the benchmarks are real problems, or at least very similar to real problems. More importantly, it's an objective measure of language performance, compactness, etc., rather than people's "impressions".


I have no interest in a religious war over languages either, so no worry about that. I just don't like to see Alioth's one small data point extrapolated into a trend.

The benchmark tests are toy programs that solve a small set of problems under constraints that create a known bias in favor of languages like C++. They are an objective measure of something, but that something is not language performance and compactness in real-world, non-toy programs that solve problems unlike the ones in the game.

Impressions are admittedly not the best way to gauge such things, but they're better than relying on a test that does not make any attempt to address the question at hand.

My personal heuristic is to assume Alioth is roughly right with a largish margin of error, and then look for anecdotal evidence of specific areas that the game does a particularly poor job of reflecting. For Lisps, code size appears to be a large blind spot based on everything I have seen. Lisp's ability to create mini-languages and very high-level abstractions — a large source of its brevity — is pretty much useless on the scale of the Benchmarks Game.
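
For what it's worth, here is the kind of thing I mean by a mini-language (a toy example of my own, not from any benchmark): a few lines of macro replace a pile of near-identical definitions, and that compression only pays off at a scale the game never reaches:

    #lang racket

    ;; Each (name pred) pair expands into a checked accessor over a hash.
    (define-syntax-rule (define-checked-getters (name pred) ...)
      (begin
        (define (name s)
          (let ([v (hash-ref s 'name)])
            (unless (pred v) (error 'name "bad value: ~e" v))
            v))
        ...))

    ;; One form instead of three hand-written, near-identical functions;
    ;; in a large program this kind of thing compounds.
    (define-checked-getters
      (port    exact-positive-integer?)
      (host    string?)
      (verbose boolean?))

    (port (hash 'port 8080 'host "localhost" 'verbose #f))  ; => 8080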


The explicit bias is that the benchmarks game measures time used by programs. If that bias favors languages like C++ so be it.


I don't know if you're trolling or cocksure, but no, that is not what I was talking about. I said the constraints create a bias, not the measurements themselves. For example, the performance measurements are biased against most garbage collected languages because the rules don't allow any options to fine-tune the GC's behavior (which can make a big difference). Obviously, there are no equivalent rules forbidding people from fine-tuning C++'s manual memory management.
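
To illustrate what I mean by tuning (a sketch of my own, assuming a reasonably recent Racket where collect-garbage accepts a mode symbol): even something as crude as forcing a full collection after setup, or requesting only cheap minor collections near the timed section, can visibly move a GC'd program's numbers, and it's exactly this kind of knob that gets taken away:

    #lang racket

    ;; Allocation-heavy setup...
    (define data (build-list 100000 (lambda (i) (make-vector 16 i))))

    ;; ...then get the garbage out of the way before the timed section.
    (collect-garbage)           ; full (major) collection
    (collect-garbage 'minor)    ; recent Rackets also accept 'minor / 'incremental

    (time
     (for/sum ([v (in-list data)]) (vector-ref v 0)))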



Simply looking at the benchmarks game website shows that your general claim "the rules don't allow any options to fine-tune the GC's behavior" is wrong.

Do you have any other claims that can be checked?


No.

"They" say your thinking is broken:

- if you think it's okay to generalize from particular measurements without understanding what limits performance in other situations;

- if you think it's okay to generalize from particular measurements without showing the measured programs are somehow representative of other situations.


I don't have any experience with Scheme, but the argument was about whether a GUI written in Scheme would be unacceptably slow. A synthetic benchmark that, AFAIK, mostly measures number crunching says nothing about GUI performance.


The shootout is pretty much a worst case for Racket, code size wise.


How do you mean? Can you demonstrate this?

Not attacking you, I'm genuinely curious.



