High-level languages require less boilerplate, news at 11.

You could probably replace it with 10k lines of Python or Ruby too, but it would run like shit compared to the C/C++ version. Because this is HN and they're using Lisp, though, it's news.

-----

Define "run like shit". Would 1.5x-3x the C++ running time be acceptable if your code were 70% more compact and proven secure, not to mention a joy to write?

Not everyone has to trade efficiency for reliability and comfort. Some people really do get to have their cake and eat it too.

-----


Let's be realistic here:

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

I would characterize the differences vs. C++:

1.) substantially slower in all but one case

2.) almost always uses more memory, sometimes drastically more

3.) marginally more compact

-----


Let's be realistic here: code wrapping graphics toolkits will not be a hotspot. Any heavy graphical computation is likely to be done by the graphics toolkit (or rendering engine, etc.) and merely called from Racket. That's the whole point.
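
To make that concrete, here is a minimal sketch using Racket's stock racket/gui library (illustrative code, not from the article): the Racket side only declares widgets and installs callbacks, while layout, painting, and event dispatch all happen inside the native toolkit being wrapped.

    #lang racket/gui
    ;; Minimal sketch: the Racket code declares widgets and wires up
    ;; callbacks; the wrapped native toolkit does the heavy lifting.
    (define frame (new frame% [label "Demo"]))
    (define msg (new message% [parent frame] [label "Ready"]))
    (new button% [parent frame]
         [label "Click me"]
         ;; Runs once per click; never a hotspot.
         [callback (lambda (b e) (send msg set-label "Clicked"))])
    (send frame show #t)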

-----


I think your advice holds in general, but one interesting exception can be implementing a model, in the MVC sense. That can require a lot of communication across the language boundary, and it can definitely become a hotspot. This is less of a problem in toolkits that retain state, but then you pay a cost in maintaining duplicate representations.
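
As a hypothetical illustration (using racket/gui's list-box%, not code from the thread): syncing a model one row at a time crosses the language wrapping once per row, while handing the toolkit the whole list crosses it once.

    #lang racket/gui
    ;; Hypothetical sketch of the model-sync hotspot: every `send` is a
    ;; round trip across the language wrapping into the native toolkit.
    (define frame (new frame% [label "Model view"]))
    (define lb (new list-box% [parent frame] [label #f] [choices '()]))
    (define model (for/list ([i (in-range 10000)]) (format "row ~a" i)))

    ;; Chatty: one boundary crossing per row.
    (for ([row model]) (send lb append row))

    ;; Batched: one crossing for the whole model.
    (send lb set model)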

-----


You linked to the Alioth Programming Language Game — even they admit it's very flawed for determining much beyond how certain programs implementing certain algorithms perform. The game's highly contrived circumstances are considerably less realistic than the parent's anecdotes or the OP's report of huge space savings with little slowdown. In particular, many of Lisp's code size benefits lie in scaling better than other languages. The Alioth game is possibly a pathological case for the things being measured here.

-----


I don't want to get into a language debate here, but... if you put random people's anecdotes above objective measures, then you can pretty much make any point you want to make. Alioth may be flawed, but it's not necessarily synthetic: the benchmarks are real problems, or at least very similar to real problems. More importantly, it's an actual objective measure of language performance, compactness, etc., rather than people's "impressions".

-----


I have no interest in a religious war over languages either, so no worry about that. I just don't like to see Alioth's one small data point extrapolated into a trend.

The benchmark tests are toy programs that solve a small set of problems under constraints that create a known bias in favor of languages like C++. They are an objective measure of something, but that something is not language performance and compactness in real-world, non-toy programs that solve problems unlike the ones in the game.

Impressions are admittedly not the best way to gauge such things, but they're better than relying on a test that does not make any attempt to address the question at hand.

My personal heuristic is to assume Alioth is roughly right with a largish margin of error, and then look for anecdotal evidence of specific areas that the game does a particularly poor job of reflecting. For Lisps, code size appears to be a large blind spot based on everything I have seen. Lisp's ability to create mini-languages and very high-level abstractions — a large source of its brevity — is pretty much useless on the scale of the Benchmarks Game.
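
As a toy illustration of that point (hypothetical code, not from the thread): in Racket a new control construct is a two-line macro, the kind of abstraction whose savings compound with program size and buy nothing in a 50-line benchmark entry.

    #lang racket
    ;; A two-line "mini-language" extension: a new looping construct.
    ;; Savings like this compound across a large codebase; a tiny
    ;; benchmark program never gets to cash them in.
    (define-syntax-rule (repeat n body ...)
      (for ([_ (in-range n)])
        body ...))

    (repeat 3 (displayln "hello"))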

-----


The explicit bias is that the benchmarks game measures the time used by programs. If that bias favors languages like C++, so be it.

-----


I don't know if you're trolling or just cocksure, but no, that is not what I was talking about. I said the constraints create a bias, not the measurements themselves. For example, the performance measurements are biased against most garbage-collected languages because the rules don't allow any options to fine-tune the GC's behavior (which can make a big difference). Obviously, there are no equivalent rules forbidding people from fine-tuning C++'s manual memory management.
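
To illustrate what is at stake (a hypothetical Racket sketch, not an actual shootout entry): even something as crude as forcing a major collection outside the timed region can visibly change a garbage-collected language's numbers.

    #lang racket
    ;; Hypothetical sketch of why GC control matters in micro-benchmarks:
    ;; collect-garbage forces a major collection up front, keeping
    ;; collection pauses out of the measured region.
    (define (timed thunk)
      (collect-garbage) ; settle the heap before the clock starts
      (define start (current-inexact-milliseconds))
      (define result (thunk))
      (printf "~a ms\n" (- (current-inexact-milliseconds) start))
      result)

    (timed (lambda () (for/sum ([i (in-range 1000000)]) i)))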

-----


If the rules don't allow any options to fine-tune the GC's behavior, how do you explain the use of options to fine-tune the GC's behavior here:

http://shootout.alioth.debian.org/u32q/program.php?test=fast...

and here

http://shootout.alioth.debian.org/u32q/program.php?test=knuc...

and here

http://shootout.alioth.debian.org/u32q/program.php?test=mand...

and here

http://shootout.alioth.debian.org/u32q/benchmark.php?test=pi...

and here

http://shootout.alioth.debian.org/u32q/program.php?test=rege...

and here

http://shootout.alioth.debian.org/u32q/program.php?test=revc...

and ...

-----


Simply looking at the benchmarks game website shows that your general claim "the rules don't allow any options to fine-tune the GC's behavior" is wrong.

Do you have any other claims that can be checked?

-----


No.

"They" say your thinking is broken:

- if you think it's okay to generalize from particular measurements without understanding what limits performance in other situations;

- if you think it's okay to generalize from particular measurements without showing the measured programs are somehow representative of other situations.

-----


I don't have any experience with Scheme, but the argument was about whether a GUI written in Scheme would be unacceptably slow. A synthetic benchmark that, AFAIK, mostly measures number crunching suggests nothing about GUI performance.

-----


The shootout is pretty much a worst case for Racket, code-size-wise.

-----


How do you mean? Can you demonstrate this?

Not attacking you; I'm genuinely curious.

-----


> Define "run like shit". Would 1.5x-3x the C++ running time be acceptable if your code were 70% more compact and proven secure, not to mention a joy to write?

No...? Is there another acceptable answer to this? In which cases is such a slowdown from an "upgrade" acceptable to the customer?

-----


In the case that your app isn't CPU-bound, which is many cases.

[EDIT: Changed most to many. But I think arguments about whether or not speed matters are silly in the general case. Arguments about tradeoffs are always going to be so specific to your app that a general argument is fairly meaningless.]

-----


Most cases for whom? There are plenty of fields in which being CPU-bound is the norm rather than the exception. A whole load of assumptions go into any general advice like this; consider making fewer of them today!

-----


That's like objecting to the statement "Most people have two arms, two legs and one head" by replying "For whom? There are plenty of demographics, such as amputees, where having a different number of arms or legs is the norm!" The fact that unusual things are normal for some subset defined by those characteristics is tautological. It doesn't make the general statement false.

-----


> In which cases is such an "upgrade" slowdown acceptable to the customer?

In exactly those cases where the customer cares more about the added benefits (e.g. security, maybe more features) than speed.

For example, I use Firefox even though I've noticed that Chrome is snappier, because Firefox comes with add-ons that provide features I cannot do without. In this case, I have made a tradeoff between speed and features.

-----


Your customer doesn't give a crap if your program takes 0.03 seconds longer to execute, but he does care if you can't fix the bug that keeps bringing his system down in a timely fashion because your codebase is overwhelming.

CPU time is vastly cheaper than programmer time. There is a point where the tradeoff becomes unprofitable, but prioritizing code speed over all else is a recipe for terrible software.

-----


When we're specifically talking about how fast your typical GUI app can render toolbars and such, and the app in question sits idle waiting on user input 99% of the time, then that's a wonderful tradeoff.

If you're doing number crunching or heavy 3D, then it requires further thought, but if the code in question isn't one of your CPU hotspots, the more concise code is totally worth it.

-----


There is no generally useful answer to your question.

It depends on how the slowdowns translate into absolute slowdowns in human-perceptible time, which depends very much on the kind of problem, and on whether a given piece of code is on the "critical path" for anything.

-----


1) When the customer can't tell the difference

2) When the customer benefits outweigh the perceptible slowdown

2a) When the faster code is broken (security or other bugs)

kb

-----



