Hacker News

Good read. Whenever I read stuff like this, I always wonder whether it's always true that dynamically typed languages are slower than statically typed ones. Also, do we have to take it for granted that the higher-level the language, the slower it will be? Or are there exceptions?

Also, it's worth asking whether, for the majority of Python's data-analysis use cases, the ease and flexibility outweigh the slowness.




Always is a bit of a strong word, but yes, it's always true.

Virtual functions in C++, which allow some form of dynamic behaviour, are slower than static function calls because they inherently involve another level of indirection. Static calls are known at compile time: they can be inlined by the compiler and optimized in the context in which they're called. Now, nothing prevents the C++ run-time from trying to do the same thing at run time, but you can relatively easily see that it would have to make some other compromises to do so. Nothing prevents a C++ program from generating C++ code at run time, compiling it, and loading it into the current process as a .so. That's pretty dynamic behaviour, but there's again an obvious price. You can also write self-modifying code. At any rate, static languages are capable of the same dynamic behaviour that dynamic languages are capable of, but you often have to implement that behaviour yourself (or embed an interpreter...).

Fundamentally, a dynamic language can't make the kinds of assumptions a more static language can make. It can try to determine things at run time (à la JIT), but that takes time and still has to adapt to the dynamics of the language. The same line of code "a = b + c" in Python can mean something completely different every time it's executed, so the run-time has to figure out what the types are and invoke the right code. The real problem is that if you take advantage of that, then no one can actually tell what the code is doing and it becomes utterly unmaintainable.

Compounding the problems facing dynamic languages is the fact that CPUs are optimized for executing "predictable" code. When your language is dynamic there are more dependencies in the instruction sequence, and things like branch prediction may become more difficult. It also doesn't help that some of the dynamic languages we're discussing have poor locality in memory (that's an orthogonal issue, though; you could give a dynamic language much better control over memory).

EDIT: One would think it should be possible to design a language with both dynamic and static features, where if you restrict yourself to the static portion it runs just as fast as any other statically compiled language, while still letting you switch to more dynamic concepts and pay the price only when you do.


Apologies if you've already read this, but this paper was absolutely enlightening for me on what a more performant Python would look like: http://julialang.org/images/julia-dynamic-2012-tr.pdf especially section 2 on language design, which discusses how it's actually different.

(And yes, I know I'm just adding to the people plugging Julia here) Among other things, Julia actually avoids forms of dynamism that are in practice rarely employed by programs in dynamic languages. Existing code is immutable, but new code may always be generated if different behaviour is desired.

I'm not entirely sure if I could accurately say Julia allows one to trade between efficiency and dynamism when each is desired, but it does appear to be a much better balanced approach than is normal.


It's not always true in practice. See for instance: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Common Lisp on SBCL is often faster than F# on Mono, even though F# is statically typed, because the SBCL runtime has had a lot more work put into optimising it.

Or, compare F# on Mono with Clojure on the OpenJDK: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Clojure again is faster, due to a more optimised runtime.


Notably, though, both SBCL and Clojure have optional types, so the number of indirections necessary is already substantially reduced, as per the GP. For this to matter in practice, though, type annotations have to actually be commonly used in both Lisps.

A more interesting counterexample would be LuaJIT, since it has to support completely dynamic code with a fair bit of monkey patching going on. But that is more of a showcase for tracing JITs being fairly powerful (for hard-to-predict code bases) than evidence that indirections are cheap.


That's a good point. SBCL at least attempts type inference though, so I often find the code is quite fast without annotations. But then I suppose SBCL with type inference is probably closer to a statically typed language in that sense.



