
On Being Sufficiently Smart - randomwalker
http://prog21.dadgum.com/40.html
======
ced
It's now generally accepted [1] that C code is faster than all but the most
carefully handcrafted assembly. How did that happen? C compilers got
"sufficiently smart".

Extrapolating, I posit that high-level languages will one day be _faster_ than
C, because of the additional implementation freedom they afford.

Consider maps. In C, there is no built-in map. I have to write one myself. So
I choose an algorithm (say, a hash table) and implement it. The compiler is
then forced to do exactly what I asked for... But then, I wonder: would a
binary tree be faster?

Write, profile, debug, iterate... Most of the time, that work isn't done, and
performance suffers.

Now consider maps in Python. When I write d = {2:1, 3:3}, what algorithm does
Python choose? _It may use any data structure it wants_. And indeed, it
already has some heuristics to efficiently handle the most common usage
patterns. Conceivably, it could silently try several different data
structures, profile the results, and choose the best one, JIT-style.

Python will one day be faster than "all but the most carefully-crafted C
code".

[1] But to be honest, I don't know how true that is.

~~~
mzl
While it is very common to use C as the canonical fast language, it has some
warts that really hinder the compiler's optimizations. The main one is of
course pointer aliasing. It can, for example, stop the compiler from
vectorizing a loop that operates on two different arrays, since it cannot
know whether they overlap. Among people doing heavy numerical computation,
Fortran still seems to be very much entrenched as the "fast language".

------
randomwalker
This was mentioned here <http://news.ycombinator.com/item?id=1836221> and I
thought it deserved its own thread.

