something of a side issue, but when was there a trade-off between static checking and performance? fortran and c have pretty much always been the fastest languages around, haven't they? is he referring to assembler?
I think what he's saying is that some programmers have shifted to dynamic languages for productivity as faster processors have made the performance edge of statically-typed languages less critical. He foresees a similar shift away from lock-based concurrency models due to the increased number of cores.
He may be right about the move away from the threads-and-locks model popularized in the 1990s, but I agree with you that it's not such a great analogy.
> I think what he's saying that some programmers have shifted to dynamic languages for productivity as faster processors have made the use of statically-typed languages less critical.
Given that this guy seems to think programs that produce wrong answers are generally acceptable, it's hardly surprising that he also seems to think dynamic languages took over at some point. However, most software written today, and nearly all software written at large scale, is still written in statically typed languages, sometimes with controlled dynamic elements. And much of it doesn't need to be parallelized within a process, because it's I/O bound one way or another anyway.
And of the major stuff that's heavily parallelized, basically all of what I've seen is written in static languages.
With good reason, too. I submit that when you're writing code like that, it's generally because you're trying to do a CPU-bound operation as fast as you can. Which means that unnecessarily spending cycles on type checks that could have been eliminated at compile time is probably not your cup of tea.
It's also what makes it possible for you to write, say, a high-performance floating-point matrix multiplication function and have the compiler emit code that accepts floating-point inputs without having to consider the possibility that the arguments were supplied as strings.
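To make that concrete, here's a minimal sketch in C (the function name and dimensions are just for illustration). Because the element type is fixed at compile time, the compiler can emit straight-line float arithmetic; no "is this operand actually a number?" check has to survive into the inner loop:

```c
#include <stddef.h>

/* Multiply two n-by-n float matrices: c = a * b.
 * The parameter types guarantee at compile time that every element
 * is a float, so the generated code is pure arithmetic -- there is
 * no runtime tag dispatch of the kind a dynamic language would need
 * on every a[i][k] * b[k][j]. */
static void matmul(size_t n, const float a[n][n], const float b[n][n],
                   float c[n][n])
{
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            float sum = 0.0f;
            for (size_t k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }
}
```

Passing a string here is a compile-time type error rather than a per-call runtime possibility, which is exactly the class of check the compiler puts by the wayside.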