Well right now the messy world of user requirements meets the even messier world of legacy languages and poorly behaving libraries. I'm not under any illusions that the first part would change; my hope only pertains to the second part.
For example, the behaviour of null/nil references in most typed languages mentioned in the article has nothing to do with the inherent nature of messy user requirements. Same goes for the lack of sum types: in fact, they're perfect for modelling messy requirements, much better than simulating them with structs and forgetting a check somewhere (I'm just parroting the article here).
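To make that concrete, here's a minimal Haskell sketch of the sum-type point; the Payment type and its constructors are made up purely for illustration:

    data Payment
      = Card String      -- card number
      | Paypal String    -- account email
      | Invoice          -- pay later, nothing extra to carry

    describe :: Payment -> String
    describe (Card n)   = "card ending in " ++ reverse (take 4 (reverse n))
    describe (Paypal e) = "PayPal account " ++ e
    describe Invoice    = "invoice"

With -Wincomplete-patterns the compiler flags a forgotten case; the struct-plus-tag-field simulation just silently does the wrong thing at runtime.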
These issues aren't caused by messy requirements but by messy leftovers from legacy languages. That's fine too; nobody expects us to get it right the first time. But that was 40 years ago, and repeating the same mistakes in languages made in 2009 feels really sad.
I half agree. While new programming languages will make solving today's problems easier, there will be new problems that they won't solve, and so in 20 years people will be moaning about the problems that the then-current languages don't solve.
For example, one of the biggest problems with building websites ten years ago was that you had to slice everything into tiny images because of table-based layouts. Now we've got CSS3 with SASS and Compass and that's no longer a problem. But whereas then, 90% of users were on IE6 with a 15- or 17-inch monitor, now my users are on a multitude of browsers and screen sizes.
If your language is versatile enough, has adaptable syntax, and is powerful at DSLs, it can adapt to future requirements quite well.
Haskell is great at concurrency and parallelism not because the language was designed around them at all, but because the language is well designed and focuses on communicating the programmer's intent rather than implementation details. That intent translates more directly into correct concurrent code than an imperative spec, which is inherently lower-level.
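A small sketch of what I mean, using the parallel package's Control.Parallel.Strategies; the workload here is just a toy placeholder:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Stands in for real work; any pure function behaves the same way.
    expensive :: Int -> Integer
    expensive n = sum [1 .. fromIntegral n]

    main :: IO ()
    main = print (parMap rdeepseq expensive [1000000, 2000000, 3000000])
    -- Same meaning as `map expensive ...`; the strategy only says how to
    -- evaluate it. Build with -threaded and run with +RTS -N to actually
    -- spread the work across cores.

Because the program text never pins down an evaluation order, going from map to parMap is a one-token change; an imperative version would have you restructuring loops and worrying about shared state to get the same effect.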