Academics don't care so much about things like programming speed. And the academic environment focuses on proof; a type-safety property will seem much better than a unit test even when both took the same amount of time and have the same practical value.
(For all that, I agree with them though. Maybe the reason programming languages researchers think that is because it's true? Do you have reason to believe they're wrong?)
My experience with unit tests in dynamic languages, as a replacement for proper static typing done at the compiler level, is that it just doesn't work.
Sure, it works for the lone coder or small shops, but as soon as the project scale increases, all bets are off.
Managers will push for features over unit tests; no one will refactor code without taking a few days, and even then with the caveat that not every place will be touched; and team attrition will bring in new developers forced to read implementation code just to be sure how to properly call the functions/methods.
Advanced statically typed languages with Hindley-Milner type inference are a better option for large-scale development.
Hence it may be harder to get funding for other kinds of research.
The other issue is that dynamic languages, by their very nature, make tooling a harder problem to solve, and hence less interesting for many researchers pressed to deliver quick results.
I don't necessarily think they are wrong, but I also think that (as with most "silver bullets") type safety is not without its shortcomings.
For example, consider the exponentiation function. When we raise an Integer to a non-negative Integer power (2^2), the result will always be an Integer. You would expect the type of such a function to be: Integer -> Integer -> Integer
Except, that's wrong. When we raise an Integer to a negative Integer (2^-2), the result will be some fractional value. If we want to retain full numeric precision, it'll have to be something like a Rational. So now your function's type signature is: Integer -> Integer -> Rational.
But that's also wrong: now 2^2 comes back as a Rational when it's really an Integer. So what are we to do? Maybe positive and negative integers should be separate types? PositiveInt and NegativeInt, let's say. But then what about subtraction? Should its type signature be: PositiveInt -> PositiveInt -> PositiveInt? What about 2 - 3? Whoops!
So, there's no way, with types alone, to simply and accurately encode all basic math operations. All you can do is throw a Domain Error when someone violates certain boundary conditions, and tell them to check their arguments before deciding which function to call.
...meanwhile, dynamic languages will happily return whatever type is most appropriate for the arguments given.
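Interestingly, Haskell's standard Prelude splits exponentiation along exactly these lines, which makes the trade-off concrete. A minimal sketch using the real (^) and (^^) operators:

    -- Two Prelude operators, with different contracts:
    --   (^)  :: (Num a, Integral b)        => a -> b -> a  -- exponent must be >= 0
    --   (^^) :: (Fractional a, Integral b) => a -> b -> a  -- negative exponents OK
    main :: IO ()
    main = do
      print (2 ^ 2 :: Integer)        -- 4
      print (2 ^^ (-2) :: Rational)   -- 1 % 4
      -- print (2 ^ (-2) :: Integer)  -- type-checks, but at runtime:
      --                              -- *** Exception: Negative exponent

The negative-exponent case type-checks and then fails at runtime, which is precisely the Domain Error situation described above.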
Any property that you can actually be confident about, you can encode into the type system. If you care that x ^ (y - z) is an integer, you'd better have some reason to believe that y > z. If you don't care or can't be sure, let x ^ (y - z) return Integer | Rational, and pattern-match for the cases (or treat them as some supertype, Numeric or some such). Or if you really can't figure out the proof but still think it's correct, you can cast, and then your program will error out at runtime if you're wrong, which is at least safe.
In a dynamic language, your mistake passes silently; the result you thought was an integer is actually a fraction, but your program carries on going. Maybe it turns it into a string and uses it as a file path, so something you thought was a simple filename has a slash in it. Maybe that breaks your security model. Pretty unlikely in this specific case, but that's how bugs happen.
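To make the Integer | Rational option concrete, here's a minimal Haskell sketch; Power, power, and asFileName are made-up names for illustration:

    -- A made-up sum type standing in for "Integer | Rational":
    data Power = IntPow Integer | RatPow Rational
      deriving Show

    -- Integer result for non-negative exponents, Rational otherwise:
    power :: Integer -> Integer -> Power
    power b e
      | e >= 0    = IntPow (b ^ e)
      | otherwise = RatPow (fromInteger b ^^ e)

    -- Callers must pattern-match, so the fractional case can't pass silently:
    asFileName :: Power -> Maybe String
    asFileName (IntPow n) = Just (show n)
    asFileName (RatPow _) = Nothing  -- refuse, rather than silently building "1/4"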
Right, but it's also much simpler and faster for the programmer to write one function with one return type, and later modify it to add another return type if the requirements change. I think this is obviously why dynamic languages rose in prominence alongside the rise of rapid prototyping and agile development. That said, this is also why large companies, with codebases that need to be maintained over multiple decades, will always go with static languages.
What really interests me are languages like Julia or Dart, with their gradual typing, or Clojure and core.typed. You retain the flexibility and speed of development of a dynamic language, with the possibility of later adding type safety guarantees to your code without having to rewrite it from scratch.
> it's also much simpler and faster for the programmer to write one function with one return type, and later modify it to add another return type if the requirements change.
How is this different? When you compile, your compiler will say "Hey, the requirements have changed, your code is wrong here, here, here, and here," and you modify those things, and you're done.
What is the alternative? You change the code, the behavior changes, but the compiler doesn't warn you. At this point you have two possible paths:
1- You wrote tests, which start failing. You must now spend time fixing them. Either way, you don't avoid the work of maintaining your safety net (be it static type checking or your test suite).
2- You didn't write the right tests, and don't notice the change in behavior. The program later fails in production.
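Concretely, the statically typed path looks like this; a minimal Haskell sketch with a hypothetical lookupUser function whose requirements changed from Int ids to string ids:

    -- Before the change: lookupUser :: Int -> Maybe String
    lookupUser :: String -> Maybe String
    lookupUser uid = Just ("user-" ++ uid)

    main :: IO ()
    main = do
      -- print (lookupUser 42)   -- stale call site no longer compiles;
      --                         -- GHC: No instance for (Num String) ...
      print (lookupUser "42")    -- updated call site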
I find the type system a great help with remembering what I was actually trying to solve. I can "start at the end" and then work backwards, solving one compile error at a time until there aren't any left, at which point I'll know I've done what I wanted to.
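For what it's worth, GHC's typed holes support exactly this workflow; a minimal sketch with made-up names:

    -- Start at the end: declare the type you want and leave a hole:
    --
    --   meanAge :: [Person] -> Double
    --   meanAge people = _
    --
    -- GHC answers "Found hole: _ :: Double" plus relevant bindings in scope.
    -- Refine hole by hole until nothing is left to solve:
    data Person = Person { name :: String, age :: Int }

    meanAge :: [Person] -> Double
    meanAge people =
      fromIntegral (sum (map age people)) / fromIntegral (length people)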
> Any property that you can actually be confident about, you can encode into the type system.
The big theoretical drawback is that you get an unbounded increase in program length. The reason is that programs in a typed language are proofs (the Curry-Howard correspondence), and proof length cannot be bounded above by any computable function of proposition length.
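The usual argument for that unboundedness, as a sketch in symbols:

    % Claim: no total computable f bounds proof length by proposition length.
    \neg \exists\, \text{computable } f \;\; \forall \varphi \;
      \bigl( {\vdash} \varphi \;\Rightarrow\; \exists \pi \,
        ( \pi \vdash \varphi \;\wedge\; |\pi| \le f(|\varphi|) ) \bigr)

If such an f existed, we could decide whether a proposition is provable by checking the finitely many candidate proofs of length at most f(|phi|), contradicting the undecidability of provability.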
The big practical drawback is that we just don't know how to explain proofs to a computer very well.
It turns out that it's quite possible to express almost all of the invariants you describe in a type system, and we've done it in Typed Racket. Vincent St-Amour wrote a paper about it: http://www.ccs.neu.edu/racket/pubs/padl12-stff.pdf
It can express that 2^2 is positive and an integer, that 2^-2 is positive and a rational, and that 2-3 is an integer but not necessarily positive.
Citation needed. The argument for Haskell is usually about correctness; industry perception and my own experience are that writing a Haskell version of a program will usually take longer than an "acceptably buggy" Python or Ruby equivalent.
Of course, there was no Ruby back then. The comparison was against C++, Ada, Awk (!?), something called "relational Lisp" and a variety of in-house languages. Haskell comes just after Lisp in development time, with a mind-boggling 8 hours! (No C++ times were reported... heh heh). The Haskell solution also got top marks in general, not just in development time. As a downside, some of the reviewers didn't understand it because they weren't well-versed in functional programming, and felt some of the Haskell code was "tricky".