This is why I think a type system--at least one like Haskell's or OCaml's--isn't just a way to catch certain bugs. It's more of a different basis for programming: your program grows from the types rather than vice versa. Confusing behavior like this simply couldn't arise. And in the cases where it could, it wouldn't--it simply doesn't fit with the philosophy.
So the difference is not that the type system would catch this sort of behavior or even that it would prevent it--the difference is that it wouldn't arise in the first place. Sure, this is more a difference of perspective than of actual behavior or features, but I think it's incredibly important nonetheless.
Thinking about types in this different way is actually quite a deep topic; I should write a blog post about it for that blog I keep on meaning to start :P. One day. At the very least, writing it all down would help me get my own thoughts in order.
To expand a bit on what tikhonj is saying, consider the type signature of map:
map :: (a -> b) -> [a] -> [b]
In an implementation of map with this signature, the only way to get a list of b's is to use the function passed in. So there's no way to do the array-unwrapping shenanigans from the JS example.
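The point about the signature can be seen in a direct implementation (a minimal sketch; map' is just a hand-rolled map): nothing of type b exists anywhere except what the supplied function produces.

```haskell
-- Hand-rolled map: the only way to produce values of type b is to
-- apply the supplied function, so no "unwrapping" of the input
-- list can leak into the output.
map' :: (a -> b) -> [a] -> [b]
map' _ []     = []
map' f (x:xs) = f x : map' f xs

main :: IO ()
main = print (map' (* 2) [1, 2, 3])  -- prints [2,4,6]
```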
> So the difference is not that the type system would catch this sort of behavior or even that it would prevent it--the difference is that it wouldn't arise in the first place.
Or perhaps the program wouldn't even get written in the first place. The problem I have with straitjacket languages is that I have to think long and hard about how to do something the type system doesn't allow easily, especially when you consider all the premature commitments being made. You can argue that this is good, that the program will be more robust for it, but...often worse is better.
> The problem I have with straitjacket languages is that I have to think long and hard about how to do something the type system doesn't allow easily
Can you give an example please? I'm struggling to think of one aside from working with dates and times, but those can be a pain in loosely typed languages as well (Perl, I'm looking at you).
Subtype polymorphism, of course; layered designs that go against the type-class grain; heterogeneous collections that require tagging to be set up ahead of time; and anything heterogeneous in general.
I like static typing--I program in C#--but I use escape hatches a lot in my designs. Type systems just can't be expressive enough (even, say, Scala's).
With a good enough type system you will be able to express abstractions so powerful that the "straitjacket" turns into a suit of power armour. Instead of restricting the programmer, a good type system like Haskell's allows you to express your intent and program with certainty that many errors (like forgetting to handle an exception) are simply impossible.
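As a minimal sketch of that kind of certainty (parsePort and describe are hypothetical names): when failure is encoded in the type, as with Maybe, a caller can't get at the result without also writing the failure branch, so "forgot to handle it" won't even compile.

```haskell
import Text.Read (readMaybe)

-- A parser that returns Maybe instead of throwing: the failure
-- case is part of the type, not an invisible runtime possibility.
parsePort :: String -> Maybe Int
parsePort = readMaybe

-- The caller must pattern-match on both cases to use the result.
describe :: String -> String
describe s = case parsePort s of
  Nothing -> "invalid port: " ++ s
  Just p  -> "port " ++ show p

main :: IO ()
main = mapM_ (putStrLn . describe) ["8080", "oops"]
```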
There are also some things that are very cumbersome to write in dynamic languages. For example, in Python it is a complete pain to write code that is data-structure-agnostic. A while ago I was working on some code where I had nested dictionaries with lists in them that contained single values and more lists/dictionaries, and found myself wanting to transform all the lists inside the thing in a certain way. In Haskell, this would've been trivial using the fmap function (type (a -> b) -> f a -> f b), which is a generalized map that works for any functor... The Python solution ended up being a fragile hack of manual type checks and dispatch, since there is no uniform interface that all the data structures can implement.
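A rough illustration of the fmap point (assuming the standard containers library for Data.Map): each fmap peels off one functor layer, so the transformation code doesn't care which structure is on the outside.

```haskell
import qualified Data.Map as Map

-- A dictionary of lists, roughly like the nested Python structure.
nested :: Map.Map String [Int]
nested = Map.fromList [("a", [1, 2]), ("b", [3])]

-- Composing fmap reaches through both layers: the outer fmap maps
-- over the Map's values, the inner one over each list's elements.
doubledAll :: Map.Map String [Int]
doubledAll = fmap (fmap (* 2)) nested

main :: IO ()
main = print doubledAll
```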
I feel like dynamic programming languages are a good solution for prototyping and small programs/modules (eg. UI and scripting systems), but as systems grow larger, the amount of things you have to worry about grows exponentially, and a type system (+ immutability) helps keep it all together by minimizing the amount of parts that a change can affect.
Even there, I rarely run into problems that can't be easily worked around with structures and whatnot. The exception I come across is when pulling values from a database when I don't know the format they're stored in at the table level, but then that's pretty bad coding practice anyway.
I will concede that you've probably written more sophisticated routines than I have, so it may just be a case that I've not run into problems because I've not needed to run into problems (if that makes any sense).
Say a function requires an object of type t or it will fail. In a dynamic language it is often fine to assume that if the object is not a t, it is a valid initializer for a t instead. E.g. a string could be used as an initializer for a date. The problem with the map example is the ambiguity.
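In a static language that ambiguity tends to get resolved by spelling out the alternatives as a sum type; a sketch with hypothetical names (DateSpec, renderDate): both the "already a date" and the "initializer for a date" cases are visible in the type, so there's no guessing.

```haskell
-- The "t or an initializer for t" idea made explicit: a caller
-- must say which case it means, and the implementation must
-- handle both, so the convention can't silently misfire.
data DateSpec
  = FromString String       -- a preformatted date string
  | FromParts Int Int Int   -- year, month, day
  deriving (Eq, Show)

renderDate :: DateSpec -> String
renderDate (FromString s)    = s
renderDate (FromParts y m d) =
  show y ++ "-" ++ show m ++ "-" ++ show d

main :: IO ()
main = putStrLn (renderDate (FromParts 2013 5 1))  -- prints 2013-5-1
```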
But that's just poor coding practice, as you're letting any old garbage in as data without first validating it (and that's assuming such data was user-generated; if the garbage was generated within your code itself then you have even bigger problems).
Don't get me wrong, I'm not against loosely typed languages (e.g. in Perl it's very handy being able to use strings and integers as booleans), but that example you've given would scare me in any language, as you shouldn't be making any assumptions about data unless the program generated that data itself (and even then, I'd still run a few checks, if just to catch unconsidered exceptions / security vulnerabilities).
Can you give a concrete example? Generally if there's something that Haskell's type system won't allow you to do, you shouldn't be doing it in Python either.
Just to relate this to known concepts, this is basically type coercion, aka implicit type conversion, at the library level. Or it can be framed as method overloading with some magic to normalize the parameters.
I don't think magic like this is out of place with JS; that's the zen of the language. Similarly for Ruby. Whereas in Java or Python, it would be out of place. Now you might use that as an argument against JS, but that's a separate debate.
"This is annoying at the level of Arrays, but gets more difficult with more complex types, and function interactions. The recent brouhaha around the Promise/A+ highlights one such example: It is difficult to return a Promise of a Promise as a value from onFulfilled because then duck-wraps the return value as described in the Promise Resolution Procedure"
Could anyone share a pointer to this brouhaha around the Promise/A+ [spec, I assume]?