One of the areas where this really shows up is with higher-order functions. One way of ensuring that statically typed code can't fail is to do dynamic type checks at the transitions between dynamic and statically typed code. That way, the statically typed code can always assume that type declarations are correct, and type errors can always be traced back to dynamically typed code. But what happens when dynamically typed code passes a function, the type of which it does not know, into code that expects, say, a (fixnum -> fixnum)? You can't in general type check the domain and range of an arbitrary function.
One of the key differences between optional type annotations and gradual typing is what happens in that case. In Dylan or Common Lisp, any call whose target is not statically known pollutes the type information: the type checker cannot trust that a function annotated (fixnum -> fixnum) will in fact always return a fixnum, so it inserts a dynamic type check on the return value. In a gradually typed system, the compiler can make that assumption and omit the check. In purely statically typed code, the type checker can verify a function's type even when the function is passed around as a value. If the function comes through dynamically typed code, the compiler wraps the value in a thunk at the transition point. The thunk does the type check, and if it fails, the error can be traced not to the statically typed code where the function was called, but back to the dynamically typed code that passed in a function of the wrong type.
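A rough Python sketch of that boundary thunk may help. Everything here (the `cast_function` wrapper, `BlameError`, the blame labels) is invented for illustration, not taken from any real gradual-typing implementation; the point is just that the check happens per call, and a failure blames the dynamically typed side, not the typed code.

```python
class BlameError(TypeError):
    """A boundary-check failure, carrying a blame label."""
    def __init__(self, label, message):
        super().__init__(label + ": " + message)
        self.label = label

def cast_function(f, domain, codomain, blame_label):
    """Wrap f so every call checks its argument against `domain` and its
    result against `codomain`; a failure blames `blame_label`."""
    def wrapped(x):
        if not isinstance(x, domain):
            raise BlameError(blame_label, "argument is not " + domain.__name__)
        result = f(x)
        if not isinstance(result, codomain):
            raise BlameError(blame_label, "result is not " + codomain.__name__)
        return result
    return wrapped

# Dynamically typed code passes two functions into an (int -> int) context:
good = cast_function(lambda n: n + 1, int, int, "dyn code at foo.py:12")
bad  = cast_function(lambda n: str(n), int, int, "dyn code at foo.py:13")

print(good(41))        # 42: both checks pass, no wrapper overhead visible
try:
    bad(41)            # the result is a str, so the thunk's check fires...
except BlameError as e:
    print(e)           # ...and the error blames the dynamic caller
```

Note that the wrapper can't verify the function up front; it can only promise to check each individual call, which is exactly why higher-order values need the thunk.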
 In Dylan, you cannot even give a type for a function more specific than 'function'.
Edit: Also, the gradual typing theory as described by Siek in a language with a Hindley-Milner like static type system supports type inference, so it will automatically use as much type information as possible.
More on this, including how to smartly compile gradually typed languages so that functions don't explode in size when they are being passed between statically and dynamically typed code, in the paper:
Blame, coercions, and threesomes, precisely
Jeremy Siek, Peter Thiemann, and Philip Wadler. Draft, March 2014.
Perl 5 has also had optional types for 15 years; it's just that the implementations (checkers and optimizers) were broken in the last big refactoring around 5.10, and there's no interest in enabling them again. The optimizing Perl 5 compiler B::CC honors optional types (besides doing type inference for the core types IV, NV and PV) and gets a typical 2-5x speedup. http://act.yapc.eu/ye2013/talk/4637
BTW: This is a very old article, and was posted here some months ago already.
AFAIK there's no optimisation based on that, just typechecking.
It's interesting in that, where most typecheckers restrict language semantics because they require a form of correctness proof, Dialyzer is optimistic and only errors out if types are clearly inconsistent.
So when I use Scala I'm frustrated that I have to dance for the type system to make the damn thing do what I want, and when I use Clojure I occasionally get lost down long holes of confusion, only to realise that the map of params I passed to some function somewhere had a slightly misspelt key (an hour lost because I used :url instead of :uri once, no joke).
I've heard good things about core.typed, but have yet to find the time to try it out.
Edit: I've also found that in all the static languages I've tried (Scala being the one with the most comprehensive type system) you get to a point where you can't describe what you want with your type system because it's too conceptually complex, and then all your assumptions about what the type system gives you fall away.
Funnily, that is one of the problems I've experienced with gradually typed (and optionally typed) systems; you need strong discipline to get the most out of them. My personal way of tackling it (in Hack these days) is to strictly type everything, and only rely on dynamic typing where it's absolutely needed.
Success typing is optionally refined through typespecs in the code and checked by the Dialyzer tool:
Success typing is open to gradual typing of legacy code, as more typespecs are added, constraints gradually tighten, and Dialyzer finds the inconsistencies.
The Perl 6 compiler will not only shift various warnings/errors from run time to compile time, it will also emit more efficient code!
It would if you used Python 3 or a new-style class.
I want a knob I can turn at compile time. "Compile this dynamically, so I can run it and play with the algorithm. OK, now I think I've iterated the algorithm enough times that I'm fairly confident in it, now turn on type errors as warnings. Now that I've cleaned up all of that, let's go back to the default, which is to compile with type issues being errors."
Take for example Facebook and their HHVM. They can run the existing code, and then slowly update the code with types, and write new code with types. The alternative would be years and years of rewriting the whole codebase.
Personally though, I'm more a fan of the type-checking done in languages like Haskell, F# and ML.
Firstly, PHP and HHVM have three "levels" of types:
From a type-theory perspective, these languages are "unityped", ie. all values have the same type. We might write this as:
Any = Bool + Int + Float + Resource + String + List ((String + Int) * Any) + (Any -> Any) + ...
From an OO perspective, the types are the classes.
PHP already allows a very limited form of type annotation: function/method arguments can be annotated as arrays or as objects of a particular class/subclass/interface-implementation. These are non-nullable, unless null is set as a default value.
These are checked dynamically rather than statically, and a failing check triggers a fatal error. This makes them path-dependent, requiring an extensive test suite to minimise the number of fatal errors in production, and overall makes them feel more like a very limited form of assertion rather than type-checking.
As far as I understand it, the typing features offered by HHVM over PHP are the ability to use "non-OO types" like int, string, etc. in annotations; annotate more things (like object properties); use generics in classes (a poor man's parametric polymorphism); and for annotations to be nullable without requiring a null default value.
Overall, I wouldn't say that HHVM adds gradual typing to PHP. If we're charitable and say that HHVM has gradual typing, then really it's just slightly better gradual typing than what PHP already has.
>If we're charitable and say that HHVM has gradual typing, then really it's just slightly better gradual typing than what PHP already has.
Isn't a mix of dynamic typing and static typing, being optional which you use, the definition of gradual typing? Wiki also seems to agree with this.
I'd say HHVM greatly improves on PHP's type hinting, which is rather basic unless you want to recreate every basic type as an object. Also, with PHP type hinting you can't specify the types of object attributes/variables, which you can in Hack/HHVM.
If one uses a decent type system that has user defined types and interfaces then that gradual stuff disappears.
I wonder why it's not more popular; it seems it has been around for years. Maybe it needs to be embraced by a big company first. I heard its creator works at Facebook though.
Not the original creator Walter Bright, but "second in command" Andrei Alexandrescu works at Facebook.
Besides, "static methods" are a hack:
If it's a behaviour of the class rather than its instances, then make it a method of the class. You'd only need static methods for this if your language doesn't treat classes as objects (ie. it's not a particularly OO language). The example is in Python, where classes are objects, so no statics are needed.
Alternatively, it may be a completely standalone piece of code, in which case it has no business being inside a class. Just make it a standalone function, and stick it in a module to avoid name clashes. You'd only need static methods for this if your language doesn't allow standalone functions, or doesn't have a module system. Since Python has both, there's no need to abuse classes like that.
Still, it's just a Java quirk that's irrelevant to the article.
Even if the type checker passed, compilation would still fail because it's trying to call a non-static method from a static method.
However, failing compilation because you're calling a non-static method from a static context is also a kind of static checking, so the author's example still works, just in unintended ways.
There seems to be a lot of woolly thinking going on throughout the article, which unfortunately seems typical of many pro-dynamic-typing pieces (I used to be guilty of this too, although I've since switched sides). For example, we can compare this gradually-typed Python implementation to Haskell's Data.Dynamic module (where the article's "?" is equivalent to Haskell's "Dynamic").
> the argument bound to x will be checked at run-time to make sure it is an integer before the addition is performed.
How can we "make sure" the argument bound to x is an integer? It's not specified.
We could treat the value as a blob of arbitrary bits, representing some integer in binary; however, that's not really "checking to make sure", that's "forcing it to be". It's equivalent to coercing a void* to an int* in C.
The other way, which is actually the case, is that values of type "?" don't just contain data of some unknown type, they also contain a 'tag' which indicates the type. When we "make sure it is an integer", what he really means is that we check whether the tag stored in x matches the tag we're using for integers.
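To make that concrete, here is a minimal Python model of the (tag, data) representation; the names (`dyn`, `check_int`, the string tags) are made up purely for illustration:

```python
INT, BOOL = "int", "bool"          # the tags

def dyn(tag, value):
    """A 'dynamic' value is just a (tag, data) tuple."""
    return (tag, value)

def check_int(d):
    """'Make sure d is an integer' = compare its tag against INT."""
    tag, value = d
    if tag != INT:
        raise TypeError("expected int, got " + tag)
    return value

x = dyn(INT, 7)
print(check_int(x) + 1)            # tags match, so this prints 8
try:
    check_int(dyn(BOOL, True))     # tags differ...
except TypeError as e:
    print(e)                       # ...so the 'dynamic type error' fires
```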
This has two important consequences:
Firstly, the types in "dynamically typed" programs never change; they're always (tag, data) tuples. They are, in fact, static. We can write them easily in a statically-typed language like Idris:
data Tag : Type where
  int      : Tag
  bool     : Tag
  function : Tag -> Tag -> Tag
  string   : Tag
  error    : Tag
  void     : Tag

-- We can compare Tags for equality
instance Eq Tag where
  int  == int  = True
  bool == bool = True
  -- ...and so on
  _    == _    = False

typeOf : Tag -> Type
typeOf int            = Int
typeOf bool           = Bool
typeOf (function x y) = (typeOf x) -> (typeOf y)
typeOf string         = String
typeOf error          = String
typeOf void           = ()

Dynamic : Type
Dynamic = (t : Tag, val : typeOf t)

null : Dynamic
null = (void, ())

true : Dynamic
true = (bool, True)

false : Dynamic
false = (bool, False)

someDynamicValues : List Dynamic
someDynamicValues = [true, false, (int, -123), (function int int, (+))]

-- Function application
applyDynamic : Dynamic -> Dynamic -> Dynamic
applyDynamic (function i o, f) (t, x) | t == i = (o, f x)
Secondly, what happens when the tags don't match? All the article says is this:
> Allowing the implicit converson[sic] from ? to int is unsafe,
> and is what gives gradual typing the flavor of dynamic typing.
Being unsafe is a bit crap, considering that (with a bit of squinting) we can do all of this statically. In fact, does that mean we've managed to break the type system of Idris?
No; the reason this is unsafe has nothing to do with types, it's a matter of totality.
If we look at the definition of `applyDynamic`, we see it's only defined in the case where the first argument is a function (from `i` to `o`) and the second argument is a valid input to that function (`t == i`). In all other cases, the result is undefined. Hence "dynamic type errors" are actually unhandled cases (in Haskell-speak "non-exhaustive patterns"). Note that it's trivial to catch these errors statically (eg. the Glasgow Haskell Compiler gives warnings for them).
This is the crucial realisation for me: dynamic typing turns ill-typed functions into well-typed, partial functions. It lets us trade off between rich types and expressive code, while remaining well- (statically-) typed.
With this in mind, it's clear that dynamic languages don't actually implement dynamic types very well: how do we specify what to do for unhandled cases? There are various approaches, most of which seem to be hard-coded into the language and require new, special features to be added to the interpreters. For example, Python throws "TypeError" exceptions; PHP will sometimes convert values of one type to another, and other times trigger a notice/warning/error. Smalltalk's "doesNotUnderstand" approach is quite nice, but it still requires special-case handling and comes associated with a whole load of OO baggage.
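Python's hard-coded answer is easy to observe; when the interpreter's dispatch has no case for a combination of tags, it raises TypeError:

```python
# int.__add__ has no case for str, and str.__radd__ has none for int,
# so the interpreter falls through to its hard-coded behaviour: raising
# TypeError for the unhandled case.
try:
    1 + "one"
except TypeError as e:
    print(type(e).__name__)    # TypeError
```

There is no way for the programmer to plug in their own policy here, which is exactly the "special feature hard-coded into the language" complaint above.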
In a static language, unhandled cases are trivial to fix: just wrap the result in a `Maybe` and return `Nothing` from all the unhandled cases!
applyDynamic : Dynamic -> Dynamic -> Maybe Dynamic
applyDynamic (function i o, f) (t, x) | t == i = Just (o, f x)
applyDynamic _ _ = Nothing
dynamicMaths df dx dy = do f <- fromDyn df
                           x <- fromDyn dx
                           y <- fromDyn dy
                           return (f x + f y)
So really, dynamic typing is just a design pattern for statically typed programming. If we look at the virtues of dynamic typing listed in the article we see:
> Dynamic type checking doesn’t get in your way: you can immediately run your program
> without first having to change your program into a form that the type checker will accept.
As we've seen, programming with dynamic types is writing your program in a form that the type checker will accept! Specifically, we end up with `Maybe` types all over the place, which are trivially satisfied by `Nothing` (the monad takes care of that for us).
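For comparison, here is that pattern transliterated into Python with Optional, where the None propagation the monad handles for us has to be written by hand (the names `from_dyn_int` and `dynamic_add` are invented for the sketch):

```python
from typing import Any, Optional, Tuple

def from_dyn_int(d: Tuple[str, Any]) -> Optional[int]:
    """Project an int out of a (tag, data) pair, or None on mismatch."""
    tag, value = d
    return value if tag == "int" else None

def dynamic_add(dx, dy) -> Optional[int]:
    x = from_dyn_int(dx)
    y = from_dyn_int(dy)
    if x is None or y is None:     # the unhandled cases, made explicit
        return None
    return x + y

print(dynamic_add(("int", 2), ("int", 3)))      # 5
print(dynamic_add(("int", 2), ("bool", True)))  # None
```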
> Dynamic type checking makes it easy to deal with situations where the type of a value depends
> on runtime information.
This is another woolly statement, since it conflates static types with dynamic types. As we've seen, the static types never depend on runtime information; they're all just `Dynamic` (AKA "?"). Since the type-checker only cares about the static types, it can all be inferred and checked without any effort from the programmer.
The dynamic types (the Tags) are just data, which we can handle just like anything else. Those 'situations where the [dynamic] type of a value depends on runtime information' are just situations where one element of a tuple depends on another.
Since dynamic languages don't let us access these tags directly, they force us to go through special-purpose reflection APIs like "typeof", "instanceof", etc. which I would argue is harder.
What about the drawback of static types that's mentioned?
> Static type checking makes you think more seriously about your program
> which helps to further reduce bugs.
That's a bit of a mischaracterisation; static type checking allows you to think more seriously about your program, if you want to. You might decide that rich types aren't worth using and follow a dynamic typing pattern instead, but at least you get to choose. Of course, that's a false dichotomy anyway; we can have some parts of our program use weak, dynamic types and others use strong, rich types. It's all still static.
The point is, in statically-typed languages, dynamic typing (and gradual typing) is just a design pattern. Not only is the expressiveness problem a strawman (we can write dynamic code in a static language, or trivially parse a dynamic "front end" language into an equivalent static form), but static languages give us more expressiveness (dynamic types are just regular data, "dynamic type errors" are just regular branches), more safety (by limiting which parts are dynamic, checking (static) types and totality automatically, etc.) and require no special language features. We can do this right now with existing languages.
> > Static type checking makes you think more seriously about your program
> > which helps to further reduce bugs.
That wasn't mentioned as a drawback; it was mentioned as an argument for static types that people give that the author thought was an invalid argument.
> The point is, in statically-typed languages, dynamic typing (and gradual typing) is just a design pattern. Not only is the expressiveness problem a strawman (we can write dynamic code in a static language, or trivially parse a dynamic "front end" language into an equivalent static form), but static languages give us more expressiveness (dynamic types are just regular data, "dynamic type errors" are just regular branches), more safety (by limiting which parts are dynamic, checking (static) types and totality automatically, etc.) and require no special language features. We can do this right now with existing languages.
Yes and no. Yes, all the things you said are formally equivalent. In practice, though, they are very different. Sure, you could make them equivalent, but who actually writes programs that way? And if you don't actually write that way, then the practical difference really matters.
Jeremy Siek (the author) published two papers last decade: "Gradual Typing for Functional Languages" (http://www.cs.colorado.edu/~siek/pubs/pubs/2006/siek06:_grad...) in 2006 and "Gradual Typing for Objects" (http://ecee.colorado.edu/~siek/gradual-obj.pdf) in 2007.
Those papers are not saying "we're the first to combine static and dynamic typing." Those papers are saying "we're the first to formalize the combination of static and dynamic typing into an overall sound type system."