
Seven deadly sins of talking about “types” (2014) - dkarapetyan
http://www.cl.cam.ac.uk/~srk31/blog/2014/10/07/
======
fsloth
I think the argumentation here is a bit skewed towards 'what static types
can and cannot do for the compiler' - i.e. how they can help the computer in
constructing formal proofs. For me, another almost as valuable thing is how
static types help _me_ think about and design the code. The static type
information is always there to remind me that "this entity belongs to this set
of objects", which frees me from the cognitive burden of maintaining this
constantly in my working memory. With specific static types, the function
signatures now document the transforms defined for those entities. This aspect
of static types as a cognitive tool is highly personal, and not the same for
everyone, unlike the language and the compiler, though.

------
cbd1984
> It's also nothing to do with static checking—“type safety” holds in all
> “dynamically typed” languages.

This. Exactly true, and often forgotten.

It's useful here to distinguish between conversions and casts. Conversions use
type information to ensure computations remain sensical; the canonical example
is a conversion between integers and floating-point values which makes sure
you never end up feeding invalid bit patterns into the relevant arithmetic
hardware. An example of a cast is taking a bit pattern you _had_ been treating
as an integral value and then telling the language it's a floating-point
value, and there not being an error at that point.

 _Conversions don't break the type system; they use the type system._ Casts
throw types away.
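
A minimal sketch of the distinction in Haskell (the function names here are mine, not from the thread): `fromIntegral` is a conversion that inspects the value, while `unsafeCoerce` is a cast that merely reinterprets its bits.

```haskell
import Unsafe.Coerce (unsafeCoerce)

-- A conversion uses the type information: it inspects the Int 3 and
-- builds the valid Double 3.0, so downstream arithmetic stays sensible.
convert :: Int -> Double
convert = fromIntegral

-- A cast throws the type away: the Int's bit pattern is handed to the
-- floating-point world with no checking, and the result is garbage.
cast :: Int -> Double
cast = unsafeCoerce
```

Note that `unsafeCoerce` between `Int` and `Double` is genuinely undefined behavior in GHC; it appears here only to make the cast/conversion contrast concrete, not as something to run.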

~~~
kyllo
Casts don't only throw types away, they can also add type information to
otherwise untyped input, such as reading data from a file or an HTTP request
payload.

~~~
cbd1984
> Casts don't only throw types away, they can also add type information to
> otherwise untyped input, such as reading data from a file or an HTTP request
> payload.

This is true. A more interesting example might be a program which creates
database queries from user input _casting_ input strings to the type
"unescaped string", and then _converting_ them to type "escaped string" by
performing a string-escaping process on them. (A _cast_ from "unescaped
string" to "escaped string", which is done implicitly very often in languages
where those types are not machine-checkable, is often disastrous.)
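
A sketch of how those two string types could be made machine-checkable in Haskell (the type names and the toy escaping rule are hypothetical):

```haskell
-- Distinct newtypes make the implicit cast a compile error and the
-- conversion an explicit function that does real work.
newtype Unescaped = Unescaped String
newtype Escaped   = Escaped String

-- The conversion: actually performs the escaping.
escape :: Unescaped -> Escaped
escape (Unescaped s) = Escaped (concatMap esc s)
  where
    esc '\'' = "''"   -- toy SQL-style rule: double any single quote
    esc c    = [c]

-- Only escaped strings can reach the query builder...
buildQuery :: Escaped -> String
buildQuery (Escaped s) = "SELECT * FROM users WHERE name = '" ++ s ++ "'"
-- ...so buildQuery (Unescaped "x") would be rejected by the compiler.
```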

~~~
kyllo
Right, but I don't think this sort of type "conversion" (e.g. some function
with type signature like this in Haskell)

    
    
        escapeQueryString :: ByteString -> Query
    

is really adding or throwing away type information, it's just a function that
accepts one type and returns another type. So it's not related to casting,
except that it's the _opposite_ of casting. But what that gives you is the
ability to compose it with any other function that you define as accepting a
`Query` type, and the program will refuse to compile if you ever accidentally
try to pass it a ByteString. So it's about enforcing an invariant, in other
words.

------
fsloth
This topic of "what are static types good for" continued to bug me because I
could not formulate a concrete answer to myself. I found the following Reddit
discussion

[http://www.reddit.com/r/programming/comments/lirke/simple_ma...](http://www.reddit.com/r/programming/comments/lirke/simple_made_easy_by_rich_hickey_video/)

It eventually contained the specific thing I was after, in a comment by Rich
Hickey that helped nail down the concept:

"Statically typed programs and programmers can certainly be as correct about
the outside world as any other program/programmers can, but the type system is
irrelevant in that regard. The connection between the outside world and the
premises will always be outside the scope of any type system."

Which is precisely why I enjoy types: self-referential consistency makes sure
I do not make errors within the logical model I have set for myself. It does
not help me build better models, but once those models are built it helps me
compose programs that do not break the invariants of the chosen formalism.

Which, I think, tells exactly what static types are good for: those sorts of
problems that benefit from having a formal specification. There are a lot of
programs that have this quality.

So the argumentation is a bit off target. It should not be about whether
static or dynamic types are better, but should focus more on identifying the
problem patterns where a dynamic type system helps the programmer create more
value than a static one, and vice versa.

~~~
shadowfox
> Self referential consistency makes sure I do not make errors within the
> logical model I have set for myself. It does not help me build better models
> but once those models are built it helps me to compose programs that do not
> break the invariants in the chosen formalism.

I think this is a valid point.

But I am less convinced about the second part. I feel that better type systems
do help one model the world better (while retaining the consistency checking
that you mentioned). In my mind, at least, having more/better "type tools" is
like having differently shaped pieces in your Lego toolbox. You _can_ create
better models for yourself, because the pieces allow you to. And as you gain
more experience with them, your modeling abilities also improve.

------
wyager
I emphatically reject many of the conclusions of this article.

>Let me say it simply. Type systems cannot be a one-stop solution for
specification and verification.

Wrong! Any correctness property can be encoded into a sufficiently powerful
type system. Which brings me to...

>Curry-Howard is an interesting mathematical equivalence, but it does not
advance any argument about how to write software effectively.

This is absurd, of course. The isomorphism between type systems and proof
systems allows us to build lots of impressive systems like Agda and Coq.

>No sane person can argue that we want to bifurcate a programming language
into distinct base-level and type-level fragments.

That's actually quite a reasonable desire.

>Personally, I want to write all my specifications in more-or-less the same
language I use to write ordinary code.

To the author, this seems to mean "therefore, we shouldn't have types!". In
reality, a sufficiently flexible programming language allows you to write
type-level code in the same language as value-level code. See Agda.

>Rich Hickey's transducers talk gave a nice example of how patterns of
polymorphism which humans find comprehensible can easily become extremely hard
to capture precisely in a logic.

Lots of people have typed transducers just fine...
[http://blog.podsnap.com/ducers2.html](http://blog.podsnap.com/ducers2.html)
[http://conscientiousprogrammer.com/blog/2014/08/07/understan...](http://conscientiousprogrammer.com/blog/2014/08/07/understanding-clojure-transducers-through-types/)

~~~
AnimalMuppet
> >Let me say it simply. Type systems cannot be a one-stop solution for
> specification and verification.

> Wrong! Any correctness property can be encoded into a sufficiently powerful
> type system.

I'm writing a factorial function. But, because the caffeine hasn't kicked in
yet, at the recursion step, I add one instead of subtracting one. The result
is that the program will run in an infinite loop until either the fixed-size
Int rolls over to 0, or the bignum-style integer consumes all available
memory. That's certainly incorrect behavior. But I'd like to see your
"sufficiently powerful type system" that will handle that.

Then there's numerical accuracy. Here the results are not "wrong wrong",
they're just "a little bit wrong" or "less accurate than they could have
been". Again, I'd like to see your type system detect that.

In my world, embedded systems, you can get hard-real-time situations, which
means that the right answer, too late, is wrong. I believe that there are
formal verification tools, but they don't look anything like a type system.
And even those tools depend on a very deep knowledge of the processor that the
code will be running on ( _not_ compiled on). Once more, I'd like to see your
type system that's going to handle this.

> > Curry-Howard is an interesting mathematical equivalence, but it does not
> advance any argument about how to write software effectively.

> This is absurd, of course. The isomorphism between type systems and proof
> systems allows us to build lots of impressive systems like Agda and Coq.

And if Agda and Coq were being used to write much software, you'd have a
point. But if I understand correctly, they're mostly used for automated
proofs. The point is that the program compiles, and therefore something is
proven. The point isn't to get a runnable program.

~~~
codygman
> The result is that the program will run in an infinite loop until either the
> fixed-size Int rolls over to 0, or the bignum-style integer consumes all
> available memory. That's certainly incorrect behavior. But I'd like to see
> your "sufficiently powerful type system" that will handle that.

I believe that Agda could do this.

------
hawkice
I have always gotten the impression there was a failure of communication when
(without loss of generality) Haskell programmers would tell (also, without
loss of generality) Python programmers that Python only has one type,
'object', which appears to violate the understanding of the Python programmer
thoroughly enough that there isn't really even a conversation about what is
meant by the comment.

The first point dissolves this conversation (because clearly neither the
Haskell nor Python programmers are confused about the execution of actual
programs). Notably, it dissolves it in a way where that pesky third issue
doesn't pop up (which is perhaps the worst of them all, sans _maybe_ the
last).

~~~
hcarvalhoalves
Well, that's an interesting comment. I have almost a decade of experience in
Python, and having recently picked up Haskell, I clearly see the value of
having (or not having) a static type system.

I have noticed that during all these years I would just "type-check" the
various APIs in my mind, or with the aid of the REPL, but it's not as easy
when programmers with less experience join the project. I would often have to
explain tracebacks about type errors (e.g., tried to call a missing method on
`None`, but the programmer didn't expect that API to return `None`). Strong
typing keeps errors from propagating, but without static checking it only gets
you half-way there.

I would still use Python (or other dynamic language) to introduce someone to
programming - not having to reason about types lowers some barriers - but
nowadays I'm not sure it pays off for a project expected to be maintained.

~~~
crdoconnor
I was always on the fence about static typing because I figured that it
automatically led to more verbose code (clearly a significant downside), but
after looking at Haskell and F# I'm not so sure that's true for them, even
though it definitely is for C++ and Java.

I'm still unconvinced that additional testing is required to overcome bugs
that a static type checker would catch, though. In my experience all of those
"call a missing method on None" issues get caught relatively quickly by a test
that you would have to write anyway even if you were using a language with
static typing.

~~~
eru
> In my experience all of those "call a missing method on None" issues get
> caught relatively quickly by a test that you would have to write anyway even
> if you were using a language with static typing.

There are lots of bugs that you still have to test for in a language with a
typesystem like Haskell's. But it's not this class of bugs. Pattern matching
(and completeness checks) should take care of that.
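
A small illustration of that completeness checking (the function name is mine): compiled with GHC's `-Wincomplete-patterns` (included in `-Wall`), deleting the `Nothing` case below produces a warning at compile time rather than a crash at run time.

```haskell
greet :: Maybe String -> String
greet (Just name) = "hello, " ++ name
greet Nothing     = "hello, stranger"  -- omit this case and GHC warns you
```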

~~~
crdoconnor
I know, but my point was that in my experience _this_ class of bug is, at
heart, nearly always a requirements problem anyway (i.e. an unspecified edge
case), meaning you _have_ to test it in whichever language you use to be
confident of the program's correctness.

E.g. you'll still need to write a test for your Haskell code to determine that
the behavior is correct if the IP address is missing, and in your Python code,
if you don't specify that behavior, it is the kind of thing that will likely
lead to one of those nasty "cannot call a method on None" errors.

Since the act of testing _which is necessary anyway_ catches it or confirms
correct behavior, the additional benefit of a static type checker confirming
the presence of such a bug is perhaps lower than it might appear at first
glance.

This is clearly _not_ true of weakly typed languages, where you have to write
a myriad of extra tests just to achieve the _same_ confidence in your code, to
cover all the unexpected use cases caused by weird type conversions you didn't
even realize you were doing.

~~~
mightybyte
I believe eru's point was that those kinds of tests are not necessary in
Haskell because if a function takes something of the type IpAddress, None
simply isn't possible. So everywhere IpAddress is used, you don't have to do
any of those tests.

It sounds like your point is that even in Haskell there will inevitably be
boundaries where the None case is an issue. And that's true, namely the point
where you call:

    
    
        parseIpAddress :: String -> Maybe IpAddress
    

Somewhere your app will be sucking in a string and you need to have a test for
the empty string case. That's true. But the surface area of the application
that is susceptible to this problem is reduced (often dramatically). All code
that works with IpAddress is free and clear. Whereas in most other languages,
_every_ function that takes an IpAddress probably needs to be tested with the
None case. If you try to argue that you don't need to test every one, then
you're plagued by the difficulty of knowing which need it and which don't.
Haskell's types solve that for you completely.
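
A minimal self-contained sketch of that boundary (this toy `IpAddress` is mine, not a real library type): only `parseIpAddress` can fail, and every downstream function takes a plain `IpAddress`, so no `Nothing` test is even possible there, let alone necessary.

```haskell
import Data.Char (isDigit)
import Data.List (intercalate)

newtype IpAddress = IpAddress [Int]

-- The single place where "no address" can happen.
parseIpAddress :: String -> Maybe IpAddress
parseIpAddress s
  | length parts == 4 && all valid parts = Just (IpAddress (map read parts))
  | otherwise                            = Nothing
  where
    parts = splitOn '.' s
    valid p = not (null p) && all isDigit p && (read p :: Int) <= 255
    splitOn c xs = case break (== c) xs of
      (a, [])       -> [a]
      (a, _ : rest) -> a : splitOn c rest

-- Downstream code: no Maybe in sight, no None test to write.
render :: IpAddress -> String
render (IpAddress octets) = intercalate "." (map show octets)
```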

~~~
crdoconnor
I get the point, but I still feel like the problem surface is not really
reduced that much by static typing it away, and the bulk of the problem is
simply moved elsewhere rather than solved outright since the root of these
issues is incorrect specification anyhow.

As in, I take issue with the word "dramatic", not the word reduce.

Additionally - it might just be my python bias talking - but I tend to find
that most useful "real-life" libraries need to have an opinion about what
happens when a variable is None, because it is almost always meaningful. I
guess if I were writing Haskell my code would end up looking like a bad Carly
Rae Jepsen song :)

~~~
mightybyte
> I get the point, but I still feel like the problem surface is not really
> reduced that much by static typing it away

Ok, that's a valid point, which means that we need to precisely characterize
what is meant by "problem surface". Let's continue with our IpAddress example.
Imagine you are the author of the IpAddress package. Imagine we have an
IpAddress type accompanied by an API of say 15 functions that take an
IpAddress and do something with it. How many of those functions need to have
tests that cover the case of passing None? I believe that number represents
our problem surface.

If you agree that this is the problem surface, then I can confidently say the
reduction is dramatic. (If you don' agree, then I refer you to the previous
point about Maybe making it very clear what needs None tests.) The Python
IpAddress package has to have None tests on every single function that takes
an IpAddress. The Haskell package has to have None tests on zero. It only
needs the equivalent of a None test on the functions that return a Maybe
IpAddress. This is going to be a very small number. This brings us to your
other point.

> Additionally - it might just be my python bias talking - but I tend to find
> that most useful "real-life" libraries need to have an opinion about what
> happens when a variable is None, because it is almost always meaningful. I
> guess if I were writing Haskell my code would end up looking like a bad
> Carly Rae Jepsen song :)

I'm pretty sure this is indeed largely caused by your Python bias. In my
experience, Haskell code does not end up littered with Maybes. I think you
don't appreciate the full weight of the statement that when you have an
IpAddress, you _never_ need to worry about the None case. Never ever. Period.
There are cases where you need to deal with a Maybe IpAddress. Those cases
don't end up polluting all your code, contrary to what you seem to think. In
fact, it's the other way around. The pure cases tend to be the ones that are
the most common, making the Maybe cases clearly visible.

Even in the situation where they do start to pollute some things, Haskell
gives you dramatically better tools for dealing with the problem. Yes, you'll
still have to have a test for it, but the code you're testing will have been
much more concise and less prone to error. Furthermore, Haskell can make it so
you don't even have to think about testing that case. You write an Arbitrary
instance for IpAddress and the Arbitrary instance for "Maybe IpAddress" comes
for free! This means that your QuickCheck test for the Maybe IpAddress
function will automatically be sure to try it with Nothing.

From what you've said it is clear to me that your python bias is somehow
making it hard for you to see/believe the benefits that we're talking about.
I'm concerned that this gap seems to be so big and we seem to be having such a
hard time crossing it. Give Haskell a try and see for yourself. But don't go
off into a cave and play with it alone. There are abstractions and best
practices for dealing with some of these things that took a while to emerge. So
you should be sure to talk to people who have experience using them while
you're learning.

~~~
hcarvalhoalves
> I'm pretty sure this is indeed largely caused by your Python bias. In my
> experience, Haskell code does not end up littered with Maybes.

I think the Maybes and other optional/sum types tend to be limited to the
boundaries, where you do IO or handle errors, while your internal functions
can deal with the concrete type. A code base littered with Maybes is certainly
a code smell.

~~~
codygman
> A code base littered with Maybes is certainly a code smell.

It's not, because Maybe (and the usually more practical Either) have Functor
instances. What this means is you can use fmap to apply pure functions to
Maybes, Eithers, or any other type that has a Functor instance.
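
Concretely (the doubling function is just an example):

```haskell
-- fmap applies a pure function "inside" the container, whatever it is,
-- with no unwrapping boilerplate at the call site.
double :: Int -> Int
double = (* 2)

justCase :: Maybe Int
justCase = fmap double (Just 21)      -- Just 42

nothingCase :: Maybe Int
nothingCase = fmap double Nothing     -- stays Nothing, no crash

eitherCase :: Either String Int
eitherCase = fmap double (Right 21)   -- Right 42; a Left passes through
```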

------
tome
I'm going to make a few comments and nitpicks.

"Patronising those outside the faith ... Please drop the condescending tone."

This is a great sentiment, but rather jarring to have it followed up by "No
sane person can argue that we want to bifurcate a programming language into
distinct base-level and type-level fragments".

"Equivocating around “type-safety”"

Surely it's reasonable to use the terminology "type safety" to describe a
range of properties that code may have beyond the (supposed) original meaning
of "memory safety".

"If everyone would just use a modern language with a fancy type system, all
our correctness problems would be over, right? ... Type systems cannot be a
one-stop solution for specification and verification."

This is an enormous straw man. No fan of strongly typed languages claims that
they solve all correctness problems. Furthermore, the fans of strongly typed
programming are the most likely to appreciate the additional forms of
verification that Stephen outlines.

~~~
AnimalMuppet
> This is an enormous straw man. No fan of strongly typed languages claims
> that they solve all correctness problems.

I have read the claim "If it compiles, it works" (or minor wording
variations), on this board, more than once (in reference to Haskell). Fans of
strongly typed languages do in fact make claims that sure sound like they are
saying that.

~~~
tel
It's said semi-tongue-in-cheek. The original statement was _filled_ with
caveats. [0]

Sometimes it's said _less_ tongue-in-cheek. Then it usually has to do with the
fact that large refactorings can be handled by merely breaking things and
rewiring until the types check. This generally _does_ work, since types
typically guard "wirings".

But taken out of context, and as the meaning is extended bit by bit, it
becomes truly ridiculous, and nobody would disagree with that.

So, don't just propagate the rumor. Nobody is claiming miracles. It just
sounds like that at around the 8th iteration of the telephone game.

[0]: As far as I'm aware, this is the origin of the idea:
[http://r6.ca/blog/20120708T122219Z.html](http://r6.ca/blog/20120708T122219Z.html)

~~~
dllthomas
The origin is definitely further back. I recall hearing "it takes longer to
get it compiling, but when it compiles it works" about OCaml more than a
decade ago. When I mentioned this to my father at that point, he remarked that
the sentiment goes back further.

~~~
Jtsummers
It goes back to discussions on Ada as well. I, personally, qualify my advocacy
with a probably. As in, "if it compiles, it's probably correct."

~~~
dllthomas
I wouldn't be terribly surprised if he'd been thinking of Ada evangelists,
though I think there are other candidates too. Unfortunately, I can't ask him
to clarify...

------
qznc
The coda at the end is a nice tool in general. Whenever you find yourself in a
discussion where people seem to talk past each other because some words have a
fuzzy meaning, try to avoid the word completely. It works well for discussions
about "free will", "love", "intelligence", "luck", "consciousness", and
apparently "types".

~~~
pseudonom-
[http://lesswrong.com/lw/nu/taboo_your_words/](http://lesswrong.com/lw/nu/taboo_your_words/)

------
tel
With deference to Dr Kell, I think this is overwrought.

He has some good points, points made happily against a particular form of
"type cultist" who oversimplifies argument toward making types out to be
something magical. They're not, no sane person would think so, but they are _a
static, syntactic proof system pertaining to the meaning of your programs_ and
there's a lot to be said for such technology.

I am happy to quibble exact points made in this rant, but I'm on my phone so
for now I'll merely say that there's a large design space of programming
languages and a similarly large (perhaps today larger) space of type theories.
Matching them up to hit all of your tradeoffs is hard, but there have been
some great successes and I would happily contend that any professional
programmer owes it to themselves to explore this design space "in anger".

I'm also happy to generally claim that faster, more complete feedback is
better than the opposite. Static systems of all kinds are faster than
"dynamic" ones, and most of them are more complete. This is where you hear
about comparisons of types (especially dependent ones) and tests. It's _great_
stuff to think about and explore.

Further, I want to emphasize that SMT solvers are fascinating too and nobody
thinks that they're not. The style of proof they provide is more expressive
and less certain, though, so the right answer is to know how to combine all of
these modes of evidence.

------
sanderjd
> Type checkers are one way to gain assurance about software in advance of
> running it. Wonderful as they can be, they're not the only one.

This is true, but in my experience they are far and away the most common and
easily tooled _automatic_ way. Tests may run automatically, but they must be
manually written, and static analysis tools are not (yet?) very widespread.

------
colanderman
No, I think a separate type-level language is a _good_ thing, so long as my
data-level language remains Turing complete (and thus undecidable). I would
much rather be forced by the syntax -- by the construction of the language
itself -- into a guaranteed decidable subset, rather than have to reason
things out myself and get frustrated when the compiler doesn't agree with me
(or when it used to agree with me and doesn't any longer!).

Now, there's no reason that the type-level language can't be exactly a subset
of the data-level language. I'd just rather not play guess-and-check
decidability with the compiler the same way I currently have to play guess-
and-check optimizability.

~~~
eru
There are good arguments for not even getting Turing-completeness in your
value-level language. Totality would be a nice goal.

See [http://blog.sigfpe.com/2007/07/data-and-codata.html](http://blog.sigfpe.com/2007/07/data-and-codata.html)

~~~
colanderman
Oh I totally agree. Unfortunately for many that's far too radical a notion,
despite how unnecessary Turing-completeness is for 99.95% of programs written,
and how desirable termination is. First you have to convince the
Ruby/Python/JS/Elixir crowd of the frivolity of code mutability.

------
mercurial
> The Sniveley/Laucher talk included a comment that people like dynamic
> languages because their syntax is terse.

Sure. Trying to have a strongly-typed Java program, for instance, means a lot
of boilerplate. As a consequence, it's often not done, or at least not done
enough.

> To suggest that any distaste for types comes from annotation overhead is a
> way of dismissing it as a superficial issue.

On a large codebase, there is nothing superficial about it. The more
boilerplate you have, the more difficult it becomes to understand the codebase
and to find code which actually does something.

> If everyone would just use a modern language with a fancy type system, all
> our correctness problems would be over, right? If you believe this, you've
> drunk far too much Kool-Aid. Yet, apparently our profession is full of
> willing cult members. Let me say it simply. Type systems cannot be a one-
> stop solution for specification and verification. They are limited by
> definition. They reason only syntactically, and they specify only at the
> granularity of expressions. They're still useful, but let's be realistic.

Let's be realistic, and not pretend this is a widespread opinion.

> Rich Hickey's transducers talk gave a nice example of how patterns of
> polymorphism which humans find comprehensible can easily become extremely
> hard to capture precisely in a logic. Usually they are captured only
> overapproximately, which, in turn, yields an inexpressive proof system. The
> end result is that some manifestly correct code does not type-check.

Possibly, but it's all a question of tradeoffs. There is a lot more incorrect
than correct code which doesn't typecheck.

~~~
mateuszf
> On a large codebase, there is nothing superficial about it. The more
> boilerplate you have, the more difficult it becomes to understand the
> codebase and to find code which actually does something.

In the case of types, that's true for humans. But an IDE will be able to build
a very precise model and ease navigation and code changes when it understands
types.

~~~
superuser2
Why should this be a property of the IDE and not the language itself?

No, seriously. If the language sucks but your interaction with the IDE is
fine, why not have a language that is closer to your interaction with the IDE
to begin with, and use a text editor?

~~~
mercurial
It's the old "verbose getter and setters are not a problem because I can
generate them in Eclipse" argument. To some extent, over-reliance on an IDE
indicates issues with your language of choice, just like the overuse of design
patterns.

------
breckinloggins
Things that also get my goat (from a less type theoretic point of view):

- Assuming that type checking is the only thing you might want to do with
your program at compile time. There are, in fact, arbitrarily many things you
might want to do at compile time. Textual preprocessing, expansion macros,
type checking, C++ template metaprogramming, etc. are all obvious examples.
The fact that most languages bolt all of these things on as one-off features
is a sign that something deeper is being missed (perhaps the 3-Lisp[1] idea,
for example, or just consider plain ol' Lisp macros).

- Assuming that totality, soundness, and lack of side effects are always type
checking virtues. Sometimes I'd like a "typechecker" that isn't always
guaranteed to terminate. Sometimes I'd like my typechecker to launch some
missiles. This is really more about the previous point.

- Assuming that type checking is NOT desired at runtime. Sometimes it is.
Clojure's core.typed has the property that it is "just code" and can therefore
be run at anytime and used for things other than checking that your code is
safe to ship. [2]

- Assuming that a program that fails type checking shouldn't be run. Maybe it
should and the parts of the program that fail type checking should be run
inside some monadic context. Maybe we want not only gradual typing, but
gradual enforcement. Maybe if we had richer type data, then "failing type
checking" wouldn't be binary. Perhaps a program that fails certain type checks
has to add runtime support to enable certain features, but if the same program
type checked to be about pure memory locations, the runtime would vanish and a
"straight to the metal" program would remain. (In other words, maybe what we
currently call a language's "runtime" should just be a residue of things that
couldn't be checked at compile time, among other things.)

- Assuming that types and the type checker should be a fixed feature of the
language. Perhaps type checking should be something that is built as a library
from language primitives. Take this far enough and you get something like
exotypes. [3] Which reminds me...

- Assuming that "type" is the only kind of metadata you might like to attach
to a symbol, expression, or value. There are many other things I might like to
express, and I might like to use the usual machinery of type checking to
analyze or work with those things. Clojure's value-level metadata and Common
Lisp's property lists are some good examples of generalizing this.

- Focusing on manifest typing for verifiability of programs and ignoring the
extra information it gives tooling. Visual Studio's IntelliSense and F#'s type
providers are examples. At least one person at Microsoft has trouble sleeping
at night because he or she is worrying about when the rest of us are going to
figure this one out. [4]

- Assuming that most programmers want one or the other. The article touched
on this, but I wanted to expound a bit. I would like my original prototype to
feel like play-doh and my finished product to feel like concrete. Why do I
have to do this in two different languages? "Oh I'll prototype it in python
and then switch to Haskell when I want it to be safe." What?

- In general, assuming that it is the job of the language designer to provide
the safety nets, rather than the DSL or API designer. A suitably designed
programming language is one that could prevent operation Foo in layer 3 while
allowing it in layers 1 and 2. This is one reason why languages like Idris are
so complicated. Dependent types HAVE to be complicated because the language
designer feels that it is his or her job to make sure the part of your program
that runs at compile time terminates and doesn't do anything "bad". Why do we
rarely question this assumption?

[1] [http://lisp-univ-etc.blogspot.com/2012/04/lisp-hackers-pasca...](http://lisp-univ-etc.blogspot.com/2012/04/lisp-hackers-pascal-costanza.html)

[2]
[https://github.com/clojure/core.typed](https://github.com/clojure/core.typed)

[3]
[http://dl.acm.org/citation.cfm?id=2594307](http://dl.acm.org/citation.cfm?id=2594307)

[4] [http://blogs.msdn.com/b/dsyme/archive/2013/01/30/twelve-type...](http://blogs.msdn.com/b/dsyme/archive/2013/01/30/twelve-type-providers-in-pictures.aspx)

~~~
codygman
> Sometimes I'd like my typechecker to launch some missiles.

You'd like missiles to be launched at compile time?

~~~
breckinloggins
Doesn't everyone? ;-)

~~~
AnimalMuppet
Of course. It's called "launch on warning".

;-)

------
raiph
Perl 6 elegantly blends static nominal typing and arbitrarily loose or tight
dynamic constraints.

The following slides were published a week or so ago and should demolish most
folks' overly simplistic view that one or the other is best, or indeed that
they don't play well together:

[http://jnthn.net/papers/2015-fosdem-static-dynamic.pdf](http://jnthn.net/papers/2015-fosdem-static-dynamic.pdf)

------
eru
> (It should also do so using a straightforward explanation, ideally featuring
> a counterexample or stripped-down unprovable proposition. I don't know any
> type checker that does this, currently.)

I guess the incomplete-pattern-matching warnings (in e.g. GHC) come closest.
They usually give you an example of a value you haven't matched. But that's a
very restricted and simple domain.
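You can see this with a tiny sketch (the exact warning text varies by GHC version; `-Wincomplete-patterns` is the modern spelling of the flag):

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- 'describe' deliberately omits the Nothing case. With the pragma above,
-- GHC warns at compile time and names the unmatched pattern (Nothing)
-- as a concrete counterexample.
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n

main :: IO ()
main = putStrLn (describe (Just 42))
```

The program still compiles and runs; the warning is exactly the kind of "stripped-down unprovable proposition" the article asks for, just limited to pattern coverage.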

------
zak_mc_kracken
> If somebody stays away from typed languages, it doesn't mean they're stupid,
> or lack education, or are afraid of maths. There are entirely valid
> practical reasons for staying away. Please drop the condescending tone.

Well, ok, let's hear these practical reasons?

Otherwise, it's just an empty claim.

~~~
eru
Legacy code (and legacy skill sets) is always a good practical reason.

------
michaelochurch
The hardest objective problem in computer science may be P ?= NP, but the
hardest _subjective_ problem is whether it is better to have static vs.
dynamic typing in languages. There are extremely intelligent, thoughtful
people on both sides... and it's rare in software engineering that there is
such an even divide. I'm quite familiar with Haskell, Scala, Clojure, and many
other languages... and I still can't conclusively call one better. I prefer
static, on a mix of practical grounds, but I realize that it's not a clear
victory.

Take the concept of data frames from R, also seen in Python's Pandas library,
and probably inspired by the APL family's (column-oriented) tables. It's not a
complex concept. Yet it's very powerful for interactive data exploration. It's
also a bitch to statically type (see:
[https://github.com/acowley/Frames](https://github.com/acowley/Frames)). It's
more of a leap than lenses (and probably, like lenses, extremely cool once you
fully "get" it).

If we have to pull out some advanced Haskell extensions to do something basic
in _interactive data analysis_ then I consider that worrying. (This is not to
say that Frames isn't a great piece of work. Bringing subtyping into a
Hindley-Milner derived language and not having everything go to hell is really
hard. See: Scala) And remember, this is coming from someone who's pretty
invested in the "static" side.

Clojure also has a world-class aesthetic sense. Haskell is still using
camelCase. OCaml uses under_scores; that's better than fuckingUglyCamelCase,
but not ideal. Clojure (and the Lisp family in general) is run by people who
realize that aesthetics are really important and that allowing hyphenated-
tokens is far better than allowing people who hate their space bars to write
"a-1" when they should be writing "a - 1". That's just one example. For
another, maps. Clojure has {"five" 5, "ten" 10} but Haskell has fromList
[("five", 5), ("ten", 10)]. My point is that (while I prefer Haskell, for its
types) Clojure's aesthetic sense is arguably better, and I would say world-
class, especially considering that it runs on a butt-ugly platform (JVM) and
is still a beautiful language. We need to keep watching that community closely
because they're doing some great work in a wide variety of fields, including
front-end development (ClojureScript).

 _Type systems don't just make you write annotations; they force you to
structure your code around type-checkability. This is inevitable, since type
checking is by definition a specific kind of syntactic reasoning._

There is some truth to that. Just to get data frames, you need row types,
which means you have subtyping, and that complicates type checking
considerably. Haskell (see Frames) uses multiple "experts only" language
extensions to get something that "just works" for casual use but requires
advanced knowledge to grok the types. It's great work, but its difficulty
shows that we can't just ignore subtyping.

The counter-argument that I'd make is that 95+ percent (maybe more?) of all
functions that are ever written _ought_ to have simple static types. So much
of code, when it is clean, is already well-typed, even in a dynlang. The
difference is that, in a static language, the compiler yells any time someone
fucks that up in refactoring. And, more generally, you learn about errors (not
all errors, but a surprising percentage of them) sooner and that's a good
thing. In a dynamically typed language, you see a lot of functions that evolve
into multimethods that do different things per input type(s) and (although
usually this won't happen if the code is maintained by decent designers) you
can get O(n^2) or worse complexity in the functionality. If you're
programming defensively in a dynlang, you have to think about all the crap
that might be passed into your functions. "What's a Matrix plus a Scalar? How
about a RowVector plus a ColumnVector? How about a Sparse2DArray plus a
DataFrame?"
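To make that concrete, here is a minimal Haskell sketch (hypothetical types, not any real library) of how distinct static types pre-empt those questions: the only combinations that exist are the ones you define, and anything else is rejected before the program runs.

```haskell
-- Hypothetical numeric wrappers; with distinct static types, nonsense
-- combinations simply don't type-check, so a function never has to
-- defend against them at runtime.
newtype Scalar    = Scalar Double      deriving (Show, Eq)
newtype RowVector = RowVector [Double] deriving (Show, Eq)

-- Addition is defined only for the pairings we mean to support.
addRow :: RowVector -> RowVector -> RowVector
addRow (RowVector xs) (RowVector ys) = RowVector (zipWith (+) xs ys)

scale :: Scalar -> RowVector -> RowVector
scale (Scalar k) (RowVector xs) = RowVector (map (k *) xs)

main :: IO ()
main = print (scale (Scalar 2) (addRow (RowVector [1, 2]) (RowVector [3, 4])))
-- Something like 'addRow (Scalar 1) (RowVector [1])' is a compile error,
-- not a case the function body has to anticipate.
```

In a dynlang the equivalent dispatch table lives in runtime checks (or in the programmer's head); here it lives in the signatures.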

On the whole, I'd rather be in a statically-typed language for a program of
any size, and I'm stodgy and conservative and old (but efficient and concise)
so, for me, "big code" starts around a couple thousand lines. I hate writing
boilerplate unit tests that a compiler and a type system would knock out
automatically. But I do have to give Clojure a lot of props for its community
and its aesthetic sense, and I think that the above obviously derive from its
ease-of-use and its Lisp heritage, both of which are tied to a feature
(dynamic typing) that I otherwise wouldn't much like.

The main question that I've been pondering for a few years (and not coming to
satisfactory answers) is whether it's possible to unify the cleanness-of-
language and aesthetics of Clojure/Lisp with the static typing of Haskell...
and not get some hyper-complex type system that results in compile-speed
problems at scale (e.g. Scala under typical use). It's an open problem.

------
pekk
I found this wonderfully clear and refreshing and I felt hopeful for the
future of programming. In the meantime, it's just nice to see that not
everyone is a shambling cultist, repeating canned talking points they read on
a wiki which affirm to them that their own choice is the very best choice
possible.

~~~
davidgrenier
It's fair to say that but it isn't the case that someone can have an
worthwhile opinion of something they haven't tried and a lot of people working
with say JavaScript or PHP just have never tried a decent ML-style language in
anger.

I think Stephen is a tiny bit guilty (as we all are) of what I'll put in
Kahneman's wording: "Nothing is as important as you think it is when you are
thinking about it". There seems to be an exaggeration that whatever cost a
type system has must be high... it isn't even true of dependently typed
programming languages.

I also think product and sum types can be given a free pass from most
complaints, and while we're at it, you can throw in first-class function
types as a minimum. And I will grant that you can stay away from ad hoc
polymorphism and subtyping.
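A minimal Haskell sketch of that "free pass" feature set -- a product type, a sum type, and a first-class function type, with no ad hoc polymorphism or subtyping anywhere:

```haskell
-- Product type: a Point carries both coordinates at once.
data Point = Point Double Double

-- Sum type: a Shape is exactly one of these two cases.
data Shape = Circle Point Double
           | Rect Point Point

area :: Shape -> Double
area (Circle _ r) = pi * r * r
area (Rect (Point x1 y1) (Point x2 y2)) = abs (x2 - x1) * abs (y2 - y1)

-- First-class function type: 'area' is passed to 'map' like any value.
totalArea :: [Shape] -> Double
totalArea = sum . map area

main :: IO ()
main = print (totalArea [Rect (Point 0 0) (Point 2 3)])
```

Nothing here requires type classes or a subtype lattice; it is the monomorphic core that most complaints about type systems don't actually touch.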

~~~
dragonwriter
> It's fair to say that but it isn't the case that someone can have an
> worthwhile opinion of something they haven't tried

I disagree. I don't have to try hitting myself in the head with a hammer to
have a reasonable basis to believe it would be a bad idea. There's not much
point in having reasoning ability if you can't apply it to develop useful
opinions on what is and isn't even worth trying.

Obviously, that's not to say that there aren't _additional_ insights possible
from direct experience, and that people sometimes apply prejudice _rather_
than sound reasoning to present opinions on things they haven't tried, but it's
simply silly to say that people _cannot_ have a worthwhile opinion on
something they haven't tried.

~~~
AnimalMuppet
I think you're misunderstanding davidgrenier's comment, specifically the line
you quoted. It appears to me that you are actually agreeing with him.

~~~
dragonwriter
There may be an "is -> isn't" or "can -> can't" issue in the line I quoted,
but I don't think so -- the rest of the sentiment in that comment seems to be
a specific case where the poster believes that someone is making an error of
offering an opinion without experience (which I agree is an error). That makes
sense (though, I'd obviously argue, it represents an overgeneralization from a
valid example) if it's supporting a generalization that opinions without
experience are invalid, but, without some "but" or "on the other hand", etc.,
it doesn't make a lot of sense if the thesis it goes with is that _some_ such
opinions _are_ valid.

So I don't think I misunderstood it.

