
The limits of type theory: computation vs. interaction - danghica
http://researchblogs.cs.bham.ac.uk/thelablunch/2015/06/types-computation-vs-interaction/
======
paganel
I'm in no way a computer-science expert, just a mere programmer, but by the
looks of it this discussion all boils down to the (quite big, I'd say)
impedance mismatch between real life ("computer programs" in this article) and
us, as humans, trying to understand said life and apply rules to it (in this
case by applying "type theory").

For example, looking at this:

>>> 0 if 1 else 'a'

made me remember one of my older thoughts on this subject, more exactly
whether the word 2/two in a sentence like "there are two apples on the table"
should be treated as an Integer or a String. Some would say that we should
only treat it as an Integer if we intend to do computations on it (for example
adding those 2 apples to another 3 apples), to which I'd ask how come the
word's essence (its "transformation" from a String to an Integer) changes
depending on the type of action we intend to apply to it? (so to speak,
English is not my primary language).

Anyway, things in this domain are damn complicated, and have been so for the
last 2500 years at least, starting with Socrates (or was it Plato, in fact?)
who was asking about the essence/idea of a table (i.e. about its type): is a
table still a table if it only has 3 legs remaining? What about two? And
continuing with Aristotle's categories and, closer to our days, Frege and
Cantor (somehow the table problem can be connected to the latter's continuum
hypothesis, but I digress). So one of my points is that this is not only a
computer science problem.

~~~
Retra
That 'table' problem is not that complicated once you understand the notions
of 'approximation' and 'continuity'. A table with three legs is just a better
approximation to what is meant by "table" than one with two, or worse than one
with four.

It basically leads to things like this:

[http://en.wikipedia.org/wiki/Domain_theory](http://en.wikipedia.org/wiki/Domain_theory)

~~~
paganel
And then I might respond (like many others did) that an "approximate" table is
not a "real" table at all, i.e. we're back to step one of the problem.

The way I like to look at it is that Aristotle's logic principles and all that
followed (including the concept of "continuity") are a very nice and
especially very useful Domain Specific Language which helped us, humans, fly
rockets to the moon and build iPhones (we wouldn't have computers if NAND
gates didn't respect Aristotle's logic principles). But if we go back and
re-read Heraclitus, the guy against whom Aristotle fiercely battled, then the
concepts of "continuity" and "one vs. many" become a lot more blurry. But I
digress, again, this was supposed to be about computer type theory.

~~~
igravious
If it's any consolation(?) I agree with you.

Plato's theory of Forms says: we can see around us many examples of horses.
What makes them all horses? Why, the ideal form of a horse. Plato, I believe,
then claims that these forms are perfect and timeless, and that the actual
world is inhabited by imperfect reproductions of these forms. Apologies to
scholars of Plato.

Aristotle's Categories, while superficially quite different, is I think
tackling the same problem. His most universal abstract categories try to
answer the question, "what is it that all things, regardless of their nature,
have in common?" And so his 10 (suspicious, that 10) categories. His
categories are very much rooted in language, in that they map readily onto
parts of speech.

Both are ontological investigations. Both probe the essence of what it means
to be something. And so on to the present day. Even though Russell was
attempting to resolve a paradox in set theory, what I think he actually did
was answer the question, "what is a thing (any thing) in essence?", and I
think the answer is, for our purposes, a term (instance) of a type (form).

It is not at all obvious how following these ancient investigations leads us
to the door-step of type theory but all indications seem to point to it. You
seem to have thought about this a good deal. I'd be interested in what you
think. I've left out a _lot_ of detail but I hope what I've said is useful to
somebody.

------
dschiptsov
I wouldn't say that heterogeneous ifs are silly for languages with explicitly
type-tagged data (Lisps), where a value has a type, not a variable (actually,
there are no variables -- there are _bindings_ -- symbols acting as pointers
to type-tagged values). Moreover, I would say that homogeneous conds and cases
(pattern matching) with a "default" branch are way more silly.

Also, I am very surprised by such a statement from 'the author of several
papers with the word type in the title'.

I would also argue that the design of CPUs such that any sequence of bits can
be loaded into any register and interpreted depending on the current context,
instead of having typed registers, is not a shortcoming but a fundamental
feature. It makes code small and elegant. The whole of TAOCP shows how.

~~~
userbinator
_I would also argue that the design of CPUs such that any sequence of bits can
be loaded into any register and interpreted depending on the current context,
instead of having typed registers, is not a shortcoming but a fundamental
feature. It makes code small and elegant._

Indeed, the fact that digital data is fundamentally "untyped" and its meaning
depends entirely on interpretation is the entire reason why computers are so
powerful and general-purpose.
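
For what it's worth, even a typed language can expose that untyped substrate.
A small Haskell illustration (assuming a recent `base` that ships GHC.Float's
bit-cast functions):

    import Data.Word (Word64)
    import GHC.Float (castDoubleToWord64)
    
    -- The same 64 bits, read under two different interpretations:
    asBits :: Word64
    asBits = castDoubleToWord64 1.0   -- the IEEE 754 pattern 0x3FF0000000000000
    
    main :: IO ()
    main = print asBits               -- 4607182418800017408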

~~~
eli_gottlieb
From a type-theoretic perspective, a language can be either logically
consistent by means of strong normalization _xor_ Turing-complete. The untyped
lambda calculus is the smallest, most elegant functional language on the right
half of that fork. It's also not strongly normalizing, so you can't prove
theorems with it -- oh well!

~~~
YellPika
> From a type-theoretic perspective, a language can be either logically
> consistent by means of strong normalization xor Turing-complete.

This is a common misconception. A counter-example: take System F, add an `IO`
monad, and add a function `fix : forall a. (a -> a) -> IO a` for general
recursion. Our language is now Turing-complete, but remains consistent since
any falsehoods produced by `fix` can't escape the `IO` type, e.g. you can
prove `fix id : IO _|_`, but not `_|_`.
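
To picture it in Haskell-ish syntax (Haskell itself is partial everywhere, so
this only shows the shape of the idea, not a proof):

    -- In a total language, a coinductive Delay type quarantines
    -- non-termination, the way IO does in the System F example above.
    data Delay a = Now a | Later (Delay a)
    
    -- cf. `fix id : IO _|_`: a legitimate value of type `Delay a`,
    -- from which no value of type `a` can ever be extracted.
    never :: Delay a
    never = Later never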

~~~
eli_gottlieb
Fair enough, and that's been one of the approaches to paraconsistency and
inconsistency in modal logic, too. Of course, from a theorem-proving
perspective, it's utterly useless, as with a proof-checker, _at some point_ I
need to escape the monadic container and get back into `Prop`.

------
stevan
I'd disagree with the two statements:

> People who think they understand something about the theory of programming
> languages, including me, tend to agree that what Python does is wrong.

> In fact you can program heterogeneous lists in dependently typed languages,
> but it’s unreasonably complicated.

Here's some Agda code that shows that both heterogeneous if statements and
lists make sense and are easy to work with in type theory.

    
    
        data Bool : Set where
          true false : Bool
    
        if : ∀ {ℓ} {P : Bool → Set ℓ} (b : Bool) → P true → P false → P b
        if true  t f = t
        if false t f = f
    
        postulate Char : Set
        {-# BUILTIN CHAR Char #-}
    
        data ℕ : Set where
          zero : ℕ
          suc  : ℕ → ℕ
    
        {-# BUILTIN NATURAL ℕ #-}
    
        ex₁ : ℕ
        ex₁ = if {P = λ b → if b ℕ Char} true 0 'a'
    
        infixr 5 _∷_
    
        data List {a} (A : Set a) : Set a where
          []  : List A
          _∷_ : A → List A → List A
    
        data HList : List Set → Set₁ where
          []  : HList []
          _∷_ : ∀ {A As} → A → HList As → HList (A ∷ As)
    
        ex₂ : HList (ℕ ∷ Char ∷ (ℕ → ℕ) ∷ [])
        ex₂ = 1 ∷ 'a' ∷ suc ∷ []

~~~
ngrilly
How can you compare that code with the Python one liner in the original
article? It's interesting, of course, but it's not "easy to work with".

~~~
Dewie3
He explicitly gives standard machinery like the boolean type and natural
numbers, presumably to show more clearly what the parts are built up from. You
probably wouldn't need to write all of that to use HList at a later point.

~~~
danghica
You would still need to program in Agda though :) Cheers!

------
mafribe
Maybe the author plans to write about it in later parts of the article, but
it's misleading to assume that (1) there is no work on types for interacting
processes and (2) that types always have to be based on some underlying ideas
from functional programming.

Much recent research in programming languages is about types for interacting
processes. The most well-known, but by no means only, example is the session
types pioneered by K. Honda. The key idea is that each process has a bunch of
interaction points, and the interaction at each interaction point is
constrained by a type. Such types typically answer questions like the
following (a toy sketch follows the list):

- What direction does data flow? E.g. channel x is used for input while
channel y does only output.

- What kind of data is exchanged on the channel? E.g. x is used to exchange a
boolean, while y exchanges pairs, the first component of which is a double,
and the second component is a channel name which is used for inputting a
string.

- How often is a channel used? E.g. x is linear (used exactly once), y is
affine (used at most once) while z can be used an arbitrary number of times.
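
To make the flavor concrete, here is a toy encoding of one channel end's
protocol as a type (my own minimal Haskell sketch, not Honda's formal system;
in particular it doesn't enforce linearity):

    -- One channel end's remaining protocol, recorded in its type:
    newtype Recv a next = Recv (a -> next)  -- input an 'a', continue as 'next'
    data    Send a next = Send a next       -- output an 'a', continue as 'next'
    data    End         = End               -- the channel is finished
    
    -- "receive a Bool, then send a Double, then stop":
    type BoolThenDouble = Recv Bool (Send Double End)
    
    -- A process respecting that protocol; any other shape is a type error.
    proc :: BoolThenDouble
    proc = Recv (\b -> Send (if b then 1.0 else 0.0) End)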

This setup has been investigated from many angles, and one of the most
beautiful results in this space is that normal types known from lambda-
calculus (e.g. function space) can be recovered _precisely_ as special cases
of such interaction types, using Milner's well-known encoding of functional
computation as interaction.

~~~
pron
> it's misleading to assume that ... types always have to be based on some
> underlying ideas from functional programming.

One of FP's more annoying achievements is somehow convincing people that it's
the only way to make programs more verifiable or more "mathematical".
Imperative, stateful computation can be (and is) just as mathematical
(whatever that means) and just as verifiable as pure-FP (it must be
constrained in some ways, but not as extreme as requiring complete
referential-transparency).

It's good to learn about applying types to process calculi. I wasn't aware of
that work at all.

~~~
mafribe

> Imperative, stateful computation [...] is [...] as [...] verifiable as pure-FP

Exactly. When you verify your programs using some kind of program logic you
realise that you have to keep track of all relevant data anyway.

The condescension towards languages like C/C++ and Java that some FP
extremists have shown is probably one of the reasons for the rift between
working programmers and PL theory. Fortunately, things are much better now,
with most new languages (e.g. Scala, Rust, Clojure) bridging both worlds.

> applying types to process calculi. I wasn't aware of that work at all.

Maybe "A Gentle Introduction to Multiparty Asynchronous Session Types"
[http://goo.gl/FeVLv3](http://goo.gl/FeVLv3) could be an introduction?

~~~
eli_gottlieb
> The condescension towards languages like C/C++ and Java that some FP
> extremists have shown is probably one of the reasons for the rift between
> working programmers and PL theory. Fortunately, things are much better now,
> with most new languages (e.g. Scala, Rust, Clojure) bridging both worlds.

Look, if you want to build a whole language around Hoare logic, go ahead. We
just think it's ugly.

~~~
pron
You don't need to build a "whole language" around any kind of logic for it to
be sufficiently verifiable. Types are useful, but it's not necessarily an all-
or-nothing question. There are non-type-based verification tools (symbolic
execution and deduction), and tests complete the picture. A language that
doesn't let you write a program unless you yourself have worked hard to prove
to the compiler that it's correct may be more trouble than it's worth.
Verification does not necessarily have to be a part of compilation, and it
does not necessarily require the proof to be spelled out in code.

~~~
mafribe
I agree that it is good to separate programming and verification. Otherwise
the programmer is forced by the compiler to worry about correctness and
termination from Day 1.

Moreover, Curry-Howard based approaches really work only for a restricted form
of programming: pure and total functions.

------
danblick
The article isn't loading for me right now, but I thought I'd mention co-
inductive types and how they relate to "interaction".

In Coq, co-inductive types are used to model infinite objects like (never-
ending) streams or (non-terminating) program executions.

    
    
      Inductive List (A: Set): Set :=
      | Nil: List A
      | Cons : A -> List A -> List A.
    
      CoInductive Stream (A: Set): Set :=
      | SCons: A -> Stream A -> Stream A.
    
      Fixpoint len {A: Set} (x: List A): nat :=
        match x with
         | Nil _ => 0        (* type parameters appear as _ in patterns *)
         | Cons _ _ y => 1 + len y
        end.
    
      CoFixpoint ones : Stream nat :=
        SCons nat 1 ones.
    
    

If you want to talk about how systems interact with the world, you can use
bisimilarity relations to prove "these two systems interact with the world in
the same way".

You can also use co-inductive types to embed the temporal logic of actions
(TLA) in Coq; TLA is the language Leslie Lamport works with for "Specifying
Systems".

* [http://www.labri.fr/perso/casteran/RecTutorial.pdf](http://www.labri.fr/perso/casteran/RecTutorial.pdf)

* [http://www.amazon.com/Specifying-Systems-Language-Hardware-E...](http://www.amazon.com/Specifying-Systems-Language-Hardware-Engineers/dp/032114306X)

* [http://en.wikipedia.org/wiki/Bisimulation](http://en.wikipedia.org/wiki/Bisimulation)

------
belovedeagle
> Each monad indicates one particular kind of interaction, such as using
> memory or perform input-output.

NO NO NO. Stop. It was cute when people got this wrong in 2010; now it's just
ignorant to pretend to know enough about Haskell to tell us something about it
and still get this so wrong. There is no inherent connection between monads
and statefulness; none whatsoever. Monads just happened to be a useful
abstraction to make opaque for the purposes of IO.
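
A small example of the point: Maybe is a monad with no state or I/O anywhere
in sight; it merely sequences possible failure.

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)
    
    -- Monadic sequencing with zero statefulness: the second division
    -- fails, so the whole computation is Nothing.
    calc :: Maybe Int
    calc = do
      a <- safeDiv 10 2
      b <- safeDiv a 0
      pure (a + b)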

~~~
cousin_it
People keep getting it wrong because monads are the wrong abstraction. The
right abstraction is algebraic effects. They correspond much more directly to
"interactions", because the types of effect handlers are exactly the types of
allowed "interactions". As a bonus, algebraic effects commute with each other,
while monads usually don't. (For example, there's a gratuitous difference
between Maybe (List a) and List (Maybe a), and people who want to program with
effects shouldn't have to think about such differences.)

There was a great quote saying that when you get something almost right but
not completely right, you have to explain it to people over and over again.
IMO people should pay more attention to that quote; it applies to monads
perfectly.

~~~
dragonwriter
> For example, there's a gratuitous difference between Maybe (List a) and List
> (Maybe a)

In what sense is the difference "gratuitous"? It seems to me it's a pretty
significant, meaningful difference, and the two types represent radically
different things.

~~~
cousin_it
Yes, I agree. But that difference is irrelevant and distracting to people who
just want to do effectful programming. A computation that's allowed to call
handlers A and B should always be convertible to a computation that can call
handlers B and A. There's no point in using a more general abstraction
(monads) that doesn't give you the operations you need.

~~~
dllthomas
This is handwavy and wrong.

Imagine we have two effects that need handling. One is "failure" --
terminating the computation and propagating the fact of failure. The other is
"emit", producing some value for an external process.

If I combine these two, the question arises as to the behavior of "emit 3;
fail" -- does it emit or bury the 3? Depending on the circumstance, either
could be useful, and it's a tremendous difference that is neither irrelevant
nor distracting.
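
Concretely, in monad-transformer terms (a sketch using the standard
transformers/mtl types; which stack you pick is exactly the ordering choice
described above):

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Maybe (MaybeT (..))
    import Control.Monad.Writer (Writer, WriterT, runWriter, runWriterT, tell)
    
    -- MaybeT over Writer: the log survives the failure, so the 3 is emitted.
    emits :: MaybeT (Writer [Int]) ()
    emits = lift (tell [3]) >> MaybeT (return Nothing)
    -- runWriter (runMaybeT emits) == (Nothing, [3])
    
    -- Writer over Maybe: the failure wipes the log, so the 3 is buried.
    buries :: WriterT [Int] Maybe ()
    buries = tell [3] >> lift Nothing
    -- runWriterT buries == Nothing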

~~~
cousin_it
If I understand algebraic effects correctly, the computation "emit 3; fail"
might emit or bury the 3 (or do something even more strange) depending on
which effect handlers you pass to it. Keeping effect handlers outside of
computations is what allows for commutativity etc.

~~~
dllthomas
I am not objecting to algebraic effects; I am objecting to your calling the
different orderings "irrelevant and distracting." It is a vitally important
detail that can easily make the difference between correct and broken code.
When a code snippet can be written making no assumptions about ordering,
exposing it as polymorphic with respect to ordering is great. When it must
rely on a particular ordering, pretending it doesn't will bite you.

------
edtechdev
[0; 'a'; fun x -> x + 1];;

The common type is object, or perhaps more specifically non-nil object
(object!), if 0 is not treated as nil (a value type).

If you try to interact with such an object (add it to a number, call a method,
etc.), your language has to decide what to do: fail because that method is not
part of the type object; force people to cast to the proper sub-type; allow a
kind of dependent typing/casting (if type(it)=int: it+1 else if type(it)=char:
it.uppercase); or (like boo, python, ruby, etc.) allow "duck typing": allow
any calls/methods on the object and check them at run-time instead of
compile-time.
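
In statically typed terms, the closest analogue of "the common type is object"
is an existential wrapper that forgets everything about a value except an
agreed interface. A rough Haskell sketch:

    {-# LANGUAGE ExistentialQuantification #-}
    
    -- An "object" that remembers nothing except that it can be shown:
    data Obj = forall a. Show a => Obj a
    
    describe :: Obj -> String
    describe (Obj x) = show x
    
    hetero :: [Obj]
    hetero = [Obj (0 :: Int), Obj 'a']
    -- The function (\x -> x + 1) can't join the list: functions have no
    -- Show instance, which is precisely the "what do we allow?" decision.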

~~~
Dewie3
_Typing_ in the Haskell et al. tradition doesn't like subtyping. And such
languages also like their parametricity.

------
muraiki
"To me this is one methodological weakness of type theory, the commitment of
having types for all terms in the language. Why is that? Types are designed to
facilitate composition, but the natural unit for program composition is not
the term but the function, or even the module. It makes sense to assign types
to modules — it is methodologically consistent. But how we calculate these
types could conceivably be more flexible, allowing a global perspective within
the unit of composition. Types, as properties at the interface could be
calculated using the entire arsenal of (decidable!) logics and static
analyses."

I admit to not being that familiar with type theory (I've done a little Scala
but most of my FP experience is in Clojure) but when I read this my gut
reaction was: "Isn't that kind of what Go is doing?"

Meaning that in Go the focus of compositionality isn't on basic types and the
functions that act on them (see Go's lack of generics), but rather on getting
different modules to agree on "what should be done to some stuff" using
interfaces. This is putting interaction first while also getting
compositionality in a "generic" way, given that interfaces in Go are
implemented implicitly.

~~~
Dewie3
Yeah, Go has interfaces. Just like any sane, decently modern statically typed
language. And it uses them to declare _interfaces_ between things. A great
concept, like the idea behind abstract data types -- defining data types by
what operations can be performed on them.

But what is so special about Go's approach to interfaces? The only _special_
thing I see is that interfaces are implemented implicitly, which I see more as
an "implicit over explicit" decision rather than anything having to do with
composition per se. Yet people keep raving about it.

~~~
muraiki
Well, I think that perhaps what makes Go different is not only interfaces, but
interfaces as part of a larger package -- meaning the "not quite OO" approach
it brings, in contrast to the interfaces and general design philosophy of a
language like Java.

------
nabla9
> The result is to provide memory elements as black box library components,
> defined and tested in a different, lower-level, language. But at the
> interface the two levels of abstraction are reconciled and composition can
> occur.

I feel the same way. Uniqueness typing can work as the glue between the
components and the upper level of abstraction.
------
pron
The question of computation vs. interaction is a fascinating one, but I think
there's another (perhaps related) question about the applicability of "rich"
types to practical programming, and that is the role of types, and how far
they _should_ be stretched. I think there is little question that "simple"
type systems (like in C/C++/Java/C#) aid in developing large systems. But much
of the fascination with type theory (as the article states) has to do with the
Curry-Howard isomorphism, and the ability of programs to carry proofs of some
of their properties. This is believed to be a powerful tool in software
verification. The question then becomes: to what extent should this tool be
used? What properties _can_ be proven? What properties are programmers willing
to try to prove? How much harder is it to prove a property than to write a
possibly-incorrect algorithm without proof?

The benefit of proofs in code would only be worthwhile if it were relatively
easy for common programmers to prove the absence of bugs that would have been
more costly to find and fix using other methods, and if that ease did not come
at the cost of other properties (such as performance). So far I have a feeling
that this is not being delivered. Your run-of-the-mill clever use of
types usually prevents very "cheap" bugs, and hard bugs are even harder to
prevent with types.

I'll give a concrete and recent real-world example. A few months ago, a bug
was discovered in Python's and Java's sequential sorting algorithm[1],
Timsort. It was introduced to Python in 2002, and its adoption by Java in 2011
has probably made it the most used sorting algorithm in the world. Yet, it
contained a fatal bug that could cause it to crash by overflowing an array.
There are a few interesting observations one could make about the bug and its
discovery with relation to type systems.

1. The bug was discovered by a verification program that does not rely on
type-based proofs in the code. The output from the verification system quickly
showed where the bug actually was. AFAIK, the verification program required no
assistance with proving the correctness of the algorithm, only that the
invariants the algorithm seeks to maintain be stated.

2. Proving the invariants required by the (rather complicated) algorithm with
types would have required dependent types, and would probably have been a lot
harder than using the verification program used. It is also unclear (to me)
whether the problem and the fix would have been so immediately apparent.

3. The interaction of "type-proofs" with program semantics (namely, the
requirement of referential transparency) would have made the type-proven
algorithm much slower, and this particular algorithm was chosen precisely for
its speed.

4. The corrected version ended up not being used because it could have
adversely affected performance, and it was noticed that the "incorrect"
algorithm with a slightly larger internal array would still always produce
correct results for the input sizes currently supported in Java.

This example of a purely computational real-world bug shows some more
limitations of the type-system approach to verification, I think.

Going back to effects and interactions, pure-FP approaches have a hard time
dealing with timing properties. I already gave the same example on another
discussion today, but the Esterel language (and languages influenced by it)
has been used successfully to write 100% verifiable programs for safety-
critical domains in the industry for a long time now. It is an imperative
language (though it's not Turing complete) with side effects and a lot of
interactions, yet it has a verifier that uses Pnueli's temporal logic to
guarantee behavior, without requiring the programmer to supply proofs.

To summarize, type systems have shown their usefulness in reducing the cost of
software development, but they need to be considered as a component in a
larger verification space, and they need to strike a balance between
usefulness and ease of use.

[1]: [http://www.envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/](http://www.envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/)

[2]: [http://www.envisage-project.eu/key-deductive-verification-of-software/](http://www.envisage-project.eu/key-deductive-verification-of-software/)

~~~
sfvisser
> The benefit of proofs in code would only be worthwhile if it were relatively
> easy for common programmers to prove the absence of bugs that would have
> been more costly to find and fix using other methods, and if that ease did
> not come at the cost of other properties (such as performance).

I don't understand why proofs need to be universally easy to write/understand
to be useful. Why can't they be useful for a subset of programmers on a subset
of projects?

> So far I have a feeling that this is not being delivered. Your run-of-the-
> mill clever use of types usually prevents very "cheap" bugs, and hard bugs
> are even harder to prevent with types.

Types can be used to encode complicated invariants of software systems, which
will automatically be machine-checked throughout the entire lifetime of your
project. There is a vast gradient from simple properties like 'this value is a
number' to 'your data migrations are guaranteed to be completely reversible
without any information loss' to 'the sequential composition of a set of
procedures will terminate within 600 cycles'.

Types are not only for preventing bugs. They model invariants, help design,
and allow easy reasoning over large software projects.
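
One standard illustration of a machine-checked invariant (a sketch, not tied
to any particular project): a list whose length is part of its type, so taking
the head of an empty list is rejected at compile time.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    
    data Nat = Z | S Nat
    
    -- A vector whose length is tracked by the type checker:
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a
    
    -- Only non-empty vectors are accepted, so the classic
    -- "head of empty list" bug is unrepresentable.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x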

> This example of a purely computational real-world bug shows some more
> limitations of the type-system approach to verification, I think.

No, it doesn't. You just picked one example for which it didn't really work
out, for some reason. That tells you exactly zero about whether type-based
verification could be useful in the general case (or in a specific subset of
verification cases).

> To summarize, type systems have shown their usefulness in reducing the cost
> of software development, but they need to be considered as a component in a
> larger verification space, and they need to strike a balance between
> usefulness and ease of use.

Of course they need to. Luckily that's what smart people are working on every
day, making types easier to use.

~~~
pron
> Why can't they be useful for a subset of programmers on a subset of
> projects?

They can, but that would hardly make a deep impact on the industry. And my
question would then be: why do _you_ think they'll be useful only for a subset
of programmers and a subset of projects?

> Types can be used to encode complicated invariants of software systems,
> which will automatically be machine-checked throughout the entire lifetime
> of your project.

I know they can, but so can other verification mechanisms that are less
intrusive, or even pluggable types, like those that have recently been added
to Java 8.

One point of difficulty -- which the article is talking about -- is precisely
those invariants that are really hard to enforce, like "every request X will
be handled within 300 ms", or "the server will never respond Y if the user has
never asked for Z", or "the system will never contact the database if the
operator has turned on the offline flag". Similar (though less sophisticated)
kinds of guarantees _are_ made and verified by Esterel, without requiring the
programmer to supply the proof. Other approaches include behavioral
programming[1] (made by the same people behind the research that led to
Esterel), which simply lets you specify a rule -- "if the offline flag is on,
disable calls to the database" -- which will be enforced no matter how much
lower-priority rules try to access the database.

Types are just one approach to verification. There are invariants they excel
at proving, and others not so much. People who research type systems should,
of course, explore everything that's possible with them, but software
developers should realize that types are not the be-all and end-all of writing
verifiably correct programs, and that for some program properties there are
better approaches than types.

> They model invariants, help design and allow easy reasoning over large
> software projects.

I agree in theory. But I also think they may help up to a certain point and
then start hurting. The problem is, this has never been put to the test.

[1]:
[http://www.wisdom.weizmann.ac.il/~bprogram/more.html](http://www.wisdom.weizmann.ac.il/~bprogram/more.html)

~~~
whateveracct
> But I also think they may help up to a certain point and then start hurting.
> The problem is, this has never been put to the test.

You're right that the current cutting edge of types in programming hasn't been
put to the test yet. Scala and OCaml and Haskell all get varying degrees of
industry use, but it's nothing compared to C++, Java, Python, etc. So it's
clear that the industry hasn't come anywhere close to using proper types at
large.

I guess my point is this: why then do you think at a certain point they start
hurting? It's not like the industry has even gotten close to using types to
their full potential. We've certainly never reached that "certain point" where
they start hurting.

~~~
pron
> Why then do you think at a certain point they start hurting

Oh, it's just a guess, but partly because of Haskell and its success at being
continuously rejected by the industry for two decades now (it was touted as
the next big thing when I was at university over 15 years ago). There have
been few if any languages of Haskell's fame that have so clearly tried to
break out of academia and have had so little use in the industry.

And the problem is less with types themselves than with the referential-
transparency-everywhere and non-strict-evaluation approach. The result is that
Haskell allows -- nay, demands -- a lot of reasoning about the program upfront
_at the expense_ of reasoning about it as it runs, with a debugger and a
profiler (because of the complicated execution model). I also think Haskell
places an unreasonable burden of thinking about the types vs. the algorithm
and domain.

As to Scala, because it's such a "multi-paradigm" language, it's pretty hard
to know exactly _how_ it's used in the industry. There are certainly those
that use it anywhere between Haskell-like with scalaz and Java-like, only
without having to write constructors and getters (in fact, Spark is written, I
believe, closer to the latter style). But even if Scala is to be counted as an
advocate for rich typing, its success record is pretty spotty (and, again, I
wouldn't blame that necessarily on the type system).

I could, of course, be very wrong. One thing is certain: _what_ a type system
can do is a comp-sci question; how useful it is is a psychological question,
and trying to argue about the usefulness of type systems (or any programming
approach) as if it were a mathematical problem is just silly.

> We've certainly never reached that "certain point" where they start hurting.

I think Haskell is beyond that point, although I still believe a language with
a type system _as sophisticated_ as Haskell's might not be (because getting
more for the same price might be worth it). By that I mean that Haskell types
-- just like the article said -- waste a lot of mental power in the wrong
places (i.e. on proving trivial stuff). I don't know what that imagined
language would look like, but it would probably have some effect types (and
like the author of the article has noted, there's a huge difference between
being able to simulate effect systems and effect systems controlling actual
effects).

~~~
codygman
> I also think Haskell places an unreasonable burden of thinking about the
> types vs. the algorithm and domain.

When you are encoding your logic in types, the distinction between the types
and "the algorithm and domain" is very blurred, if not erased entirely. I
wouldn't want to be programming in any other language if I knew very little
about the domain.

~~~
pron
But "encoding your logic in types" is often non-trivial and _very_ different
from thinking about the algorithm itself. Of course, most algorithms require
dependent types (at the very least!) to be fully encoded, so you can't really
encode them in Haskell...

------
murbard2
One can also create a largely generic type, for example a json like object and
define polymorphic operators on these objects which handle casting as needed.

The program will need to keep track of the types at runtime, but that's also
true of Python, so what?
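
A minimal sketch of that design (hypothetical names; the fallback policy in
the last equation is exactly the kind of decision such a library has to make):

    -- One universal, JSON-like type; operators dispatch on the runtime tag.
    data Value
      = VNum Double
      | VStr String
      | VList [Value]
      deriving Show
    
    -- A polymorphic "+" that casts as needed: numbers add, strings
    -- concatenate, and anything else falls back to pairing in a list.
    vAdd :: Value -> Value -> Value
    vAdd (VNum a) (VNum b) = VNum (a + b)
    vAdd (VStr a) (VStr b) = VStr (a ++ b)
    vAdd a        b        = VList [a, b]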

------
jimmyhmiller
The site appears to be down. Anyone have a mirror?

------
Dewie3
I wonder what someone like Conor McBride would have to say about this article.

~~~
Dewie3
I guess he would downvote me? I don't get HN any more...

~~~
gus_massa
I didn't downvote you, but here one-liners are discouraged and often
downvoted. They are usually very difficult to understand without more context.
For example, I don't know who Conor McBride is (
[http://en.wikipedia.org/wiki/Conor_McBride](http://en.wikipedia.org/wiki/Conor_McBride)
)!

Try to expand the comment. Something like: "Conor McBride has an article that
agrees/disagrees with this [link] because [two or three line explanation]."

