
Dependently Typed Programming - waffle_ss
http://ejenk.com/blog/why-dependently-typed-programming-will-one-day-rock-your-world.html
======
MetaCosm
Dependently typed programming already rocks my world -- but not my job.

I enjoy Haskell.... as a hobby. Whenever I am playing with it -- I distinctly
understand that "it" is the point. The focus is squarely on the language and
the programmers. I absolutely think this has a place, and I am delighted it
exists. I remember the ... purity of my first little chess engine I wrote in
it -- how it itched the OCD spot in my brain so perfectly, it made me
exceptionally happy.

That said, when I am programming "in the wild"... the language is almost never
the point. The application is the point, the library is the point, the problem
domain is the point, the user is the point. Cleverness matters little, often
what needs to be done is obvious and tedious. The biggest problems tend to be
maintenance in every sense of the word... how do you maintain this if you are
successful (multiple data centers, dozens of employees)? How do you avoid
rewriting everything at each major growth spike?

This means I use boring dependable imperative languages for what I bet my
business (start-up) and future on -- and I spend my hobby time using far sexier
functional languages... but I don't confuse my love for them with thinking
they are a good fit for my business.

~~~
squidsoup
Presumably one of the advantages of choosing a language with a strong type
system like Haskell is that it would allow for better maintenance and
refactoring. What "dependable imperative languages" do you use in practice
that offer better maintainability than Haskell?

~~~
virtualwhys
> What "dependable imperative languages" do you use in practice that offer
> better maintainability than Haskell?

Scala

------
coolsunglasses
> proof that your program will terminate

Or that a coprogram will be productive (services/servers/streaming fall under
this).

Most programs these days operate on codata, so termination on a per-
destructed-codata-component basis is productivity, if I understand correctly.

(ejenk touches on this in the comments as well, but I wanted to add this
point)
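
To make "productive" concrete, here's a minimal Haskell sketch (the names are
invented for illustration): a coprogram never finishes as a whole, but every
finite observation of it does.

    
    
        -- An infinite stream of naturals, defined corecursively. It never
        -- terminates as a whole, but it is productive: each element arrives
        -- after finitely many steps.
        nats :: [Integer]
        nats = go 0
          where go n = n : go (n + 1)
    
        -- Any finite observation of it does terminate:
        firstTen :: [Integer]
        firstTen = take 10 nats  -- [0,1,2,3,4,5,6,7,8,9]
    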

------
peaton
UPenn has a ton of really interesting work on extending the Haskell type
system to support dependent typing. Some of the coolest pieces I've heard
about had to do with guaranteeing the security of a server application through
dependent typing.

I never found out what the actual paper or project was that accomplished this.
But these two papers[1][2] seem pretty interesting - having to do with
guaranteeing safety of database access.

[1] http://www.cis.upenn.edu/%7Eeir/papers/2012/singletons/paper.pdf

[2] http://www.cis.upenn.edu/~ahae/papers/dfuzz-popl2013.pdf

~~~
gtani
this idris paper, maybe, or how is security guaranteed?

http://www.simonjf.com/writing/bsc-dissertation.pdf

quote:

    
    
        enforce resource usage protocols inherent in 
        C socket programming, providing safety guarantees,

~~~
peaton
Hmm, I don't believe so. My prof went on about a group at Penn using dependent
types to prove the security of server applications. But that is definitely an
interesting paper too. Thanks for sharing!

------
TacticalCoder
Interesting article. Note that the article describes the maybe monad as used
in Haskell, explains what it solves, and then offers this criticism:

 _"...we still need to run the program to observe that the computation fails
somewhere and then figure out what happened"_

As if figuring out where a problem happened was an issue in Haskell (or,
really, in any language supporting something like the maybe monad).

If figuring out what happened when something failed is complicated because
there were several maybe monads and in the end you got nothing, then Haskell
solves this very nicely if I'm not mistaken: there's "Either", which can carry
an error message along with the fact that there's no result.

Now I'm not disputing that a proof that something cannot happen would be great
(and hence in this case not requiring Maybe / Either): all I'm saying is that
people shouldn't misread that thinking that Haskell has no clean way of
reporting why nothing got returned.
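
To make that concrete, here's a minimal Haskell sketch (`parseAge` is an
invented example, not from the article):

    
    
        import Text.Read (readMaybe)
    
        -- With Maybe, a failure says nothing about its cause.
        parseAge :: String -> Maybe Int
        parseAge = readMaybe
    
        -- With Either, the Left branch carries an explanation.
        parseAge' :: String -> Either String Int
        parseAge' s = case readMaybe s of
                        Nothing -> Left ("not a number: " ++ s)
                        Just n  -> Right n
    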

~~~
seanmcdirmid
> If figuring out what happened when something failed is complicated because
> there were several maybe monads and in the end you got nothing, then Haskell
> solves this very nicely if I'm not mistaken: there's "Either", which can
> carry an error message along with the fact that there's no result.

Does that really work? Say you have an error in a long pipeline, wouldn't
"Either" cause processing down the line to treat the error as valid data (the
error wouldn't be None)?

What you really want is a maybe monad where you can propagate either a Valid
answer or an Error condition, where operations on the value merely propagate
the error condition. Also, it would be nice if the monad were integrated into a
debugger so the error condition could carry along enough context to allow the
user to jump to the execution context where the error occurred during debugging
(if we are talking about programmatic errors, a distinction must be made
between errors of programming vs. exceptions from the environment that
can/should be handled).

~~~
chriswarbo
If the pipeline is constructed with fmap, <*> or >>= then the pipeline's
constituents will only be applied to Right values. An error (a Left value)
will short-circuit the pipeline and be returned as-is.

Alternatively, we could be more fancy and branch such that errors received
their own, separate, processing (eg. logging).
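
A small Haskell sketch of both behaviours (the stage names are invented):

    
    
        -- Any Left value skips the remaining stages and is returned as-is.
        process :: String -> Either String Int
        process s = parse s >>= validate >>= scale
          where
            parse    x = if null x  then Left "empty input" else Right (length x)
            validate n = if n > 100 then Left "too long"    else Right n
            scale    n = Right (n * 2)
    
        -- Branching so that errors get their own processing:
        report :: Either String Int -> IO ()
        report = either (putStrLn . ("error: " ++)) print
    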

~~~
seanmcdirmid
Cool. One of the advantages of null pointer exceptions is that you can break
into the debugger when they happen. Masking them and continuing is absolutely
what you don't want to do.

------
mamcx
I have thought about something like this, but far simpler: why can't the
(average) language define types like in a database language? (e.g. Age: Int,
Check: x > 0 & x < 200).

However, with this: I get how it can be useful for data defined directly in the
code, but how does this work with data that comes from somewhere else? If I
feed in a CSV of numbers, what happens when one of them is 0?

~~~
jonsterling
So the way it works when your data comes from somewhere and you don't know if
it is "clean" is that you write a decision procedure to "find out".

So the simplest use case is you've got a type like `{x : String | x.length <
10}`: that is, the type of strings which have length less than ten. A value of
that type is going to be a pair `<str, proof>` where `proof` is an object
which witnesses the fact that `str` has a length less than ten.

Now, it is a decidable property whether or not a string is shorter than 10
chars long. You can easily construct a function with the following type:

    
    
        decideValidity : (str : String) -> Either (str.length < 10) (Not (str.length < 10))
    

This function is defined by induction on the string.

Now you want to receive such a string from user input. Crucially, the logic of
your program is going to operate on data that's already been validated: so
your program will only deal with `{x:String | x.length < 10}`; we just need to
shore up the boundaries between your program and user input.

    
    
        yourComputation : {x : String | x.length < 10} -> Something
    
        main : IO ()
        main = do
          userInput <- getLine
          case (decideValidity userInput) of
            Left prf -> do
              print (yourComputation <userInput, prf>)
            Right nope -> 
              print "The input was invalid"

~~~
munro
Is it possible with Idris to programmatically create the `decideValidity`
function? Conceptually it seems trivial to do; I could imagine building it
with meta programming. My crazy pseudo code-- :D

    
    
        decideValidity : (p : Proof) -> a -> Either (p a) (Not (p a))

~~~
pavpanchekha
No—there are many properties that cannot be decided. For example, many
programs can be proven to halt in finite time. For example, you can imagine a
way to prove that the program

    
    
        return 0
    

always halts. However, testing for this property is impossible (Halting
Problem). More generally, it is not possible to test a general predicate for
satisfaction. Even things that are decidable in principle may not be decidable
quickly (SAT).

On the other hand, there is a lot of space for simple predicates like
"str.length < 10" to be decided automatically, since the proofs for these can
be constructed with only forward search.

~~~
munro
Wouldn't the proposed function then just not compile for the instances where
it would be impossible to test, as desired? It would still be useful where it
does work, like in the code you provided. Seems like a lack of meta
programming to me if it's not currently possible.

~~~
tel
You can sort of do this sometimes. Consider Idris' decidable equality module:

https://github.com/idris-lang/Idris-dev/blob/master/libs/prelude/Decidable/Equality.idr

Idris has type classes and it produces a polymorphic total function `decEq`
for any type which instantiates the class DecEq.

    
    
        decEq : DecEq t => (x : t) -> (y : t) -> Dec (x = y)
    

where Dec denotes a decidable type, something like (but not actually)

    
    
        data Dec t where
          Yes : t     -> Dec t
          No  : Not t -> Dec t
    

So now we have a proposition in our type-level prolog called `DecEq`, and some
types, the ones with decidable equality, can instantiate it:

    
    
        instance DecEq () where 
          decEq _ _ = Yes refl
       
        -- Equality is symmetric, so is the negation of equality
        total negEqSym : {a : t} -> {b : t} -> (a = b -> _|_) -> (b = a -> _|_)
        negEqSym p h = p (sym h)
    
        -- Proof that zero isn't equal to any number greater than zero
        total OnotS : Z = S n -> _|_
        OnotS refl impossible
    
        instance DecEq Nat where
          decEq Z     Z     = Yes refl
          decEq Z     (S _) = No OnotS
          decEq (S _) Z     = No (negEqSym OnotS)
    
          -- recurse on the arguments together and modify the eventual
          -- decided proof so as to match the arguments actually passed. 
          --
          -- I.e. we might find that Yes (n = m) but we need Yes (S n = S m)
          --
          decEq (S n) (S m) with (decEq n m)
            | Yes p = Yes $ cong p
            | No p = No $ \h : (S n = S m) => p $ succInjective n m h
    

Finally, at compile time Idris will try to infer the concrete types
instantiating variables throughout the program. If any of the variables are
bounded by `DecEq` then it must be able to solve the typeclass prolog to
establish decidable equality for that type.

If it fails to fulfill that obligation then it'll fail at compile time.

    
    
        -- this fails since functions on naturals are 
        -- far, far, far from decidable... Idris cannot achieve the
        -- obligation to find `DecEq (Nat -> Nat)`.
        decEq : (x : Nat -> Nat) -> (y : Nat -> Nat) -> Dec (x = y)

------
peaton
> In a functional language, you describe the problem to the computer, and it
> solves it for you.

Isn't this the definition of a declarative programming paradigm? (I.e. SQL?)

~~~
UberMouse
According to Wikipedia, functional programming fits under the declarative
programming paradigm. I'm not sure how accurate that is, but it seems to hold
some truth.

"Functional programming, and in particular purely functional programming,
attempts to minimize or eliminate side effects, and is therefore considered
declarative."

"While functional languages typically do appear to specify "how", a compiler
for a purely functional programming language is free to extensively rewrite
the operational behavior of a function, so long as the same result is returned
for the same inputs."

~~~
peaton
Ah, I see. That last bit is especially interesting.

I would argue that many languages do (and there's no reason they couldn't) act
otherwise with regard to that second quote. However, the line between purely
functional and not starts to become fuzzy as well.

------
Ono-Sendai
My language works a little like this. All programs are verified to be safe
(no crashes, no out-of-bounds reads, etc.) and to terminate.

~~~
al2o3cr
"and to terminate"

That must be a neat trick - or you've constructed a non-Turing-complete
language...

~~~
skew
Non-Turing-complete is not a bad way to go. You pretty much have to already be
a researcher in dependent type systems (or maybe set theory) to invent
functions that always terminate but can't be written in non-Turing-complete
languages like Coq (an evaluator for programs in an at-least-as-powerful
dependently typed language is the only remotely natural example I know of).
Also, writing a program that proves some programs terminate is _way_ easier
than writing a program that correctly proves that any terminating program
terminates, if you are confusing the two. If it's not too common, "I didn't
manage to prove this terminates" sounds like a reasonable compiler error.

~~~
tel
It can be kind of hard to satisfy termination checkers, though. They're not
smart. You basically have to show structural induction on _something_ which
sometimes forces you to invent lots of new proof terms.

~~~
Ono-Sendai
Indeed. I suspect that's where a lot of my time will be invested: making the
checker better, improving error messages, etc.

------
munro
Anyone know how runtime IO errors would be handled with dependent types?

You can't prove that an outside source is going to return what you expect.
Would you just prove that it essentially returns a "Maybe"?

Haskell exceptions have always weirded me out. I've poked at Idris, and plan
to write a side project in it soon!

~~~
chriswarbo
It helps to remember that we don't gain any expressiveness from a type system,
only safety[1]. Type systems can only _restrict_ which programs we're allowed
to write[1], compared to un(i)typed programs. After our programs have passed
the type-checker, their types can be erased to leave behind a raw un(i)typed
program. In the case of functional programming, we can imagine it compiling
down to something like Scheme.

The point is that we can handle error conditions just like we do in any other
language: if we're compiling to something which looks like Scheme, we can
handle errors in the same way: have everything return a sum allowing errors
(ie. a Maybe). Even things which don't produce errors can be lifted to this
type automatically and chained together using a monad. That's basically what
Haskell does.

Of course, lifting simple functions to return Maybes isn't very nice, since
we're purposefully throwing away information; ie. we're causing our consumers
to ask "did this return a value or not?" when we already know that it always
will. The problem comes when composing Maybe functions with simple
functions. There's no way around this in Haskell; we need to lift the simple
functions; after all, that's the point of a monad: we can return a wrapped
value and we can join a wrapped-wrapped value into a wrapped value but there's
no way to _unwrap_ a value (otherwise we'd have a comonad instead!).

Dependent types give us more flexibility than Haskell though: if our output
value depends on the input, we can also make our output _type_ depend on the
input. For example (where "A", "B" and "C" are some concrete types, for
simplicity):

    
    
        -- Simple function
        myFunc1 : A -> B
        myFunc1 = ...
    
        -- Maybe function (ie. might error)
        myFunc2 : C -> Maybe A
        myFunc2 = ...
    
        -- Maybe monad. Works, but loses some type info
        liftedChain : C -> Maybe B
        liftedChain x = case myFunc2 x of
                          Nothing -> Nothing
                          Just y  -> Just (myFunc1 y)
    
        -- Exactly the same algorithm, but without losing type info
        myType : Maybe A -> Type
        myType Nothing  = ()  -- Unit type, equivalent to Nothing
        myType (Just _) = B   -- Result type, equivalent to Just
    
        unliftedChain : (c : C) -> myType (myFunc2 c)
        unliftedChain x = case myFunc2 x of
                            Nothing -> ()  -- Unit value; matches myType Nothing as required
                            Just y  -> myFunc1 y   -- Value of type B; matches myType (Just y) as required
    

This makes it easy to handle runtime errors, without having to pretend that
everything else might blow up. Typically, we write most of our code in an
ideal world assuming all of our requirements are met, then we write simple
"driver" functions which justify those assumptions, returning errors
otherwise. Exactly like in Haskell, where we write pure functions for
manipulating plain values like strings, then write simple "driver" functions
to pull those strings out of files, databases, sockets, etc. using IO.

[1] Strongly typed languages can be more or less expressive than each other
due to their type systems (eg. dependent types make it easy to express
length-indexed lists, which can't (AFAIK) be done in System F), but all are less
expressive than un(i)typed languages.
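
(For what it's worth, GHC Haskell can sketch length-indexed lists via
extensions; a minimal, illustrative version:)

    
    
        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    
        -- Type-level naturals, promoted by DataKinds
        data Nat = Z | S Nat
    
        -- A list whose length is tracked in its type
        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a
    
        -- A total head: it only accepts provably non-empty vectors
        vhead :: Vec ('S n) a -> a
        vhead (VCons x _) = x
    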

~~~
seanmcdirmid
> It helps to remember that we don't gain any expressiveness from a type
> system, only safety[1].

That is controversial, especially when you are considering non-safety oriented
tools like code completion systems or program inference engines. Languages
with reified types may also make decisions at run-time based on the actual
type of a value (e.g. virtual method dispatch).

~~~
chriswarbo
I meant "expressiveness" purely in the sense that "we can write more programs
'without' types than with them" (of course, the types are 'still in there
somewhere'; we're just ignoring them). I wasn't referring to external tooling,
which I agree gains a lot from strong types (specifically because a load of
programs have been forbidden, which makes their job tractable).

When dependent types and proving correctness are mentioned, there is often an
assumption that some magical ability is gained. For example, all the questions
here about error handling. Dependent types don't handle errors, they're just
types; but what they _can_ do is forbid code which doesn't handle errors.

As for "reified types" affecting run-time behaviour, I would call those tags
rather than types. For example, the class of an object is a tag, not a type.

~~~
seanmcdirmid
> I wasn't referring to external tooling, which I agree gains a lot from
> strong types (specifically because a load of programs have been forbidden,
> which makes their job tractable).

Careful. You also want to allow unsound choices in completion options, since
the programmer could have made a mistake or is unaware of the more specific
type needed to support some operation (especially if they are using code
completion for discovery purposes). Sometimes the path between valid program A
and valid program C is an invalid program B (think about editing code without
the ability to create transient syntax errors...how horrible would that be!).
Type theorists often get this wrong, because they are so focused on
correctness :)

> When dependent types and proving correctness are mentioned, there is often
> an assumption that some magical ability is gained.

I thought magic was needed just to do the type checking. I don't think anyone
who has gone through the program-proving process would think it was easy
(e.g. via F*); a lot of sweat is required to get those rock-solid absolute
guarantees.

> As for "reified types" affecting run-time behaviour, I would call those tags
> rather than types. For example, the class of an object is a tag, not a type.

This is only the overly narrow type theorist's definition of type. The class
of an object is obviously a type/kind/classifier via the informal definition
of type. In a statically typed language, it also happens to correspond 1-1
with the static type of the object as created, which is incredibly useful from
a usability standpoint. Erasure likewise is pretty bad, and languages that
don't erase (C# vs. Java) are much more usable.

------
peterashford
Maybe I'm missing the point of Dependent Types and there's more to it than I
understood from the article, but in the example given of a divide function
that requires a proof of a non-zero denominator: isn't this already achieved
by languages that employ design by contract, such as Eiffel?

~~~
tel
Eiffel can promise that it will halt at runtime if a contract is violated. DT
languages can promise that no contracts could ever be violated by any possible
runtime evaluation pathway.

The latter is much stronger, but weird sounding. What it typically means is
that you are required to not cut corners. The type system will detect your
failure to handle edge cases and refuse to compile until you either handle or
explicitly ignore them.
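
Plain Haskell can only approximate the difference with a smart constructor
that pushes the check to a boundary; this sketch (with invented names) shows
the flavour, though a real DT language would demand an actual proof term:

    
    
        -- Eiffel-style: the contract is checked at runtime, halting on violation.
        divide :: Int -> Int -> Int
        divide _ 0 = error "contract violated: zero denominator"
        divide n d = n `div` d
    
        -- Closer to the DT spirit: a NonZero can only be built by passing the
        -- check, so divide' can never see a zero denominator.
        newtype NonZero = NonZero Int
    
        nonZero :: Int -> Maybe NonZero
        nonZero 0 = Nothing
        nonZero n = Just (NonZero n)
    
        divide' :: Int -> NonZero -> Int
        divide' n (NonZero d) = n `div` d
    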

