Dependently Typed Programming (ejenk.com)
70 points by waffle_ss on July 1, 2014 | 75 comments



Dependently typed programming already rocks my world -- but not my job.

I enjoy Haskell.... as a hobby. Whenever I am playing with it -- I distinctly understand that "it" is the point. The focus is squarely on the language and the programmers. I absolutely think this has a place, and I am delighted it exists. I remember the ... purity of my first little chess engine I wrote in it -- how it itched the OCD spot in my brain so perfectly, it made me exceptionally happy.

That said, when I am programming "in the wild"... the language is almost never the point. The application is the point, the library is the point, the problem domain is the point, the user is the point. Cleverness matters little; often what needs to be done is obvious and tedious. The biggest problems tend to be maintenance in every sense of the word... how do you maintain this if you are successful (multiple data centers, dozens of employees)? How do you avoid rewriting everything at each major growth spike?

This means I use boring, dependable imperative languages for what I bet my business (a start-up) and future on -- and I spend my hobby time on far sexier functional languages... but I don't confuse my love for them with thinking they are a good fit for my business.


Presumably one of the advantages of choosing a language with a strong type system like Haskell is that it would allow for better maintenance and refactoring. What "dependable imperative languages" do you use in practice that offer better maintainability than Haskell?


> What "dependable imperative languages" do you use in practice that offer better maintainability than Haskell?

Scala


Erlang


Haskell is not really dependently typed. You can mimic it a bit, but you owe it to yourself to try reading through something in Coq or Agda (and maybe eventually Idris).


>proof that your program will terminate

Or that a coprogram will be productive (services/servers/streaming fall under this).

Most programs these days operate on codata, so termination on a per-destructed-codata-component basis amounts to productivity, if I understand correctly.

(ejenk touches on this in the comments as well, but I wanted to add this point)
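
For concreteness, here's a minimal Haskell sketch (nothing dependently typed, and the names are made up) of what "productive" means for codata: the stream as a whole never terminates, but each observation of it is produced in finite time.

    -- An infinite stream of naturals: the definition never "finishes",
    -- but every individual cons cell appears after finitely many steps.
    nats :: [Integer]
    nats = go 0
      where go n = n : go (n + 1)

    -- Consumers only ever force finitely many constructors.
    firstTen :: [Integer]
    firstTen = take 10 nats   -- [0,1,2,3,4,5,6,7,8,9]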


UPenn has a ton of really interesting work on extending the Haskell type system to support dependent typing. Some of the coolest pieces I've heard about had to do with guaranteeing the security of a server application through dependent typing.

I never found out what the actual paper or project was that accomplished this. But these two papers[1][2] seem pretty interesting - having to do with guaranteeing safety of database access.

[1] http://www.cis.upenn.edu/%7Eeir/papers/2012/singletons/paper...

[2] http://www.cis.upenn.edu/~ahae/papers/dfuzz-popl2013.pdf


This Idris paper, maybe? Or how is security guaranteed?

http://www.simonjf.com/writing/bsc-dissertation.pdf

quote:

    enforce resource usage protocols inherent in 
    C socket programming, providing safety guarantees,


Hmm, I don't believe so. My prof went on about a group at Penn using dependent types to prove the security of server applications. But that is definitely an interesting paper too. Thanks for sharing!


Interesting article. Note that the article describes the maybe monad as used in Haskell and explains what it solves and then comes with this criticism:

"...we still need to run the program to observe that the computation fails somewhere and then figure out what happened"

As if figuring out where a problem happened was an issue in Haskell (or, really, in any language supporting something like the maybe monad).

If figuring out what happened when something failed is complicated because there were several maybe monads and in the end you got nothing, then Haskell solves this very nicely if I'm not mistaken: there's "Either", which can carry an error message along with the fact that there's no result.

Now I'm not disputing that a proof that something cannot happen would be great (and hence in this case not requiring Maybe / Either): all I'm saying is that people shouldn't misread that thinking that Haskell has no clean way of reporting why nothing got returned.
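
For example, here's a minimal sketch (the map and the names are made up) of the same lookup written against Maybe and against Either:

    import qualified Data.Map as M

    -- Maybe only tells you that nothing came back...
    lookupAge :: String -> M.Map String Int -> Maybe Int
    lookupAge = M.lookup

    -- ...while Either can also carry the reason for the failure.
    lookupAge' :: String -> M.Map String Int -> Either String Int
    lookupAge' k m = case M.lookup k m of
                       Just v  -> Right v
                       Nothing -> Left ("no entry for key " ++ show k)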


The point is that in Haskell, once you start chaining Maybes together you are forced to deal with the Nothings, even if they won't actually arise.

For example:

    :t Data.Map.lookup 123 (singleton 123 "hello")
    Data.Map.lookup 123 (singleton 123 "hello") :: Maybe [Char]
Even though this lookup will clearly succeed, I am forced to handle the Nothing case. Dependent types let me encode more information about the value in my types, providing evidence that it will always be a Just, which I can use at compile time to convince the type checker that the Nothing case is unreachable.

As soon as the Nothing case is unreachable, I can easily unwrap the Maybe, since my only remaining branch is a "Just x", so I can simply return "x".
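
(For contrast, the best plain Haskell can do when you know the Nothing case is unreachable is to assert that knowledge at runtime; a sketch, with a purely illustrative error message:)

    import qualified Data.Map as M
    import Data.Maybe (fromMaybe)

    -- The lookup obviously succeeds, but the type checker can't see that,
    -- so the "impossible" branch survives as a potential runtime error.
    greeting :: String
    greeting = fromMaybe (error "impossible: key 123 was just inserted")
                         (M.lookup 123 (M.singleton 123 "hello"))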


fmap is very useful for dealing with Maybes also:

  Prelude M> import qualified Data.Map as M
  Prelude M> let val = M.lookup 123 (M.singleton 123 "hello")
  Prelude M> fmap reverse val
  Just "olleh"
So this means that you only deal with the chained Maybes at the top level, or at a call site that needs to know a Nothing existed, since the Nothings are propagated.


You can't use fmap to "break out" of a Functor, since its return type is still wrapped in the functor:

    fmap : (a -> b) -> Maybe a -> Maybe b
Likewise we can't escape from an Applicative:

    (<*>) : Maybe (a -> b) -> Maybe a -> Maybe b
Nor from a Monad:

    (>>=) : Maybe a -> (a -> Maybe b) -> Maybe b
Dependently typed languages use these all the time, but they also allow you to write a function:

    unMaybe : (A : Type) -> Maybe A -> Type
    unMaybe _ Nothing  = _|_  -- Empty type; impossible to satisfy
    unMaybe A (Just _) = A    -- The raw element type
And use it like this:

    unWrap : (x : Maybe a) -> unMaybe a x
    unWrap Nothing = ...insert proof that you'll never get a Nothing here...
    unWrap (Just x) = x
I haven't included a proof, since it depends completely on the application, but it's easy to understand how it works. In the "Just" case, we need to return a value of type "a", which is trivial. In the "Nothing" case, we need to return a value of type "_|_", but we know that there aren't any! The only way we can satisfy the type checker is to return a value of type _|_, and the only way we can do that is if we already have such a value. In other words, to get an impossible value we must be in an impossible branch. Hence we can prove by contradiction that the "Nothing" branch will never be taken!


I don't think that's what the author meant in that passage. They're not bemoaning error reporting, they're complaining about the fact that you have to run the code to find out if there was a problem, rather than finding out at compile time. Having to track down the source of the runtime error is a minor injury in comparison.


> If figuring out what happened when something failed is complicated because there were several maybe monads and in the end you got nothing, then Haskell solves this very nicely if I'm not mistaken: there's "Either", which can carry an error message along with the fact that there's no result.

Does that really work? Say you have an error in a long pipeline, wouldn't "Either" cause processing down the line to treat the error as valid data (the error wouldn't be None)?

What you really want is a maybe-like monad where you can propagate either a Valid answer or an Error condition, where operations on the value merely propagate the error condition. Also, it would be nice if the monad were integrated into a debugger so the error condition could carry along enough context to let the user jump to the execution context where the error occurred during debugging (if we are talking about programmatic errors, a distinction must be made between errors of programming vs. exceptions from the environment that can/should be handled).


If the pipeline is constructed with fmap, <*> or >>= then the pipeline's constituents will only be applied to Right values. An error (a Left value) will short-circuit the pipeline and be returned as-is.

Alternatively, we could be more fancy and branch such that errors received their own, separate, processing (eg. logging).
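
A minimal sketch of that short-circuiting (the two stages here are hypothetical):

    parsePositive :: String -> Either String Int
    parsePositive s = case reads s of
                        [(n, "")] | n > 0 -> Right n
                        _                 -> Left ("not a positive number: " ++ show s)

    describe :: Int -> Either String String
    describe n = Right (show n ++ " is a fine number")

    -- (>>=) only feeds Right values onward; the first Left is returned as-is.
    pipeline :: String -> Either String String
    pipeline s = parsePositive s >>= describe

    -- pipeline "42"  == Right "42 is a fine number"
    -- pipeline "abc" == Left "not a positive number: \"abc\""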


Cool. One of the advantages of null pointer exceptions is that you can break into the debugger when they happen. Masking them and continuing is absolutely what you don't want to do.


Either tends to be used for error conditions that arise from the outside—after all, you are (a) explicitly denoting them and (b) presumably handling them. Programmer errors tend to arise in invariant violations. Sometimes these are marked with `error "cannot occur"` or silly things like that. Dropping into a debugger here would be tremendous.


A simple, standard equivalent of __FILE__ and __LINE__ would be 60% as useful as a debugger.


Agreed, at least in mostly pure code. Lennart proposed one on his blog but I don't think it got far.

For IO heavy stuff I think a debugger would still be valuable.


I have thought about something like this, but far simpler: why can't the average language define types the way a database language does? (e.g. Age: Int, Check: x > 0 & x < 200)

However, I get how this can be useful for data defined directly in the code, but how does it work with data that comes from somewhere else? If I feed in a CSV of numbers, what happens when one of them is 0?


So the way it works when your data comes from somewhere and you don't know if it is "clean" is that you write a decision procedure to "find out".

So the simplest use case is you've got a type like `{x : String | x.length < 10}`: that is, the type of strings which have length less than ten. A value of that type is going to be a pair `<str, proof>` where `proof` is an object which witnesses the fact that `str` has a length less than ten.

Now, it is a decidable property whether or not a string is shorter than 10 chars long. You can easily construct a function with the following type:

    decideValidity : (str : string) -> Either (str.length < 10) (Not (str.length < 10))
This function is defined by induction on the string.

Now you want to receive such a string from user input. Crucially, the logic of your program is going to operate on data that's already been validated: so your program will only deal with `{x:String | x.length < 10}`; we just need to shore up the boundaries between your program and user input.

    yourComputation :: {x:String | x.length < 10} -> Something

    main : IO ()
    main = do
      userInput <- getLine
      case (decideValidity userInput) of
        Left prf -> do
          print (yourComputation <userInput, prf>)
        Right nope -> 
          print "The input was invalid"


Is it possible with Idris to programmatically create the `decideValidity` function? Conceptually it seems trivial to do, I could imagine building it with meta programming. My crazy pseudo code-- :D

    decideValidity : (p : Proof) -> a -> Either (p a) (Not (p a))


It'd be more like

    decideValidity : (p : A -> Type) -> (a : A) -> Either (p a) (Not (p a))
and for some classes of propositions this is a really useful type, e.g. the notion of decidable equality[0]. For general propositions you're talking about unbounded proof search, however. Undecidable propositions will fail to terminate (thus causing problems in a total language). Decidable propositions may still be exponentially slow to decide. Some classes of propositions may be known to be decidable, but knowing generally whether or not a proposition is undecidable is undecidable (Halting).

[0] https://github.com/agda/agda-stdlib/blob/master/src/Relation...


No—there are many properties that cannot be decided in general, even though many individual programs can be proven to halt in finite time. For example, you can imagine a way to prove that the program

    return 0
always halts. However, deciding this property for arbitrary programs is impossible (the Halting Problem). More generally, it is not possible to decide whether an arbitrary predicate is satisfied. Even things that are decidable in principle may not be decidable quickly (SAT).

On the other hand, there is a lot of space for simple predicates like "str.length < 10" to be decided automatically, since the proofs for these can be constructed with only forward search.


Wouldn't the proposed function then just not compile for the instances where it would be impossible to test, as desired? It would still be useful where it does work, like in the code you provided. Seems like a lack of meta programming to me if it's not currently possible.


You can sort of do this sometimes. Consider Idris' decidable equality module:

https://github.com/idris-lang/Idris-dev/blob/master/libs/pre...

Idris has type classes and it produces a polymorphic total function `decEq` for any type which instantiates the class DecEq.

    decEq : DecEq t => (x : t) -> (y : t) -> Dec (x = y)
where Dec denotes a decidable type, something like (but not actually)

    data Dec t where
      Yes : t     -> Dec t
      No  : Not t -> Dec t
So now we have a proposition in our type-level Prolog called `DecEq`, and some types (the ones with decidable equality) can instantiate it:

    instance DecEq () where 
      decEq _ _ = Yes refl
   
    -- Equality is symmetric, so is the negation of equality
    total negEqSym : {a : t} -> {b : t} -> (a = b -> _|_) -> (b = a -> _|_)
    negEqSym p h = p (sym h)

    -- Proof that zero isn't equal to any number greater than zero
    total OnotS : Z = S n -> _|_
    OnotS refl impossible

    instance DecEq Nat where
      decEq Z     Z     = Yes refl
      decEq Z     (S _) = No OnotS
      decEq (S _) Z     = No (negEqSym OnotS)

      -- recurse on the arguments together and modify the eventual
      -- decided proof so as to match the arguments actually passed. 
      --
      -- I.e. we might find that Yes (n = m) but we need Yes (S n = S m)
      --
      decEq (S n) (S m) with (decEq n m)
        | Yes p = Yes $ cong p
        | No p = No $ \h : (S n = S m) => p $ succInjective n m h
Finally, at compile time Idris will try to infer the concrete types instantiating variables throughout the program. If any of the variables are constrained by `DecEq` then it must be able to solve the typeclass Prolog to establish decidable equality for that type.

If it fails to fulfill that obligation then it'll fail at compile time.

    -- this fails since functions on naturals are 
    -- far, far, far from decidable... Idris cannot achieve the
    -- obligation to find `DecEq (Nat -> Nat)`.
    decEq : (x : Nat -> Nat) -> (y : Nat -> Nat) -> Dec (x = y)


But then, I don't see what I gain from this. It's like having Eiffel contracts.


> However, I get how this can be useful for data defined directly in the code, but how does it work with data that comes from somewhere else? If I feed in a CSV of numbers, what happens when one of them is 0?

The same thing that happens in any language: have a validation layer which checks whether the incoming data is zero or not. If it is, then do something appropriate (eg. print an error and halt); if not, then you can safely pass it to your non-zero function.

See http://existentialtype.wordpress.com/2011/03/15/boolean-blin...
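
A minimal Haskell sketch of that validation-layer idea (NonZero and the other names are hypothetical; in a dependently typed language the wrapper could carry an actual proof instead of just hiding the constructor):

    -- The smart constructor is the only way to build a NonZero,
    -- so downstream code never has to re-check for zero.
    newtype NonZero = NonZero Int

    mkNonZero :: Int -> Maybe NonZero
    mkNonZero 0 = Nothing
    mkNonZero n = Just (NonZero n)

    safeDiv :: Int -> NonZero -> Int
    safeDiv x (NonZero d) = x `div` d

    main :: IO ()
    main = do
      line <- getLine                -- e.g. a value read from a CSV column
      -- (read will still crash on non-numeric input; parsing is elided for brevity)
      case mkNonZero (read line) of
        Just d  -> print (100 `safeDiv` d)
        Nothing -> putStrLn "error: divisor must be non-zero"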


Pascal, and Pascal-derived languages such as Modula and Turbo Pascal, have this:

    type Age = 1..199;
A value of type Age can only have the values 1 <= x <= 199, and can't be assigned to non-Age variables. Enums belong to this category of ordinal types.

In Ada:

    type Age is Range 1 .. 199;
    type Temperature is new Integer Range 32 .. 212;

    a : Age;  t : Temperature;
    a := t;  -- this fails at compile time
Ada 2012 adds support for specifying Eiffel-style contracts, but on types:

    subtype Even is Integer
      with Dynamic_Predicate => Even mod 2 = 0;
You can't assign an odd number to a variable of type Even.


Yep, I know about Pascal (my second language after FoxPro) but not Ada 2012.

Is the Dynamic_Predicate checked at runtime or at compile time? If I assign a: Int = Even, do I need to be explicit and do a conversion? (I imagine yes, since that's the Pascal style, but I have no experience with Ada.)


I believe it occurs only at compile time if it is possible for the compiler to evaluate the check; otherwise it's done at runtime.


> In a functional language, you describe the problem to the computer, and it solves it for you.

Isn't this the definition of a declarative programming paradigm? (E.g. SQL?)


According to Wikipedia Functional Programming fits under the Declarative Programming paradigm. I'm not sure how accurate that is, but it seems to hold some truth.

"Functional programming, and in particular purely functional programming, attempts to minimize or eliminate side effects, and is therefore considered declarative."

"While functional languages typically do appear to specify "how", a compiler for a purely functional programming language is free to extensively rewrite the operational behavior of a function, so long as the same result is returned for the same inputs."


Ah, I see. That last bit is especially interesting.

I would argue that there are many languages that do (and there's no reason they couldn't) act otherwise with regard to that second quote. However, the line between pure functional and not starts to become fuzzy as well.


Meh. "Declarative" and "functional" are such weak technical terms that it's tough to say whether or not that's true. Or whether or not seeking the truth of such a statement is itself even meaningful.


I had the same thought; it seemed off. What I think of as "functional programming", and how the term seems to be used, is that a function has to be pure.


That's a little vague, I think. Pure functional programming has no side effects. Isn't that as succinct & thorough as it gets?


"Pure functional programming has no side effects."

Well... every program has side effects when you run it - I don't yet know a language with a type system that reifies CPU temperature.

I think it might be correct to say that idealized pure functional programming doesn't rely on side effects for correctness?


It makes more sense if you think of your program as schematics for building a machine. Following that metaphor, the type system can help you find flaws in your design. When you build your schematics, you're still constrained by the real world, unfortunately. CPU temperature would just be an IO request, which leaves your program, the runtime collects the information, then feeds it back into your program.

I personally like to think of my machine running in a lab, with IO being scientists running around taking data out, doing work, then feeding it back into the machine. :D


You missed my point. I'm not talking about reading CPU temperature from the program - that is easily represented as you say. I'm talking about the generation of heat from the very act of running the program. I don't think there is any reasonable thing to label that but a "side effect". It's an unavoidable effect of running the program, it is not in any sense why you run the program, it doesn't show up anywhere in the types, the language gives you no guarantees about it, &c, &c.


You're conflating the term "side effect"; in functional programming it only describes purity. That is all. [1]

[1] http://en.wikipedia.org/wiki/Side_effect_(computer_science)


You are going to have to point at something more specific than the article as a whole. At a skim, it seems to support my interpretation perfectly fine. It starts off:

"In computer science, a function or expression is said to have a side effect if, in addition to returning a value, it also modifies some state or has an observable interaction with calling functions or the outside world."

The increase in CPU temperature is certainly an "observable interaction with [...] the outside world". Given that you have no guarantees about the semantics of this interaction in any language I'm aware of, it would be poor engineering to rely on it, but that doesn't mean it doesn't happen or that it is not a side effect.


Well (a -> Void) "morally" ensures it'll never be run... kind of.

I'll show myself out.


How about: no side effects within the defined semantics of the language.

I doubt that CPU temperatures are part of many language specs.


I think I'm comfortable with that. Then you push the "that we care about" question down to considering correctness and optimization of implementations.


My language works a little like this. All programs are verified to be safe (no crashes, no out-of-bounds reads, etc.) and to terminate.


"and to terminate"

That must be a neat trick - or you've constructed a non-Turing-complete language...


Non-Turing-complete is not a bad way to go. You pretty much have to already be a researcher in dependent type systems (or maybe set theory) to come up with functions that always terminate but can't be written in non-Turing-complete languages like Coq (an evaluator for programs in an at-least-as-powerful dependently typed language is the only remotely natural example I know of). Also, writing a program that proves some programs terminate is way easier than writing a program that correctly proves that every terminating program terminates, if you are confusing the two. If it's not too common, "I didn't manage to prove this terminates" sounds like a reasonable compiler error.


It can be kind of hard to satisfy termination checkers, though. They're not smart. You basically have to show structural induction on something which sometimes forces you to invent lots of new proof terms.
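
For example, a Haskell-flavoured sketch of the difference (Haskell doesn't check termination itself, but these are the shapes a checker cares about):

    -- Structural recursion: each recursive call's argument is a direct
    -- subterm of the input, so a termination checker accepts it immediately.
    len :: [a] -> Int
    len []     = 0
    len (_:xs) = 1 + len xs

    -- Also terminates, but the recursive arguments are outputs of `filter`,
    -- not syntactic subterms of the input, so a naive structural checker
    -- rejects it unless you add an extra size/well-foundedness argument.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)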


Indeed. I suspect that's where a lot of my time will be invested: making the checker better, improving error messages, etc.


"(an evaluator for programs in an at-least-as-powerful dependently typed language is the only remotely natural example I know of)"

Note, also, that there's an obvious restriction for that case to get something you can write: "evaluate for the next X steps".


Yeah, it's non-Turing-complete.


Or it just refuses to compile/execute something it can't prove terminates.


That would make it non-Turing complete. Of course there probably aren't many interesting programs that are ruled out by that restriction.


There are many interesting programs that don't have any guarantee that they terminate. And a few that don't really terminate, ever.

In fact, if you click on the launcher menu on your computer, I'll bet most programs there just keep going, until some random event (AKA clicking on an "x") throws them out of the loop.


That's why you'd ask your programs to be `productive'. Also look up co-recursion and co-data, if you want to find out more about this.

(It's basically the logical way to extend `termination' from batch programs to things like continuously running programs like operating systems.)


It wouldn't require that all programs terminate, just that all functions terminate.


Your programs aren't composed of one top-level function?


More likely they would be composed of one top-level IO value, like in Haskell, or something similar.


That is a very good point.


Finding non-Turing complete, but still interesting languages, is a noble pursuit.


Why? PFPL has like 10 chapters on them.

It's never really been stated why Completeness is a good thing.


Yes. It's a useful endeavour. Things like Datalog, regular expressions or applicative parser combinators come out of it.


Anyone know how runtime IO errors would be handled with dependent types?

You can't prove that an outside source is going to return what you expect. Would you just prove that it essentially returns a "Maybe"?

Haskell exceptions have always weirded me out. I've poked at Idris, and plan to write a side project in it soon!


You can, however, require as a constraint in the type system that the function invoking the outside IO return validated input. So you wouldn't be able to compose an unrestricted ``readLine`` function with a function that couldn't handle the invalid input.

It's shifting the checks to ensure that the invalid program is completely inexpressible by construction instead of ensuring runtime safety like a Maybe would.


We do something sorta like this in GWT by having a SafeHtml type to represent trusted HTML. To take untrusted HTML and convert it to SafeHtml, you have to sanitize it. In Java, we have a few trusted methods to do this and don't actually prove it, except informally in code review. In the old days, Perl had a "taint" mode that did similar things.

I imagine that in a dependently typed language, you'd have a lot of input validators that return either a proof of validation or an error to report back to the user. It clarifies which parts of the program assume "safe" input and what that means, allowing you to restructure your program. But it doesn't get rid of input validation, just moves it around.
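
A minimal sketch of the same pattern in Haskell terms (the escaping rules here are illustrative, not a real sanitizer):

    -- Values of SafeHtml can only be produced by the trusted functions in
    -- this module; the constructor itself would not be exported.
    newtype SafeHtml = SafeHtml String

    sanitize :: String -> SafeHtml
    sanitize = SafeHtml . concatMap escape
      where
        escape '<' = "&lt;"
        escape '>' = "&gt;"
        escape '&' = "&amp;"
        escape '"' = "&quot;"
        escape c   = [c]

    -- Rendering accepts only SafeHtml, so untrusted strings cannot reach it
    -- without passing through sanitize (or another audited constructor).
    render :: SafeHtml -> String
    render (SafeHtml s) = s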


It helps to remember that we don't gain any expressiveness from a type system, only safety[1]. Type systems can only restrict which programs we're allowed to write[1], compared to un(i)typed programs. After our programs have passed the type-checker, their types can be erased to leave behind a raw un(i)typed program. In the case of functional programming, we can imagine it compiling down to something like Scheme.

The point is that we can handle error conditions just like we do in any other language: if we're compiling to something which looks like Scheme, we can handle errors in the same way: have everything return a sum allowing errors (ie. a Maybe). Even things which don't produce errors can be lifted to this type automatically and chained together using a monad. That's basically what Haskell does.

Of course, lifting simple functions to return Maybes isn't very nice, since we're purposefully throwing away information; ie. we're causing our consumers to ask "did this return a value or not?" when we already know that it always will. The problem lies in composing Maybe-returning functions with simple functions. There's no way around this in Haskell; we need to lift the simple functions. After all, that's the point of a monad: we can return a wrapped value and we can join a wrapped-wrapped value into a wrapped value, but there's no way to unwrap a value (otherwise we might have a comonad instead!).

Dependent types give us more flexibility than Haskell though: if our output value depends on the input, we can also make our output type depend on the input. For example (where "A", "B" and "C" are some concrete types, for simplicity):

    -- Simple function
    myFunc1 : A -> B
    myFunc1 = ...

    -- Maybe function (ie. might error)
    myFunc2 : C -> Maybe A
    myFunc2 = ...

    -- Maybe monad. Works, but loses some type info
    liftedChain : C -> Maybe B
    liftedChain x = case myFunc2 x of
                      Nothing -> Nothing
                      Just y  -> Just (myFunc1 y)

    -- Exactly the same algorithm, but without losing type info
    myType : Maybe A -> Type
    myType Nothing  = ()  -- Unit type, equivalent to Nothing
    myType (Just _) = B   -- Result type, equivalent to Just

    unliftedChain : (c : C) -> myType (myFunc2 c)
    unliftedChain x = case myFunc2 x of
                        Nothing -> ()  -- Unit value; matches myType Nothing as required
                        Just y  -> myFunc1 y   -- Value of type B; matches myType (Just y) as required
This makes it easy to handle runtime errors, without having to pretend that everything else might blow up. Typically, we write most of our code in an ideal world assuming all of our requirements are met, then we write simple "driver" functions which justify those assumptions, returning errors otherwise. Exactly like in Haskell, where we write pure functions for manipulating plain values like strings, then write simple "driver" functions to pull those strings out of files, databases, sockets, etc. using IO.

[1] Strongly typed languages can be more or less expressive than each other due to their type systems (eg. dependent types make it easy to express heterogeneous lists, which can't be done (AFAIK) in System F), but all are less expressive than un(i)typed languages.


> It helps to remember that we don't gain any expressiveness from a type system, only safety[1].

That is controversial, especially when you are considering non-safety oriented tools like code completion systems or program inference engines. Languages with reified types may also make decisions at run-time based on the actual type of a value (e.g. virtual method dispatch).


I meant "expressiveness" purely in the sense that "we can write more programs 'without' types than with them" (of course, the types are 'still in there somewhere'; we're just ignoring them). I wasn't referring to external tooling, which I agree gains a lot from strong types (specifically because a load of programs have been forbidden, which makes their job tractable).

When dependent types and proving correctness are mentioned, there is often an assumption that some magical ability is gained. For example, all the questions here about error handling. Dependent types don't handle errors, they're just types; but what they can do is forbid code which doesn't handle errors.

As for "reified types" affecting run-time behaviour, I would call those tags rather than types. For example, the class of an object is a tag, not a type.


> I wasn't referring to external tooling, which I agree gains a lot from strong types (specifically because a load of programs have been forbidden, which makes their job tractable).

Careful. You also want to allow unsound choices in completion options, since the programmer could have made a mistake or is unaware of the more specific type needed to support some operation (especially if they are using code completion for discovery purposes). Sometimes the path between valid program A and valid program C is an invalid program B (think about editing code without the ability to create transient syntax errors...how horrible would that be!). Type theorists often get this wrong, because they are so focused on correctness :)

> When dependent types and proving correctness are mentioned, there is often an assumption that some magical ability is gained.

I thought magic was needed just to do the type checking. I don't think anyone who has gone through the program proving process would think it was easier (e.g. via F*), a lot of sweat is required to get those rock solid absolute guarantees.

> As for "reified types" affecting run-time behaviour, I would call those tags rather than types. For example, the class of an object is a tag, not a type.

This is only the overly narrow type theorist's definition of type. The class of an object is obviously a type/kind/classifier via the informal definition of type. In a statically typed language, it also happens to correspond 1-1 with the static type of the object as created, which is incredibly useful from a usability standpoint. Erasure likewise is pretty bad, and languages that don't erase (C# vs. Java) are much more usable.


Typeclasses and proof search can add expressiveness. You need not program everything but merely constrain the logic and let the compiler determine the behavior.


Maybe I'm missing the point of Dependent Types and there's more to it than I understood from the article, but in the example given of a divide function that requires a proof of a non-zero denominator: isn't this already achieved by languages that employ design by contract, such as Eiffel?


Eiffel can promise that it will halt at runtime if a contract is violated. DT languages can promise that no contracts could ever be violated by any possible runtime evaluation pathway.

The latter is much stronger, but weird sounding. What it typically means is that you are required to not cut corners. The type system will detect your failure to handle edge cases and refuse to compile until you either handle or explicitly ignore them.


My understanding of DbC in Eiffel is that checks are done at runtime, rather than having a proof at compile time that the behavior cannot occur.



