
>You don't get this for free and you have to help the computer verify these.

For library types, the most commonly used properties will have already been proven; you wouldn't have to write a proof that "sort" sorts a list any more than you'd have to write the sort function itself.

The computer can also be surprisingly good at proving these things in many cases. E.g. I was playing with the proof assistant Isabelle recently, and I tried to prove that, given an int, a proof that twice that int is greater than 13, and a proof that four times that int is less than 29, that int must be 7. I thought I'd have to fiddle around with induction on Peano numbers and the like, but nope, it could be proved with just a single "by arith".
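Roughly, the statement looks something like this (sketching from memory, so the exact formulation may have differed):

    (* Sketch only; the exact formulation may have differed. *)
    lemma "2 * (n::int) > 13 ==> 4 * n < 29 ==> n = 7"
      by arith

Here "arith" is Isabelle's built-in method for linear arithmetic over types like int and nat, so goals of this shape get dispatched automatically.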




> For library types, the most commonly used properties will have already been proven; you wouldn't have to write a proof that "sort" sorts a list any more than you'd have to write the sort function itself.

Sorting was just an example. Most verified nontrivial algorithms come with reams of hand-written proofs. I'd be happy to see examples that go against this, though.

> The computer can also be surprisingly good at proving these things in many cases. E.g. I was playing with the proof assistant Isabelle recently, and I tried to prove that, given an int, a proof that twice that int is greater than 13, and a proof that four times that int is less than 29, that int must be 7. I thought I'd have to fiddle around with induction on Peano numbers and the like, but nope, it could be proved with just a single "by arith".

Doesn't that just fall into the Presburger arithmetic decision procedure? The problem is you can rarely tell when the proof assistant is going to complete the proof for you (e.g. you're fine if it's within Presburger arithmetic but not if it requires induction).


You don't have to write a proof if you don't want to. It's quite possible to simply write your sorting function without a proof; it's just that at that point you don't know that your function works as advertised (which is, of course, exactly what you get in a non-dependently-typed language).

    data Comp = Gt | Lt | Eq

    sortBy : (a -> a -> Comp) -> List a -> List a
    sortBy = ...
The above is perfectly acceptable in Idris, provided your implementation of sortBy type checks, and it makes no use of dependent types. Separately from this function, you can write a proof that it's correct (if you want), or unit tests, for that matter.
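For instance, a correctness property can be stated as an ordinary Idris type, separately from sortBy. The following is only a sketch: the Sorted predicate and the names are illustrative, not taken from any library.

    -- Illustrative sketch: a separately stated sortedness property.
    -- Adjacent elements must not compare as Gt under the given ordering.
    data Sorted : (a -> a -> Comp) -> List a -> Type where
      NilSorted  : Sorted cmp []
      OneSorted  : Sorted cmp [x]
      ConsSorted : Not (cmp x y = Gt)
                     -> Sorted cmp (y :: ys)
                     -> Sorted cmp (x :: y :: ys)

    -- The obligation can be stated now and discharged later (or never);
    -- until the hole is filled, sortBy itself still compiles and runs.
    sortBySorts : (cmp : a -> a -> Comp) -> (xs : List a)
                    -> Sorted cmp (sortBy cmp xs)
    sortBySorts cmp xs = ?sortBySorts_proof

Until you fill the hole, you're in the same position as in any other language, except that the intended specification is at least written down and checked for well-formedness.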


> You don't have to write a proof if you don't want to.

I know this; I'm simply pointing out that when you do decide to write a proof, it can be a very challenging task (much harder than adding generic types to Java code, for example).


> it could be proved with just a single "by arith".

Tactics are probably a usability improvement for mathematicians who are used to proving everything by hand. But for programmers used to type inference, they're a step backwards. Mathematicians typically prove much deeper results, but they do so at a much slower rate than programmers write programs.


Huh, tactics are pretty great. Say exactly what you want and the computer programs itself!

It's unfair to simply compare development time for tactic-generated programs in a dependently typed language against manually written programs in a non-dependent one. The end result in the dependent language is much more valuable.


I'm not comparing dependent types vs. no dependent types. I'm comparing higher-order dependent types (Agda, Idris, Coq, etc.) vs. first-order dependent types: https://news.ycombinator.com/item?id=12350147

Seriously, Coq-style proof scripts are utterly unreadable once they grow past a certain size. The only way to understand them is to replay them, so that you can see the intermediate hypotheses and goals.


> The only way to understand them is to replay them

Reminds me of the same kind of problem that arises in trying to understand the runtime behavior of nontrivial programs involving side-effects, mutable state, multithreading, etc. Sometimes the debugger is the only practical way to figure out what the program is doing. Add concurrency into the mix, and things can get tricky very quickly.

Dependently-typed programming and machine-assisted proof is still an emerging thing, and there is probably lots of room for exploring ergonomic improvements. It will be interesting to see where this goes.


Your analogy between debugging effectful programs and replaying proof scripts is frighteningly accurate.


I agree. Note that the Coq-style tactics-based approach is not the only possibility.

In Agda, you are expected to write proofs in a functional style, similar to Haskell programs. There is a small amount of integrated automation, which helps you fill in holes in your proofs — however, the results are explicit proof terms, inserted at the right place in your program.

Systems which behave in this fashion satisfy the de Bruijn criterion, as described in Geuvers (2009), “Proof assistants: History, ideas, and future”.

https://www.dropbox.com/s/4mwtxojg7yqb365/Geuvers2009.pdf
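To give a flavour of the Agda style described above, here is a small stand-alone example of my own (not from the paper): each clause of the function is itself an explicit proof term checked by the type checker.

    -- Illustration only: a proof written as an ordinary Agda program.
    open import Data.Nat
    open import Relation.Binary.PropositionalEquality

    plusZero : ∀ n → n + 0 ≡ n
    plusZero zero    = refl
    plusZero (suc n) = cong suc (plusZero n)

A hole left in place of either right-hand side could be filled by hand or by the automation, but what ends up in the file is in both cases an ordinary term, which is the point of the de Bruijn criterion.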


Thanks for the links (this one and the one from your other reply); I'm checking them out.


I had/have a similar feeling.

I thought about it some time ago and came up with the following proposal:

http://matej-kosik.github.io/www/doc/coq/coq-trunk-proof-tre...


That's mostly true, but the speed at which programmers write programs depends on how bulletproof the programs have to be. There have been times when, if you include the difficulty of debugging and maintenance, proving things to my compiler would have been a step forward for me. There's probably a sweet spot.


> That's mostly true, but the speed at which programmers write programs depends on how bulletproof the programs have to be.

Of course, but, in any case, writing programs at the same rate mathematicians prove theorems simply isn't viable.



