For library types, the most commonly used properties will have already been proven; you wouldn't have to write a proof that "sort" sorts a list any more than you'd have to write the sort function itself.
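For instance, Isabelle/HOL's List theory already ships proofs of the standard correctness facts about sort. A minimal check (lemma names quoted from memory, so treat them as approximate):

thm sorted_sort   (* sorted (sort ?xs) *)
thm mset_sort     (* mset (sort ?xs) = mset ?xs *)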
The computer can also be surprisingly good at proving these things in many cases. E.g. I was playing with the proof assistant Isabelle recently, and I tried to prove that given an int, a proof that twice that int is greater than 13, and a proof that four times that int is less than 29, then that int must be 7. I thought I'd have to fiddle around with induction on Peano numbers and the like, but nope, it could be proved with just a single "by arith".
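For the curious, the whole thing looks roughly like this in Isabelle/HOL (my reconstruction of the session, not a verbatim transcript):

lemma "⟦ 2 * (n::int) > 13; 4 * n < 29 ⟧ ⟹ n = 7"
  by arith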
Sorting was just an example. Most machine-checked proofs about nontrivial algorithms still come with reams of hand-written proof script. Happy to see examples that go against this, though.
> The computer can also be surprisingly good at proving these things in many cases. E.g. I was playing with the proof assistant Isabelle recently, and I tried to prove that given an int, a proof that twice that int is greater than 13, and a proof that four times that int is less than 29, then that int must be 7. I thought I'd have to fiddle around with induction on Peano numbers and the like, but nope, it could be proved with just a single "by arith".
Doesn't that just fall into the Presburger arithmetic decision procedure? The problem is you can rarely tell when the proof assistant is going to complete the proof for you (e.g. you're fine if it's within Presburger arithmetic but not if it requires induction).
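To make the boundary concrete (Isabelle/HOL again, a toy example of my own): Presburger arithmetic covers integers with addition, ordering and multiplication by constants, so

lemma "∀n::int. 2 * n > 13 ⟶ 4 * n < 29 ⟶ n = 7"
  by presburger

goes through mechanically, whereas a goal that multiplies two variables falls outside the decidable fragment and needs induction or existing lemmas instead.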
data Comp = Gt | Lt | Eq

-- A sort parameterised by a user-supplied comparison function
-- (Idris-style syntax; implementation elided).
sortBy : (a -> a -> Comp) -> List a -> List a
sortBy = ...
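A quick usage sketch (compareNat is an invented comparator, Idris-style):

compareNat : Nat -> Nat -> Comp
compareNat x y = if x < y then Lt else if x == y then Eq else Gt

-- sortBy compareNat [3, 1, 2] should give [1, 2, 3], though the
-- type above promises nothing of the sort.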
I know this; I'm simply pointing out that when you do decide to write a proof, it can be a very challenging task (much harder than, say, adding generic types to Java code).
Tactics are probably a usability improvement for mathematicians who are used to proving everything by hand. But for programmers used to type inference, they're a step backwards. Mathematicians typically prove much deeper results, but they do so at a much slower rate than programmers write programs.
It's unfair to compare development time for tactic-generated programs in a dependently-typed language against manually written programs in a non-dependent one. The end result in the dependent language is much more valuable.
Seriously, Coq-style proof scripts become utterly unreadable once they grow past a certain size. The only way to understand them is to replay them, so that you can see the intermediate hypotheses and goals.
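Even a toy script shows the problem; here is a standard library lemma re-proved by hand (my own example):

Require Import List.

Lemma rev_involutive' : forall (A : Type) (l : list A),
  rev (rev l) = l.
Proof.
  (* What is the goal after each semicolon? You can only find out
     by stepping through the script in an IDE. *)
  intros; induction l; simpl; auto.
  rewrite rev_app_distr; rewrite IHl; reflexivity.
Qed.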
Reminds me of the same kind of problem that arises in trying to understand the runtime behavior of nontrivial programs involving side-effects, mutable state, multithreading, etc. Sometimes the debugger is the only practical way to figure out what the program is doing. Add concurrency into the mix, and things can get tricky very quickly.
Dependently-typed programming and machine-assisted proof are still emerging fields, and there is probably lots of room for exploring ergonomic improvements. It will be interesting to see where this goes.
In Agda, you are expected to write proofs in a functional style, similar to Haskell programs. There is a small amount of integrated automation that helps you fill in holes in your proofs; the results, however, are explicit proof terms inserted at the right place in your program.
Systems which behave in this fashion satisfy the de Bruijn criterion, as described in Geuvers (2009), “Proof assistants: History, ideas, and future”.
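A small example of what that looks like (Agda with the standard library; the lemma name is my own):

open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

-- An explicit proof term, written like any recursive function.
-- Starting from a hole (+-zero (suc n) = {!!}), Agda's Agsy
-- automation can often fill it in, but the visible result is
-- still this term.
+-zero : ∀ n → n + zero ≡ n
+-zero zero    = refl
+-zero (suc n) = cong suc (+-zero n)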
I thought about this some time ago and came up with the following proposal:
Of course, but in any case, writing programs at the rate that mathematicians prove theorems simply isn't viable.