How do you write a type which fully describes, for example, "given a user, some content, and various bits of metadata, this function transforms them into a blog post with all the data in the right places"?

You don't write just one type, you write many. The problem you described is pretty straightforward. There are actually lots of examples of how to do this with existing Haskell libraries. What specifically do you want to know about?
OK, let's put it this way. I have an entirely arbitrary calculation, taking input A and outputting B. I want to ensure that this calculation is implemented according to specification. How on earth do you write a type, or set of types, that actually bears some resemblance to the spec and doesn't simply reflect the details of the implementation?

To simplify it to the point of near-nonsense, how would you write a type which says `append "foo" "bar"` will always result in "foobar", and never "barfoo" or "fboaor"? Or that a theoretical celsiusToFahrenheit always works correctly and implements the correct calculation? And if you can't do that, how can you do it for more complex data transforms?
This is where dependent types come in. They allow you to write an implementation of append that is correct by construction. Your algorithm is in essence a formal proof of the proposition that `append foo bar = foobar`.

A simpler example is that of lists and the head operation. In most languages, if you try to take the head of an empty list you get a runtime exception. In a language with dependent types you are able to express the length of the list in its type, and thus it becomes a type error (caught at compile time) to take the head of an empty list.
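(To make the head-of-empty-list example concrete, here is an illustrative sketch in Lean 4 - Lean rather than the Agda used later in the thread, but the idea is identical. `Vec` and its constructors are names chosen for the example:)

```lean
-- A length-indexed list: the length is part of the type.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- head accepts only vectors whose type guarantees length ≥ 1,
-- so "head of an empty list" is rejected at compile time.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```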
> Your algorithm is in essence a formal proof of the proposition that `append foo bar = foobar`.

Isn't that literally just implementing the program, though? The point of tests is that they're simple enough that you can't really fuck up, and they describe the specification, not the implementation.

If you have to write a formal proof, what's making sure the proof is actually proving what you intend? And what's the actual difference between this proof and the implementation?
> If you have to write a formal proof, what's making sure the proof is actually proving what you intend? And what's the actual difference between this proof and the implementation?

The formal proof is the implementation. It is the code you run in production. The proposition is your types. Instead of writing tests, you write types. It's the exact same process you would use with "red-green-refactor" TDD, except it's the compiler checking your implementation instead of the test suite. The advantage of doing it with types is that the compiler can actually infer significant parts of the implementation for you! Types also happen to be a lot more water-tight than tests, because you specify a type for a top-level function and everything inside the body can generally be inferred.

If you're interested, here is a series of lectures demoing dependently-typed programming in Agda by Conor McBride:
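(The "proposition is your types, proof is your implementation" idea can be sketched concretely - again in Lean 4 rather than Agda, using the thread's own append example:)

```lean
-- The proposition is a type; a proof is a program inhabiting that type.
-- The spec from the thread, stated literally and checked by computation:
example : "foo" ++ "bar" = "foobar" := rfl

-- The same idea for lists:
example : [1, 2] ++ [3] = [1, 2, 3] := rfl
```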
I will certainly watch those - but for the moment, I see no obvious way to immediately see that a proof is proving what you intended it to prove, whereas `assert (append "foo" "bar") == "foobar"` immediately shows what you expect and can be read and checked by somebody with the slightest programming knowledge.

The point of tests is that they're supposed to be simple enough that it should be near-impossible for them to contain bugs - they're incomplete, sure, but what they're testing should always be correct - whereas it seems that it would be easy for a proof to prove something subtly different from the spec without anybody being able to tell. If we could write complex programs to spec without error, we wouldn't have tests in the first place.
You don't have to see that the proof is proving what you intend; the type-checker does that for you. This is why type checkers are sometimes referred to as small theorem provers.

As for your assert example, how do you know append will work as you expect for all possible strings (including null characters or weird Unicode edge cases)? With a proof you will know.
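(The test-versus-proof difference can be sketched as follows - a Lean 4 illustration, where `myLengthAppend` is a name invented for the example:)

```lean
-- A test checks one input:
example : ([1, 2] ++ [3]).length = 3 := rfl

-- A proof covers every input, including every edge case:
theorem myLengthAppend (xs ys : List Nat) :
    (xs ++ ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp
  | cons x xs ih =>
    simp only [List.cons_append, List.length_cons, ih]
    omega
```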
> With a proof you will know.

But I could just as easily accidentally write a proof which proves something else, couldn't I? This is complex code expressing complex ideas - a type system would certainly help, but it can't tell me that I'm proving the wrong thing.
> But I could just as easily accidentally write a proof which proves something else, couldn't I?

Then your proof would be rejected by the compiler. Remember, the types specify your proposition, i.e. what you are intending to prove. The actual proof itself is the function you implement for that type.

As for whether you're proving the right thing or the wrong thing, a type system is no less helpful than a test suite. The advantage of a type system is that it checks whether your types are consistent across the entire program rather than merely in the specific test you're running.
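(Both sides of this exchange can be illustrated: the compiler does check that the proof matches the stated proposition, but a too-weak proposition is satisfied even by a wrong implementation - which is exactly the risk being raised. A Lean 4 sketch, where `badAppend` is invented for the example:)

```lean
-- A deliberately wrong append: it swaps its arguments.
def badAppend (xs ys : List Nat) : List Nat := ys ++ xs

-- A too-weak spec - "the result has the combined length" - is
-- provable even for the wrong implementation, so proving it does
-- not pin down the behaviour:
theorem badAppend_length (xs ys : List Nat) :
    (badAppend xs ys).length = xs.length + ys.length := by
  simp [badAppend, Nat.add_comm]
```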
> Then your proof would be rejected by the compiler.

Only for some types of mistake, surely. If that were always the case, we'd have a compiler that could read minds.

> As for whether you're proving the right thing or the wrong thing, a type system is no less helpful than a test suite.

Really? I can write down in my test suite, `assert (add 1 1) == 2`. Anyone can come along and look at that and make sure it matches the spec. We can add additional tests for various bounds, and possibly use a quickcheck-like tool in addition, and for 99.99% of use cases be happy and confident that we're at least writing the right thing.

What's the type for that, and does it actually have any resemblance to "the add function adds two numbers together", or do I have to read a couple of papers and have a background in university-level math to convince myself that the proof actually does what it says?
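(For what it's worth, the type for that particular assertion can read very much like the test itself - a Lean 4 sketch, not from the thread:)

```lean
-- The exact claim from the test, stated as a type; anyone can read it:
example : 1 + 1 = 2 := rfl

-- And the kind of claim no finite test suite can make - a property
-- quantified over all inputs (here delegating to a library lemma):
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```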
