Learn You an Agda | 127 points by bkirwi on Dec 2, 2014 | 70 comments

 Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations. Not that data transformations aren't important, but the properties they need to preserve are usually easy to be convinced of without any assistance from the compiler. At the end of the day, programs are written for their side effects (unless your program is a compiler, which is the classical example of an "interesting", non-trivial data transformation, and possibly the only example).

What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket. Or, because I write concurrent data structures, I'm interested in proving that a certain function will eventually release all locks it acquires. Are there any such languages?

I've heard of effect systems, but I couldn't find enough information on them. Are effect systems capable of the kind of proofs I'm interested in? Are there practical programming languages with powerful effect systems?

EDIT: Some Googling yielded this: http://lambda-the-ultimate.org/node/4768 (tl;dr: not yet).

EDIT: At least the lock proof seems to be implemented as one of Java 8's pluggable type systems: http://types.cs.washington.edu/checker-framework/current/che... It lets you add a type to a function (Java 8's pluggable types are intersection types) called @EnsuresLockHeld(locks...) that proves some locks will be held when the function returns, and even @EnsuresLockHeldIf(locks..., result) that proves a function grabs certain locks if its boolean result is equal to the one specified in the type.
 Given formal semantics of your programming language and some time, I think you can use Agda to prove interesting properties about correct programs you write in that language. Semantics, i.e. the behaviour of objects such as sockets, can be encoded in types the theorem prover can understand. It can look like this for a basic imperative language: https://github.com/Twey/imp-agda/blob/master/IMP/Semantics/O.... You can then define the property you are interested in and get your proof by combining these types together.

This is often tedious but can be automated to some extent, and has already led to significant achievements. See http://sel4.systems, a formally verified kernel, done with Isabelle, a tool similar to Agda.

Maybe you are looking for a language with a type system that will force you to write your program in a safe way regarding sockets and the like. This can also be achieved with static analysis, another way of automatically proving properties about programs. I do not know of any for distributed programs, but it is probably possible. I would say this is still a research topic, as such programs are still so hard to get right.
 Sounds like you are interested in applications of temporal logic for formal verification in software development, which is an area of active research. Prof. David Harel works in that field: http://www.wisdom.weizmann.ac.il/~harel/papers.html

Do an in-page search for terms like temporal, behavioral, scenario, LSC, live sequence. By the way, I'm no expert but am interested in similar applications.
 I'm actually about to integrate some aspects of behavioral programming in a concurrency library I'm maintaining. I also vaguely recall taking a class with Prof. Harel many years ago...
 It's not exactly what you're asking for but Ur/Web puts an expressive type system to some slightly unusual ends: http://www.impredicative.com/ur/.
 Whoa, this is pretty horrifying: app code, queries and markup all mixed together, like type-checked PHP. I know this is just a demo, but "separation of concerns" does not appear to have been a concern.

    fun list () =
      rows <- queryX (SELECT * FROM tab AS T)
        (fn (fs : {T : $([Id = int] ++ map fst M.cols)}) =>
          {[fs.T.Id]}{col.Show v} [Update] [Delete]
          {@mapX2 [fst] [colMeta] [tr]
            (fn [nm :: Name] [t ::_] [rest ::_] [[nm] ~ rest] v col => )
            M.fl (fs.T -- #Id) M.cols});
 Also keep in mind that it is ultra-fast (http://www.techempower.com/benchmarks/#section=data-r9&hw=pe...).
 Oh, nice! Indeed I'm not sure that's what I want, but this seems like the first statically-typed FP language I like. I especially like how you can program the type level with constructs similar to the program level.
 Here Idris uses its effect system to verify correct usage of a protocol between a client and a server: http://www.youtube.com/watch?v=vkIlW797JN8&t=53m35s
 > Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations.

You can prove almost anything in Agda, even the most complicated mathematical theorems.

> What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket. Or, because I write concurrent data structures, I'm interested to prove that a certain function will eventually release all locks it acquires. Are there any such languages?

You can do this in Agda; however, you need to specify the semantics of the language you use.
 > You can prove almost anything in Agda, even the most complicated mathematical theorems.

And I think that's the thing that makes Agda interesting. If you limit Agda to proving things about computer programs, it doesn't sound that special. People are using Agda to prove things about, for example, topology.
 > Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations.

> At the end of the day, programs are written for their side effects

The reason for such a focus on data transformation is that it's very easy to do in these purely functional languages (Agda included). It's so easy, in fact, that "other" things, like side effects, event handling, etc. are represented as data transformations. This is most obvious in Haskell, since laziness decouples definition from computation; e.g. making it trivial to represent event streams as infinite lists.

> What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket.

The usual approach is to define a datatype of operations you might want to perform (e.g. Write String, Read Length, etc.). Next you define a datatype which can combine these operations together in ways you may want (e.g. something like a rose tree, if you want threads firing off threads). This forms an "embedded domain-specific language". We then write a bunch of helper functions for manipulating these datastructures (e.g. a "combinator library"), then we use these to write down what we actually want as a combination of operations.

Next we write an interpreter function which opens a socket, performs all of the operations in the tree (possibly in multiple threads), waits for them to finish, then closes the socket.

> Or, because I write concurrent data structures, I'm interested to prove that a certain function will eventually release all locks it acquires. Are there any such languages?

You can do the same thing as with the socket example, except you can strengthen the type of your program (operation-combining) datastructure to enforce that locks are released.
As a simplified example, we can force a serial computation to release locks by only allowing Acquire and Release to be inserted together:

    data Op = Foo | Bar | Acquire ID | Release ID

    data SafeOp = SFoo | SBar

    data Program : Type where
      NoOp       : Program                        -- Empty Program
      WithLock   : ID -> Program -> Program       -- Add lock ops around a Program
      PrefixOp   : SafeOp -> Program -> Program   -- Add a non-lock Op to a Program
      Interleave : Program -> Program -> Program  -- Combine two Programs

    -- Part of the interpreter, not exposed to the world
    progToOps : Program -> [Op]
    progToOps NoOp               = []
    progToOps (WithLock id p)    = [Acquire id] ++ progToOps p ++ [Release id]
    progToOps (PrefixOp SFoo p)  = [Foo] ++ progToOps p
    progToOps (PrefixOp SBar p)  = [Bar] ++ progToOps p
    progToOps (Interleave p1 p2) = interleave (progToOps p1) (progToOps p2)

    interleave []     ys = ys
    interleave (x:xs) ys = [x] ++ interleave ys xs

Notice that progToOps can never output a list containing Acquire without also containing a Release with the same ID. Also notice that we can define arbitrary combinations of the other ops:

    []          is  NoOp

    [Foo, Bar]  is  PrefixOp SFoo (PrefixOp SBar NoOp)

    [Foo, Acquire A, Bar, Acquire B, Release A, Foo, Release B]  is
      Interleave (PrefixOp SFoo (PrefixOp SBar NoOp))
                 (Interleave (WithLock A NoOp)
                             (WithLock B (PrefixOp SFoo NoOp)))

Of course these datastructures would be built up by helper functions instead of by hand, would be tree-like for concurrency, would probably provide guarantees per sub-tree, and would allow arbitrary functions (of some suitable type) in place of Foo, Bar, etc.
 But in your example the program doesn't represent the code itself but a program written in some new language. I said that current type systems are good for writing compilers, but not so good at describing their own behavior other than data transformations.
 It's not reasonable to embed every possible language construct, semantics and logic into a theorem prover. So the trick is to make the host generic enough to be able to embed your needs as a domain-specific language: your criticism becomes a library problem. Which implies that you only need to trust the host logic/implementation; and many people can work, collaborate and profit from the same core. Also, we kind of know how a generic logic can be done, but domain-specific logics are more controversial (because everyone's needs are different).

The fact that your DSL is represented as an immutable datastructure by the host is really not important. Any code you write is going to be turned into a datastructure at some point. The immutability should be read as "the context in which this result is valid has been made fully explicit", where "context" includes "things that mutate over time". You don't lose any expressivity; you just can't forget to handle special cases.
 I think this is a type system limitation: every type describing side effects -- i.e. the core of every interesting program other than a compiler/interpreter -- can only be used in the context of a compiler/interpreter.

Checker (Java 8's pluggable intersection type-system framework) is able to do that (define side-effect types), but isn't a single general type system; i.e. you need to write a different type system for each property (of course, they can all work together). Maybe that's the only known way to do that. Searching for more general solutions, I've come up empty-handed (the problem is well known, but the solutions are all in early stages of research).

The idea of such a type system needs to be something like: each function has a set of types associated with it, and a function's type interacts with the types of the functions it calls in some ways. So the most primitive example is, of course, checked exceptions, where the type of a function is a simple union of the types of its callees. But the interactions can, and should, be more interesting, and take into account the order in which the callees are called, etc.
 I'm sorry, I don't think I understand your message and my answer might be off as a result. Could you link to the current research on more general solutions than Checker, so I can get a clearer picture?

> a type system limitation that every type describing side effects [...] can only be used in the context of a compiler/interpreter.

A type system is just a logic description of a property that a program might have. It specifies how the language constructs interact with this property [*]. The fact that compilers use a type system is independent of your ability to write your custom type system, or to use it in a different context (IDE, toolchain, ...).

[*] Are you saying that a type system shouldn't be concerned with the AST of the language? An external type checker for Haskell: http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/blog/2013/0... And the various type checkers for the non-statically-typed languages (Python, ...). Also, I don't understand why you distinguish "typing side effects" as something different from the rest?

> you need to write a different type system for each property

But you do have to define your type system at some point? Are you thinking of something like Mezzo, where some userland properties can be defined without going down the type-system-definition road? https://protz.github.io/mezzo/

> But the interactions can, and should be more interesting, and take into account the order in which the callees are called etc.

That's why type systems are defined with respect to the language constructs, by rules on the AST (? I'm confused because of the earlier [*] here).

To go back to Agda's significance for a custom type system: it's not enough to define a type system. You generally want to prove that your definition is valid: if your "checked exceptions" system says that a program P can't fail due to an exception, you want to check that you didn't forget anything, by verifying the type checker with respect to the actual semantics of the language.

You can also embed your custom property into Agda's type system (isn't that what you want?).
 I am not a type theory expert (or even enthusiast) by any means, but I imagine something that would be very useful like so: the intersection types of callees interact in interesting ways with the type of the caller (taking into account ordering). For example, the type of the caller can be: "this function locks A, then B, then unlocks B, then unlocks A". This type can arbitrarily intersect other effect types, such as: "this function writes to socket X" and also "mutates variable V".

I would be able to define a type "locks A" and a type "locks B", and the kind "locks" would be ordered, so that the type of a function that locks A and locks B would be "locks A and then locks B". Also, I would be able to define interactions between types so that unlocking A undoes the effect, so that a function that simply locks A will have the type "locks A", but one that locks and later unlocks will have the type "holds A". Of course, these types intersect arbitrarily.
 I'm not talking about standardization but about a different kind of type system, where the intersection types of callees interact in interesting ways with the type of the caller [1] (taking into account ordering). From what I understand, this kind of thing is being researched, but we don't have anything close to being practical just yet.

[1]: For example, the type of the caller can be: "this function locks A, then B, then unlocks B, then unlocks A". This type can arbitrarily intersect other effect types, such as: "this function writes to socket X" and also "mutates variable V".
 > I'm not talking about standardization but about a different kind of type system, where the intersection types of callees interact in interesting ways with the type of the caller (taking into account ordering). From what I understand this kind of thing is being researched but we don't have anything close to being practical just yet.

Isn't chriswarbo's example exactly that? Since your Ops are generic, they can be other effect types. The interleaving is the tricky part, but e.g. Haskell's extensible-effects is practical enough that it's used in real programs.
 > But in your example the program doesn't represent the code itself but a program written in some new language.

That's the key idea ;)

If I write some Java like DB.connect(credentials).select("users").where("name", "Kevin"), am I using "some new language"? As far as Alan Kay is concerned, yes; that was one of his main inspirations for OOP:

> My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful.

The only difference between that Java example and my earlier Haskell/Agda example is that the functional version operates in two phases: calculating which operations to perform (building a Program containing Ops) is separate from performing those operations (applying an interpreter function to a Program).

However, keep in mind that we're not doing imperative programming, so there is no inherent notion of time: we get the same result no matter which order we evaluate stuff in (that's the Church-Rosser Theorem). Hence these two "phases" are logically distinct, but not necessarily temporally distinct.
In practice, we tend to define our Program lazily, so it gets constructed on-demand by the interpreter. This is a bit like having a main loop containing a switch, for example (forgive mistakes; I've never used function pointers in C):

    void main(op* instruction, int* length, void* callback) {
        socket* s = set_up_socket();
        int close = 0;
        char* data;
        while (!close) {
            switch (*instruction) {
                case READ:
                    // Read logic
                    data = read_from_socket(s, *length);
                    *callback(instruction, data);  // Update instruction based on data
                    break;
                case WRITE:
                    // Write logic
                    data = *callback(instruction);  // Update instruction and return data
                    write_to_socket(*length, data);
                    break;
                case CLOSE:
                    // Close logic (ignores any other instructions)
                    close = 1;
                    break;
            }
        }
        close_socket(s);
    }

This main function is basically the same as our interpreter; it's keeping the dangerous socket-related stuff in one place, providing some safety that the socket will be closed after use, etc. The values of data and instruction are being computed by arbitrary C code living at callback, just like our Program can be computed by arbitrary Haskell/Agda/whatever code. In this case the interleaving of choosing and executing instructions is explicit, but a lazily-evaluated Program will behave similarly in practice.

Does this mean that those callbacks are in "some different language" to C? No; everything is regular C except that we just-so-happen to be labelling some ints as READ, WRITE and CLOSE. Likewise, in Haskell/Agda/etc. we have the full breadth of the whole language available to us; we just-so-happen to be returning values of type Op and Program. All of our logic, concurrency, etc. is done in the "host" language; we only define Ops for the things we want to restrict, like Read and Write; there's absolutely no point defining operations for boolean logic, arithmetic, string manipulation, arrays/lists, etc. because that's available already.
Of course, once we've finished defining our socket library, that too will be available to everyone (if we release it); just because it might use some crazy Op, Program and runSocket things internally doesn't mean anyone has to care about that; it's just an implementation detail.Notice that this is very similar to an OOP dynamic dispatch system, where READ, WRITE and CLOSE are method names and the callbacks just-so-happen to be kept together in a struct which we call an "object". Just like Alan Kay said.
 Thanks for the very straightforward explanation of how strongly typed functional programs can have effects in the real world.

> (forgive mistakes; I've never used function pointers in C)

You mostly got it right; the function declaration would look like this (I moved the * for pointers onto the parameter name for stylistic reasons related to how C declares types):

    void main(op *instruction, int *length, void (*callback)(...))

Also, your function pointer calls should omit the * in front (it would be applied to the return value of the callback, which is void in this case, so you'd get an error during compilation).
 You'd be surprised how many different things can be represented as "compilers" (i.e., as sequences of transforms of some languages). Starting from the relatively obvious things like computer algebra systems, CADs, CAE, and going into unexpected areas like even CRUD and computer games.tl;dr: side effects are overrated.
 Could you expand upon this?
 Sure. Take a look at all the mathematical theories and models out there. They're all languages, and solving problems within a framework of any given mathematical theory is pretty similar to a process of compilation - a source language should be transformed into another language in which the solution can be "executed".Let's take CAD as an example. We've got a source language: a constructive solid geometry, for simplicity. Pretty much everything a CAD does boils down to various transforms of this language - into an optimised octree for rendering purposes, into G-code slices for production, into, say, SVG for blueprints, etc.CRUD stuff boils down to transforms between different semantic domains - a relational structure at one end and workflow graphs and document trees on another.It's really hard to come up with any practically important problem domain which would not benefit from being represented in terms of languages and transforms.
 You may be interested in Nim (http://nim-lang.org/), a practical programming language with an effect system.
 I really like these "learn you a" books. I don't know why, but I just like the freaking pictures. It keeps me going. It's like a children's book, but it also teaches me the subject I like: programming languages.
 On top of the pictures, the examples are a lot more fun than foo and bar. Here's (part of) how Learn You a Haskell introduces map:

    ghci> map (++ "!") ["BIFF", "BANG", "POW"]
    ["BIFF!","BANG!","POW!"]
 The official Python tutorial is also a good example of this. It's all Monty Python references.
 I hate foo/bar for some reason. I think it's because it is supposed to be "this is just whatever", and yet they force me to think about Kung Fu, the Foo Fighters and crowbars.
 You know it's an old military acronym, right? F.U.B.A.R. stands for eFfed Up Beyond All Recognition. (And SNAFU is Situation Normal: All eFfed Up.) ;-)
 I just think of a certain audio player...
 I'd like to show you the immense beauty of the Agda standard library: http://agda.github.io/agda-stdlib/html/README.html (all clickable and… you'd better have proper unicode fonts).
 Even something as mundane as Data.Bool is beautiful, defining False in terms of bottom.I love to look at unicode, but is it a pain to type it in practice, even given emacs?
 Note that false, the value of type Bool, is not defined in terms of bottom. It is a constant in its own right, just like nil and cons 1 nil are.

What happens in Data.Bool is that we have the function from values to types, T, such that T true and T false are types equal to top and bottom respectively. This value-to-type encoding is called a "universe" and allows us to talk about propositions which are based on boolean function results, like

    theorem1 : T (1 - 1 == 0)

which is somewhat interestingly different from

    theorem2 : 1 - 1 = 0

in that the first will reflect upon the definitions of the (recursive) functions (-) and (==) while the second reflects only upon the definition of (-).
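For reference, the universe function being described can be sketched like this (a sketch from memory of the standard library's Data.Bool; the exact module layout may differ across stdlib versions):

```agda
open import Data.Empty using (⊥)
open import Data.Unit using (⊤)
open import Data.Bool using (Bool; true; false)

-- Decode a boolean value into a type:
-- T b is inhabited exactly when b is true.
T : Bool → Set
T true  = ⊤
T false = ⊥
```

A proof of theorem1 then amounts to the typechecker evaluating 1 - 1 == 0 to true, at which point T (1 - 1 == 0) normalizes to ⊤ and the proof is trivial.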
 Agda-mode will translate from symbol names to unicode for you. For example, \to gets translated to → and \== goes to ≡. It is very easy to work with after a short learning curve. There is more explanation available on the Agda Wiki. [0]
 I don’t use Agda, but for most mathy Unicode input, I use C-\ (toggle-input-method) and choose TeX mode. Then you can type “\forall\alpha. \alpha \to \alpha” and get back “∀α. α → α”, which is pretty good. There is also C-x 8 RET if you want to type a particular Unicode character by name or code point number.
 First time I read about 'mixfix' operators, one less thing I'll have to invent I guess. https://www.google.fr/search?q=mixfix+notation
 It lost me at this point:

> Type the following into an emacs buffer, and then type C-c C-l (use \_1 to type the subscript symbol ₁):

    proof₁ : suc (suc (suc (suc zero))) even
    proof₁ = ?

On my newly created emacs setup from the earlier instructions, it's not clear what "type into an emacs buffer" means. Do I create a new file with C-x C-f? Create a new buffer with C-x b? I tried both, and promptly lost agda-mode.

I then tried restarting emacs and opening a new .agda file so I get agda-mode. Loaded the earlier module and then tried loading this, and it didn't seem to know about suc in the context of this buffer/file. So now I'm lost.
 That should read, "Type the following into the Emacs buffer." You should have something like:

    module LearnYouAn where
      data ℕ : Set where
        zero : ℕ
        succ : ℕ → ℕ

      _+_ : ℕ → ℕ → ℕ
      zero + m = m
      (succ n) + m = succ (n + m)

      data _even : ℕ → Set where
        ZERO : zero even
        STEP : ∀ n → n even → succ (succ n) even

      proof₁ : succ (succ (succ (succ zero))) even
      proof₁ = ?
 Thanks! Can't believe I didn't think of that :/

What's the distinction between C-c C-space and C-c C-r? It looks like the former computes proof obligations for ?s, but what does the latter do? And why couldn't we use the latter in the first instance, to deduce the outermost STEP ? ?

Edit: never mind, that did work.
 > What's the distinction between C-c C-space and C-c C-r?

Beats the hell out of me. Here's the docstring for agda2-give (bound to C-c C-SPC):

    Give to the goal at point the expression in it

Here's the one for agda2-refine (C-c C-r):

    Refine the goal at point.
    If the goal contains an expression e, and some "suffix" of the
    type of e unifies with the goal type, then the goal is replaced
    by e applied to a suitable number of new goals.
    If the goal is empty, the goal type is a data type, and there is
    exactly one constructor which unifies with this type, then the
    goal is replaced by the constructor applied to a suitable number
    of new goals.
 If I wrote a natural number calculator in Agda, would all my numbers be represented as lists of successions from zero, or can the compiler convert them to two's-complement integers as we know and love them?

In case the compiler can do that conversion: is it programmed to do that for some subset of numeric types, or can it infer an optimal binary representation of a value somehow?

On the other hand, if they really were represented as lists, then I presume this language is intended as purely academic work and has no application in a production environment. Am I wrong?
 > If I wrote a natural number calculator in Agda; would all my numbers be represented as lists of successions from zero or can the compiler convert them to two's-complement integers as we know and love them?

I don't think so, but Natural numbers can't be represented as two's-complement "integers"; neither can Integers.

There are many ways to encode such numbers if you really want; the easiest are probably "Fin (2^32)" for a type with 2^32 members (easy to convert to/from unary Naturals), or "Vect 32 Bool" for a list of 32 Booleans (easy to do bitwise stuff). You'd need to decide how to truncate the Naturals though; do you take min(x, 2^32 - 1)? Do you take x % 2^32?

Also, note that "S (S (S (S Z)))" isn't really a list; to turn it into a list we'd have to associate some trivial data with each constructor, e.g. [NULL, NULL, NULL, NULL].
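The "Vect 32 Bool" encoding might look like this (a hedged sketch against the Agda standard library's Data.Vec; the names Word32 and zeroWord are mine, invented for illustration):

```agda
open import Data.Bool using (Bool; false)
open import Data.Vec using (Vec; replicate)

-- A 32-bit word as a vector of 32 booleans,
-- say least-significant bit first.
Word32 : Set
Word32 = Vec Bool 32

-- The all-zero word.
zeroWord : Word32
zeroWord = replicate false
```

Bitwise operations then become zipWith over the two vectors, and the length index rules out any width mismatch at compile time.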
 I always thought the statement

    {-# BUILTIN NATURAL ℕ #-}

binds the inductive definition of ℕ to an efficient implementation. However, googling now I can find no confirmation of this. Does anyone know more?
 It's a proof assistant. Not really something you use for number crunching, industry or not.
 But a proof assistant for numerical programming would be extremely useful.
 > If we can construct a value of a certain type, we have simultaneously constructed a proof that the theorem encoded by that type holds.

    data _even : ℕ → Set where

I do not understand how to interpret this passage. _even, given a number, returns a type? In this case, zero even is a type, but we have not created any value with that type. What am I missing?
 That's exactly correct.

In a Dependently Typed (DT) language like Agda, the language of values and the language of types are one and the same. Thus, a function from types to types is exactly the same sort of thing as a function from values to values. In this case we have a third sort of totally-natural-day-old-kind-of-thing: a function from values of ℕ to types (in Set). It's "just" a function from values to types.

Now, we annotate values with their types. Thus we might have a new top-level definition

    something : zero even
    something = _

where we've used our postfix even function to construct a type from the value zero. Is this type inhabited? That's a question of proof and programming. Regardless of its inhabitation, though, we can clearly see that the notion of zero even simply being a type is sound---that's the nature of function application when the target domain is that of Set!
 An Agda program isn't like a C program; a C program is just a bunch of strings that tells the processor where to go and what to do. An Agda program is, instead, a bunch of strings that one-by-one teaches the typechecker what is true about the world.

In the beginning, there was Set, which is the set of all valid types. It's empty (more or less). Then comes along

    data ℕ : Set

This says that the type ℕ, which will be our natural numbers, belongs in Set. Now Set has one valid type, which is ℕ. But ℕ, which is also a type, is empty. And if there's one thing we know about natural numbers, it's that there's lots of them.

So we must tell it what its members are. We COULD do it like this:

    zero : ℕ
    one : ℕ
    two : ℕ
    three : ℕ

But how tedious would that be? Instead, we use induction:

    data ℕ : Set where
      zero : ℕ
      succ : ℕ → ℕ

So zero is a member of ℕ, like before. But the successor of any member of ℕ is also in ℕ. So succ zero, and succ (succ zero), and succ (succ (succ zero)).

People see the arrow and they think functions. That's not how I think about it. Functions are great, but when I hear function I think C functions, which are tiny little instructions for tiny little machines and don't make any sense in a dependently-typed world. So instead, when I see the arrow, I think of it as a way of telling the type checker this fact: wherever you see the string "succ n", assign it the type ℕ but ONLY IF "n" is also of type ℕ. Think of the arrow as the lambda abstractor, and the space as the lambda applicator. Maybe that helps a little; maybe it hurts a lot.

We plunge forward!

    _+_ : ℕ → ℕ → ℕ
    zero + n = n
    (succ n) + m = succ (n + m)

More substitution rules. Whenever you see zero + n, the typechecker is allowed to replace it with n. Whenever you see (succ n) + m, it's allowed to replace it with succ (n + m). Allowed, but not required: the typechecker should only use these rules if it needs help typechecking a string it doesn't recognize.

We would now like to describe even numbers in Agda. Could we just define

    data even : Set

and thus give Set its second element? No! Because evenness is a proposition! It _depends_ on our other type ℕ, which is where we get our fancy dependently-typed programming from. We really want to say this:

    data even 0 : Set
    data even 2 : Set
    data even 4 : Set

But that would be tedious. So instead we say

    data even : ℕ → Set

If the arrow means substitution, like before, we can think about it like this: wherever we see the string "even n", assign it the type Set (a.k.a. let it be a valid type) but ONLY IF "n" has type ℕ. Is that true for any n? Nope. If it were, we would write

    data even : ℕ → Set where
      even_everything_is_goddamn_even : (n : ℕ) → even n

Instead, we first assume that "even n" is true for no n at all. That's right: a function with an empty domain. We fill in the domain gradually.

    data even : ℕ → Set where
      even_zero : even zero
      even_succ : (n : ℕ) → even n → even (succ (succ n))

First, even_zero means that "even zero" is a member of Set. zero is a natural number, so this typechecks. Second, suppose you have n : ℕ. And suppose even n typechecks. Then even (succ (succ n)) also typechecks.

It's simply another form of induction, although slightly trickier because we're trying to fill out a proposition.

Anyway. Hope that helps more than it confuses. I think Learn You An Agda makes it more confusing than it really is by not explaining Set and by using that weird postfix notation. Set is just Haskell's asterisk kind "*" if you know Haskell; otherwise it's pretty hard to grok for a while.

You should also check out Benjamin Pierce's textbook on Coq programming, which expands on a lot of these details. (Coq and Agda and Idris are all pretty much built on the same ideas.)
 In his example, ZERO is a value of type zero even. ZERO is the "proof" that zero is even. Of course it's not much of a proof; rather, it's true by definition. Similarly, STEP zero ZERO is of type suc (suc zero) even and thus a "proof" that two is even.
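Putting the pieces together, the tutorial's proof₁ goal can be filled in by stacking STEPs (a sketch, reusing the tutorial's zero/suc/ZERO/STEP definitions from above):

```agda
proof₁ : suc (suc (suc (suc zero))) even
proof₁ = STEP (suc (suc zero)) (STEP zero ZERO)
```

Here STEP zero ZERO proves that two is even, and the outer STEP lifts that into a proof that four is even.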
 Recently learned this language in a functional programming course. Very cool language. The Curry-Howard isomorphism shows that programs correspond to mathematical proofs (and types to propositions), and you can really feel it here. I could see this language being very useful in math-y applications such as cryptography and verification.
 Is there any reason to use Agda over Coq or vice versa?
 http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Main.AgdaVs...

You might also want to compare to Idris (and maybe Epigram, but it's no longer in development).

http://www.quora.com/How-does-Idris-compare-to-other-depende...
http://www.reddit.com/r/haskell/comments/132kg0/agda_epigram...
http://www.reddit.com/r/dependent_types/comments/q8n2q/agda_...
http://stackoverflow.com/questions/9472488/differences-betwe...

All have pros and cons; I like that Idris can compile to JS (https://raichoo.github.io/posts/2014-01-28-improved.html).
 How does Agda compare to the likes of Clojure, Haskell, ML, Scheme etc? What are the best reasons to learn it?
 The article covers this here: http://williamdemeo.github.io/2014/02/27/learn-you-an-agda/#...

Some excerpts:

> Agda is a programming language that uses dependent types ... you can actually include values inside a type. For example, the List type constructor can be parameterized by both the type of its contents and the length of the list in question ... . This allows the compiler to check for you to make sure there are no cases where you attempt to call head on a potentially empty list, for example.

> If I can come up with a function of type Foo -> Bar (and Agda says that it’s type correct) that means that I’ve written not only a program, but also a proof by construction that, assuming some premise Foo, the judgment Bar holds.

> Proofs work in concurrent scenarios. You can’t reliably unit test against race conditions, starvation or deadlock. All of these things can be eliminated via formal methods.

> Thanks to Curry-Howard, Agda can also be used as a proof language, as opposed to a programming language. You can construct a proof not just about your program, but anything you like.
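The length-indexed list from the first excerpt can be sketched like so (a sketch; the Agda standard library's Data.Vec provides essentially this type, usually under the name Vec):

```agda
open import Data.Nat using (ℕ; zero; suc)

-- A list whose type records its length.
data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)

-- head only accepts vectors of length (suc n), so calling it
-- on a potentially empty vector is a compile-time type error.
head : ∀ {A n} → Vec A (suc n) → A
head (x ∷ _) = x
```

There is no case for [] in head, and none is needed: no empty vector can ever have type Vec A (suc n).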
 Is there a comparison to Idris? It sounds like they're very similar, and more of the people I've followed seem to be interested in Idris than Agda.
 I always wondered about dependent types. Even though it was a little bit too technical for me, at least I have some idea about what it really is.By the way, Z notation rocks.
 Ok, that's neat. Is there a proof checker and an editor/IDE other than Emacs' Agda mode?Because I'm probably going to stick with Coq for its tooling.
 I posted this on lobste.rs and somebody stole my link... Hacker News always gets so many more comments.
 Not guilty! I picked this one up on Twitter this afternoon, but it's possible the lineage traces back to your lobste.rs post. (I have no lobste.rs account, though not for lack of interest -- while sparse, the comments there seem to be of very high quality.)
 When are they going to open registration back up on there?