Type systems and logic (hackerschool.com)
62 points by luu on Dec 8, 2014 | 12 comments

    > The most straightforward way to represent the proposition
    > “not A” is with the type A -> ⊥, where ⊥ is what’s called an
    > empty type – a type with no values. (This type’s symbol
    > comes from mathematics. It is usually pronounced
    > “bottom.”) A -> Void can be read as “if we have an A, we
    > can use it to make something that doesn’t exist”
Unless the author means something different from what I think, using "void" here is very confusing. "void", as it appears in C, Java, et al., is not at all like a bottom type. A bottom type means "there can be no value here"; in other words, a function whose return type was bottom could not return. Coming from an imperative mindset, one way to understand that is to consider this function:

    foo() { throw "!"; }
You can never get a value out of this function. If you do:

    var result = foo();
result will never get assigned. So, here, foo's return type really is "bottom". This is different from void—a function that returns successfully but emits no value.

Instead of corresponding to "bottom", void maps to the unit type: the empty tuple.
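This distinction is actually visible in TypeScript, where `never` plays the role of bottom and `void` something close to unit. A rough sketch of my own (not from the article):

```typescript
// `never` behaves like the bottom type: a function whose return
// type is `never` can never return normally.
function foo(): never {
  throw new Error("!");
}

// `void` (like C's void) is closer to the unit type: the function
// returns successfully but carries no useful value.
function bar(): void {
  /* returns, but with nothing to give back */
}

let reached = false;
try {
  const result = foo(); // `result` is never assigned
  reached = true;
} catch {
  // control always ends up here instead
}
console.log(reached); // false: foo never produced a value
```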

In the type theory literature, it is rather the other way around: Void is synonymous with the bottom type. I don't know who has first claim to the name "void", but from the type theory perspective, it is C, Java, et al. who have it backwards.

Haven't read the article yet, but it looks like the author accidentally started using `Void` before the section where it's defined. The uninhabited type being named "Void" is already familiar to many Haskell programmers, e.g. http://hackage.haskell.org/package/void
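That package's `absurd :: Void -> a` can be transliterated into TypeScript (a sketch of my own, using `never` for the uninhabited type):

```typescript
// `never` plays the role of Haskell's Void: a type with no values.
type Void = never;

// "From nothing, anything": since no argument can ever be supplied,
// we may claim any return type we like. `never` is assignable to
// everything, so this typechecks.
function absurd<A>(v: Void): A {
  return v; // unreachable in practice: no value of `never` exists
}
```

`absurd` can never actually be called, because no value of type `never` can be produced in the first place.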


You are trying to equate mathematical functions and software functions (i.e. co-routines). They are not the same thing.

A co-routine does not have a domain and co-domain. When we declare a function type

F(T_1, T_2) T_3, T_4, ... {}

we are saying that a co-routine begins in a pair type (T_1, T_2), it ends in a function type (return) mapping the T_3 and T_4 values to a pair type (Pointer, Data) for both values (as a simplified view).

> So, here, foo's return type really is "bottom".

No, the return co-routine is a routine that finishes successfully but "emits" no value. There are no bottom or unit types anywhere in this. Sorry.

I think you're trying to clarify something here, but I can't make heads or tails of what you're saying.

The article uses "⊥" in one sentence and "Void" in the next without clarifying if "Void" is synonymous with bottom or if a new concept is being introduced.

Either way, my point was that programmers coming from the large slew of Algol68-derived languages will see "void" here and likely think it has something to do with the "void" they're familiar with when it does not.

As far as I know, in practice, "void" in C, Java, et al. works more like the unit type than like a bottom type and/or this not-very-well-defined-by-the-article void type.

If I'm wrong, I'd like to know more, but preferably in language a little more familiar to a working programmer than describing software functions as "co-routines which do not have a domain and co-domain".

This is a great article; it really clears up some holes in my understanding of constructive logic and the concept of dependent types. I wonder if there is another article that explains the basics of homotopy type theory in a similarly accessible way?

Here's a rough summary. Homotopy type theory is about proofs of equality, and proofs of equality can have computational content. For example, a proof that two types A, B are equal is given by an isomorphism f : A -> B, f^-1 : B -> A, such that f . f^-1 = id (in homotopy type theory this is further generalized to the concept of equivalence). A proof that two functions f, g : A -> B are equal is given by a function that takes an x : A and returns a proof that f x = g x, for every such x.

In particular, in homotopy type theory it is not the case that if two things are proven equal then they are identical. For example, the type of integers is equal to the type of naturals (the non-negative integers), because you can give an isomorphism, but they are not identical, because they are two different types. On the other hand, the type of booleans is neither identical nor equal to the type of integers, because there isn't an isomorphism.

Homotopy type theory also allows user-defined types where you can have equality between two values that aren't identical. These are called higher inductive types. An example is the type of finite sets represented as lists: the lists [1,2,3] and [2,3,1] aren't identical, but the sets they represent are equal.

One of the key points of homotopy type theory is that if you want to give a function f : A -> B where the type A has values that are equal but not identical, then you have to give a proof that whenever x1 : A is equal to x2 : A, then f x1 is equal to f x2. In other words, functions have to respect equalities. For example, we can define a function on finite sets by giving a function on finite lists, but to make it a function on finite sets we have to prove that its output does not depend on the order of the input list. Similar ideas crop up all over the place; the most well known is perhaps the idea that a mathematical description of physics should not depend on the coordinate system you choose.
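The naturals-integers isomorphism can be sketched concretely. A rough TypeScript version of my own, using plain `number`s as stand-ins for the two types, with the pairing 0,1,2,3,4,... ↔ 0,1,-1,2,-2,...:

```typescript
// One half of the isomorphism: even naturals map to non-positive
// integers, odd naturals to positive integers.
const natToInt = (n: number): number => (n % 2 === 0 ? -n / 2 : (n + 1) / 2);

// The inverse mapping.
const intToNat = (z: number): number => (z > 0 ? 2 * z - 1 : -2 * z);

// Round-tripping is the identity in both directions; this pair of
// functions is the "computational content" of the equality Nat = Int.
console.log(intToNat(natToInt(7))); // 7
console.log(natToInt(intToNat(-3))); // -3
```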

Not an article, but this class[1] has video lectures for the course material.

[1] https://www.cs.cmu.edu/~rwh/courses/hott/

> a type system corresponds to a particular logic system.

Then the article proceeds to not concretely define what a "Type System" is. They just say: oh, like those things that Haskell and OCaml do.

Then they just jump to concluding:

> Type systems don’t correspond to classical logic, but to something called “constructive logic”.

Wait, what? Why? And please don't explain constructive logic as your explanation of 'why'. That doesn't explain the presumption above. In fact, you still have not even told us what exactly you mean by Type System.

> correspond to propositional logic

So, by Type Systems we mean a Heyting Algebra?

> dependent types are first-order predicate logic

So dependent types are algebras that carry the same properties as first-order predicate logic? But I thought Type Systems were the logic?

> Type-based logic forces us to use constructive logic rather than classical logic

This is a completely different conclusion than the premise of the article.

It seems that the author has a good understanding of a not-well-understood relationship, i.e. the relationship between types and what we refer to as "a logic".

Well, there isn't any generally accepted definition of what a type system is. _Types and Programming Languages_ (2002) starts out with the sentence "a type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute", a definition which is both not very precise ("tractable"?) and not general enough to cover dependent types (where we care about both the presence and absence of behaviors).

Similarly, there isn't really any definition of a logic. Some authors talk about a proof system + a semantics, which is also not precise, and also not general enough (some constructivists don't want the semantics).

So it seems hard to do better than this?

> "a type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute" … not general enough to cover dependent types (where we care about both presence and absence of behaviors).

As someone who's comfortable with an informal approach to mathematical logic but has never done any programming with dependent types, I wonder if this may be hitting on a fundamental point.

Certain presences can be interpreted as absences, i.e., the absence of their absence, in just the same way as certain statements can be interpreted as the negations of their negations. (In classical logic, but not in constructive logic, all statements can be so interpreted.)

Is there any chance that the 'presences' needed in dependent type theory are those that can be expressed as the 'absence of an absence' (something like that it is not so important what characteristics an object does have, as what assumptions about the object can't go wrong)?
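The "negation of a negation" reading can be made concrete with types. A sketch of my own in TypeScript, encoding negation as a function into the bottom type (`never`):

```typescript
// Negation as "A implies bottom", with `never` standing in for ⊥.
type Not<A> = (a: A) => never;

// A -> ¬¬A is constructively valid: given a value of A, we can
// refute any claimed refutation of A by feeding it that value.
function doubleNegIntro<A>(a: A): Not<Not<A>> {
  return (notA) => notA(a);
}

// The converse, ¬¬A -> A, has no constructive implementation: from a
// function accepting refutations we cannot conjure an actual value of A.
// That gap is exactly the classical/constructive divide.
console.log(typeof doubleNegIntro(42)); // "function"
```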

The "presence" I had in mind was just "this function will eventually return a value", i.e. what you get from having a type for total functions. (On the programming side, this ties into a discussion about "safety" versus "liveness" properties).
