Hacker News

It's not meant to "inform" very much. Just contextualizes what type safety means!

tel.github.io/2014/08/08/six_points_about_type_safety/

http://tel.github.io/2014/07/08/all_you_wanted_to_know_about...




I appreciate your coverage of type systems. One thing I want to point out is that type systems are only "a way" to model an information system. They have become the dominant way in the past 30 or so years, but they are not the only way.

For example, let's look at genetics. Genetics is based on the lineage of reproduction and mutation. The genetics of each organism in the lineage tree is essentially information. This information can be combined with that of another organism (sexual reproduction) to create a new organism. An organism's genetics also change over its lifetime.

A type system could model the lineage of each organism in the tree; however, it does not seem beneficial to model it that way from an abstract point of view. There may be practical reasons to model genetic lineage with types, but I can't think of any. My main objection is that types ultimately reduce the resolution of the genetic information.

I feel like type systems are akin to imposing a bureaucracy on individuals.


I appreciate this point of view. My perspective is absolutely that types are just one form of analysis and I don't mean to suggest that they should edge out others.

On the other hand, I think there genuinely is a kind of universality to types which causes them to (a) be as popular as they are and (b) apply in many (if not most, if not all) domains.

The essential core of this is that types/logic/category theory seem to be making significant headway toward a new foundation of mathematics via MLTT (Martin-Löf type theory)/intuitionism/category theory. These are dramatically general frameworks for creating structures and theories and providing a space for them to "hang together" in the obviously correct ways.

The generality and breadth of such a program is unbelievable! I won't try to convince anyone of its universality if they don't want to accept it—but I will suggest that the richness of MLTT/Intuitionism/Category Theory is sufficient to "probably" capture what you're interested in.

The cost is usually complexity. You could, for instance, "type" someone as their total genetic code. You could even type them as the union of every cell and the genetic code active in that cell. This is obviously incredibly, perhaps uselessly complex. But it's a great start. Then if you have quotients (which nobody has quite built into a type theory yet) you just start cutting down that complexity until you end up with whatever level of granularity you decide is correct.
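As a loose illustration of the quotient idea (my own toy sketch in Python, not something from a real type theory — the genomes and the equivalence relation are entirely made up): start with an absurdly fine-grained representation, then collapse it along an equivalence relation to get the coarser "type" you actually want to work with.

```python
# Deliberately over-detailed representation: an organism "typed" as its raw genome.
genomes = {
    "alice": "AGTCCGTA",
    "bob":   "AGTCCGTT",
    "carol": "TTACGGAA",
}

# A "quotient" forgets detail along an equivalence relation.
# Hypothetical relation for illustration: organisms are equivalent
# when their genomes share the same first three bases.
def quotient(genome: str) -> str:
    return genome[:3]

# The equivalence classes are the coarser, lower-resolution type.
classes = {}
for name, genome in genomes.items():
    classes.setdefault(quotient(genome), []).append(name)

print(classes)  # alice and bob collapse into one class; carol is in another
```

The point is only the shape of the operation: the full-resolution data is still there, and you choose how much of it the quotient identifies.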


Thank you for pointing me in the direction of MLTT/intuitionism/CT. I have a lot of good processing ahead of me.

One more thought about types: stereotypes are a use of types in our thinking. A stereotype is a relatively low-resolution (and low-information) way of describing a person. In effect, stereotypes are a type system for modeling people.

Type systems seem to be brittle when it comes to increasing or decreasing resolution. It seems they need to become more elaborate to account for the different resolution scopes that combine to make up a concept. Just as when imposing a type on an individual, one needs to account for that person's many nuances, and those nuances may or may not apply to other people with similar ones.


One way to look at this is that you have types which classify a structure/theory. In some sense, it works like this—you have generators which allow you to inductively define a large set of "things", then you have laws which cut those things down.

For instance, the theory of monoids gives rise to the generators

    zero : AMonoid
    +    : AMonoid -> AMonoid -> AMonoid
satisfying the laws

    zero + a = a
    a + zero = a
    a + (b + c) = (a + b) + c
Types can be refined and combined in principled ways by glomming the generators and laws together. Ultimately, what you're looking for is a notion of the set generated by all of the generators satisfying all of the laws plus any "coherence" laws which help to define what happens when two laws interact.
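To make the generators-plus-laws picture concrete, here is a small sketch in Python (my own illustration, not from the thread): terms are built freely from `zero` and `+` over some atoms, and the three monoid laws are exactly the statement that two terms are equal when they flatten to the same list of atoms — identity removes `zero`, associativity forgets the bracketing.

```python
from dataclasses import dataclass

# Free terms generated by zero and + over named atoms.
@dataclass(frozen=True)
class Zero:
    pass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Plus:
    left: object
    right: object

def normalize(term):
    """Flatten a free term to its list of atoms.

    The monoid laws identify precisely the terms with the same
    flattening: the identity laws drop Zero, and associativity
    erases how the +'s were parenthesized.
    """
    if isinstance(term, Zero):
        return []
    if isinstance(term, Atom):
        return [term.name]
    return normalize(term.left) + normalize(term.right)

a, b, c = Atom("a"), Atom("b"), Atom("c")

# a + (b + c) and (a + b) + c are the "same" element of the monoid:
print(normalize(Plus(a, Plus(b, c))) == normalize(Plus(Plus(a, b), c)))  # True
# zero + a collapses to a (left identity):
print(normalize(Plus(Zero(), a)) == normalize(a))  # True
```

This is the quotient-by-laws construction done by hand for one theory; the comment's point is that doing it generically, for arbitrary combinations of generators and laws, is still hard.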

Unfortunately, nobody is very good at this today. The goal is to be able to build the above kind of system automatically, in a way that doesn't take O(n^2) effort to encode.



