Hacker News

Robert Harper argues that dynamically typed languages are languages of a single type:

And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type! Every single value has to be a value of that type, you have no choice! http://existentialtype.wordpress.com/2011/03/19/dynamic-lang...

In the same article, Harper also wrote the now oft-quoted line:

To borrow an apt description from Dana Scott, the so-called untyped (that is “dynamically typed”) languages are, in fact, unityped.

Now the author of the linked article wrote another piece in 2012 that rebuts this:

It therefore makes no sense to say that a language is unityped without qualifying whether that relates to its static or dynamic type system. Python, for example, is statically unityped and dynamically multityped; C is statically multityped and dynamically unityped; and Haskell is statically and dynamically multityped. Although it's a somewhat minor point, one can argue (with a little hand waving) that many assembly languages are both statically and dynamically unityped.

http://tratt.net/laurie/blog/entries/another_non_argument_in...

It is worth noting:

Sam Tobin-Hochstadt argues the uni-typed classification is not very informative in practice.

The uni-typed theory reveals little about the nature of programming in a dynamically typed language; it's mainly useful for justifying the existence of "dynamic" types in type theory.

https://medium.com/@samth/on-typed-untyped-and-uni-typed-lan...

http://stackoverflow.com/a/23286279/15441




Another way to look at it is that the "uni-typed" pejorative assumes that types are intrinsic to the semantics of a language. Optional type systems reflect the reverse -- that types are an extrinsic feature that should be taken care of by an external library, potentially allowing the choice between multiple competing type checkers or no type checker at all.

http://www.lispcast.com/church-vs-curry-types


The "unityped" descriptor is not meant to be pejorative. It's an observation that, from a type-theoretic point of view, every expression in an untyped language can be given the same type. In a dynamically checked language, that type is the sum of all possible dynamic types (in Lua, for example: [1]). In an unchecked language, the type can be something else (in BCPL, for example, it's simply word; in the untyped lambda calculus, it's function). It's important to note that, according to type theory, types are a syntactic feature, not a semantic one.

It's just an observation rather than a judgment, and -- again from a type-theoretic perspective -- it's true. It's nothing to be offended about!

Note that if you were to write a type checker for a unityped language, then every program would trivially type check. So, while technically accurate, the notion of a language being "unityped" is not very useful. It's more of an intellectual curiosity than anything.

[1]: https://github.com/LuaDist/lua/blob/lua-5.1/src/lobject.h#L5...
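The "trivial type checker" point can be made concrete. Here is a toy sketch in TypeScript (all names are hypothetical, chosen only for illustration): the checker assigns every expression the one and only type, so checking can never fail.

```typescript
// A toy illustration of "unityped": the checker assigns every
// expression the same single type, so checking trivially succeeds.
type Ty = "univ"; // the one and only type

type Expr =
  | { kind: "lit"; value: unknown }
  | { kind: "app"; fn: Expr; arg: Expr };

function typecheck(e: Expr): Ty {
  if (e.kind === "app") {
    // Sub-expressions necessarily have type "univ" as well.
    typecheck(e.fn);
    typecheck(e.arg);
  }
  return "univ";
}
```

Even an obviously nonsensical application, like applying a string to a number, gets type "univ": the interesting checks all happen at run time.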


You may not mean it as a pejorative, but it's clear that Harper did, and so do those who adopt the term as a rhetorical device. I also have to question your use of the phrase "according to type theory". You are implying that there is a single type theory with a single view on the nature of types. I recommend the link I put in my previous comment.


Harper is frequently pejorative. He's also accurate. He has an agenda, and it is difficult to interpret him without bias unless you account for it. But if you do, you realize you can no longer argue with the factual points he makes.

That said, in the notion of "static types as type theory defines them," it is completely true that dynamic typing is a mode of use of static typing, one captured entirely by large (potentially infinite or anonymous) sum types. Seeing it this way gives dynamic languages a rich calculus.

Refusing to see it that way doesn't strip them of that calculus; it just declares that you're either not willing to use it or not willing to admit you're using it. Both are fine by me, but they're occasionally treated pejoratively because... well, it's not quite clear why you would refuse. There's benefit with no cost.

Sometimes people then extend this to puzzlement about why you don't adopt richer types. Here, at least, there's a clear line to cross, because richer types are where you begin to have to pay.

---

The "Church view" and "Curry view" are psychological—the Wikipedia article even stresses this! So, sure, you can take whatever view you like.

But at the end of the day type systems satisfy theories. Or they don't. That satisfaction/proof is an essential detail extrinsic to your Churchiness or Curritude.


Can you elaborate a bit on what the benefits of embracing that calculus are? Or maybe provide some pointers? I'm having trouble imagining what utility there is in treating untyped languages as (uni)typed. I said in another comment that it's pretty much a useless notion, but I'm genuinely curious if I'm overlooking something.


Probably the best one I can think of is that it gives you a natural way of looking at gradual typing. Doing gradual typing well is still tough, but you can see it as an obvious consequence of the unityping argument.


I see. Looks like I have some reading to do :) Thanks.


The article you linked makes little sense to me. They do mention Clojure's core.typed as an example of the "Curry perspective", as opposed to the "Church perspective", so I decided to check that out.

The core.typed user guide uses the same definition[1] of "type" that I'm familiar with from "Church" type theory. It even makes the same observation that Clojure is unityped! It seems to me that core.typed does not use a "different" type theory at all. Rather, it is a testament to the great extensibility of the Lisps that a type system can be bolted on as a library. Perhaps unfortunately, most languages are not so flexible.

[1]: https://github.com/clojure/core.typed/wiki/User-Guide#what-a...

P.S.: I re-read Harper's original unitype article, and it does seem pretty combative. I still don't consider the term pejorative myself. I dislike it, but more because of its lack of utility than for any other reason.


It's not meant to "inform" very much. Just contextualizes what type safety means!

tel.github.io/2014/08/08/six_points_about_type_safety/

http://tel.github.io/2014/07/08/all_you_wanted_to_know_about...


I appreciate your coverage of type systems. One thing I want to point out is that type systems are one way to model an information system. They have become the dominant way in the past 30 or so years, but they are not the only way.

For example, let's look at genetics. Genetics is based on the lineage of reproduction & mutations. The genetics of each organism in the lineage tree is essentially information. This information can be combined with another organism's (sexual reproduction) to create a new organism. Also, an organism's genetics change over the lifetime of the organism.

A type system could model the lineage of each organism in the tree, but it does not seem beneficial to model it that way from an abstract point of view. There may be practical reasons to model genetic lineage with types, but I can't think of any. My main reason is that types ultimately reduce the resolution of the genetic information.

I feel like Type systems are akin to imposing a bureaucracy on individuals.


I appreciate this point of view. My perspective is absolutely that types are just one form of analysis and I don't mean to suggest that they should edge out others.

On the other hand, I think there genuinely is a kind of universality to types which causes them to (a) be as popular as they are and (b) apply in many (if not most, if not all) domains.

The essential core of this is that types/logic/category theory seem to be making significant headway at paving a new foundation of mathematics via MLTT/intuitionism/CT. These are dramatically general frameworks for creating structures and theories and providing a space for them to "hang together" in the obviously correct ways.

The generality and breadth of such a program is unbelievable! I won't try to convince anyone of its universality if they don't want to accept it—but I will suggest that the richness of MLTT/Intuitionism/Category Theory is sufficient to "probably" capture what you're interested in.

The cost is usually complexity. You could, for instance, "type" someone as their total genetic code. You could even type them as the union of every cell and the genetic code active in that cell. This is obviously incredibly, perhaps uselessly complex. But it's a great start. Then if you have quotients (which nobody has quite built into a type theory yet) you just start cutting down that complexity until you end up with whatever level of granularity you decide is correct.


Thank you for pointing me in the direction of MLTT/intuitionism/CT. I have a lot of good processing ahead of me.

One more thought about types. Stereotypes are a usage of types in our thinking. Stereotypes are a relatively low resolution (and low information load) way of defining a person. Stereotypes are a type system to model people.

Type systems seem to be brittle when it comes to increasing or decreasing resolution. It seems like type systems need to become more elaborate to account for the different resolution scopes that combine to ultimately make a concept. Just as when imposing a type on an individual, one needs to account for the many nuances of that person. These nuances are sometimes applicable to other people with similar nuances, and sometimes not.


One way to look at this is that you have types which classify a structure/theory. In some sense, it works like this—you have generators which allow you to inductively define a large set of "things", then you have laws which cut those things down.

For instance, the theory of monoids gives rise to the generators

    zero : AMonoid
    +    : AMonoid -> AMonoid -> AMonoid
satisfying the laws

    zero + a = a
    a + zero = a
    a + (b + c) = (a + b) + c
Types can be refined and combined in principled ways by glomming the generators and laws together. Ultimately, what you're looking for is a notion of the set generated by all of the generators satisfying all of the laws plus any "coherence" laws which help to define what happens when two laws interact.

Unfortunately, nobody is very good at this today. It's a goal to be able to automatically achieve the above system in such a way that isn't O(n^2) effort to encode.
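The generators-and-laws presentation above can be sketched in code. Here is a hypothetical TypeScript rendering (names are mine, not standard): the generators become an interface, and, since a practical type system can't enforce equations, the laws become runtime checks over sample elements.

```typescript
// Generators of the theory of monoids: zero and +.
interface Monoid<A> {
  zero: A;
  plus: (a: A, b: A) => A;
}

// One structure satisfying the theory: integers under addition.
const intSum: Monoid<number> = { zero: 0, plus: (a, b) => a + b };

// Check the three monoid laws on a finite set of samples.
// (Assumes elements of A compare meaningfully with ===.)
function checkLaws<A>(m: Monoid<A>, samples: A[]): boolean {
  const unit = samples.every(
    (a) => m.plus(m.zero, a) === a && m.plus(a, m.zero) === a
  );
  const assoc = samples.every((a) =>
    samples.every((b) =>
      samples.every(
        (c) => m.plus(a, m.plus(b, c)) === m.plus(m.plus(a, b), c)
      )
    )
  );
  return unit && assoc;
}
```

A candidate that breaks the laws, such as `{ zero: 0, plus: (a, b) => a - b }`, fails these checks on any nonzero samples, which is the "cutting down" the laws perform.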


This notion of dynamically typed languages being a unityped static type system is how a static type system models a dynamic type system. If anything, it shows that dynamic type systems have nuance that can only be modeled as a "unitype" in a static type system, thereby exposing limits to static typing. The "unitype" also shows that types are orthogonal & incidental to solving the real problem. Types may or may not help the programmer, but types are not the direct solution.

I also object to Robert Harper's convoluted rationalization that dynamic type systems are less flexible because it goes against my practical experience. I'm not claiming that I'm the authority when it comes to programming languages. However, I gravitate toward dynamic languages in my career because I feel they offer me more flexibility & less incidental complexity than static languages.


That rebuttal is confused and unconvincing. It's easy, although clumsy and verbose, to write a very close equivalent of his "impossible to type check" code example in a static language. Define a sum type that ranges over the necessary values, define a few functions over that type that perform a case analysis corresponding to dynamic type checks, etc. That it's possible to do this is exactly the basis of Harper's argument that dynamic languages are subsumed within static languages.

Here's a concrete didactic example of the approach:

    http://okmij.org/ftp/Scheme/scheme.ml
Static languages are not good dynamic languages (as currently designed), but they do subsume them.
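The recipe is roughly this. Below is a sketch in the spirit of the linked scheme.ml, transposed here to TypeScript (names are illustrative): one sum type ranges over all dynamic values, and each primitive performs the case analysis that a dynamic language does at run time.

```typescript
// One sum type ranging over all the dynamic values we need.
type Dyn =
  | { tag: "nil" }
  | { tag: "bool"; value: boolean }
  | { tag: "num"; value: number }
  | { tag: "str"; value: string }
  | { tag: "fun"; value: (x: Dyn) => Dyn };

// A "dynamic type error" is just the fall-through case.
function add(a: Dyn, b: Dyn): Dyn {
  if (a.tag === "num" && b.tag === "num") {
    return { tag: "num", value: a.value + b.value };
  }
  throw new TypeError("add: expected numbers");
}

function apply(f: Dyn, x: Dyn): Dyn {
  if (f.tag === "fun") return f.value(x);
  throw new TypeError("apply: not a function");
}
```

Every expression has type Dyn, so the whole program type checks statically; the dynamic checks survive as the pattern matches inside each primitive.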


This is not precisely about Tratt's response, but there was a great discussion on another Bob Harper post where Matthias Felleisen argued that while dynamic languages are semantically unityped, at the level of pragmatics (in the linguistic sense) they allow the programmer to freely mix various implicit type disciplines (http://existentialtype.wordpress.com/2014/04/21/bellman-conf...). The cost is of course that the programmer gets essentially no assistance in verifying that their implicit types are consistent.


There's no reason to believe that you can't freely mix implicit type disciplines in "statically typed" languages, either!

It's just (a) oftentimes you can actually reify those implicit schemes into actual types and it's often worth the cost to do so and (b) if your compiler depends upon a choice of type semantics in order to function and your implicit system is inconsistent with that selected type semantics then your analysis may not guarantee compilation!

But again: dynamic types are a mode of use of static types.

---

To be clear, this is the thrust of Bob's response in the linked thread anyway.



