And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type! Every single value has to be a value of that type, you have no choice!
In that same article, Bob also wrote the now oft-quoted:
To borrow an apt description from Dana Scott, the so-called untyped (that is “dynamically typed”) languages are, in fact, unityped.
Now the author of the linked article wrote another piece in 2012 that rebuts this:
It therefore makes no sense to say that a language is unityped without qualifying whether that relates to its static or dynamic type system. Python, for example, is statically unityped and dynamically multityped; C is statically multityped and dynamically unityped; and Haskell is statically and dynamically multityped. Although it's a somewhat minor point, one can argue (with a little hand waving) that many assembly languages are both statically and dynamically unityped.
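To make the Python half of that claim concrete, here is a minimal sketch (the variable names are mine): at the static level a Python binding accepts any value whatsoever, which is the "one static type"; at runtime every value carries its own distinct dynamic type that can be inspected.

```python
# Statically unityped: any binding accepts any value, and no static
# checker objects when the value's kind changes.
x = 1          # x holds an int...
x = "hello"    # ...and may later hold a str.

# Dynamically multityped: each value carries its own runtime type,
# which we can inspect and branch on.
print(type(1))        # <class 'int'>
print(type("hello"))  # <class 'str'>
print(isinstance(1, int), isinstance("hello", int))  # True False
```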
It is worth noting:
Sam Tobin-Hochstadt argues the uni-typed classification is not very informative in practice.
The uni-typed theory reveals little about the nature of programming in a dynamically typed language; it's mainly useful for justifying the existence of "dynamic" types in type theory.
It's just an observation rather than a judgment, and -- again from a type-theoretic perspective -- it's true. It's nothing to be offended about!
Note that if you were to write a type checker for a unityped language, then every program would trivially type check. So, while technically accurate, the notion of a language being "unityped" is not very useful. It's more of an intellectual curiosity than anything.
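A minimal sketch of that trivial checker (the function name is hypothetical): with exactly one type, there is nothing to reject, so checking can never fail.

```python
# In a unityped language there is exactly one type; call it "Value".
# A type checker then has nothing to reject: every expression checks.
def typecheck(expr):
    """Assign the sole type to any expression; never raises."""
    return "Value"

# Every program trivially type checks.
for program in [42, "hello", [1, 2, 3], lambda x: x]:
    assert typecheck(program) == "Value"
```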
That said, in the sense of static-types-as-type-theory-defines-them, it is a completely true point that dynamic typing is a mode of use of static typing, one captured entirely by large (potentially infinite/anonymous) sum types. Viewing it this way gives dynamic languages a rich calculus.
Refusing to view it this way doesn't strip them of that calculus; it just declares that you're either unwilling to use it or unwilling to admit you're using it. Both of which are fine by me, but occasionally treated pejoratively because... well, it's not quite clear why you would refuse, since there's benefit at no cost.
Sometimes people extend this to puzzlement about why you don't adopt richer types. Here, at least, there is a clear line to cross, because richer types do begin to have a cost.
The "Church view" and "Curry view" are psychological—the Wikipedia article even stresses this! So, sure, you can take whatever view you like.
But at the end of the day, type systems satisfy theories, or they don't. That satisfaction/proof is an essential detail extrinsic to your Churchiness or Curritude.
The core.typed user guide uses the same definition of type that I'm familiar with from "Church"-style type theory. It even makes the same observation that Clojure is unityped! It seems to me that core.typed does not use a "different" type theory at all. Rather, it is a testament to the great extensibility of the Lisps that a type system can be bolted on as a library. Perhaps unfortunately, most languages are not so flexible.
P.S.: I re-read Harper's original unitype article, and it does seem pretty combative. I still don't consider the term pejorative myself. I dislike the term, but because of its lack of utility above any other reason.
For example, let's look at genetics. Genetics is based on a lineage of reproduction and mutations. The genetics of each organism in the lineage tree is essentially information. This information can be combined with that of another organism (sexual reproduction) to create a new organism. Also, an organism's genetics change over the lifetime of the organism.
A type system could model the lineage of each organism in the tree; however, it does not seem beneficial to model it that way from an abstract point of view. There may be practical reasons to model genetic lineage with types, but I can't think of any. My main reason is that types ultimately reduce the resolution of the genetic information.
I feel like type systems are akin to imposing a bureaucracy on individuals.
On the other hand, I think there genuinely is a kind of universality to types which causes them to (a) be as popular as they are and (b) apply in many (if not most, if not all) domains.
The essential core of this is that types/logic/category theory seem to be making significant headway at paving a new foundation of mathematics via MLTT/intuitionism/CT. These are dramatically general frameworks for creating structures and theories and providing a space for them to "hang together" in the obviously correct ways.
The generality and breadth of such a program is unbelievable! I won't try to convince anyone of its universality if they don't want to accept it—but I will suggest that the richness of MLTT/Intuitionism/Category Theory is sufficient to "probably" capture what you're interested in.
The cost is usually complexity. You could, for instance, "type" someone as their total genetic code. You could even type them as the union of every cell and the genetic code active in that cell. This is obviously incredibly, perhaps uselessly complex. But it's a great start. Then if you have quotients (which nobody has quite built into a type theory yet) you just start cutting down that complexity until you end up with whatever level of granularity you decide is correct.
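A hypothetical illustration of that "cut down by quotients" move (all names and the mapping itself are made up for illustration): a high-resolution type, here a full genome as a string, is quotiented to a coarser equivalence class by a function that picks a canonical representative.

```python
# Hypothetical sketch: a quotient is modeled as a map sending each
# high-resolution value to a canonical coarse representative.
def genome_to_blood_group(genome: str) -> str:
    """A stand-in 'quotient map': collapse a genome to one coarse trait.
    (The rule here is invented purely for illustration.)"""
    return "A" if "GATTACA" in genome else "O"

# Two distinct genomes land in the same equivalence class:
g1 = "TTGATTACAGG"
g2 = "CCGATTACATT"
assert genome_to_blood_group(g1) == genome_to_blood_group(g2) == "A"
```

The point is only structural: the quotient discards exactly the nuances you decide are irrelevant, leaving whatever granularity you chose as "correct."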
One more thought about types: stereotypes are a use of types in our thinking. They are a relatively low-resolution (and low-information-load) way of defining a person; in effect, a type system for modeling people.
Type systems seem to be brittle when it comes to increasing or decreasing resolution. They need to become more elaborate to account for the different resolution scopes that combine to make up a concept. Just as when imposing a type on an individual, one needs to account for the many nuances of that person, and those nuances may or may not apply to other people with similar traits.
For instance, the theory of monoids gives rise to the generators

    zero : AMonoid
    (+)  : AMonoid -> AMonoid -> AMonoid

subject to the laws

    zero + a = a
    a + zero = a
    a + (b + c) = (a + b) + c
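As a quick sanity check of those laws, here is a sketch instantiating AMonoid as Python integers under addition, with zero = 0 (the function names are mine):

```python
# One concrete model of the monoid theory above: integers under +.
zero = 0

def plus(a, b):
    return a + b

# Check the identity laws on some sample elements.
for a in [0, 1, -5, 42]:
    assert plus(zero, a) == a   # zero + a = a
    assert plus(a, zero) == a   # a + zero = a

# Check associativity on some sample triples.
for a, b, c in [(1, 2, 3), (-1, 5, 7)]:
    assert plus(a, plus(b, c)) == plus(plus(a, b), c)
```

Of course, asserting the laws on sample inputs is testing, not proving; that gap is exactly what a richer type system would close.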
Unfortunately, nobody is very good at this today. It remains a goal to be able to encode systems like the one above automatically, in a way that doesn't take O(n^2) effort.
I also object to Robert Harper's convoluted rationalization that dynamic type systems are less flexible because it goes against my practical experience. I'm not claiming that I'm the authority when it comes to programming languages. However, I gravitate toward dynamic languages in my career because I feel they offer me more flexibility & less incidental complexity than static languages.
Here's a concrete didactic example of the approach:
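A minimal sketch of such an embedding (the class and function names are mine): a "dynamic" value is a value of one big static sum type, tagged with the summand it inhabits, and operations branch on the tags at runtime.

```python
# A "dynamic" value as one big (static) sum type. Real dynamic
# languages need potentially infinitely many summands; three suffice
# to illustrate the idea.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class DInt:
    value: int

@dataclass
class DStr:
    value: str

@dataclass
class DFun:
    value: Callable[["Dyn"], "Dyn"]

Dyn = Union[DInt, DStr, DFun]

def add(a: Dyn, b: Dyn) -> Dyn:
    """A 'dynamically typed' addition: inspect the tags at runtime."""
    if isinstance(a, DInt) and isinstance(b, DInt):
        return DInt(a.value + b.value)
    if isinstance(a, DStr) and isinstance(b, DStr):
        return DStr(a.value + b.value)
    raise TypeError("unsupported operand tags")  # the runtime type error

assert add(DInt(1), DInt(2)) == DInt(3)
assert add(DStr("uni"), DStr("typed")) == DStr("unityped")
```

The untrapped "dynamic type error" of a dynamic language shows up here as the raised TypeError: statically everything has type Dyn, and the interesting distinctions live entirely in the tags.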
It's just that (a) oftentimes you can reify those implicit schemes into actual types, and it's often worth the cost to do so; and (b) if your compiler depends upon a choice of type semantics in order to function, and your implicit system is inconsistent with that chosen semantics, then your analysis may not guarantee compilation!
But again: dynamic types are a mode of use of static types.
To be clear, this is the brunt of Bob's response in the linked thread anyway.