> Languages where variables can be given (nontrivial) types are called typed languages.
This is a vacuous definition because it doesn't specify what is meant by "variable" or what it means to be "given a type." This is not nit-picking. It actually matters in the case of Common Lisp. Consider:
(let ((x 1)) (declare (fixnum x)) ...)
X is clearly a variable, but has it been "given a type"? In CL, type specifications are promises to the compiler, not specifications of program semantics. (declare (fixnum x)) means "I, the programmer, promise to write my code in such a way that the value of X will always be a fixnum, and you the compiler are free to assume that I will do this correctly in order to make my code run faster."
On the other hand:
(setf x (make-array 10 :element-type 'fixnum))
In this case the implementation is allowed (though not required) to make an array that can only hold fixnums. So are the array elements variables? And have they been "given a type"?
I think we can forgive such stuff at least in the intro. This document is clearly an introduction to type systems for people who know more about programming than they do about the underlying maths and logic of type theory. Good.
Also good is that it starts from what the reader knows informally rather than trying to build everything rigorously from the ground up.
> Also good is that it starts from what the reader knows informally rather than trying to build everything rigorously from the ground up.
It's certainly pragmatic, but is it good? Maybe I'm biased as a math student, but I've seen first-hand the horrors that happen when people think they know something just because they have a vague intuition about the topic's main ideas.
But whenever I have learned maths in strict lemma-proof form it was on the second time around. That is, I first had a general sense of the field and intuition about concepts and some of the results, and then rigorous stuff makes sense.
From everything I've read about how mathematicians work, I thought that was normal.
Most of them are type hints.
Perl6 and cperl have a similar distinction: hard types and compiler hints. In an FFI context it's a hard type, and likewise with arrays or structs; with primitives the compiler may ignore it or allow otherwise-permitted temporary casts.
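The "compiler hint" half of that distinction is easy to see in Python, whose annotations are exactly this kind of unenforced hint. A minimal sketch (the function name is made up for illustration):

```python
# Python annotations behave like hints, not hard types:
def double(x: int) -> int:
    return x * 2

# The annotation is not enforced at runtime, so a str argument "works":
print(double("ab"))  # abab (string repetition)
print(double(3))     # 6
```

A static checker such as mypy would flag the first call, but the interpreter itself never looks at the annotation.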
> Most of them are type hints. Perl6 and cperl have a similar distinction.
Why are so-called “type hints” interesting at all? The real value of a type system is:
(0) Not having to worry about large classes of mistakes. They just don't happen in a legal program.
(1) Enforcing abstractions, i.e., not allowing library users to make decisions based on library implementation details.
(2) Managing the verification effort. Proving that a program meets its specification becomes much easier when components are guaranteed not to violate the assumptions made in each other's design.
A “type system” that can't do any of these is just a waste of engineering effort.
I'd love to read about how you can use so-called “type hints” to mechanically rule out programs that inspect the internal representation of abstractions provided by third parties.
I believe that most type theorists will say that's not a type, since it is trivially unsound. Otherwise a Python program is typed, so far as the programmer sorta halfway thinks about types ("I can iterate over this") at some point.
> that's not a type, since it is trivially unsound
You are misunderstanding what a type is. A type is just a subset of all of the possible data values in a language. Types by themselves don't do anything.
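The "subset of all possible data values" reading can be sketched as a plain membership predicate (Python here for brevity; the fixnum range below is a made-up assumption about a 64-bit implementation with tag bits):

```python
# A "type" as a set of values: membership is just a predicate.
def is_fixnum(v):
    # hypothetical fixnum range for a 64-bit Lisp with 3 tag bits
    return isinstance(v, int) and not isinstance(v, bool) and -2**61 <= v < 2**61

print(is_fixnum(42))     # True
print(is_fixnum("foo"))  # False
```

On this view the predicate does nothing by itself; it is what a compiler or runtime chooses to do with membership that gives a type system its teeth.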
1. CL implementations are free to do static type analysis based on declarations (and some do) and they are also free to signal run-time errors if you assign a value not of the declared type (and some do).
2. Not all storage locations are variables, e.g.:
Welcome to Clozure Common Lisp Version 1.10-r16479M (DarwinX8664)!
? (setf a (make-array 10 :element-type '(unsigned-byte 32)))
#(0 0 0 0 0 0 0 0 0 0)
? (setf (aref a 0) 'foo)
> Error: FOO doesn't match array element type of #<VECTOR 10 type (UNSIGNED-BYTE 32), simple>.
The point is that focusing on "variables" makes tacit assumptions about programming languages that aren't necessarily true.
> The hyperspec has a definition of what is a variable
The hyperspec's definition only applies to the hyperspec, not the paper. On the hyperspec's definition, an array cell is not a variable, but it seems very unlikely that this is what the author of the paper intended. It's much more likely that what the author intended by "variable" the CLHS calls a "place". (But this is exactly the problem: we don't know what the author of the paper intended because he didn't say. We have to guess.)
> vars take on values only of the specified typespec
Right, but the spec doesn't actually say what happens when this constraint is violated.
> On the hyperspec's definition, an array cell is not a variable, but it seems very unlikely that this is what the author of the paper intended. It's much more likely that what the author intended by "variable" the CLHS calls a "place".
There is no syntax to declare the type of a single cell; you can only put identifiers (symbols), not places, into declarations. Either your variable is declared to hold an array of, say, fixnums, or you bind a variable to the content of a cell and declare the type of that variable.
> Right, but the spec doesn't actually say what happens when this constraint is violated.
The spec says the consequences are undefined. Either you code against the guarantees of your implementation (which are hopefully more precise), or you aim for portability and consistently add check-type before locally declaring types.
At the end of the day, a Zen master tells his student: "your next lesson will start at 5am tomorrow, don't be late"; the student says: "what will happen if I am late?". The master doesn't reply.
The student goes to sleep and wakes up late. "You missed your lesson, come back tomorrow at 4am or else I can't guarantee your safety.". The next day, the student arrives on time.
----
Saying things are undefined is an explicit way of demarcating what is and is not part of the specification. Otherwise, the meaning could be ambiguous.
Most mathematics isn't constructive (https://en.m.wikipedia.org/wiki/Constructivism_(mathematics)) so it doesn't really come up I think. E.g. you know how matrices need to have a certain shape to be multiplied? People don't really talk about what happens if they don't.
Physics on the other hand very much has types, the unit to type correspondence is dopetastic.
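The unit-to-type correspondence can be sketched as a tiny quantity class that rejects dimensionally mixed addition (a hypothetical sketch; the `Quantity` class and unit strings are made up here, not a real library):

```python
# Hypothetical sketch: units as types, mixing dimensions is a TypeError.
class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

total = Quantity(3.0, "m") + Quantity(4.0, "m")
print(total.value, total.unit)  # 7.0 m
# Quantity(1.0, "m") + Quantity(1.0, "s")  # would raise TypeError
```

This is exactly the "check your units" sanity test, mechanized: ill-dimensioned expressions are ill-typed programs.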
But what about Russell's type system(s)? Were they just an addition he proposed that never went anywhere, or are they used much like certain programming languages have types and others don't, depending upon the preference of the user? Or are types just 'too hard/restricting' for general mathematics?
Russell and Whitehead's type system never gained significant traction. It's primarily of historical interest. Once mathematicians became convinced that ZF set theory over first-order logic was plausibly consistent and expressive enough, such foundational issues became less important to the typical mathematician.
Lately there's been a resurgence in foundations because of what new foundations like Homotopy Type Theory bring to the table. Funny how something as basic as equality can be revisited time and time again for thousands of years!
"Once mathematicians became convinced that ZF set theory over first order logic was plausibly consistent and expressive enough, such foundational issues became less important to the typical mathematician."
Was this decision ever explicitly expressed and agreed on by mathematicians at the time, or was it just implicitly adopted?
Constructive mathematics is based on intuitionistic logic (i.e. classical logic without law of the excluded middle, meaning you don't have proof by contradiction, to demonstrate the existence of something you have to "construct" it).
Intuitionistic logic is very close to type theory.
OK, ELI10: Why or how is intuitionistic logic very close to type theory? Is it based on type theory? Or is it just similar? In either case, how and why?
tl;dr: types are propositions and values are proofs of those propositions. Constructive logic is not concerned with (true xor false), but rather "I can prove A" or "I can prove not-A". There's also the middle ground of "I don't know how to prove anything about A".
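The "types are propositions, values are proofs" slogan can be made concrete in a proof assistant. A minimal sketch in Lean 4 syntax, where a function term literally is the proof:

```lean
-- Propositions-as-types: a term inhabiting this type *is* a proof.
-- The proof of A ∧ B → B ∧ A is just the function that swaps the pair.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

Constructing the term is constructing the proof; a proposition with no inhabitant is one you cannot prove.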
Physicists just love to work in natural units, where they throw their types and conversion functions out of the window and calculate in a unityped framework.
Checking your units is a basic sanity check that we teach students. But as we advance, physicists throw away that wisdom. Theorists especially have an annoying habit of doing this, because all the repetitions of "c", "hbar", "k_B", etc. clutter up their equations.
But I prefer to keep that stuff in, even when doing pretty abstract stuff, essentially because I want the type annotations. For example, my favourite convention for the Minkowski metric is:
[c^2 0 0 0]
[ 0 -1 0 0]
[ 0 0 -1 0]
[ 0 0 0 -1]
That way your vectors are just $(t, x, y, z)$, where $t$ has units of time and the rest have units of space. Your covectors do come out oddly as $(c^2 t, -x, -y, -z)$, but that's what makes them covectors.
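The covector claim is easy to check numerically. A quick sketch in plain Python, with a stand-in value for c chosen only to make the arithmetic visible:

```python
# Apply the metric diag(c^2, -1, -1, -1) to a vector (t, x, y, z)
# and confirm the covector comes out as (c^2 t, -x, -y, -z).
c = 3.0  # stand-in value, not the physical speed of light
eta = [[c**2, 0, 0, 0],
       [0, -1, 0, 0],
       [0, 0, -1, 0],
       [0, 0, 0, -1]]
v = [2.0, 1.0, 5.0, 7.0]  # (t, x, y, z)

cov = [sum(eta[i][j] * v[j] for j in range(4)) for i in range(4)]
print(cov)  # [18.0, -1.0, -5.0, -7.0], i.e. (c^2 t, -x, -y, -z)
```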
Yes, and that is a much more common convention than the one I favour. They do it partly because it keeps the factors of c out of the visible equations -- but I like those cs.
Another reason is that people instinctively feel that the elements of the coordinate tuple all have to have the same units. But that's not necessary when you have a metric around. Indeed keeping irregular units is helpful because it prevents you from accidentally forgetting to multiply through by the metric.
Mathematics is not a language. Your question is like asking if database-backed web applications are typed.
Mathematics is often expressed in language. That language is usually a mix of a natural human language (English/French/German/etc.) plus symbolic extensions, and it is very poorly typed.
In the recent failed proof of P != NP, the flaw was a type error.
Type errors are rather common in math proofs that end up retracted.
But it's hard to call them type errors, because it's not like a computer program where the typechecker detected the collision or failed in some way: the proofs make very little mention of types at all, and it's up to the readers to apply a type-system from their own heads.
This is trivially true, but only because you can keep stretching the definition of language.
For example suppose a geometer explains something to another geometer just by drawing diagrams with no verbal or textual annotation. You can plausibly claim that "geometry diagrams" are a language, but now you are defining language so broadly that the term is almost useless.
Why would you say that "geometry diagrams" are not a language?
Google, define language:
lan·guage
ˈlaNGɡwij
noun
1.
the method of human communication, either spoken or written, consisting of the use of words in a structured and conventional way.
"a study of the way children learn language"
2.
the system of communication used by a particular community or country.
"the book was translated into twenty-five languages"
synonyms: tongue, mother tongue, native tongue
I would say "geometry diagrams" are a language, by the second definition.
There are loose conventions. For example, in some contexts θ and α usually designate angles. But even then the only way to know if the angle is in degrees or radians is by tracing the proof from the start. It's easy to make errors.
I don't know my way around Type theory, but I have done a fair bit of Mathematics and Programming.
Mathematics is largely based on set theory, so every element has a set it belongs to, and every function has a source-set (domain) and a target-set (co-domain). Mathematicians are very careful to always qualify the encompassing sets, domains and co-domains. You would always write:
* Let x \in X be a point in Space.
* Let f:A->B be a function with certain properties.
This is even more pronounced in Category theory, which has invaded algebra and geometry quite heavily, and where the only thing you know about a function is its domain, its codomain, and how it composes.
So, yes. Mathematics on a practical level is _very_ type aware.
And it also prevents you from doing stupid stuff: f:A->B, g:A->C, can't be composed, so f(g(x)) is a "type error".
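That composition rule can be sketched with Python type hints (all names here are made up for illustration):

```python
# f : A -> B and g : A -> C, with A = int, B = str, C = list.
def f(a: int) -> str:
    return str(a)

def g(a: int) -> list:
    return [a]

# f(g(0)) is the "type error": g returns a list, but f expects an int.
# A static checker such as mypy rejects that call. Routing through
# len : list -> int repairs the mismatch, so this composition checks:
print(f(len(g(0))))  # prints 1
```

As with the units example, the domain/codomain bookkeeping is exactly what a type checker mechanizes.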
While Mathematics is type aware, Mathematicians often aren't. They mix things that are "not the same object, but almost, I swear", and constantly write ill-typed expressions.
This is generally on purpose, to avoid making explicit things that are obvious, but as someone developing and using statically typed languages, reading Math papers is often quite frustrating.
As others have mentioned, Mathematics isn't really a language; at the very least, it is certainly not one language. Certain parts of Mathematics use a typed language. Certain parts rely on other kinds of containment properties which serve nicely as types.
Finally, generally, math is very interested in being completely clear about the properties of objects and very interested in speaking abstractly, so even if the language in use is untyped, the logic is often very rigorous around typing.
Above it all, perhaps, is the idea that mathematics is concerned with proof and proofs may contain reference to the properties being discussed---metamathematical ideas pervade normal mathematical thought. This means that even absent of types the content of a mathematical utterance is often going to explicitly posit, discuss, and prove properties similar to type propositions.
I assume that by "mathematics" you mean "mathematical notation".
No, but static type hinting is strongly encouraged. If I say in my paper that "x" is a vector, if I later use "x" to mean a scalar it would cause tremendous cognitive dissonance and confusion for the reader. There is no compiler error, in that the information can still be read and understood. It is just much more difficult.
Mathematical notation is also aggressively polymorphic. The expression "f x" can have wildly different meanings depending on what "f" and "x" are.
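Python's operators show the same aggressive polymorphism (an illustration, not from the paper): one symbol, wildly different operations depending on the operands' types.

```python
# The same "*" resolves to different operations by operand type,
# much as "f x" in mathematical notation depends on what f and x are.
print(3 * 4)      # 12: integer multiplication
print("ab" * 2)   # abab: string repetition
print([0] * 3)    # [0, 0, 0]: list repetition
```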
Admittedly, I skimmed through this, but I miss a balanced discussion of the costs and benefits of static typing. I could see no discussion of the benefits of dynamically typed languages. Instead, arguments for static typing are enumerated without a single reference.
You should try skimming to the end, where there are two pages of references. Not to mention the fact that Cardelli is a prominent figure in the study of programming languages.
And I think you'll find that Gilad Bracha is a more informed critic of reliance on static type checking.