Very clean and easy-to-follow video lectures on the relation between types, programs, and logical proofs. One does not need functors and monoids to appreciate the beauty of functional type systems. (And to see why such type systems are indeed discovered rather than invented.)
If anything, I think the conceptual father of type theory is Gottlob Frege, and Alonzo Church was the first to apply it concretely.
Frege certainly articulated a clear notion of what a function is, which is significant.
> program3 fails because runFunction can only run first-order functions and runFunction is a second-order function – a function that takes a first-order function as a parameter.
Also, I never thought of "higher-order functions" as breaking into a countable hierarchy of nth-order functions, which is an interesting thought. In Haskell the hierarchy would start off as
zerothOrderFunction :: a
firstOrderFunction :: a -> b
secondOrderFunction :: a -> (b -> c)
secondOrderFunction' :: (a -> b) -> c
In general, an nth-order function either:
1. Takes an (n-1)th-order function as an input and returns a simple type, or
2. Takes a simple type as an input and returns an (n-1)th-order function as an output.
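A minimal sketch of inhabitants for the first few levels (the Int instantiations and all of these names are my own, chosen for illustration):

```haskell
zeroth :: Int
zeroth = 42

first :: Int -> Int
first x = x + 1

-- takes a first-order function as input, returns a simple type
secondIn :: (Int -> Int) -> Int
secondIn g = g zeroth

-- takes a simple type as input, returns a first-order function
secondOut :: Int -> (Int -> Int)
secondOut x = \y -> x + y
```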
The upshot is that typed programming languages not only catch bugs, but prevent you from legally expressing many nonsensical expressions (analogous to the set theory paradoxes). For example, consider the expression
(\x -> x x) (\x -> x x)

which reduces to itself, forever:

(\x -> x x) (\x -> x x) == (\x -> x x) (\x -> x x)
On the other hand, this means that typed systems are in some sense strictly less expressive than their untyped counterparts. It would therefore be interesting if somebody found an expression which was both (i) meaningful and (ii) only expressible in an untyped language. You would then have an argument for untyped languages :-)
In case it's not clear, it's just that it eventually ends up with a run time error (the talk about runFunction being a second-order function is just an explanation for why you should expect it to end up with a run time error). The evaluation is
1(1)
== Error: func (with value 1) is not a function
Just because the common idea of application is that it only applies to functions does not mean that application applied to other types is nonsensical and should end up with a runtime error.
If your language allows application to be defined for different types, then the type system should be capable of determining the type returned by application.
A practical example of a language in which application is valid for integers is Unicon/Icon. Both functions and integers have specific semantics with regards to application. And failure is an option.
foo :: a -> (b -> c)
To a useful first approximation, Haskell works as a cartesian-closed category, which basically means we have tuples and that tuples harmonize with functions so that foo behaves just like
uncurry foo :: (a,b) -> c
So the majority opinion calls foo a first-order arity-2 function.
Here's a bar function that's truly second-order:
bar :: (a -> b) -> c
The same counting principle applies to other invariants such as ranks and kinds.
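A minimal illustration (the Int instantiations are mine) of why currying keeps foo first-order while bar is genuinely second-order:

```haskell
-- foo is curried arity-2: morally (a, b) -> c, hence first-order
foo :: Int -> (Int -> Int)
foo x y = x + y

fooUncurried :: (Int, Int) -> Int
fooUncurried = uncurry foo

-- bar consumes a function, so no amount of uncurrying can flatten it away
bar :: (Int -> Int) -> Int
bar g = g 10
```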
"How high can the order of a function go?" is the question explored in this blog post . Note how the author's definition of order -- essentially the same given here -- got left and right mixed up.
Here's an example: The W combinator `\f x -> f x x` can be expressed in Haskell but it doesn't work everywhere it should, unlike in an untyped universe:
In other words, types can sometimes make you lose reuse.
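For instance, the W combinator itself typechecks fine; it's self-application of W (perfectly fine in the untyped calculus) that gets rejected:

```haskell
-- W combinator: duplicates its second argument
w :: (a -> a -> b) -> a -> b
w f x = f x x

-- w (+) 3 works, but `w w` is rejected with an occurs-check error
-- ("cannot construct the infinite type"), even though the untyped
-- term W W is a meaningful lambda term.
```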
That's amply compensated by types removing massive amounts of junk in the untyped world. If you don't believe me, try hacking in an untyped calculus sometime.
That's an example in one type system, not a feature of all type systems. There's a good argument that static typing is more expressive than dynamic typing: you can always add a dynamic type to your type system and recover all dynamically typed language features, but you can't go the other way around and recover all the features of a statically typed language. For instance, the performance benefits.
Any type system will reject some correct program. As a trivial example, consider:
if f():
    1 + "a"
return 1
The issue with the first definition is twofold: what set of implementations are you quantifying over, and what do you do with weird programs that crash occasionally? This is similar to what happens in the given example. The program is equivalent to 'return 1' when f is always false. Otherwise, it might crash.
In the second definition, 'return 1' and the given program are just different programs, until you actually substitute an f and continue reduction.
For a less trivial example, consider the expression "f(x) + g(x)" where both f and g can return either a number or a string. It is conceivable that for any given x, they would always return the same type, but a type system cannot detect every such case.
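In a statically typed setting you would model that "number or string" result explicitly; a sketch (the names are mine) of what the type system then forces you to confront:

```haskell
-- "number or string", made explicit as a sum type
data Val = N Int | S String deriving (Eq, Show)

-- the typed version must say what happens when the variants disagree --
-- the very case the dynamic "f(x) + g(x)" leaves to a runtime error
addVals :: Val -> Val -> Maybe Val
addVals (N a) (N b) = Just (N (a + b))
addVals (S a) (S b) = Just (S (a ++ b))
addVals _     _     = Nothing
```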
Or consider a program like:
x = 1
print_int(x)
x = "hello"
print_string(x)
Of course, a type system is no guarantee that you do not have this sort of problem. For example, you could say, in the first example, that x is a union type of Int and String (and print_int and print_string are defined to operate on such a union). In this case, the program is well typed, but has partial functions, so the type system cannot guarantee that there will not be a runtime error when you call them (it won't, however, be a type error, since the type system missed it).
You could also have x be an untagged union. In this case, I don't see a way of doing any better than undefined behavior.
A completely different type of example is C code such as:
*(int *)0x01 = 0;
"f(x) + g(x)"
if isinstance(x, str):
    ...
elif isinstance(x, int):
    ...
You can make a point that this code is hard to modify in dynamic languages, but it is an issue with the code style rather than with type systems. For example, IntelliJ IDEA would highlight such Pythonic code as incorrectly typed, since you have changed the type and therefore the variable itself.
I find the general argument against complexities in type systems such as Haskell's to be quite amusing, to be fair. Most of the time you are not dealing with code that is undecidable - and that's why type systems are interesting.
You can encode this quite easily in a language with dependent types.
let x = 1 in
Printf.printf "%d\n" (x + 1);
let x = "hello" in
print_endline (x ^ " world")
let x = string_of_int x in
and the rhs will take the value of x (a well-typed integer variable from the previous scope), and the lhs will introduce a new x (a well-typed string variable) in the newly created scope.
this is an orthogonal thing to static/dynamic typing, incidentally; for instance in ruby you can say
a = 10
(3..5).each do |a|
  puts a  # this a shadows the outer one
end
f(x) + g(x)
Now it depends on the implementations of f and g. And type systems, in most practical cases, would be able to deal with this.
Usually one would have:
zerothOrderFunction :: a
firstOrderFunction :: a -> b
firstOrderFunction' :: a -> (b -> c)
secondOrderFunction :: (a -> b) -> c
The standard definition of order is given here (at the end of the section):
This happens quite often. Take data structures. It is often very useful to store different types of data together in a common structure. Now in most cases, you don't know in advance which type will go where in the data structure, as the data might only be known or derived at runtime.
In fact I'd say dynamic behavior is the major expressiveness loss. Granted, I'm not sure how runtime vs. static type systems relate to type theory. Can anyone educate me?
"Very useful" often just means: coding a properly typed, easily-understandable solution takes longer than doing it untyped.
I argue that is the case only when using a plain-text editor. When you have a good IDE for a typed language at hand that can refactor, complete, analyze and follow code at the press of a button, you lose this "advantage" of untyped languages.
Now, what IDE and language are you referring to? The best one I've used was Visual Studio for C#. While I'd say typing wasn't adding too much overhead in it, its type system is also poor, and it's arguable whether it really prevents many bugs. Now, it does help make C# faster.
It’s just that historically people have found it markedly more useful to have that information available statically, because it lets you prove things about your code that hold “for all” cases (universals), not just “for some” that are executed (existentials). And static information can be used for optimisations because of those guarantees—if you have proof, you don’t need a fallback.
If you give me an unknown function f : ∀T. T → T in a statically typed language, then I know it always returns a value of the same type as its argument. With some assumptions (purity, parametricity, halting) I even know it must be the identity function.
Whereas doing what I’m suggesting in a dynamically typed language, running inference dynamically, I could only recover existential properties: that there are some types for which f returns a value of the same type as its input. So I could recover more and more information (in the form of an intersection type) by calling the function with different inputs:
f : int → int
f : (int → int) ∧ (string → string)
f : (int → int) ∧ (string → string) ∧ (Foo → Foo)
But I could never arrive at ∀T. T → T, because that’s essentially an infinite intersection.
ETA: actually I might not even be able to get this much information; I only know the output type for each input value, not each input type.
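The "run it and record what you saw" idea can be sketched like this (the `observe` helper is my own name for it):

```haskell
import Data.Typeable (Typeable, TypeRep, typeOf)

-- one step of "dynamic inference": observe the (input type, output type)
-- pair for a single call -- an existential fact about f, never the
-- universal claim ∀T. T → T
observe :: (Typeable a, Typeable b) => (a -> b) -> a -> (TypeRep, TypeRep)
observe f x = (typeOf x, typeOf (f x))
```

Each call contributes one more conjunct to the growing intersection; no finite number of calls reaches the universally quantified type.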
Now, you may say “okay, the original program doesn't exist anymore, but my language implementation is a term rewriting engine, so I have a current program that I can submit to the type checker”. Alas, most type checkers, Hindley-Milner included, operate on self-contained program fragments (i.e., not containing unbound variables), so you can't just submit any arbitrary program fragment. And, if you didn't want to type-check your original program, how likely is it that you will want to type-check the entire syntactic representation of the current program state?
Say, because you want to use that reference cell in two different threads.
And degrading the type to a union (int ∨ string) as I mentioned is still nondeterministic even though unions form a lattice—I can recover determinism when writing to the list in both A and B, but supposing A writes first, B writes second, and then A reads expecting only integers, it gets a type error because it hasn’t checked for the string case.
But I’d argue that 1. this is bad form and you want a type error somewhere and 2. you have this problem already with type-changing mutable references in a dynamically typed multithreaded language. (Say just “int” and “string” instead of “list of int” and “list of string”.)
I wish the author would expand on this piece with either more installments or even a short book.
I find myself interested in type systems as they relate to programming language design, but I haven't found much middle ground between the basic types described in introductory texts about a language and the opposite extreme of heavy academic texts such as the ones the author is breaking down in this article.
Can anyone recommend any other such middle ground resources on type systems and type system theory?
I really enjoyed Pierce's Types and Programming Languages. As I recall, he starts with the simple untyped lambda calculus, and builds a motivation for a type system, as well as the type system itself. It then switches to ML (or perhaps OCaml) and shows how features can be built into a type system as they're described.
I think it's a good fit - I know you said not heavy academic texts, but I don't think it's too heavy, it's at the level of an introductory undergraduate course. (Beware Advanced Topics in ~ by the same author which probably is on the heavier end, I don't know, I haven't braved it yet!)
But I guess what I was referring to as a "middle ground" was any resources for learning about type systems written in a similarly approachable tone to this article.
This was another article I read recently that I thought was similarly accessible on the subject of type systems:
So I guess I'm wondering if there exists such a book or series that might allow one to further their knowledge of type systems without requiring university study.
You can find some earlier PDF drafts online if you Google.
In Boyer-Moore theory, all functions are total - you can apply any function to any object. Types are predicates. Here's a definition of ORDERED for a list of number:
(DEFN ORDERED (L)
  (IF (LISTP L)
      (IF (LISTP (CDR L))
          (IF (LESSP (CADR L) (CAR L))
              F
              (ORDERED (CDR L)))
          T)
      T))
When you prove things using definitions like this, there's a lot of case analysis. This gets worse combinatorially; a theorem with four variables with type constraints will generate at least 2⁴ cases, only one of which is interesting. Most of the cases, such as when L is not a list, are trivial, but have to be analyzed. Computers are great at this, and the Boyer-Moore prover deals with those cases without any trouble. But it went against a long tradition in mathematics of avoiding case analysis. That made this approach unpopular with the generation of pre-computer mathematicians. Today, it would be more acceptable.
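For contrast, here is the same predicate transcribed into a typed setting (my own transcription, assuming ORDERED checks non-decreasing order), where the "L is not a list" cases vanish because the type system rules them out up front:

```haskell
-- the non-list cases need no analysis: the argument can only be a list
ordered :: Ord a => [a] -> Bool
ordered (x : y : rest) = x <= y && ordered (y : rest)
ordered _              = True   -- empty and singleton lists are ordered
```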
(It's fun to run the Boyer-Moore prover today. In the 1980s, it took 45 minutes to grind through the basic theory of numbers. Now it takes a few seconds.)
What happens with OOP? Every class has to understand how to "bark", not only the dog class?
If any class can somehow "bark" without throwing an exception, that may not be in alignment with the programmer's intent, or promote the furtherance of his or her goals in any way.
For instance, the intent may be that the programmer wanted to ask the local variable dog to "bark", but misspelled it as ndog and the integer which counts the number of dogs was asked to bark instead.
There is much value in identifying the problem that an integer doesn't bark.
Add the shell ADD1 of one argument
with bottom object (ZERO),
type restriction (NUMBERP X1),
default value (ZERO), and
well-founded relation SUB1P.
Ouch; did you see that "overfull hbox" that got rendered out in the first line of paragraph 3 of the Preface? :)
First incompleteness theorem
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.
Second incompleteness theorem
For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself.
Russell stumbled onto Russell's paradox (among others) and it shook mathematicians' confidence that everything in math was built on top of a perfectly consistent and stable foundation. If you can define a set that is "the set of sets that don't contain themselves", then what other kind of crazy talk can you say in math? How do you know proven things are true and false things can't be proven in the face of weirdness like that?
Russell tried to solve the problem by inventing type theory. Types stratify the universe of values such that "the set of sets that don't contain themselves" is no longer a valid statement to make.
Meanwhile, Gödel went and proved that, sorry, no such system can be both consistent and complete. There are statements that are true but which cannot be proven.
> Similarly, type theory wasn’t enough to describe new foundations for mathematics from which all mathematical truths could in principle be proven using symbolic logic. It wasn’t enough because this goal in its full extent is unattainable.
For the average programmer you may as well be spouting gibberish because the average programmer will have no way to evaluate the claims (if any) you are making. Note, I am saying that you may as well be and not that you are. Please do not misunderstand me.
Types systems certainly are formal theoretical systems but I personally have come to believe that the majority of coders are ill-served by the mathematical leanings of type theorists.
I'm not sure I can explain myself better than that at the moment.
Now I believe there's an artificial split between math-leaning people and pragmatists: the former end up as PhDs, the latter in IT or close to it. But in reality the average coder could understand and even enjoy the land of abstractions; it's just that the river he swims in isn't flowing there, so one has to run against the flow.
Not to say that ideals are the only true way.
Let's for arguments sake say that there are two camps (broadly speaking), the pragmatists as you say and the theorists/idealists let's call them. It reminds me of the difference between someone like Torvalds and someone like Stallman.
Thing is we need both! You're right, the split _is_ artificial. The Linux kernel couldn't wait for someone to come along and create a type-awesome version of C. I mean, Rust seems to be the first attempt to take what type theorists have learned and apply it to a systems programming language. In the meantime software needs to be written and we have the tools we have.
If the fruits of type theory are going to filter through into software development I'm not sure it should come laden down with category theory (as awesome as that is) or the lambda calculus (as mind-bending as that is). I could of course be dead wrong.
Usually ideas filter in in tiny bits, kind of like genes. See closures: forever in Lisp, but now in every language while Lisp is still niche.
The issue with theorists is that they see the world in abstract algebra / combinatorics, it's not fluff, it's just extremely high level thinking with extremely short notation. It looks like straight BS until you spend enough time seeing that it translates into reality. Say something like prolog, where a few rules give you graph coloring. It's not theory only, it's actual application of different semantics.
Also, as in any group, they developed a culture and taste for some things, like expressing everything in the form of arithmetic expressions. "F^n" means iterating F n times -- it's a loop in average-coder lingo.
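For example, that "F^n" shorthand really is just the everyday loop:

```haskell
-- f^n: compose f with itself n times
applyN :: Int -> (a -> a) -> a -> a
applyN n f = foldr (.) id (replicate n f)
```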
My reading is that it was invented by Scotus as Haecceity ! This was required by Catholic Christianity because of the difficulty that The Creed introduces about the identity of God - there are three entities which represent God, the Trinity - how to account for this? Well; the thisness of God is joined with the thisness of man, the thisness of the creator and the thisness of the thing which is motion (I have never understood The Holy Spirit). You can think of this as multiple inheritance! Theologians then had to account for "why three" as you can carry on making aspects of god with this mechanism infinitely, god the mother, god the lover, god the hunter and so on. But there are three - why? The answer was provided by Scotus's student Occam, entities should not multiply beyond necessity and hence there are three aspects of god because it is necessary for creation that there are.
The fun bit is that this procession of thought is somewhat guessed at, because writing things like this down or debating them publicly was a quick route to the afterlife via a bonfire!
Theatre and Philosophy have always been able to have a lively chat with one another
Any more articles in this vein?
This is a great point, and I think it is absolutely worthwhile to put time into researching better, more powerful type systems.
Tony Hoare said that his research into formal systems, and his hope that the programming world would embrace these new innovations that increase safety and reliability, was futile; but I think what we need is a new approach, with particular care given to practicality and adoptability.
Then there’s the opinions that users of languages with more elaborate, expressive type systems have, like how some people really enjoy Haskell or Elm because they feel that the type system helps them express their ideas clearly, avoid errors, and aids refactoring and maintenance.
If you’re worried about this one, don’t I guess? If you can use dynamic languages to achieve your goals, and you like them, then that’s fine! There are plenty of languages you can play with if you want to get a feel for programming with types. Even Java 8 and C++11 are decent at this point (I’m sure a Haskell programmer is fuming right now).
Then there’s like, a few thousand people in the world who have well-informed opinions on research into the theory of programming languages, the Curry-Howard correspondence between types and proofs. Also a lot of Hacker News posters who have heard these words. Some of them pretend like they know what they’re talking about.
This isn't true. There are no types in lisp or untyped lambda calculus.
An integer type I1 from 0 to 65535.
(deftype i1 (&optional (min 0) (max 65535))
`(integer ,min ,max))
(declaim (ftype (function (integer) i1) foo))
(defun foo (i)
(mod i 65536))
(defun test ()
(foo 6712312) ; okay
(foo 311212.2)) ; wrong call
; compiling file "/private/tmp/test.lisp" (written 08 JUL 2017 11:10:09 AM):
; compiling (DEFTYPE I1 ...)
; compiling (DECLAIM (FTYPE # ...))
; compiling (DEFUN FOO ...)
; compiling (DEFUN TEST ...)
; file: /private/tmp/test.lisp
; in: DEFUN TEST
; (FOO 311212.2)
; note: deleting unreachable code
; caught WARNING:
; Constant 311212.2 conflicts with its asserted type INTEGER.
; See also:
; The SBCL Manual, Node "Handling of Types"
; compilation unit finished
; caught 1 WARNING condition
; printed 1 note
The type system was defined for Common Lisp with its first version in 1984. It has been used to allow the compiler to generate optimized code or to do more precise runtime checks. Early on, in the mid 80s, the CMUCL compiler then added the idea of using these type declarations as type assertions and checking those at compile time.
You literally have to count the types by analyzing the grammar. So for the lambda calculus, you have lambda=1, halt. So does a single type mean no types or literally one type?
What's the advantage of thinking that 1 type actually equals 0 types? I don't see any, so in my mind, all languages are unityped or have a richer type structure. Whether a richer type structure is desirable is a separate question.
Easily my second favorite instructor, possibly my favorite. I felt pretty prepared to deal with analysis and design in statically typed languages just from that grounding in set theory and logic. Fond of saying things like, "We have a box. What's inside that box? Another box. What's inside that box? We don't care."
Now retired, he was an early proponent of distance learning, so surely some of his stuff is accessible still.
There's nothing wrong with not caring about type systems if they don't make your life or your job better or worse.
As long as you enjoy what you are doing, everything else is optional.
I didn't start caring about type systems until I started running into cases where I really wished for a static one (when I was working on a large system in Python) and later when I was prototyping things where I had to make a lot of guesses in C#. Both situations frustrated me, and then I got to start really caring a lot about type systems.
To a certain extent, I think it's human nature that we often don't really start caring about things that much until we experience real, personal frustration with them. Then we start caring a lot.
The reason you see so many people weighing in on this here on HN, is that many regulars are the kind of person to start feeling pain very very soon and over small inconveniences where other people will just sort of deal with the minor inconvenience and focus on other aspects.
One attitude is not better than the other, nor is one more ideologically pure or a marker of a better programmer or engineer. The only thing it implies is different pain thresholds.
Depending on your area of focus as a developer a low or a high threshold could be either a benefit or a drawback. A language designer needs to have a very low threshold. A front-end developer/designer can afford to have a very high threshold and focus on things besides being provably correct.
One of the things I like the most about software engineering is that there are opportunities for joy and discovery for everyone. And as careers progress, you can easily find yourself caring about different things at different times, and there's nothing wrong with that.
There's nothing to be ashamed of any more than you should be ashamed of preferring strawberry to chocolate ice cream. (Although, in keeping with tradition here on HN, if you say that you prefer strawberry ice cream, you are dead to me and practically Hitler. :))
Is this actually true? Of course Haskell, Idris, etc. leverage type theory, but how much type theory underlies the type systems of widespread practical languages like C# or Java? Can something like C++'s SFINAE be grounded in type theory, or is it just a hack?
Are there any approachable resources on the subject?
This is one of those things where you should try to reserve judgement about it because your experience is so limited. Modern typed languages often don't even require you to write the type, because of type inference.
> will there be less bugs and regressions? will i be more productive ?
The idea with static analysis is that you're pushing more errors into the type system so it's caught at compile time rather than runtime. Everyone will answer this differently.
IMO a dynamic type system doesn't make you more productive because the same invariants you have from not having an explicit type still exist in the code, they just go unchecked. For example, if I write a function to add 1 to a number, in a dynamic language if I pass a string I'll get some output that is invalid if I'm expecting the result to be a number elsewhere. Types let you encode those invariants. But encoding simple types and primitives is really just scratching the surface; I could ramble on for ages here, but you should just dive into a language with a good type system (like Haskell or OCaml like you mentioned) and stick with it long enough to give it a chance. It's so much more than just being able to say 'int' or 'string'.
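A tiny sketch of the point: the invariant exists either way; the typed language merely checks it, and with inference you don't even write the type:

```haskell
-- no annotation needed; GHC infers addOne :: Num a => a -> a
addOne x = x + 1

-- addOne "41" is rejected at compile time (no Num instance for String),
-- instead of silently producing an invalid value at runtime
```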
Not even 1/10th of HN has that. Tons of business types, lowly JS programmers, designers, sys-admin types, old-school programmers in C/C++, etc around.
For me, types are sets of possible values, plus sets of valid operations on those values. I don't much care where they come from. As far as I am concerned, they are an engineering construct to make programming easier and safer, and are interesting only to the degree that they accomplish those goals. Any connection to pure math is completely incidental; if there were no such connection, it would not make (programming) types any less useful.
Now, math often gives deeper insight into what's going on, and enables you to create more powerful (useful) abstractions. But if the useful abstractions don't correspond perfectly to types as used by mathematics, I don't care.
I'm very interested in theory (by the standards of non-academics). I'm very interested in pragmatic type systems. I've spent a few hundred hours on trying to learn type theory, in the mathematical sense. The only thing I have personally found useful, so far, in type theory, is the notion of sum types and product types. But that's just jargon for things I was able to deduce from a shallow study of many programming languages, so even that has not been that useful to me.
On the other hand, I entirely agree with you, that what a type is to a programmer is a set of values and a set of valid operations on those values. Exactly. That's what types mean when you're working close to the metal ("this value is meaningful as a 16-bit float; if you try to dereference it, the consequences are mightily hard to reason about"). That's what types mean when you're talking about the function signatures of higher-order functions that use generic types. That's what types mean when describing statically-typed variables, and what types mean when describing dynamically-typed values.
I see some signs that a few other people share my interest. For example, using dependent types to e.g. specify that two sequences can only be zipped if they are of the same length; that's a useful type-check, and if it can be determined statically, that's great (that's a toy example, of course). Unfortunately, most of the languages that contain these features seem remarkably impenetrable.
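That zip example can actually be written in today's Haskell with GADTs; a sketch (all names here are mine):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- a list whose length is part of its type
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- both arguments are forced, statically, to have the same length n
zipV :: Vec n a -> Vec n b -> Vec n (a, b)
zipV Nil         Nil         = Nil
zipV (Cons x xs) (Cons y ys) = Cons (x, y) (zipV xs ys)

toListV :: Vec n a -> [a]
toListV Nil         = []
toListV (Cons x xs) = x : toListV xs
```

Trying to zip a two-element vector with a three-element one is then a compile-time error, not a silently truncated result.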
I am very interested in situations where math reveals underlying truths about the universe. Like you, I'm so-far unpersuaded that mathematical type theory is a useful avenue to learning about powerful abstractions about types in programming.
mh, given Curry–Howard correspondence, aren't those the same? so the goal is indeed not having logical contradictions?
I may write another article about this.
It's actually just a belief. Nothing suggests that type systems and type theories can be improved to be practical at preventing bugs. I'd say it's the opposite: even with as much understanding of the nature of bugs as we have today, they don't look very promising -- unlikely to make it even into the top ten among other approaches.
If you allotted me a finite amount of effort to put a codebase (a) into a strong type system, (b) festoon it with lots of pre- and post-conditions and asserts, (c) run it through every static checker under the sun, (d) build a huge suite of tests, (e) find a way to formally verify critical algorithms in it with (say) Z3, (f) carry out fuzz testing, ... I'd probably say "do the easy stuff from most of these categories" rather than "pick the One True Path and figure that that will save you from all your bugs".
So saying that they aren't practical at solving bugs is a little disingenuous.
Why aren't they used? Probably existing code bases, inertia, and ignorance play a role. I suspect, though, that at least part of the problem is that most programmers find those type systems too hard to use. In that sense, the type systems aren't practical.
And if you're going to blame the programmers for that, well, if your plan for making programming better requires different programmers than the ones we have, your plan isn't very practical, either.
I think Java 8 and optionals show it doesn't have to be that hard, it's just that there's too much old code that relies on nulls for the Java ecosystem to ever be fully null safe.
Use-after-free is solved in a language without manual memory management, so that's actually quite common.
Programmers get comfortable with new ideas over time. Higher order functions and type inference used to be obscure concepts. Today they're par for the course. I don't know if we'll all use dependent types some day, but I think we'll keep getting more powerful types in mainstream languages for a while.
Rust is another good example. It's not very ergonomic, but it is getting better every release.
I think how those two languages do will show whether type systems can be made practical for reducing wider classes of bugs. They seem practical and have the backing to help drive adoption.
Rockets might be able to carry humans up to space, but in practice they don't because only a small set of humans actually get to go.
Isn't that a slightly absurd interpretation of "in practice"?
Why these languages aren't used may have absolutely nothing to do with their technical merits. It's a myth that technical merits is the only consideration for language popularity.
For an example of preventing use-after-free there is the typesystem used in Rust. It is based on the theory of linear types, which lets you have operations that mark a value as "used" and forbid you from using it again after that point. The same system is used to protect against data races because you can guarantee that a value is only accessible from one thread at a time.
If anything, I agree with zzzcpan and disagree with swsieber. Types are nice but people oversell them. OO languages appeared at the same time as ML family languages, but one got successful and the other got stuck in the realm of "new ideas".
Affine logic rejects contraction, i.e. the rule that from

Γ, A, A ⊢ B

one may conclude

Γ, A ⊢ B
Of course, that's very hand-wavy, and doesn't say (as I believe is the case) that there wouldn't be some other way to proceed through a combination of other rules.
Seems you don't know much about types then. I suggest you look up theorem provers, CompCert, and the TyPiCal language, as but a few examples.
We've done testing since the start. Still many bugs. We've done ad hoc modeling (behavior driven) for years with some improvement. Formal methods aren't popular but are successful at least at some (small to the low end of medium) scales or within portions of large scale systems. Type systems can and have been used for codifying concepts from all of these into the program semantics. So I'm not sure how it is that they don't help.
Exactly. As the OP clearly doesn't understand, but every researcher in programming languages does: testing can only prove the presence of bugs, it cannot prove their absence. Types can.
Type systems are a necessary part of computing. One can even define computing in terms of how types are transformed. Types are an extremely primitive and fundamental description of how computation happens, and without them, it'd be hard to imagine how anything could be computed at all, let alone correctly.
No such type safety exists in assembly, except that certain opcode-register combinations are prohibited.
Are you? Usually I find it to be the reverse: you're leveraging the type system to track some properties for you, so that you don't have to track them mentally. I.e., less mental overhead (though possibly more boilerplate overhead).
> Why there’s so much research around types if perfectly applying them to programming languages is impractical?
Somehow Haskell does this perfectly. Whaddya say to that?
Not to mention the mental overhead of Haskell (which is also not optimal).
So no, they never get in to this stuff.
The whole of programming, nevermind types, perhaps the most mathematical part of modern programming, arises from mathematics. There's some good history here, but the early paragraphs in particular are a display of ignorance if not arrogance.
The author quotes Newton, the very chap who's said to have said he merely stood on the shoulders of giants (to 'see' such insight). Any programmer in the 21st century stands on the shoulders of the mathematicians and computer scientists of the 20th, who were in turn standing on the shoulders of the mathematicians of the 19th.
I thought it was intended to speak to people who might be intimidated or feel obtuse when they encounter really dense academic texts when trying to learn more about type systems as it relates to programming. As such I really appreciated it.
I didn't think the author was grinding any axes at all, quite the contrary.
Among whom I cannot count myself, for whatever it's worth.
When we say that some field is inaccessible, we don't blame the reader trying to understand. At the same time we're not saying that the field is wrong either, but that communication could be improved.