> But please, stop pretending that type-level programming is a desirable thing. It's a kludge, and hopefully a temporary one. Personally, I want to write all my specifications in more-or-less the same language I use to write ordinary code. It's the tool's job to prove correctness or, if it must, tell me that it failed.
I don't see explicit type-level programming as (always) a kludge; oftentimes it's the primary vocabulary for expressing certain ideas. That is, type annotations are not (always) just typechecker-hints, but can be explicit documentation of intent.
> We need to be able to step up the strength of our specifications without switching language all the time. We need to integrate typed and untyped code
This has been a revelation during my time spent with TypeScript. The fact that I can just add a comment that disables an unhelpful type error on a given line is a game-changer. Or that I can assert to the system "no really, this value that has a union type is actually just this particular subset of that union" (cast) and retain meaningful checks after that point. Having a type system that's flexible has made it enormously more useful. It allows you to deal with those cases where what you want to do can't be type-checked, within the context of (and playing nicely with) a broader system that is type-checked.
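A minimal sketch of both escape hatches in TypeScript (the type names here are invented for illustration):

```typescript
// A value typed as a union, handled with full checking.
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };
type Shape = Circle | Square;

function area(shape: Shape): number {
  if (shape.kind === "circle") return Math.PI * shape.radius ** 2;
  return shape.side ** 2;
}

// Sometimes you know more than the checker does. A cast ("as") asserts
// "this Shape really is a Circle" -- and from that point on you still get
// meaningful checks against the Circle type.
const s: Shape = { kind: "circle", radius: 2 };
const c = s as Circle;   // escape hatch: trust me, it's a Circle
console.log(c.radius);   // checked against Circle from here on

// An unhelpful error on a single line can likewise be silenced with a
// comment directive such as @ts-expect-error or @ts-ignore, leaving the
// rest of the file fully checked.
```

The point is that these escape hatches are local: one cast or one suppressed line, inside an otherwise checked program.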
They also make refactoring much easier. Renaming a function or variable and having all "true" references renamed (not just things with the same name but in a different scope), for example.
Safely and automatically renaming a function, or a local or global variable, does not require a type system (see Clojure and many other dynamic languages). Only renaming an instance field on a class needs one (because then all accesses on an object must be renamed, which requires knowing the types of your variables).
Incidentally, programming in a more functional style reduces the need for this kind of thing. It doesn't eliminate it, though, so it can still be a source of runtime bugs when typos creep in; that's a trade-off you have to make.
Unit tests for business logic and specifications for data shape can help to catch those typos as a side effect of the other benefits they bring. However, typo catching will never be as solid as in a typed language; I've decided I'm OK with that as long as I'm not programming something that could kill someone if it crashes.
If you're writing an application that could kill someone if it crashes, you'd better not be relying on the type system to catch those crashes for you, since it will only catch the most obvious ones.
For any life-critical system you want first and foremost to have a battery of end-to-end tests that run the software on the target hardware under various operating conditions. No amount of software-only testing/formal methods will give you this part.
Then, you want to have a formal specification of your program that can be run through a model checker - this would be the industry standard, though, if you have a team of PhDs at your disposal, you could also try to go with full formal verification a la Coq.
Finally, you want unit testing, integration testing, and possibly a type system to help day-to-day development. But do note, none of these should be critical for assuring the system works as intended: if typos in your code aren't caught by your end-to-end tests, what hope is there that those tests will catch complex dynamic behaviors?
> If you're writing an application that could kill someone if it crashes, you'd better not be relying on the type system to catch those crashes for you, since it will only catch the most obvious ones.
Safety critical systems are quite different from the vast majority of software systems, and raising them as an example in a general programming argument isn't really fair. There already are safety critical certifications that are mandatory for many aviation and healthcare related applications. They generally utilize heavy testing, model checking, and supporting documentation. Very few commercial industries operate with this much attention to quality, basically because it's too expensive.
The GP had raised them, that's the only reason I commented about this.
You are right though, the standard for that type of software is very different from regular programming.
Still, we shouldn't forget that even the strongest typing can't substitute for testing. Most likely, even if you had a formally proved system (already much, much more difficult than any other regular software development practice), you would still want some amount of testing to go with it before putting it in production.
It doesn't require it, but by all indications it makes the tooling much easier to write when there is a type system and its constraints reducing the surface area.
Another thing to consider is the type system of Idris, where the compiler may recognise at compile time that a value can only be in a certain subset of a union and, more importantly, can use the type of a function to infer much of the correct implementation: all the cases that must be handled, and sometimes even how to handle them.
Lots of type systems support type inference. It seemed (I could have misinterpreted) like the OP was suggesting that in a perfect world all types would be inferred. That's the idea I was disagreeing with.
Normal type inference deduces the type from the code; Idris can infer the implementation from the type, if the type is specific enough. In a way, this is Curry–Howard in action: the type is a theorem, and the compiler is a proof assistant that helps to automatically find a proof (a correct program having that type).
So Idris is an argument supporting your position: if it works (it is still part of a research program), the types are the primary expression, and the code is secondary.
I read the OP differently. Type-level programming (not to be confused with type-oriented programming) is using the typechecker to perform computation on types, which proves guarantees at compile time. In this case, types are less about documentation; they are literally proofs of properties of the program, by construction. In fact, type-level programming is usually unreadable[0] unless the reader has deep knowledge of the specific language's implementation of type families, functional dependencies, poly-kinds, etc. (whatever permits type-level programming in that language).
I'm getting that the OP is arguing against these ad-hoc constructs (type families etc.) and would much rather write these proofs in the same language as ordinary code (e.g. writing the law "List x where length(x) == 5" in the type rather than "forall a. List (S (S (S (S (S Z))))) a"), which, well, who can disagree?
In a sense, you and OP both agree on the same thing. Types are good. No types (or unreadable ad-hoc constructs for types) are not good.
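For what it's worth, some mainstream type systems can already express the "length 5 in the type" law without Peano numerals. A sketch in TypeScript, using a tuple type (`Vec5` is an invented name):

```typescript
// "List x where length(x) == 5", expressed as a tuple type.
type Vec5<T> = [T, T, T, T, T];

const ok: Vec5<number> = [1, 2, 3, 4, 5];
// const tooShort: Vec5<number> = [1, 2, 3];  // rejected at compile time

// The checker knows the length statically: .length has the literal type 5.
const n: 5 = ok.length;
console.log(n);
```

Whether this counts as "the same language as ordinary code" or as another ad-hoc construct is, of course, exactly the debate above.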
> type annotations are not (always) just typechecker-hints, but can be explicit documentation of intent
More than this, in highly expressive type systems such as Haskell's, they can carry proofs. This can be useful for constraining the capabilities of effects (e.g. we can state that a particular function may make a network request but cannot open files), as well as for assisting us in writing the correct implementation by construction. As the author of a type-level library, I also find it highly useful for generating code from arbitrary types that is correct by construction, and for deriving obvious implementations so we have to write less code.
Haskell's type system may soon be able to check for usage as well, by encoding linearity into the type system. This will let us tell the type checker that a function which takes a linear resource must use it exactly once. Without explicitly sequencing our nice, pure code, the compiler can ensure that linearity is preserved. Idris 2 is experimenting with this as well. It seems like a pretty good idea.
And all of this type level stuff is so useful that we want to do more of it. Dependently typed languages remove the separation between the type language and the language of terms. This is powerful stuff.
It is sensible to include this in TypeScript, because it needs to interface with potentially large swathes of plain JavaScript, which is akin to unsafe code in Rust.
TypeScript type assertions are like C++ reinterpret_cast: if you're wrong about what you think the value is, that's as good as undefined behavior in C++; virtually anything can happen.
I'm only nitpicking because this is one of the things the article complains about, but I think you're conflating two kinds of safety. All JavaScript (TypeScript included) is "safe" in the memory-safety sense used in C++/Rust discussions. Using the wrong TypeScript type can result in programming errors, but not in unsafety or undefined behaviour in C++/Rust terms.
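To make the distinction concrete, here's a sketch of a deliberately wrong assertion. The checker is lied to, but the runtime outcome is a well-defined value and a catchable TypeError, not memory corruption:

```typescript
// Lie to the checker: claim a number is an object with a method.
const value: unknown = 42;
const notReally = value as { greet: () => string };

console.log(notReally.greet);  // undefined -- a well-defined value, not garbage

let failed = false;
try {
  notReally.greet();           // TypeError: notReally.greet is not a function
} catch (e) {
  failed = e instanceof TypeError;
}
console.log(failed);           // the failure is an ordinary, catchable error
```

That is the sense in which a bad cast is "only" a programming error here, unlike a bad reinterpret_cast.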
But I think their criticism comes from an ivory tower where people in general agree that efficient programming needs strong and sound tools.
Unfortunately, in practice it seems to be all about "velocity". When you see a huge codebase in python grown over several years and try to refactor it, reason about it, or just enhance it, you will inevitably come to the conclusion that the omission of static type checking is a grave error from a system-design perspective. And when you work with the developers of such a code base you will understand that static type checking is first and foremost a means of communication between the language implementation and the developers. Without this tool it is like taking away a common spoken language between the developers and having to translate everything into some form of pidgin.
>When you see a huge codebase in python grown over several years and try to refactor it, reason about it, or just enhance it, you will inevitably come to the conclusion that the omission of static type checking is a grave error from a system-design perspective.
I inevitably came to the exact opposite conclusion. Perhaps because at the start of my career I was trying to do the same thing with Java, and it added overhead and expense without a commensurate payoff in safety.
Frankly, I think velocity is ridiculously underrated:
* Because of the age old "customer doesn't know what they want" issue - hence you throw away a lot of the code that was not fit for purpose.
* Also because the right kind of architecture is only obvious in retrospect - hence you throw away a lot of the code that was not fit for purpose.
* And finally, because the only times in my career where I've had significant success in refactoring a large mission critical codebase it was python and it was because I'd managed to quickly build custom tooling infrastructure (e.g. testing) to support that refactoring.
I do take some issue with python's type system (e.g. the whole truthiness thing), but on the whole it balances safety and velocity in a highly pragmatic manner.
Well, java tends to be one extreme and python the other.
Java appears to be the most bureaucratic language around where most code is written by people who are just a bit too enthusiastic about OO so every class becomes a FactoryManagerPatternVisitorImplementation and every struct 'of course' needs getters and setters.
Python on the other hand is often written without any safety and I have frequently witnessed how every typo becomes a new variable maybe just because the cat stepped on the backspace key and how one developer changes the signature of a function but not all places where it is called.
I think there are great benefits to compile/lint-time type checking for anything bigger than a throwaway script. For Python one can use mypy, which helps. There is no need, though, to turn everything into a festival of object-orientedness. Most classes need neither parent nor child, and some functions can just be global functions.
I think a lot of java's "bureaucracy" is simply duct tape around its badly constructed type system.
>Python on the other hand is often written without any safety
Oh for sure, but I find it easy to start dialing up python's safety from day one of starting on a project (e.g. start writing asserts), no matter how complicated.
Working around java's horrible type system? Less easy.
I'll have to agree with pytester, and say something strongly discouraged by the article... But, well, Java cannot be an extreme: its type system is very basic, it can represent very little, so it cannot check much.
If you want something closer to the extreme, go for Haskell. But then, you can't complain much about bureaucracy.
There is an annoying tendency of people to base their opinions on technology directly copied from the 70's.
> I inevitably came the exact opposite conclusion. Perhaps because at the start of my career I was trying to do the same thing with Java, and it added overhead and expense without a commensurate payoff in safety.
It really sounds like your code base simply wasn't too big after all. When you change a Java API, how much time does it cost you to find all broken client code? How much in the same case for python?
> Frankly, I think velocity is ridiculously underrated
Again, it sounds like your projects were still rather young. IME, throwing away code practically never happens once it has been deployed at half a dozen customers or more. So Python helps you build stuff quicker, because it does not force you to be sound. But soundness is exactly what you want after a certain threshold.
>When you change a Java API, how much time does it cost you to find all broken client code? How much in the same case for python?
With a decent test suite and a bunch of well crafted asserts, it ends up being much quicker. Indeed, for the really subtle broken stuff, pretty much only a decent test suite can catch it.
>Again, it sounds like your projects were still rather young.
Probably just better tested than yours.
In time, you might learn to appreciate this quality but who knows...
> With a decent test suite and a bunch of well crafted asserts, it ends up being much quicker. Indeed, for the really subtle broken stuff, pretty much only a decent test suite can catch it.
I think you're missing the point. Unless you propose that my test suite test the libraries I am using, testing does not find code affected by API breakage. And even if you do test your used libraries from your client code, think about how much effort that costs you to do what a machine can simply do better.
> Probably just better tested than yours.
First of all, I doubt it. Second, you just sound rather arrogant.
I hope you are aware that a) testing by necessity will always be incomplete and b) type systems by necessity will always be restricted.
The developer who just discards one of these tools sounds like a woodworker who discards a chainsaw because he loves his splitting axe so much.
Are you aware of the meaning of the phrase "it sounds like"? Because comments like this just make the whole discussion-in-the-web thing a rather unpleasant experience.
Not sure how the meaning of the phrase "it sounds like" plays into this.
My point was that you accuse the parent of arrogance, while he was merely mirroring/responding to the arrogant tone in your comment (in the lines I've quoted).
That is, my point wasn't that you made an absolute claim ("your code is X") that was false. If that was my claim, yes, you could refute it by saying "I only said that it sounds like X is the case, I didn't make an absolute claim".
But my point was rather that the tone in your comment was arrogant.
"Your code base simply wasn't too big after all" is equally arrogant whether prefixed with "sounds like" or not.
It makes assumptions about the size of the other person's codebases, and it arrogantly implies that they don't see what they see because they haven't worked in as large codebases as you have.
If you really worked on large projects, you would know that integration tests involve more than just one language. They are a helpful tool, and a very important one. But if I had to rely on integration tests to figure out API breakage, I would have to quote a massive price for any library upgrade. Fortunately I have cheaper and better means.
> It's a little sad that you didn't have the self awareness to recognize that my feigned arrogance was satirizing yours ;-)
Yeah right. You quote two projects and consider your work better tested than mine. And I am the arrogant one. All I said is that you sound like you did not see the kind of projects I did.
Here's the thing. Either you are really onto something that all us soundness-first old farts don't know, in which case you are going to revolutionize software development, which has been coping with the same issues since the 70s. Or you are simply an arrogant youngster full of Kool-Aid, and you will crash against the same walls everyone else in the industry did.
I can't help the feeling that your post is about Python vs Java rather than static vs dynamic.
And I don't see how your 3 points about the velocity thing are affected by static types.. Like, come on, many points can be argued over compiler usefulness, but help with throwing away code is rather not one of them.
>I can't help the feeling that your post is about Python vs Java rather than static vs dynamic.
Because the OP was about python and because python and java are the two languages where I have worked on refactorings of large scale code bases.
I can't help but feel that if you make a hypothesis (e.g. "static typing is naturally better"), you need to be open to examples that disprove this hypothesis.
>And I don't see how your 3 points about the velocity thing are affected by static types..
Static typing usually means a lot of time spent satisfying the compiler. I don't disagree that this can be useful because it catches bugs (although a lot of stuff caught by compilers are not bugs). I disagree that this isn't a cost/benefit trade off that naturally works out.
I don't think you disprove the hypothesis, though. I think you are talking about an equally, if not more, important thing: ergonomics. Java is not an example of static types not bringing benefits; it's an example of other things overwhelming those benefits.
> I disagree that this isn't a cost/benefit trade off that naturally works out.
I agree, it is. I just think that programming languages are a whole package and Java is really bad with this cost/benefit ratio. I'd rather see something else being an ambassador of static types.
>I don't think you disprove the hypothesis, though. I think you are talking about equally, if not more, important thing - ergonomics
Types are an intrinsic part of ergonomics.
>I'd rather see something else being an ambassador of static types.
I'd be happy to see another language be an "ambassador of static types", provided the tip of the iceberg of 'visible things built in it' doesn't comprise, say, one spam filter and a file converter.
I'm getting pretty tired of hearing about how awesome and productive haskell's type system is for 15 whole years and seeing a whole lot of evidence that it's waaaaaay too unproductive to build anything of use in it.
Rust is a fine ambassador of static types (and has some great usage examples), but rust and python address extremely different non-overlapping markets and rust isn't shy about the fact it makes it fucking hard to compile things in.
For example, creating a new type in Java and in Rust is a whole different world. If you can't separate Java the language from the idea of static typing and type systems, you are probably not suited for such a discussion.
As for your remarks about Haskell, welp, I have built things with both Haskell and Python, and it's not even close. But I guess chasing runtime exceptions during development is an acquired taste.
Everybody evaluates new programming languages relative to the previous programming languages they've used. In fact, you can usually tell what language(s) a person comes from by their complaints about the new one they have to use.
> When you see a huge codebase in python grown over several years and try to refactor it, reason about it, or just enhance it, you will inevitably come to the conclusion that the omission of static type checking is a grave error from a system-design perspective.
I never did.
When you give a C++ or Java programmer a large Python codebase, they'll find it hard to work with, and conclude that the problem with Python is the "omission of static type checking", i.e., the defining feature that their previous language had.
Coming from Lisp, when I found large Python codebases hard to work with, I concluded that the features of Lisp (macros, symbols, conditions, multimethods, dynamic vars, feature-complete lambdas, etc.) were the problem. Static type checking is one tool to help deal with the massive quantity of dirt [1], but better abstraction capabilities mean there's an order of magnitude less dirt to deal with in the first place.
Yes, static types are a means of communication. But they really only communicate what's already there. Conditions and multimethods are also means of communications, and they enable me to say what I mean in higher level terms instead of spelling it out in lower-level constructs each time. I don't worry about "refactoring tools" in Lisp because I don't have to repeat my intent all over the codebase.
Most people come from programming languages with static type systems. The most popular dynamic languages in 2019 (Python, Javascript), unfortunately, lack the great abstraction-building features that Lisp has. It's not surprising most people extrapolate from their Py/JS experience into believing the problem with dynamic languages in general is that they don't have a static type system.
When learning about math vectors, the code often looked like x Pos + y Vec. But when rewriting it in a language without annotations, it became hard to understand. So I argued for adding annotations to the language. But the other side of the argument urged using better variable names instead. So I rewrote it as position + speed, and it became even easier to understand. And I have never missed annotations since.
If the type is something generic like Tuple, it doesn't tell you what it means. If it's a type with domain semantics like Speed or Velocity, then you're back to the issue of naming variables.
No, but it tells me what types the tuple elements are, and thus indirectly (I know where to look to find out), how I can use them, and what calling functions on them will do.
It doesn't tell me their significance for the business logic, that's what variable names are for.
It sounds simple on math vectors, but some systems are inevitably complicated and in constant flux.
If you are making a linear collection, backed by a resizeable array, your interface is clear.
If you are making something like an I/O subsystem of an OS, in 10 years your primitives may suddenly change. For example, flash storage replaces spinning disks, and you need to schedule block erasure, preferably when user programs are idle. Now you need to find all the places blocks are detached from the file structure, excluding FS journaling, and change the logic there to accommodate the new scenarios, trying to reuse as much code as possible from your everyday operations. In a statically typed language I can perform an exhaustive search for all such cases in no time, and make guaranteed-safe refactorings to deal with the problem. Not so in dynamic languages.
The change happens even faster with 3rd party networked services/libraries, because their evolution is not held back by expensive experimentation.
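The "exhaustive search" above is mechanical in a language with checked unions. A sketch in TypeScript (the operation names are invented to mirror the block-storage example): adding a new variant to the union makes every non-exhaustive handler fail to compile, which is exactly the list of places to update.

```typescript
// Today's primitives. Adding { kind: "erase"; ... } to this union later
// would turn every unhandled switch in the codebase into a compile error.
type BlockOp =
  | { kind: "read"; block: number }
  | { kind: "write"; block: number }
  | { kind: "detach"; block: number };

function handle(op: BlockOp): string {
  switch (op.kind) {
    case "read": return `read ${op.block}`;
    case "write": return `write ${op.block}`;
    case "detach": return `detach ${op.block}`;
    default: {
      // Exhaustiveness check: if a case were missing, op would not be
      // narrowed to never and this assignment would not compile.
      const unreachable: never = op;
      return unreachable;
    }
  }
}

console.log(handle({ kind: "detach", block: 7 }));
```

The compiler error list then doubles as the refactoring to-do list.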
Is it? I think it's fairly uncommon to use types to enforce correct units in calculations, and you need a pretty sophisticated type system to be able to do it.
I'm not saying it's a bad idea, just that it's not really a core application of type systems in real-world programming practice.
Sure, but the general idea that you should disallow undefined operations are a core application of type systems.
You don't need too advanced of a type system to implement what you're talking about. With the caveat that doing it in Java would make the code tedious to write ;-).
If you have a system where the developers often aren’t sure what it does, how to organize the code, aren’t designing for maintainability, etc., static type checking really isn’t going to help you pick up the pieces afterward.
I like static type checking for what it does, but it solves some little problems, not big ones. (And can cause their own little problems.)
In other words, it will help you not break things. Tests are existential proofs of something broken, but types establish a universal proof that something is not broken in specific ways. Tests simply can't replace the properties guaranteed by types.
And types can’t represent the properties shown by tests. Type checking as typically applied only catches the simple problems of feeding the wrong argument into a function.
Mere mortals can’t construct type systems to enforce most definitions of correctness or licenses or performance.
> And types can’t represent the properties shown by tests.
That's not true generally. Dependent types can be used to represent any property that a test might check, and lightweight alternatives can prove a subset [1]. And this makes sense from the kind of logical property being established: you can easily turn a universally quantified proposition into an existentially quantified one, but going the other way around means checking all possible permutations of values.
However, proving a universal quantification is sometimes (often?) far more difficult than proving an existential one, which is why programs have to be structured in a certain way to make them amenable to such proofs.
> Mere mortals can’t construct type systems to enforce most definitions of correctness or licenses or performance.
Type systems always improve performance, so I don't know what you mean by that exactly.
It’s not an either/or. Both types and tests help, for different reasons.
For programming in the large, both should be used. In the long run it’s about saving time that is otherwise wasted on wolf-fencing and hair-pulling; and in getting new developers up to speed faster.
Actually with dependent types you totally can encode proofs that your logic is correct. But I don't really see that trickling down into mainstream programming anytime soon.
I normally don't respond to snark but what the heck I'm feeling festive.
Your stated "big problems":
* developers often aren’t sure what it does
It's easier to understand what code is doing in a strongly typed system. Just trying to find where a method is defined in a dynamic language is often difficult and time-consuming.
* how to organize the code
Strongly typed languages generally encourage better code modularity. Not a panacea, but a help.
* aren’t designing for maintainability
Automatic, safe refactoring is a crucial tool in the maintainability toolkit. Again, not a panacea, but a huge help.
I understand there are always bigger problems, and adopting a strongly typed approach after a mountain of technical debt has already accumulated may not be worth it. And if your developers can't write good, maintainable code regardless of static/dynamic types, that is certainly a bigger problem. But none of this means that static types don't solve big problems, which was the assertion I disagreed with.
> In a typed language, only polymorphism which can be proved correct is admissible. In an untyped language, arbitrarily complex polymorphism is expressible. [Hickey's transducers give an] example of how patterns of polymorphism which humans find comprehensible can easily become extremely hard to capture precisely in a logic. Usually they are captured only overapproximately, which, in turn, yields an inexpressive proof system. The end result is that some manifestly correct code does not type-check.
There is something here close to a feeling that I've got but struggle to express. My two day-to-day languages are Python and C#. My C# code is usually longer and more complex than my Python code. At least part of this isn't down to the problem domain, syntax, or features available in one but not the other. However, I struggle to put into words exactly why, or what the cause is (or even to really quantify the difference, as I'm doing very different things in each language).
The pendulum has swung massively towards pro-static-typing but I remember the last cycle and I would like to hear from more people who were around for the last time we all started to decide static typing had huge drawbacks and the cost of dynamic typing was a price worth paying.
Well, C# is a pretty verbose language and it really pushes you towards object orientation (which is itself a verbose paradigm). It's not surprising that your code is longer in C#. I'd say a more fair comparison with Python would be F#, which is roughly as terse as Python.
I've written a lot of C++, and a fair bit of Python. Dynamic typing is seductive at first, but you eventually realize that you would much prefer errors be caught at compile time, rather than run time.
I want a language where I can:
cInches in1, in2, in3;
cCm cm1;
in1 = in2 + in3; // OK
in1 = cm1; // compilation error: cannot mix units
in1 = in2 * in3; // compilation error: square inches are not inches
in1 = 2.0 * in2; // OK
float f1 = in1 / in2; // OK
cCm foo(cInches in) {
    return cCm(in * 2.54);
}
foo(cm1); // compilation error
Naturally this is accomplished with special classes (not shown here), it is not an inherent property of the language.
Both are measures of length. I can add an inch and a centimeter, and the result is a well defined length that we can express in inches, centimeters, light years, or any other measure of length. Likewise for areas or ratios of lengths.
I would generally prefer it if the language just gave me the correct result, which I can then print out in the user's preferred unit.
Lengths are just as absolute as integers, but we have different ways to input and represent them. A sufficiently advanced language understands the equivalences and does just the right thing. Adding apples and oranges does not happen.
Generally your function should not expect inches, it should expect some length and I can input it in any unit of length I want.
Sure there are cases where you're constrained and need to account for the limited range and precision of your underlying storage format, but that should not be the default for any modern high level general purpose programming language. We don't need to concern ourselves with grade school stuff, the computer can do it for us.
Adding apples to oranges is concern at a lower level than the abstraction we should be working with by default. Doing arithmetic by hand on pen and paper is pretty low level.
Adding apples to oranges is also a concern when you don't have proper types by which the system can deduce that orange is actually just 2.54*apple.
You're missing the point. I make an app that prints charts. It includes a function that subtracts user-specified margins from the paper width to calculate available space. Some of my customers are in the U.S. (inches), others are not (cm).
If I call
printPage(0.75, 0.75);
you really think it's going to be all the same whether "0.75" represents inches or centimetres?
I am not sure if your "I want a language where I can" is followed by an implicit ", and C++ is that language" or ", I wish it existed".
I am guessing the former, as your example is possible in C++, but it doesn't seem like a great example because you are using units of measure for your types. There are languages that support those natively and, obviously, in such a case assigning either centimeters or inches to a distance-typed variable is OK. The type error would be in mixing distance and, say, weight.
I meant C++. I'm using this currently in C++ production code. Has saved me tons of debugging.
What I meant was that if I'm going to look at a new language, I would expect it to support this type of functionality.
I suppose I can enhance the classes and overload the operators to automatically convert from cm to inches when doing mixed arithmetic. But this will quickly become insanely complicated when considering all possible permutations of arithmetic operators.
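One way to sidestep that permutation explosion (sketched here in TypeScript, though the same design works with C++ classes and operator overloads) is to normalize to a single canonical unit at construction time, so the arithmetic only ever sees one representation:

```typescript
// Sketch: every Length is stored in one canonical unit (cm), converted on input.
// Arithmetic then needs a single implementation, not one per pair of units.
class Length {
  private constructor(private readonly valueCm: number) {}
  static fromCm(n: number): Length { return new Length(n); }
  static fromInches(n: number): Length { return new Length(n * 2.54); }
  add(other: Length): Length { return new Length(this.valueCm + other.valueCm); }
  sub(other: Length): Length { return new Length(this.valueCm - other.valueCm); }
  toCm(): number { return this.valueCm; }
  toInches(): number { return this.valueCm / 2.54; }
}

const total = Length.fromCm(10).add(Length.fromInches(1)); // 12.54 cm
```

Conversions happen only at the edges (input and output), so adding a new unit means adding two small functions, not N new operator overloads.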
The problem is, just like with "function", the usage of the word "type" is a bit less formal in programming than in mathematics.
As far as I can tell, we use "types" in programming for 4 things:
1. Modelling the domain (e.g. type Temperature)
2. Limiting the range of values (e.g. enum)
3. Decision about how to store the data (e.g. ArrayList)
4. Specification of dispatch on values (e.g. file as an interface)
The formal type theory deals with 1 and perhaps a bit of 2. The original usage of types in programming (in old assemblers, Cobol, etc.) was 3. Dynamic programming languages primarily use the term in the 4th meaning.
Your #2 is wrong (sorry). Enums in most mainstream languages don't "limit"; they merely specify known values. Unknown values are still possible. For a type-safe enum, see discriminated union.
Can you give some examples other than C, C++, and C# where this is true? Most languages either don't have them as definable types, are C-based and therefore map them to integers, or actually limit values.
AFAIK, python, java, rust, ocaml, haskell, and swift all have enums that actually limit values.
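For concreteness, here is what the "type-safe enum" style mentioned above looks like as a TypeScript discriminated union (names are illustrative): the set of values really is closed, and the checker verifies exhaustiveness.

```typescript
// A closed set of variants; "kind" is the discriminant tag.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "square": return s.side ** 2;
    // No default needed: the checker knows every case is covered, and
    // adding a new variant turns this switch into a compile-time error.
  }
}
```

Unlike a C-style enum, there is no way to smuggle in an out-of-range value without an explicit cast.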
"Personally, I want to write all my specifications in more-or-less the same language I use to write ordinary code."
If we take 'specifications' one level further and actually want our algorithms to be proven correct, as in, by a proof assistant, it is quite unclear whether this is possible or desirable. Maybe the best way to program is to have specifications in a language as close to mathematics as possible and executable code in a language much closer to the machine. It is not clear to me that both of these things should be the same language.
"Equivocating around type-safety"
The author seems to want 'type-safety' to mean that it is made sure that an integer in memory is not accessed as if it were a float. While this is clearly a desirable feature that dynamically typed languages also provide by checking and/or converting at run time, I really don't think this is the meaning of 'type-safety' in common parlance. If I attempt to use a variable as if it were of a different type, I generally get a runtime error in a dynamic language, e.g. because I am accessing a field that does not exist. 'Type-safety' is the property that the language prevents this runtime error from happening because it is checked at compile/lint time.
> I really don't think this is the meaning of 'type-safety' in common parlance.
The bulk of the article is a call for more accurate use of terminology, so in a sense you’re noting the same thing: people using “type safe” when they mean “compile-time type-checked”, as a specific misuse of terminology.
Types are safe if they are checked, and the checking can be done statically at compile time or dynamically at run time.
Perhaps we’re just so used to working with type-safe languages that it is taken for granted (it would be quite unusual in eg Web Dev to use a language that is not type-safe) that the terminology is repurposed to be useful in conversations that we might have more often, eg, whether to rely more on static vs dynamic type checks.
> The author seems to want 'type-safety' to mean that it is made sure that an integer in memory is not accessed as if it were a float. While this is clearly a desirable feature that also dynamically typed languages provide by checking and/or converting at run time I really don't think this is the meaning of 'type-safety' in common parlance.
The author is wonderfully clear on this point. Under what he calls “Saraswat type safety” (not the best name), a dynamically typed language is type safe. So is a language like B that just has a word type (sort of... B may or may not have had UB depending on your perspective and UB may or may not preclude type safety... again depending on perspective).
There is a modern new-age use of the term “type safety” that means something less precise than this, namely your idea that it somehow precludes dynamic type checks. What nonsense. If a language checks array bounds dynamically, does that mean it’s not type safe? Give me a break.
> If we take 'specifications' one level further and actually want our algorithms to be proven correct, as in, by a proof assistant, it is quite unclear whether this is possible or desirable.
It is not only possible, it is how dependently-typed languages/proof-checkers like Agda work.
The question is whether the performance hit one gets from running such a language at run time is acceptable. For one thing, it pretty much forces one to use a purely functional language. Which may or may not be a good thing.
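A tiny illustration of the point (sketched in Lean rather than Agda, but the idea is the same): the program and the statement about it live in one language, and the checker verifies the proof rather than trusting it.

```lean
-- The program and its specification share one language.
def double (n : Nat) : Nat := n + n

-- The proof that double meets the spec is machine-checked, not trusted.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Whether you then *run* `double` in the proof language or extract it to something closer to the machine is exactly the performance trade-off raised above.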
> Presenting type-level programming as a good thing
> Fetishizing Curry-Howard
These seem a bit causal :)
As you get closer and closer to dependently-typed programming (which necessitates more type-level programming, because dependent types aren't present in any production language yet), Curry-Howard actually can be a nice way to ground yourself when you get confused or stuck programming at the type level.
The word "Type" is such an overloaded term, just like how "Workflow" is an overloaded term in the tech industry[1].
For example, when people say "X is a strongly typed language", it means something much different than when they say "In Haskell, functions have types" [2] or "Idris has dependent types".
They aren't completely unrelated, but they mean very different sets of things depending on the context.
> But please, stop pretending that type-level programming is a desirable thing. It's a kludge, and hopefully a temporary one. Personally, I want to write all my specifications in more-or-less the same language I use to write ordinary code. It's the tool's job to prove correctness or, if it must, tell me that it failed.
I don't see explicit type-programming as (always) a kludge; oftentimes it's the primary vocabulary for expressing certain ideas. I.e. type annotations are not (always) just typechecker-hints, but can be explicit documentation of intent.
> We need to be able to step up the strength of our specifications without switching language all the time. We need to integrate typed and untyped code
This has been a revelation during my time spent with TypeScript. The fact that I can just add a comment that disables an unhelpful type error on a given line is a game-changer. Or that I can assert to the system "no really, this value that has a union type is actually just this particular subset of that union" (cast) and retain meaningful checks after that point. Having a type system that's flexible has made it enormously more useful. It allows you to deal with those cases where what you want to do can't be type-checked, within the context of (and playing nicely with) a broader system that is type-checked.