The title is pure clickbait: it suggests that type safety doesn't matter, but then the author explains why it does matter, e.g. catching errors at compile time instead of at runtime. A better title would be "My take on type safety".
Pushing runtime complexity into compile-time complexity almost always pays off, regardless of how many people will tell you that a hash table of hash tables can take care of anything.
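A minimal TypeScript sketch of that tradeoff (the names are made up for illustration): the nested-map version compiles whatever you throw at it, while the typed version rejects the mistake outright.

```typescript
// "Hash table of hash tables": accepts anything, so mistakes
// surface only at runtime, if ever.
const users: Record<string, Record<string, unknown>> = {};
users["alice"] = { emil: "alice@example.com" }; // typo, compiles fine

// Typed version: the same typo fails at compile time.
interface User {
  email: string;
}
const typedUsers: Record<string, User> = {};
typedUsers["alice"] = { email: "alice@example.com" };
// typedUsers["bob"] = { emil: "bob@example.com" };
// ^ error: 'emil' does not exist in type 'User'
```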
So this basically summarizes as "A) the goal is to reduce runtime errors so manage your time well, also B) linting tools are good at doing that quickly"?
I feel like it could really use a couple examples of situations where type safety is a waste of time. As-is the post vaguely suggests a tradeoff and doesn't elaborate.
I think it's not saying that the time is wasted, but that it could be better spent verifying the same thing with other techniques. For example, you could implement a graph type that guarantees the absence of cycles at all points. Such a type would be great, except that in most cases you would be better off preventing such cycles by construction, e.g. by only adding edges directed towards nodes that are not yet added.
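A rough sketch of that correct-by-construction idea, with the edge direction flipped so that targets always exist already (the class and its API are illustrative, not from the post):

```typescript
// Acyclic by construction: a node's outgoing edges can only target
// nodes added before it, so insertion order is a topological order
// and no cycle can ever form.
class DagBuilder<T> {
  private nodes: T[] = [];
  private edges: Array<[from: number, to: number]> = [];

  // Returns a handle for the new node; edges may only point from it
  // to handles returned by earlier calls.
  addNode(value: T, pointsTo: number[] = []): number {
    const id = this.nodes.length;
    for (const target of pointsTo) {
      if (target < 0 || target >= id) {
        throw new RangeError(`edge target ${target} is not an existing node`);
      }
      this.edges.push([id, target]);
    }
    this.nodes.push(value);
    return id;
  }
}

const g = new DagBuilder<string>();
const a = g.addNode("a");         // no dependencies
const b = g.addNode("b", [a]);    // b -> a
g.addNode("c", [a, b]);           // c -> a, c -> b
// There is no way to make "a" point at "c": "c" did not exist yet.
```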
> It’s only useful because of what it accomplishes: moving errors from runtime to compile time. Even that isn’t a goal on its own. The real goal is reducing runtime errors. Type safety is one of the best methods of achieving these cascading goals, but it’s far from the only one.
Why is an example necessary for this? It's trivially correct.
It doesn't take into account the whole range of errors that can creep into runtime. If a type system only moves 2% of programming errors to compile time, but costs you in terms of coupling, complexity, and effort, it may not be worthwhile. This is especially true if you can catch that same 2% of errors with tests instead and avoid the coupling and effort penalties.
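As a toy illustration (the function is made up; this assumes Node's built-in assert), here is the kind of error a type annotation would catch, caught by a plain test instead:

```typescript
import assert from "node:assert";

// Classic dynamic-typing bug: a string sneaks in where a number is
// expected, and "+" silently concatenates instead of adding.
function addTax(subtotal: any): any {
  return subtotal + subtotal * 0.1;
}

assert.strictEqual(addTax(100), 110);      // passes
// assert.strictEqual(addTax("100"), 110); // would fail: returns "10010"
// An annotation (subtotal: number) catches the same bug at compile
// time; the test catches it without any typing effort.
```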
Personal opinion: Type systems are more useful for initial code comprehension and tooling than for writing it in the first place, or preventing bugs.
It doesn't just move errors to compile time. It moves runtime errors into the interface layer, where you parse a request to bind it into your type, so that by the time it reaches the persistence layer or something deeper you don't need to bubble up validation errors. You enforce your invariants at the boundaries to provide safety to your interior code (defensive programming) and meaningful error messages to users (developer experience).
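That is essentially the "parse, don't validate" pattern. A minimal TypeScript sketch using a branded type (the Email brand and the regex are just for illustration):

```typescript
// Validate once at the boundary and record the result in a type;
// interior code accepts only the validated type and never re-checks.
type Email = string & { readonly __brand: "Email" };

// The only way to obtain an Email: parse at the interface layer,
// with a meaningful error message for the caller on failure.
function parseEmail(raw: string): Email {
  if (!/^[^@\s]+@[^@\s]+$/.test(raw)) {
    throw new Error(`"${raw}" is not a valid email address`);
  }
  return raw as Email;
}

// The persistence layer can assume validity: an Email cannot be
// constructed unchecked, so there are no validation errors to bubble up.
function saveUser(email: Email): void {
  // ... write to the database ...
}

saveUser(parseEmail("alice@example.com")); // ok
// saveUser("not-an-email"); // compile error: string is not an Email
```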
I assume they use a bunch of dynamically typed functional programming languages which motivated them to write this somewhat incoherent article in their defense.
I agree completely with the premise and the conclusion. However, I would not describe types as moving errors to compile time, but as moving effort to the earliest parts of the workflow: more time spent on writing code, more time spent on thinking before doing, more time spent on specifying your interfaces. Agreed on the diminishing returns as well: use types as long as your leverage (= time saved debugging / (writing time + reviewing time + maintenance time)) is good. Maintenance time in particular is often overlooked: how much time will it take to train the new recruit so that they understand your oh-so-smart custom types?
For those dinging the article, please check this line:
> when discussing architecture of code or reviewing a pull request, I will often times push back on changes that add more complexity in the type system. The reason is because, even if a change adds “type safety,” this extra complexity is only warranted if it achieves our primary goal, namely reducing runtime errors.
It's poorly emphasized, but the author is referring to Typescript-style static typing, which can come with a truckload of complexity. Something like Go's type system is fine (there's nothing sophisticated about it), but Typescript's type system gives you enough rope to hang yourself with.
Yes, it's this line that gives the hint that the author means Typescript when they mention they will "often times push back on changes that add more complexity in the type system". They're referring to code that tries to make the type system do too much in the name of "type safety" and ends up overly complex. You simply can't get that kind of complexity with a dumb type system like Go's.
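A contrived example of the rope in question: the recursive route-parameter idiom, which nothing in Go's type system could express.

```typescript
// Type-level parsing of a route string into a params object,
// entirely at compile time.
type ExtractParams<Route extends string> =
  Route extends `${infer _Pre}:${infer Param}/${infer Rest}`
    ? { [K in Param]: string } & ExtractParams<Rest>
    : Route extends `${infer _Pre}:${infer Param}`
      ? { [K in Param]: string }
      : {};

// Resolves to { userId: string } & { postId: string } -- clever, but
// every future maintainer now has to decode the recursion above.
type Params = ExtractParams<"/users/:userId/posts/:postId">;
```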
They still love Typescript, but they don't abuse the type system.
I would still not refer to that kind of complexity as "Typescript-style" if I'm trying to interpret the author, since the kind they like is also "Typescript-style".
> It’s only useful because of what it accomplishes: moving errors from runtime to compile time.
...
> Bug reduction is not the only benefit of strong typing. There’s also: easier codebase maintainability, simplicity of refactoring, new engineering onboarding, potentially performance gains, and probably a few other things I missed.
But besides that, what have the Romans ever done for us?
It is a very simple blog post and it makes a simple point, but I also think it is a very important point: types are a means to an end.
I am implementing Practal in TypeScript, and it is so much more productive than doing it in JavaScript would be. I am (mostly) not using any advanced features of the type system, but rather its simple features, as described in this blog post. The greatest feature: I can ignore the type system where it makes sense. That isn't as bad as it might sound, given that the type system of TypeScript is not sound in the first place.
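A small illustration (not from the Practal codebase) of what ignoring the type system where it makes sense can look like:

```typescript
// JSON.parse returns any, so this one value is deliberately
// unchecked, while the surrounding code keeps ordinary static types.
const raw = '{"debug": true, "retries": 3}';
const config = JSON.parse(raw); // typed as any by choice

function retryCount(n: number): number {
  return Math.max(0, n);
}

// any is assignable to anything, so this compiles without complaint;
// checking resumes inside retryCount, which stays fully typed.
console.log(retryCount(config.retries));
```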
The blog post suggests also thinking about other means to achieve the end (fewer bugs), such as push-button technology like static analysis. I agree with this, but am more interested in another direction of developing this thought: if we are actually proving theorems in a logic about mathematical objects like code, we don't need a type system either. In fact, a type system can and will get in the way of formulating your theories in the simplest and most elegant way. It adds the additional constraint that, on top of figuring out how to express something mathematically, you also need to figure out how to express it in this particular type system. And while these two things are often aligned well enough, this is not always the case. By the way, note that I am not saying that types themselves are not useful. They are. It is just that we don't need a static type system to use types, which I think of as abstract sets.
Let me anticipate a comment someone will make: but type systems are how logic is implemented in practice, what are you talking about?! And yes, if you look at the main systems in the space, like Isabelle, Lean, and Coq, they are based on type systems. But that is not necessarily how it needs to be done. You could be using first-order logic, as Mizar does, but that comes with its own set of restrictions, mainly missing out on higher-order features. What I am implementing in Practal instead is Abstraction Logic [1], which is higher-order, but based on a single mathematical universe instead of types.
Right now that is a far cry from the push-button alternatives the blog post suggests, but a) it is not opposed to these alternatives, it is an additional angle to view things from, and b) it is becoming much more push-button than it used to be, with the help of increasing compute power and AI.