
How porting to TypeScript solved our API woes - mrbbk
https://www.executeprogram.com/blog/porting-to-typescript-solved-our-api-woes
======
dguo
Note that Stripe is working on a type checker for Ruby:
[https://sorbet.org/](https://sorbet.org/)

I second the sentiment though. I wouldn't choose to use plain JavaScript over
TypeScript for any significant project.

You might ask why I wouldn't just use a more fully typed language like Java.
My main response is that I love having the flexibility to choose how strict
the types should be. Sometimes, typing something out fully is not worth the
trouble. Having escape hatches lets me make that choice. While I enjoy using
types to turn runtime bugs into compile time errors as much as possible, it's
not the right thing to do 100% of the time.
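That flexibility can be made concrete. A minimal sketch (function names and the JSON shape are my own) of the same parsing boundary typed two ways, with `any` as the escape hatch and `unknown` as the strict end:

```typescript
// The same JSON boundary at two strictness levels. `any` opts out of
// checking entirely; `unknown` forces a runtime check before use.

// Escape hatch: convenient, but a typo in `items` would still compile.
function totalLoose(json: string): number {
  const data: any = JSON.parse(json);
  return data.items.reduce((sum: number, n: number) => sum + n, 0);
}

// Strict end: the compiler refuses to touch the value until narrowed.
function totalStrict(json: string): number {
  const data: unknown = JSON.parse(json);
  if (
    typeof data === "object" &&
    data !== null &&
    Array.isArray((data as { items?: unknown }).items)
  ) {
    return (data as { items: number[] }).items.reduce((s, n) => s + n, 0);
  }
  throw new Error("unexpected shape");
}
```

Both behave identically on well-formed input; they differ only in how much the compiler is allowed to check, which is exactly the choice being described.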

~~~
gregkerzhner
Having partial type safety is like having a no peeing section in the pool.

~~~
jjeaff
That's not true. There are plenty of cases where certain pieces of software
are cordoned off and/or non-critical, so that they would not affect the rest
of your stack if they fail.

~~~
tristanstcyr
If you properly do validation at runtime this can be true, but it's a manual
process that's not verified by TypeScript.

------
eat_veggies
> That means that when one side of the API changes, the other side won't even
> compile until it's updated to match.

Coupling your front end and back end in this way can give you false confidence
in making API changes. If you control both sides of the API on a web app that
doesn't have too many users yet, then this can be very productive, but
consider this situation:

You've made a breaking API change on the back end, and thanks to your type
system, you make the corresponding front end changes as well, which is great.
You deploy your code, and the back end changes take effect immediately.
Visitors who were in the middle of a session are still running the old front
end code, and their requests start failing. Cached copies of the front end
start failing too.

You can engineer around this but it's better to have a system that doesn't
make introducing breaking API changes too easy. It should be painful.
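For concreteness, one way to "engineer around this" is to make API changes additive, with a deprecation window instead of a hard rename. A sketch in TypeScript (the request shapes and field names are hypothetical):

```typescript
// During the deprecation window the server accepts both request shapes,
// so stale front end code keeps working until it can be retired.
interface RegisterRequestV1 {
  email: string;
}

interface RegisterRequestV2 {
  email?: string; // deprecated, still accepted
  emailAddress: string;
}

function getEmail(req: RegisterRequestV1 | RegisterRequestV2): string {
  // Prefer the new field; fall back to the old one for old clients.
  if ("emailAddress" in req) return req.emailAddress;
  return req.email;
}
```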

~~~
jkaptur
And if you do a gradual rollout, newer clients can connect to older backends!
And if clients can talk to each other, there can be THREE versions involved.
Also: rollbacks.

I’m curious if anyone has a system for managing this that they love. I’ve
pretty much only seen painful ways.

~~~
hamandcheese
At <dayjob> we use flow types and graphql, so like OP our frontend will fail
to compile if we make a breaking change. To assist with the backward-
compatible-during-deploy issue, we additionally have a teensy bit of tooling
that comments on our PRs to indicate dangerous API changes.

It's not perfect (I've ignored the comments before, thinking I knew better...
I didn't), but it seems to help.

It wouldn't be difficult to make it more sophisticated, and then completely
block PRs it knew weren't backward compatible, but we haven't seen a strong
need to do that yet.

------
lmm
If you control both ends of an API, there's no reason not to use Thrift or
equivalent (gRPC) rather than untyped JS. Even if you don't believe in typing
in general, an API boundary is exactly where it's most important to make sure
both sides agree on what the interface is.

~~~
jpdb
Calling gRPC endpoints from the browser isn't something that works very well
today.

~~~
lmm
Calling thrift endpoints from the browser works great. I had heard that gRPC
was a thrift equivalent but haven't used it myself.

------
leipert
Do I read the chart correctly that their whole codebase (minus dependencies)
is less than 15k lines of code? Porting 6.5k lines of Ruby in 2 weeks sounds
reasonable, but doing migrations 10x or 100x that size is far more
challenging.

~~~
myth_drannon
I think Gary is the only person working on the project, and it's a new
project, so no wonder he migrated it super fast.

------
orange8
TypeScript has been on HN for different reasons over the last few days. The
main thing that has stood out, from the flame wars erupting over it is that it
is a truly divisive idea. To some, it gives the same securities and checks
provided when working with a statically typed language. To others, it curtails
the power and expressiveness inherent in JS, the dynamic, functional language
it compiles down to.

TypeScript does provide a lot of benefits, but for it to truly succeed and
take over in the JS world would mean it has to truly also reflect, and enable
JS's functional and dynamic roots. JS got where it is by being JS.. an
extremely flexible and accommodating language. HTML got this far today by
doing the same, just google XHTML if you doubt that. So for TypeScript to
succeed where CoffeeScript failed, this may be the direction it needs to lean
more towards: being less divisive, and inviting all kinds of programming
paradigms to the party.

That, after all, is how JS succeeded.

~~~
scarface74
_JS got where it is by being JS.._

JS got where it is solely by being the language built into browsers.

~~~
orange8
Is that really all it is though? Cases in point: Java Applets, VB, Flash, Dart
...

All of these were at one point or another "built into the browser", but where
are they now? Give credit where it is due, the success of the web as a
platform lies not in its technical superiority over alternatives, but in its
inclusiveness and flexibility. Any tech that is trying to replace HTML, CSS
and JS in this regards will seriously have to consider and accommodate this,
or suffer the same fate of hundreds of other pretenders to the throne. Long
live the king! Long live open, approachable and flexible tech!

Exhibit A: List of very different and diverse languages that compile to JS (
[https://github.com/jashkenas/coffeescript/wiki/List-of-
langu...](https://github.com/jashkenas/coffeescript/wiki/List-of-languages-
that-compile-to-JS) ). If this does not demonstrate the flexibility and
malleability of the language, then I do not know what does. Take WebAssembly,
for example, which has been around for over five years now and was
specifically designed as a "compile to" language. How many languages compile
to WebAssembly in comparison?

Any tech that is as divisive as TS is simply not going to get far. Flash
ActionScript was massive 10 years ago compared to any alternative to JS tech
today, and where is it now? The creator of TS even quotes ActionScript as one
of the main inspirations for TS. ActionScript even had a more powerful version
of React's JSX (E4X), where is it today? CoffeeScript was all the rage 5 years
ago, JS simply absorbed all its good ideas, where is it today? The things that
last, that stand the test of time are the things that are flexible and
accommodate different ways of doing things. For your beloved TS to stand the
test of time, it has got to accommodate the whole JS eco-system, not just
those who favor the static object oriented way of doing things.

~~~
acemarke
I'm not sure why you're talking about "object oriented". TS, in and of itself,
has nothing to do with OOP. You can write functional-oriented code in TS same
as you would in JS. TS doesn't mandate use of classes or anything like that.

As an example, I can write a React+Redux app in TS, and be writing 100% plain
functions (components + reducers) the entire way through.

From my viewpoint, TS has more than hit enough critical mass to survive for
the long term:

\- Microsoft is heavily invested in its ongoing development

\- The Angular community requires use of TS

\- The React community has split in general between types and no types, but a
recent survey of /r/reactjs readers indicated ~50% of React devs are using TS
[0]

\- Where CoffeeScript introduced new syntax entirely, TS's focus on being a
superset of standardized JS means that there's both less to worry about
compat-wise _and_ it can be seen as a way to use new language features instead
of Babel

FWIW, I wrote up my thoughts on learning and using TS as both an app dev and a
Redux maintainer [1], and I'm sold on using it going forward.

[0] [https://www.swyx.io/writing/react-
survey-2019/](https://www.swyx.io/writing/react-survey-2019/)

[1] [https://blog.isquaredsoftware.com/2019/11/blogged-answers-
le...](https://blog.isquaredsoftware.com/2019/11/blogged-answers-learning-and-
using-typescript/)

~~~
orange8
OOP is not just about using classes, just like functional programming is not
just about using functions. They're different programming paradigms, each with
their own patterns, strengths and weaknesses. Idiomatic TS favors an OOP
style, which is why there are entire libraries out there [0] whose sole goal is
to enable using TS the functional way.

> The Angular community requires use of TS

Angular was once the king of the JS frontend, but now it has more or less been
reduced to a certain niche in the market. It is no accident that it is very
popular among those with a Java background, with Java being a very good
example of a static OOP language.

> The React community has split in general between types and no types

That split is being caused by TS. This is not a good thing for the eco-system
as a whole. The idea that everyone will be forced into adopting TS because of
the power of MS is the very idea that will lead to a backlash, just like the
backlashes against XHTML and Java applets.

[0] [https://github.com/gcanti/fp-ts](https://github.com/gcanti/fp-ts)

------
myth_drannon
How times are changing. The person who is famous for his amazing Ruby/Rails
screencasts rewrote his Ruby backend to TS...

------
xupybd
Wow porting the entire backend in two weeks is impressive. I guess he knew the
domain well and both languages well but I'm impressed.

------
cameronfraser
How does one deal with remote data in typescript? I really like typescript,
but not having any guarantees on the returned data is kind of frustrating.

~~~
a_humean
It is actually quite frustrating compared to some other languages, and more
often than not it's just `JSON.parse` and hope for the best. I wish we had
something like Rust's Serde or Haskell's aeson.

If you want to validate data coming over IO then your options are data
validation libraries such as joi, io-ts, and yup. You have to write separate
data validators on top of your types. io-ts has a way of deriving types from
validators, but io-ts is often seen as quite intimidating, being built on top
of fp-ts.

No matter how carefully you maintain strict typing within your TypeScript
project, the moment you hit IO everything is basically `any`.

Some projects like openapi-generator might generate some validators for server
responses, but I've not seen any good generators that do actual validation.

I'm not sure if apollo-graphql does response validation. Does anyone know?
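For reference, here is what writing a "separate data validator on top of your types" looks like by hand, without any of those libraries (the `User` shape is a made-up example):

```typescript
interface User {
  name: string;
  tags: string[];
}

// A hand-rolled type guard: the runtime check that static types
// cannot perform on data arriving over IO.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === "string" &&
    Array.isArray(v.tags) &&
    v.tags.every((t) => typeof t === "string")
  );
}

const parsed: unknown = JSON.parse('{"name":"ada","tags":["dev"]}');
if (isUser(parsed)) {
  // Inside this branch, `parsed` is statically typed as User.
  console.log(parsed.name);
}
```

Libraries like io-ts exist precisely so that this guard and the `User` type are a single definition rather than two that can silently drift apart.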

~~~
eropple
Plenty of web frameworks can validate via JSON Schema, using ajv under the
hood. (I use Fastify.)

io-ts is pretty scary, though. I like the `runtypes` library but it doesn't
help me express my types in a way clients can consume, which, enh.

~~~
Kinrany
I also liked runtypes more, but it turns out composing io-ts codecs is
perfectly readable, and being able to deserialize at the same time/instead of
validating is very convenient.

Caveat, I've only been using it for two weeks.

------
awinter-py
this is _ruby_ -> TS

was expecting JS -> TS

~~~
swrobel
It’s both

------
theonething
But Ruby is so much more fun than JS or TypeScript.

/opinion

------
gyrgtyn
I watched them go through this process via Twitter, and it sounded like
inventing that shared API model was a type puzzle I wouldn't be able to figure
out myself. Also, they're the only people I know of doing it?

~~~
Scarbutt
This is more prominent in Haskell and Scala circles.

------
femto113
Did it not occur to them to, I don't know, "test" the API when they make
changes? A compiler or stricter type system may help prevent certain careless
errors, but not all (or even most) of them, while a proper test scheme will
catch all such errors.

~~~
yakshaving_jgt
The author is one of the world's most prolific publishers of TDD educational
material.

You shouldn't be so quick to assume that everyone else is clueless.

~~~
femto113
My snarky tone was unwarranted, but I'm not actually assuming cluelessness
here, nor did I arrive at that conclusion quickly. I find in practice that TDD
and what I think of as "testing" are quite orthogonal.

> With the Ruby backend, we sometimes forgot that a particular API property
> held an array of strings, not a single string. ... These are normal dynamic
> language problems in any system whose tests don't have 100% test coverage.

TDD in general focuses on code-adjacent test strategies like unit tests. In
the Ruby TDD world it's a popular strategy to test first, code second at the
level of classes or even individual methods. In practice this "tests = code =
tests" philosophy produces both more code and a focus on metrics like coverage
that only measure "for how much of my code do I have other code that asserts
that my code is doing what the other code says it should be doing" rather than
ensuring "my code is actually doing what someone else needs it to do".

"Testing" as I intend the term means using the software to do whatever it is
supposed to. For a server side API that probably means consume it via a
client. Any client that relies on the type of a property being an array
instead of a string will blow up (except perhaps Python, grr). Any reasonably
complete smoke or integration testing regime should expose this problem, but
more immediately I think developers should be actively testing the thing they
are changing while they are changing it. Personally I dislike compilers and
restrictive type systems in large part because they _inhibit_ this sort of
rapid, iterative testing and fixing. Partially functional dynamically typed
code is far more useful to me as I work through a series of related changes
than statically typed code that requires I fix all the issues it perceives as
important before I can keep going on what really matters.

~~~
yakshaving_jgt
> Any reasonably complete smoke or integration testing regime should expose
> this problem

But that's not without cost. At least in the Ruby world (and Bernhardt
holds/held this view) integrated tests are to be avoided where possible
because it creates a negative feedback loop of slow tests and an exponential
number of tests that need to be written to achieve equivalent coverage.

> but more immediately I think developers should be actively testing the thing
> they are changing while they are changing it

This smells like the age-old discipline/rigour/professionalism platitude often
trotted out by proponents of _Software Craftsmanship_. I'd rather embrace the
fact that humans make mistakes and optimise for that, rather than hold people
to unreasonable standards.

Furthermore, while I _agree_ that people should be testing the thing they are
changing while they are changing it, you appear to be implying that only one
form of testing is acceptable here. Why not test the type signature? Why not a
formal proof of correctness? Why not a property-based test? There are many
ways to improve the chances of a piece of software to work. Restricting
ourselves to only one of those ways is, quite frankly, dumb.

> Personally I dislike compilers and restrictive type systems in large part
> because they _inhibit_ this sort of rapid, iterative testing and fixing.

That's a fine opinion to have, but that's _all_ it is. I've worked
professionally with dynamic languages, and I hold the _opposite_ view. I work
with a few projects totalling about 60,000 lines of Haskell, and I feel the
language _enables_ rapid, iterative testing much more than Ruby ever did for
me. Of course I can't prove this empirically, which is why my opinion will
only ever be as good as yours, and vice versa.

> Partially functional dynamically typed code is far more useful to me as I
> work through a series of related changes than statically typed code that
> requires I fix all the issues it perceives as important before I can keep
> going on what really matters.

I'm sorry, but this is plainly incorrect. The ability to defer type errors to
runtime certainly exists in Haskell, and I expect not exclusively.

~~~
femto113
If you can come up with a formal proof of correctness for an evolving API then
you are clearly operating on a higher plane of software development than us
mere craftspeople, and I certainly wouldn't presume to talk you out of trying.

------
valuearb
Swift on the Server!

------
shortstuffsushi
Porting to a typed language helped prevent type errors in a typeless language.
Who woulda thunk?

I am currently also working on a project written in a JS backend that's slowly
porting to TS, for the same reasons, but I still would prefer to go back to C#
and take it a level further. I just don't enjoy the TypeScript language all
that much.

~~~
smt88
> _Porting to a typed language helped prevent type errors in a typeless
> language. Who woulda thunk?_

You say this sarcastically, but every JS-related thread on HN turns into a
flamewar between type lovers and haters.

The type haters make exactly the argument you're mocking: that typeless
languages do not result in more type errors. Their reasoning is along the
lines of, "I don't need a type system because I'm a professional and don't
make mistakes."

That sounds laughable or like an exaggeration, but it's a surprisingly common
line of thinking. The buggiest code I ever worked on was a PHP code base
written by someone who had been coding for 20 years and had a Master's in CS.
Before I inherited the code, he told me, "I code in the terminal. I don't need
IDE features because I don't really make mistakes in PHP anymore."

Again, seems satirical, but not uncommon.

~~~
giulianob
Those people are also missing the point that not having types means you are
inherently writing a slow program.

~~~
diggan
That's the first time I've heard that a language without types is slower than
one with types. I'm unsure if that actually makes sense.

You mean slower as in performance? I'm having a hard time understanding how
types == performance, so you have to mean something else.

I guess assembly would be the language you would use if you really need to
squeeze out the maximum amount of performance of a single CPU, and as far as I
know, it does not have types.

Also languages built on top of LLVM could get by with trading compile times
for run time performance. Maybe you meant compiling gets slower without types?

~~~
Arnavion
Static typing means that at runtime dynamic behaviors don't need to be
accounted for.

For a compiled language, the compiler could know that `foo` and `bar` are
32-bit integers, and thus compile `foo + bar` to call the addition function
for 32-bit integers. In an interpreted language, the interpreter could do the
same thing.

Without that typing information, both would have to invoke a generic addition
function that detects the types of its arguments at runtime, notices they're
both 32-bit integers, and delegates to the corresponding addition function.

Of course there are tricks that can be played in the dynamically typed case, like
assuming the same call site will always have the same types, so that there can
be a short path that assumes that and only branches rarely. That way the first
or first few executions might be slow, but eventually they get almost as fast
as the statically-typed case. JS engines in particular and JITted runtimes in
general usually do this.
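The dispatch being described can be sketched in TypeScript (note that TS erases types and compiles to ordinary JS, so this only illustrates the work a dynamic runtime must do, not a speedup TS itself would provide):

```typescript
// What a dynamic language's `+` has to do on every call: inspect the
// operand types at runtime, then delegate to the right implementation.
function genericAdd(a: unknown, b: unknown): unknown {
  if (typeof a === "number" && typeof b === "number") return a + b;
  if (typeof a === "string" && typeof b === "string") return a + b;
  throw new TypeError("unsupported operand types");
}

// With static types the dispatch is resolved once, at compile time;
// a compiler for a typed language can emit the bare machine addition.
function typedAdd(a: number, b: number): number {
  return a + b;
}
```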

>I guess assembly would be the language you would use if you really need to
squeeze out the maximum amount of performance of a single CPU, and as far as I
know, it does not have types.

Assembly absolutely does have types. The addition function for 32-bit ints
only operates on 32-bit ints, and is distinct from the addition function for
64-bit ints.

~~~
mr_toad
> Assembly absolutely does have types. The addition function for 32-bit ints
> only operates on 32-bit ints

You can add any bits that will fit in the relevant registers. Assembly cares
not whether the programmer thinks they are ints.

~~~
Arnavion
You should read the part of the sentence you cut off in your quote.

------
TheSpiciestDev
I can't imagine not using class-transformer[0] or class-validator[1] in any
TypeScript project that deals with external/third-party APIs, remote or not.

[0] [https://www.npmjs.com/package/class-
transformer](https://www.npmjs.com/package/class-transformer) [1]
[https://www.npmjs.com/package/class-
validator](https://www.npmjs.com/package/class-validator)

~~~
exogen
I don't understand the point of transforming things to class instances,
though. All of TypeScript's strengths are available without the need for
things to be an actual `instanceof` something, right? Can you elaborate?

FWIW, I'm biased against `class-transformer` because we were wondering why
some of our (TypeScript-driven) API endpoints were so slow, and `classToPlain`
+ `plainToClass` were the culprits, comprising 2/3 of the time spent. If you
look at the source code for those functions, they're kind of insane.

~~~
eropple
Agreed that actually doing the transformations is a bit of a mug's game.
However, TypeScript's type system treats class definitions interestingly,
particularly around metadata. You can attach validation metadata to a class
and pass plain objects around that structurally match that class so long as
you don't define methods or a constructor.

As mentioned in my sibling comment to yours, I do this to define DTOs that are
then expressed in JSON Schema and passed through ajv, and it's pretty slick.
The objects being used are all just JavaScript objects, the class is just
being used as a metadata holder and something you can reference/get via
`design:type`/`design:paramtype`/`design:returntype`.

------
cryptica
The title is misleading because TypeScript does not implicitly do any kind of
runtime type checking on data sent by remote clients. If the client and server
both happen to be written in TypeScript by the same team, the type system can
give those developers false confidence that the API endpoints on their server
enforce runtime type validation on remote user input (I've seen this too many
times), but it does not.

To implement API endpoints correctly in TypeScript, you're supposed to assume
that the arguments sent by the remote client are of 'unknown' type and then
you need to do some explicit schema validation followed by explicit type
casting. Some TypeScript libraries can make this easier but it's misleading to
say that this is a native feature of TypeScript; in fact, it is no different
from doing explicit schema validation with JavaScript (there are also
libraries to do this). You should always validate remote user input regardless
of what programming language you use.
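The pattern being described, sketched for one hypothetical endpoint body (the names are my own):

```typescript
type CreateTodoBody = { title: string; done: boolean };

// Treat the remote payload as `unknown`, validate it explicitly, and only
// then hand back a typed value. The static type alone proves nothing
// about what a remote client actually sent over the wire.
function parseCreateTodoBody(body: unknown): CreateTodoBody {
  if (typeof body !== "object" || body === null) {
    throw new Error("400: body must be an object");
  }
  const b = body as Record<string, unknown>;
  if (typeof b.title !== "string" || typeof b.done !== "boolean") {
    throw new Error("400: invalid field types");
  }
  return { title: b.title, done: b.done };
}
```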

Merely getting the compile-time static type checker to shut up because the
types in your client code match the types in your server code is unfortunately
not good enough. In fact, it may conceal real issues by giving developers
false confidence that runtime type validation is happening when in fact it is
not. A hacker could write a client in a different language and intentionally
send malformed input to crash your server unless your server explicitly
validates the schema.

The reality is that there is no guaranteed type continuity/consistency between
the client and the server. Any tool which gives the illusion that there is any
kind of continuity is deceptive by design.

This is why I like plain JavaScript; it requires real discipline and it
doesn't give any sense of false confidence. Developers should always be on
their toes. The only way to improve code quality and security is by exercising
more caution, not using more tooling.

The benefit pointed out by the author of this article is in fact one of the
few genuine gaps in TypeScript's type safety capabilities. Praising TypeScript
for this fictitious feature is only going to give developers false confidence
that TS somehow takes care of input validation for them and this is going to
lead them to getting hacked.

~~~
gary_bernhardt
(I wrote the article.)

Our system uses io-ts to dynamically validate all incoming and outgoing API
data. The static API types are guaranteed to match the io-ts codecs, so the
runtime validation will match the static types.
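The trick being described can be shown in miniature without io-ts itself (io-ts is far more capable; this sketch, with names of my own, only shows how one value can be both the runtime check and the source of the static type):

```typescript
// A codec pairs a static type T with a runtime check for it.
interface Codec<T> {
  is(value: unknown): value is T;
}

const userCodec: Codec<{ email: string }> = {
  is(value: unknown): value is { email: string } {
    return (
      typeof value === "object" &&
      value !== null &&
      typeof (value as { email?: unknown }).email === "string"
    );
  },
};

// Derive the static type from the codec (io-ts calls this t.TypeOf),
// so the two definitions cannot drift apart.
type TypeOf<C> = C extends Codec<infer T> ? T : never;
type User = TypeOf<typeof userCodec>;

function handleRegister(body: unknown): User {
  if (!userCodec.is(body)) throw new Error("invalid request body");
  return body; // statically typed as { email: string } here
}
```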

Re: "it's misleading to say that this is a feature of TypeScript": I didn't
say that. TypeScript makes this kind of static verification possible. It's
impossible in JavaScript.

Re: "The benefit pointed out by the author of this article is in fact one of
the few genuine gaps in TypeScript's capabilities": yes, it's a gap in TS.
Again, TS makes it _possible_ (as opposed to JS). io-ts (mentioned explicitly
by name in the article) backfills the type erasure shortcoming for our
purposes, at the expense of being more verbose and producing worse error
messages when compared to first-class reflection in a non-type-erasing
language.

Re: "going to lead to them getting hacked": no, io-ts isn't going to lead to
that more than any other runtime validation scheme. If someone believes that
TS' static types provide runtime guarantees, yes, they could write highly
insecure API code. But it's hard for me to imagine someone getting to an
experience level where they can write a server-side router that's generic over
all possible API endpoint payloads, while at the same time not knowing that TS
erases types at runtime. Erasure comes up early in the process of learning TS
because it leads to surprising behavior, like objects at runtime having
properties that aren't present in their static type.
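That surprising behavior fits in a few lines:

```typescript
// Type erasure in one snippet: the static type admits `x` only, but the
// runtime object still carries everything the JSON contained.
interface Point {
  x: number;
}

const raw = JSON.parse('{"x": 1, "secret": "still here"}') as Point;

// Statically `raw` is just { x: number }; at runtime the extra
// property survives, because TS types vanish after compilation.
console.log(Object.keys(raw));
```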

~~~
cryptica
>> It's impossible in JavaScript.

It's definitely possible (just as easy, in fact). You just need to define a
schema for each API endpoint. TypeScript does not spare you from having to
enumerate all the properties and size constraints of the data one by one. In
terms of the actual schema validation step, there are lots of tools in
JavaScript which let you do the same thing; ajv and z-schema are just a couple
of examples.

My point is that TS adds no value there. TS's value is only in static type
checking. What the article is claiming is that TS somehow adds value with
runtime type checking.

The argument of saying that "TypeScript adds value because it can reuse its
internal type definitions for the purpose of validating remote input" is
circular - It refers to a problem which doesn't exist in JavaScript to begin
with.

JavaScript has no type definitions so being unable to reuse type definitions
for schema validation does not qualify as a drawback on its part. The argument
simply does not apply and cannot be used to claim TS's superiority.

At best you could claim that TypeScript cancels out one of its own
shortcomings:

\- TS shortcoming: You need to define types everywhere... That adds a lot of
work! (-1 point)

\- TS benefit: But you can re-use these type definitions for doing schema
validation of user input as well! That saves a lot of work! (+1 point)

But the net gain over JS is 0 because:

\- JS shortcoming: You need to define a schema for all your endpoints to
validate user input... That adds a lot of work! (-1 point)

\- JS benefit: But aside from that, you don't need to define types anywhere...
So that saves you a lot of work. (+1 point)

The only real argument to be had is whether or not compile-time static typing
adds value over dynamic typing.

~~~
gary_bernhardt
The post isn't about dynamic validation; it's about static type checking of
APIs, which is impossible in JavaScript.

~~~
cryptica
My point is that static type checking of APIs does not solve any new problem
which static type checking in general does not already solve - That's why I
don't agree that it solves API woes specifically. My argument is that it adds
no value in that area.

If a developer already does input validation correctly with JavaScript, then
switching to TypeScript will not add any value for them in that area.

Furthermore, my first argument (about TS giving false confidence) is that
TypeScript does not make it any more likely that an unskilled developer would
be able to identify and solve the problem of schema validation when compared
to JavaScript... From the unskilled developer's point of view, false
confidence (which TypeScript can sometimes provide) is worse than having no
confidence at all (which is what JavaScript always provides).

Type checking is 100% about confidence; it gives more confidence. Confidence
and correctness are two completely different things and I wanted to point out
that there is such a thing as false confidence.

~~~
gary_bernhardt
I rename the "email" key in our register endpoint to "emailAddress" and all
code that touches that key turns red less than 1s later, whether it's in the
client or server.

Edit: All of these edits to your posts after I've already replied are very
confusing.

~~~
cryptica
What you're describing applies to static type checking in general. I don't
argue this point. Sorry about the edits. It's very difficult to express such
things unambiguously.

I'm more concerned about the effects of the post's title on people's
programming practices than the actual content of it. HN does tend to have this
effect unfortunately.

~~~
cryptica
Also, I should point out that in terms of achieving correctness, it's possible
to get similar value to what you're describing simply by having good tests.

I don't want to get into this argument now though because it could last
forever but hopefully it highlights why it's important to keep arguments
tightly bounded and not make blanket statements when the domain is so large
and complex and we can go off on a tangent in an infinite number of
directions.

Ultimately, I enjoy JavaScript and I don't want future employers to force me
to use TypeScript because of articles like yours (with those kinds of
titles)... I was already forced to use TypeScript at my last company, it went
well but it would have been even better if founders had let me and my team use
JavaScript... For one, I might not have quit the company. I was tired of
debugging mangled JavaScript (compiled from TS) using vim over SSH whenever
there was a problem (it was a large decentralized P2P project so often that
was the only way to debug it). The drawbacks of TS were definitely not worth
its benefits for that specific project.

~~~
gary_bernhardt
You may enjoy this talk:
[https://www.destroyallsoftware.com/talks/ideology](https://www.destroyallsoftware.com/talks/ideology)

~~~
cryptica
This video convinced me even further that dynamic typing is superior. The
speaker basically admitted that static type checking still requires unit tests
because types in statically typed languages are too broad and imprecise. I was
also thinking about issues related to timing, race
conditions and incorrect state mutations; static typing doesn't prevent any of
these. IMO, static typing doesn't even begin to address a tiny fraction of all
the possible programming mistakes that one might make. IMO its added utility
value is often so low that it's basically not worth the mere hassle of having
to come up with type definitions.

Personally, when I write a function definition, I always try to visualize the
set of possible values that the function will need to handle - So given that I
already have that precise set of possible arguments already in my mind as I
write the function, it doesn't add any value for me to then formally
generalize my function parameters as being integers or strings or some other
types which are too broad to effectively constrain my function input to the
required level. With statically typed languages, merely specifying that a variable
is an integer doesn't save me from having to think about what specific subset
of integers I mean. For example, if my function only works with odd numbers as
input, using the integer type definition will not offer me any additional
safety.
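For what it's worth, constraints like "odd numbers only" can be pushed at least partway into the type system with a branded type, though the check itself remains a runtime check at the boundary (the names here are my own):

```typescript
// A branded type: structurally a number, but only obtainable through the
// validating constructor, so the "oddness" travels with the type.
type Odd = number & { readonly __brand: "Odd" };

function asOdd(n: number): Odd {
  if (n % 2 === 0) throw new RangeError(`${n} is not odd`);
  return n as Odd;
}

// Downstream code can demand an Odd and let the compiler reject plain
// numbers that never went through the boundary check.
function nextOdd(n: Odd): Odd {
  return (n + 2) as Odd;
}
```

The boundary check is still manual, which supports the broader point; what the brand buys is that the check only has to happen once.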

I would argue that most functions are like this. The type definitions are
never granular enough to guarantee correctness. The type system only protects
you from the most basic/obvious mistakes - So obvious that you don't even need
the type checker to tell you.

~~~
seanmcdirmid
> The speaker basically admitted that static type checking still requires unit
> tests because statically typed languages have limited granularity in the
> typing.

Static types only cover one set of bugs, but it is an extremely rewarding set.
The payoff of static typing, and what TypeScript focuses on, is early feedback
(did I get the name right, did I forget to convert some arguments, what can
this value do?). If you focus on the "it eliminates some tests" aspect, you
are completely missing the point.

> Personally, when I write a function definition, I always try to visualize
> the range of possible values that the function will need to handle - So
> given that I already have that specific range already in my mind as I write
> the function, it doesn't add any value...

So what the function will handle remains in your head? Or you found a better
way to document them than type annotations? Do you write code that anyone else
has to read now, or even you have to read later?

> The type definitions are never granular enough to guarantee correctness.

They were never meant for that purpose.

