
Dynamic type systems are not inherently more open - jose_zap
https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-type-systems-are-not-inherently-more-open/
======
svat
Six notable things I took away from this post:

- Structural typing, i.e. instead of "you eagerly write a schema for the
whole universe", just limit to what you need (basically, encode only the same
kinds of assumptions you would make in a dynamically-typed language).

- _It’s easy to discover the assumptions of the Haskell program [...] In the
dynamically-typed program, we’d have to audit every code path_ — Left implicit
in many debates about these topics are the "weights" that one attaches to
these things: the importance and frequency of attempting such activities.
Surely they vary, depending on everything from the application (Is it "code
once and throw it away", or does it need to be maintained after the original
programmers have left?) down to the individual programmer's approach to
life (What is the cost of an error: how bad would it be to have a bug? Etc.).

- _"This is a case of improper data modeling, but the static type system is
not at fault—it has simply been misused."_ — To me this shows that bugs can
exist in either the data modeling or the code: in a dynamically-typed language
the two tend to coincide (with much less of the former) and in a statically-
typed language they tend to be separate. (Is this good or bad? On the one hand
you can think about them separately; on the other hand you have to look for
bugs in two places / switch between two modes of thought, but then again maybe
you need to do that anyway?)

- Structural versus nominal typing, where the latter involves giving a name
to each new type (_"If you wish to take a subselection of a struct’s fields,
you must define an entirely new struct; doing this often creates an explosion
of awkward boilerplate"_).

- _"consider Python classes, which are quite nominal despite being dynamic,
and TypeScript interfaces, which are structural despite being static."_ — This
is highly illuminating, and just this bit (and the next), elaborated with some
examples, would make for a useful blog post on its own.

- _"If you are interested in exploring static type systems with strong
support for structural typing, I would recommend taking a look at any of
TypeScript, Flow, PureScript, Elm, OCaml, or Reason [...] What I would_ not
_recommend for this purpose is Haskell, which [is] aggressively nominal"_
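That nominal-versus-structural contrast can be sketched on the TypeScript side in a few lines (a minimal illustration; the identifiers are invented for the example):

```typescript
// TypeScript interfaces are structural: any value with a matching shape
// satisfies the interface, with no explicit declaration of intent.
interface Named {
  name: string;
}

function greet(x: Named): string {
  return `hello, ${x.name}`;
}

// This object never mentions Named, yet it type-checks because its
// shape matches; the extra field is simply irrelevant.
const user = { name: "Ada", team: "compilers" };
const greeting = greet(user); // "hello, Ada"
```

In a nominal system, by contrast, `user` would have to be declared up front as an instance of some `Named` type for the call to be accepted.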

For what it's worth, my opinion is that posts like this, on hotly debated
topics, would do well to start with concrete examples and be written in the
mode of conveying interesting information/ideas (of which there are a lot
here) rather than being phrased as an argument for some position, which seems
to elicit different sorts of responses — already most of the HN comments here
are about static versus dynamic type systems in general, rather than about any
specific ideas advanced by this post.

~~~
sethev
I thought the article was good and addressed a real point of confusion, as
evidenced by the two included comments (from Reddit and HN). You can consume
arbitrary data using a program written in a statically typed or dynamically
typed language. Whether it's decoupled from changes in the data depends on how
the code is written and the data model, which have nothing to do with static
vs dynamic typing.

To me the stronger argument is that the boundary between programs is
dynamically typed (interpreted and checked at runtime). This is true in the
statically typed example as well - the JSON is interpreted and checked at
runtime, not at compile time. There's nothing your compiler can prove in
advance about what's in the JSON that you'll receive at runtime.

If systems that extend beyond a single program require dynamic typing, doesn't
it make sense to invest more in ways to do dynamic typing better?
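The point about the boundary being checked at runtime can be made concrete. In this TypeScript sketch (names invented), the compiler proves nothing about the incoming payload; a runtime check does all the work of turning it into a typed value:

```typescript
// The compiler cannot know what arrives over the wire; JSON.parse
// returns a value we must inspect at runtime.
type Person = { name: string; age: number };

function parsePerson(json: string): Person {
  const raw: unknown = JSON.parse(json);
  if (
    typeof raw === "object" && raw !== null &&
    typeof (raw as any).name === "string" &&
    typeof (raw as any).age === "number"
  ) {
    return raw as Person;
  }
  throw new Error("payload does not match Person");
}

const p = parsePerson('{"name":"Ada","age":36}');
```

Everything after this check is statically typed; the dynamic interpretation happens exactly once, at the boundary.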

~~~
erik_seaberg
> the boundary between programs is dynamically typed

Many interfaces have declared, enforced static types. I don't have to write
any code to handle

    
    
      SELECT birthdate FROM employees WHERE id = ?
    

returning "fish" because the database would never let it happen.

~~~
sethev
You know that but your compiler doesn't (generally speaking).

~~~
thu2111
Hence ORMs. It's possible to do much better than ORMs though. The
object/relational type system mismatch is a property of how technology
evolved, it's not fundamental.

------
UglyToad
I think there's a perhaps irreconcilable disconnect between 2 camps here, but
I also think that's ok and different people are allowed to like different
things.

My experience, having gone from dynamic typing to static typing and now pining
for more expressive type features in my chosen language is that static typing
changes where you have to spend the cognitive complexity budget.

To my mind if I return to a piece of dynamically typed code written any time
other than in the same session I have to load a whole mental model of the
entire code, what types all the variables are, what expectations are attached
to them, what mutation, if any, each method call performs. I suppose the
trade-off is that advocates say you can prototype things more easily when
requirements are not known.

With static typing I only need to load two things: the high-level goal (what I
need to achieve) and the local context where I need to make the change. The
rest is taken care of by static typing; I can encode all that cognitive
complexity in the types and never need to think about it again. But as I
say, that's how my brain works; yours might work differently and that's fine.

~~~
amelius
Yes. Perhaps this is an indication that we actually need automated type
annotation. E.g. you initially write/prototype your program in a dynamically
typed form, then you click a "magic" button, and a tool converts your program
into statically typed style.

~~~
chongli
This already exists in Haskell. First, Haskell has type inference, so you can
write entire programs without any type annotations. Second, since type
annotations actually help to document your code, it’s recommended as good
Haskell style to annotate all top-level definitions.

To aid you in the latter task, you can press a key in your editor to fetch an
automatically-inferred type for the name under the cursor and insert it into
your code, then refine it as necessary if you find it to be too general
(inference always gives the most general type possible).

~~~
hopia
On what editor setup do you get this kind of functionality with Haskell? I
mean the auto-insertion of the inferred types.

~~~
chongli
I’ve been out of the game for a number of years but I used something called
ghc-mod [1] with its associated vim plugin [2] to get this functionality.
Moving forward, it seems that all of the effort has moved over to Haskell IDE
Engine [3]. It looks like this exact feature hasn’t been brought over yet but
it is planned. In the mean time you could still use ghc-mod though.

[1] [https://github.com/DanielG/ghc-mod](https://github.com/DanielG/ghc-mod)

[2] [https://github.com/eagletmt/ghcmod-vim](https://github.com/eagletmt/ghcmod-vim)

[3] [https://github.com/haskell/haskell-ide-engine](https://github.com/haskell/haskell-ide-engine)

~~~
hopia
Thanks for the tip! Haskell IDE Engine is exactly what I've been using until
now, and I would've been very pleased to find out it had such a convenient
feature.

Ghcide is on my list to try next as it's gotten a lot of attention lately.

~~~
jose_zap
Yes, it already has a code command to automatically add top-level type
annotations. The caveat is that it only works if you add -Wall to your
compilation flags.

------
ema
I've come to the conclusion that the benefit dynamic typing brings to the
table is to allow more technical debt. Now of course technical debt should be
repaid at an appropriate moment, but that appropriate moment isn't always "as
soon as possible". Let me illustrate: say you're adding a new feature and
create lots of bugs in the process. Static typing will force you to fix some
of these bugs before you can test out the feature. Then while testing out the
feature you decide that it was a bad idea after all, or that the feature
should be implemented completely differently. So you scrap the implementation.
In this case fixing those bugs was a waste of time. Dynamic typing allows you
to postpone fixing those bugs until you're more certain that the feature and
its implementation will stay.

~~~
david_draco
Interesting, I would have said static typing allows more technical debt.
Illustrating example: let's say you pass a double variable from some part of
your code through 13 layers of APIs until it is actually looked at and acted
upon. Now you realise that you need not only a double but also a boolean. With
dynamic typing, you can make a tuple containing both and modify only the
beginning and end points. With static typing you have to alter/refactor the
type everywhere.
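For what it's worth, parametric polymorphism is the standard static-typing answer to exactly this refactor: layers that never inspect the payload can stay generic, so only the endpoints change. A hedged TypeScript sketch (invented names):

```typescript
// An intermediate layer that merely relays the payload never names its
// type, so changing the payload type does not touch this code.
function relay<T>(payload: T, sink: (p: T) => void): void {
  sink(payload);
}

// Endpoint change only: number becomes [number, boolean] here and at the
// final consumer, while relay stays untouched.
let received: [number, boolean] | undefined;
relay<[number, boolean]>([3.14, true], (p) => {
  received = p;
});
```

This only works, of course, if the intermediate layers were written generically in the first place; concrete signatures at every layer do have to be edited one by one.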

~~~
sethammons
You'd love Perl. Just pass @_ through your callstack. Want a new value
available 20 functions deep that is available at the top? Just toss it in @_
at the top and you are done.

The problem? Every one of your 20 levels of functions/subroutines has an
unnamed grab bag of variables. You get to keep that in your head. If you want
to know if you have a value in a given branch of code, your best bet, aside
from reading the code of the entire callstack, is to dump @_ in a print
statement and run the entire program and get it to call the function you are
in. Oh, and if one of those values contains a few screens worth of data, you
will need to filter that out manually. Even "documentation" in the form of
comments or tests will be unreliable due to comment-rot or mocked test
assumptions.

Even in Python, I'll often have to go up the callstack to know what a named
parameter actually is. And if similar shenanigans are going on, I again have
to pull out a debugger or print statements to know what I can do with an
argument.

With a static type, I see plain as the text on my screen what type I have as a
parameter and I immediately know what I can do with it. As weak as Go's type
system is, it is worlds better than Perl and Python for maintaining and
creating large, non-trivial codebases. The price is passing it around at the
time of writing.

~~~
dmux
Isn't passing @_ around similar to a concatenative language pushing and
popping things off the stack?

------
youerbt
This "be liberal in what you accept" idea, applied to modern programming,
always struck me as strange.

Yes, taking an unknown structure in your program is the easy part. Programming
against an unknown structure is where the problem lies.

I'd love to hear more examples of programming against such input that are
beneficial compared to the "parse, don't validate" idea.

~~~
bitwize
It's called Postel's law, and it's one of the burdensome idiocies Unix
programmers have saddled us with, along with text-file formats and protocols,
null-terminated strings, fork(2), and the assumption that I/O is synchronous
by default.

Of course, once you adopt a "follow the spec or GTFO" stance, you reap other
benefits as well; for example you are free to adopt a sensible binary format
:)

~~~
hhas01
The problem right there is in the definition of “liberal”.

A cautious, forward-thinking designer-developer would interpret it as “allow
for unknowns”. Whereas sloppy-ass cowboys think it means “accept any old
invalid crap” and pass their garbage accordingly.

One of these philosophies gave us HTTP; the other HTML. And no prizes for
knowing which one is an utter horror to consume (an arrangement, incidentally,
that works swimmingly well for its entrenched vendors and absolutely no-one
else).

------
pizlonator
Dynamic type systems are inherently more open.

This article really bends over in strange ways to say otherwise.

Fact is: dynamic typing is all about making fewer claims in your code about
what you expect about the world around you. With dynamic types, to load a
property you might just have to say the property name and receiver. With
static types, you usually also have to say the type of all other properties of
the receiver (usually by calling out the receiver’s nominal type). Hence as
systems experience evolution, the probability that the dynamic property lookup
will fail due to something changing is lower than the probability of the
static lookup failing.

The heading “you can’t process what you don’t know” is particularly confused.
You totally can and dynamic typing is all about doing that. In the dynamic
typing case, you can process “o.f” without knowing the type of “o.g” or even
the full type of “o.f” (only have to know about the type of o.f enough to know
what you can do to it). Static type systems will typically require you to
state the full type of o. So, this is an example of dynamic types being a tool
that enables you to process what you don’t know.

~~~
ookdatnog
> Static type systems will typically require you to state the full type of o.

This is not the case for languages with support for structural typing (as the
article mentions in the appendix), and most modern statically typed languages
have some degree of support for abstract interfaces of some sort which also
support writing functions with only partially known information about the
type.

I think one of the core insights in the article is that your code will
_always_ make at least some assumptions about what it is processing (unless
you're implementing the identity function, or a function which does not
inspect its argument at all), and _these assumptions are your type_. So if you
expect "o" to have a field called "f", and this field should return something
on which a "+" operator is defined, then these properties form your type (in
structural typing this is easy and boilerplate-free to express, with
interfaces there's some boilerplate but it's definitely expressible).

In that light, the difference between static and dynamic typing isn't how many
assumptions you make in your code, but rather how explicit you are about them,
and to what extent you make your assumptions amenable to automated reasoning.
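The "your assumptions are your type" point can be made concrete in TypeScript (a sketch with invented names): the type states exactly that the argument has a field `f` supporting addition, and nothing else.

```typescript
// The only assumptions this function makes: the argument has a field `f`,
// and `f` supports `+`. The interface states exactly that, no more.
interface HasF {
  f: number;
}

function bump(o: HasF): number {
  return o.f + 1;
}

// Any value with a numeric `f` qualifies; other fields are irrelevant.
const o = { f: 41, g: "unrelated" };
const result = bump(o); // 42
```

This is the boilerplate-free structural case; with nominal interfaces the same assumptions are expressible, just with an explicit `implements` declaration on every caller's type.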

~~~
sparkie
The difference between static and dynamic typing isn't about how explicit you
are about the assumptions in your code. You can be equivalently explicit in
dynamic and static languages. The fundamental difference lies in _when_ you
want the types to be verified. In static typing, that is before the program is
run. In dynamic typing, that is before the code is executed if you are
explicit, or when the code is executed if you are not explicit enough.

An example of dynamic typing being used pervasively in what we commonly call
statically typed languages is the downcast. (SubType)superTypeObject is a
dynamic typing construct. It is saying "defer unification of these types until
runtime," because there is insufficient information at compilation time to
determine the real type of superTypeObject.

Of course, such downcasts are discouraged in statically typed languages
without explicitly checking the type of superTypeObject before performing the
downcast, but not all type systems are capable of asserting these checks are
all in place at compile time. Some statically typed languages don't even have
a downcast.

You can be completely explicit about checking types before using them in a
dynamically typed language, and have the proper error handling in place in the
case that the type unification you're expecting doesn't happen.
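The checked-downcast discipline described above looks like this in TypeScript (invented classes): `a as Dog` would be the unchecked form, while `instanceof` defers verification to runtime with an explicit error path.

```typescript
class Animal {}

class Dog extends Animal {
  bark(): string {
    return "woof";
  }
}

function speak(a: Animal): string {
  // Checked downcast: verify the runtime type before using it, with a
  // fallback for the case where unification fails.
  if (a instanceof Dog) {
    return a.bark(); // statically a Dog inside this branch
  }
  return "silence";
}
```

The runtime test and the static narrowing inside the branch are exactly the "be explicit about when types are verified" point: the check happens at runtime, but the compiler still tracks its result.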

~~~
ookdatnog
Yes, you're right, it's not about how explicit you are, or even how explicit
you have to be. You can write "stringly typed" code in a static language where
none of your assumptions are apparent in the type system. Conversely,
regardless of your typing discipline, you can be completely explicit about
your assumptions just by writing comments.

This fits in a larger point that, in the end, the technical qualities of a
language in many ways matter less than the culture surrounding the language.
Even Haskell has unsafePerformIO which means that none of its guarantees about
referential transparency actually hold, in theory. In practice, the culture of
the language ensures that, when you pull in a library, you can be pretty sure
that it's not riddled with unsafePerformIO, and you can almost always treat
code as if referential transparency is guaranteed.

A language may facilitate certain habits and discourage other habits. I feel
that statically typed languages do not force me to be explicit about my
assumptions, but they certainly encourage it by providing immediate benefits
to doing so (automated reasoning), and through the culture that surrounds
these languages.

------
iamflimflam1
I have developed and maintained several large systems in both dynamic and
statically typed languages.

From a maintenance point of view, the statically typed ones definitely win
out.

Trying to reason about bits of code with no idea what was supposed to be
passed in. Working out which bits of code are actually dead and can be safely
removed. Refactoring bits of code. All incredibly difficult in the dynamically
typed systems.

Given the choice, I would not embark on a large complex system without the
benefit of a strongly typed language.

~~~
sparkie
There are two aspects to maintenance. One is in maintaining a codebase, which
is what you're referring to. This greatly benefits from static typing as you
can have good guarantees about your code before you deploy it.

The other aspect of maintenance is in keeping a running system up without any
downtime. There are plenty of use cases where recompiling code and
relaunching an application is not a viable solution.
patch a running system. Dynamic typing is beneficial here because you need the
old running code to be able to call into the new code which it knew nothing
about when it was originally compiled.

------
phoe-krk
Disclosure: dynamic typer here.

The way I understand it, a part of the issue is about possibly deferring the
time at which the type of data is known to the time at which the data is
operated upon, since calling a numeric addition function on not-numbers, e.g.
strings, is a type violation no matter which programming language you are
writing in. The question is where you want your parsing-or-validating to
occur, since this decision makes pieces of the software either more tightly
coupled or more heterogeneous in which kinds of data they accept. The author
makes and proves a claim that this decision, which is naturally postponable in
JS due to its highly dynamic and schemaless-by-default object nature, is
similarly postponable in Haskell.

The other part, about ignoring unknown keywords, is very simple and
understandable to me - you can indeed allow a statically typed program to
ignore unknown schema keywords, just as you can allow a dynamically typed one
to error on them.
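Both policies are easy to express in either discipline; a TypeScript sketch of the two (invented names):

```typescript
type Config = { host: string };

// Lenient: copy only the keys we model; unknown keys are silently ignored.
function parseLenient(raw: Record<string, unknown>): Config {
  if (typeof raw.host !== "string") throw new Error("missing host");
  return { host: raw.host };
}

// Strict: the same check, but unknown keys are rejected instead.
function parseStrict(raw: Record<string, unknown>): Config {
  const extras = Object.keys(raw).filter((k) => k !== "host");
  if (extras.length > 0) throw new Error(`unknown keys: ${extras.join(", ")}`);
  return parseLenient(raw);
}
```

Which policy you pick is a data-modeling decision, orthogonal to whether the checks are expressed in a static type system or in runtime code.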

~~~
zozbot234
The author actually argues _against_ Haskell as an example of their
philosophy, because Haskell has poor support for _open_ static types. They
generally have to be faked in unintuitive ways by relying on the typeclasses
feature, and something like OCaml's support for structural, extensible records
and variants seems to be entirely off limits out-of-the-box.

~~~
yakshaving_jgt
I don’t think that’s correct. The author’s point is that structural typing is
possible in statically-typed languages, but isn’t a thing in Haskell.

To learn more, Google for “Haskell row polymorphism”.

The author correctly points out that structural vs nominal is a separate
argument from static vs dynamic.

~~~
zozbot234
The whole _point_ of row polymorphism (for both records and variants) is that
it allows for implementing open types in a static typing context. And OCaml
supports this out of the box, whereas Haskell does not.
~~~
yakshaving_jgt
Yes, that’s true. I’m not disputing that. Perhaps I misinterpreted your
previous comment. I think the point I want to make is that in the argument
between static vs dynamic, the structural vs nominal argument is not really
relevant.

------
mamcx
This has a parallel with the "sql"/"nosql" debate. Being "schemaless" is
supposedly a great advantage of nosql, but the thing is that sql/relational is
as schemaless as you want!

The only thing is that rdbms push you to create a static model of the FINAL
STORAGE that is close to the ideal of it. But anyway, you can
create/delete/change tables as you wish (it's not impossible to see dbs with
100/200 tables), and SQL in fact builds relations on the fly with the schema
you want (SELECT field, field...).

Where it is truly weak is that it does not extend the relational model to
INNER values (so you can't "SELECT (SELECT chars FROM name)"), but that is a
limitation of sql, not the relational model.

BTW, I think the relational model is very close to the ideal here: it is
nominal, it is structural, you can re-model data AND types (relations), you
can have reflection and still see with clarity all the types you have used.
What else does it need to be useful?

~~~
fauigerzigerk
_> Where it is truly weak is that it does not extend the relational model to
INNER values (so you can't "SELECT (SELECT chars FROM name)"), but that is a
limitation of sql, not the relational model._

True, but fortunately most database systems have proprietary ways to achieve
some of that. For instance in PostgreSQL

    
    
      select * from regexp_split_to_table('abc', '') chars;
    

gives you

    
    
       chars 
      -------
       a
       b
       c
    

These are called table-valued functions, and the mechanism is extensible.
Obviously, it's not quite as general as what you are talking about because the
values are not actually stored as tables. To get a bit closer to that you
could use composite types and arrays. I'm not convinced it's worth the added
complexity though.

------
andybak
Having moved from Python to C# I'm still coming to terms with how I feel.

It's really hard to express but I do have a nagging feeling that something has
been lost that other commentators haven't quite put into words either. You
just write different code in dynamic languages - even ignoring type
declarations. And in many cases it feels like better, more humane code.

One piece of evidence for this elusive difference is my observation that APIs
in Python tend to be much nicer and much less verbose (again - ignoring the
obvious differences based directly on type declarations). APIs tend to feel
like they were designed for usability rather than mapping directly onto
library code and user be damned.

Is this a cultural difference or does something about static typing lead to
different patterns of thought?

~~~
jacobsenscott
Unfortunately C# does not have a very nice type system compared to more modern
typed languages - elm, haskell, ruby's sorbet. That is probably part of the
reason you are feeling that way.

------
Ari_Rahikkala
"Nitpickers will complain that this isn’t the same as pickle.load(), since you
have to pass a Class<T> token to choose what type of thing you want ahead of
time. However, nothing is stopping you from passing Serializable.class and
branching on the type later, after the object has been loaded."

Is that actually true in Java? It seems to me that the way that you'd
implement that load() method is by using generics to inspect the class you
were passed, figuring out what data it wants in what slot, and pulling that
data in from the input. You _could_ hold on to the input and return a dynamic
proxy, and you would be able to see when someone calls a getFoo() on that
proxy, but then you wouldn't know what the type of the foo it expects is. And
I don't know whether you could even make your proxy assignable to T.

~~~
tsimionescu
Hmm, that may be true for JSON, but I think that Java binary serialization
holds enough information about the original types to allow deserialization
without having to explicitly pass the expected class (though what the runtime
system does behind the scenes may be equivalent).

------
GnarfGnarf
To paraphrase Churchill, “If a programmer doesn't like dynamic typing by the
time he is 20, he has no enthusiasm. If he is not using static typing by the
time he is 30, he has not learned from experience.”

------
mcnamaratw
Definitely dynamic typing is not about modeling the world. And it's much
broader than that. Static typing, strict or loose, also isn't about modeling
the world. OO design isn't about modeling the world. These things are
successful because they're effective _methods for organizing code_. They're
about as useful for modeling the world as the Dewey Decimal System is.

And that's fine! 'Modeling the world' is a grandiose philosophical kind of
activity and usually not something you need to do when you develop software.

~~~
msangi
How do you write code without having a model of the world?

------
sweeneyrod
I don't think it makes much sense to compare between static and dynamic typing
in general. Given the choice between static typing as in e.g. Java and dynamic
typing in Python, I'd pick Python almost every time, because Java is just too
verbose and the type system is kind of bad so the benefits are minimal anyway.
But that's a false dichotomy: you can have static typing without the verbosity
and with more significant benefits (like no null pointer exceptions) by using
something like OCaml.

~~~
weberc2
It’s fine to compare static and dynamic, but it’s important to understand the
things you are comparing so you don’t make silly (and very common) false
dichotomies like the one you point out. The problem isn’t the comparison, but
laziness.

------
alerighi
Recently I worked on a project where we initially thought about writing it in
Rust, because why not, it seemed like a good idea at first. It turned out to
be a completely wrong idea. The project was a simple web API, nothing fancy,
GraphQL and a SQL database. After a lot of frustration we decided to rewrite
everything in TypeScript, and did that in a week.

TypeScript gives you the benefits of statically typed languages along with the
benefits of dynamically typed languages, in the sense that you type only what
you decide to type. You don't have to type everything from the beginning; if
you want to test something fast, just stick in an any type or simply ignore
the TypeScript errors and come back to fix them later.

Dynamic features are also necessary. For example, Rust lacks a decent ORM:
there is Diesel, which is implemented with macros and breaks every time; plus
it is not a real ORM but rather a layer over SQL syntax, and doesn't automate
anything.

Also compilation time has to be considered. Rust is so slow to compile,
especially when you start to have 200 dependencies, with some that include a
lot of macros. That means that testing is so much slower. With dynamic
languages you can even have hot reloading of your code.

I'm not saying that statically typed languages are stupid; there are
situations where they make a lot of sense. For embedded or system programming,
for example, I would never use node, and surely Rust will have a future in
these contexts.

But the point is that you should use the most appropriate tool for the job,
and a dynamic language is in a lot of contexts the appropriate tool.

~~~
fendy3002
To add to this, dynamic typing is a proper tool to use for APIs, because the
input payload is just formatted text, with little or no type definition. My
experience says that it's hard to play with JSON in static typing without
adding strict data validation / conversion at the start. Serialization comes
with a set of strict rules that need to be followed and stated beforehand,
too.

On the contrary, in a dynamically typed language nested objects as well as
arrays can easily be parsed natively and used immediately.

~~~
yakshaving_jgt
This is absolutely false. Not a single word you have written in this comment
is true. This is a pervasive myth that you are perpetuating. If you had read
the article, you would know that what you have said is demonstrably false.

There is _nothing_ stopping you from operating on arbitrary JSON in Haskell.

Literally nothing.

~~~
dboreham
I suspect the parent wants to convert between JSON and typed records without
needing the JSON to conform to the type.

~~~
yakshaving_jgt
Ok, but this works just fine in Haskell. If you have a well-formed JSON string
and you want to parse it as a generic value, you can parse it into the Value
type. If you have a typed record (something less general than a Value) and a
JSON string that has enough of the structure to be parsed into that typed
record, you can parse it into that typed record. If the JSON string has extra
structure/data/fields your parser didn't expect, it's fine. Your JSON parser
in Haskell can ignore those extra things. Just like any dynamic language.

All of this was demonstrated in the article.
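The pattern isn't Haskell-specific, either; the same two modes (a fully generic value versus a typed record that ignores extra fields) can be sketched in TypeScript (invented names; in Haskell the generic value would be the Value type mentioned above):

```typescript
// Two modes of consuming a well-formed JSON string:
// 1. fully generic: keep the parsed value as an untyped tree
// 2. typed record: check only the fields you need; extras are ignored
type User = { id: number; name: string };

function toUser(json: string): User {
  const v: any = JSON.parse(json); // mode 1: v is a generic value
  if (typeof v?.id !== "number" || typeof v?.name !== "string") {
    throw new Error("not a User");
  }
  // mode 2: extra fields in the payload are simply not copied over
  return { id: v.id, name: v.name };
}

const u = toUser('{"id":1,"name":"Ada","extra":[1,2,3]}');
```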

------
mirekrusin
Structural typing is extremely helpful/practical; it lowers "type-friction" a
lot. I have a dream that flow/ts will be lifted up into the js spec one day
(yes, I know, that's why I said "dream"). With this, the language would be
open to things like multiple dispatch à la julia/traits/typeclasses and/or
pattern matching - which would (imho) place the language at the top for a
couple of decades. This kind of stuff is unfortunately outside the scope of
flow/ts as it would require runtime support, which flow/ts explicitly don't
provide (i.e. stripping the types yields valid js; all type-stuff happens at
"compile" time only, without access to the runtime at all).

But flow/ts is a major step towards a practical solution; it covers something
like 65% of a properly typed language (properly = algebra on types,
structural+nominal typing, opaque types, nullability exposed at the type
level, aggressive inference, etc.)

------
samatman
This is a decent description of how to handle unwrapping a payload in a
language with ADTs.

The swipe at Rich Hickey at the beginning is ill-considered, however. This is
the sort of use case that clojure.spec is designed for, and it can, and does,
handle validation at an equivalent level of detail.

You can watch him building his speaking career on the emotional appeal of
specifying the types of your payloads here:
[https://vimeo.com/195711510](https://vimeo.com/195711510)

~~~
IceDane
I don't think this is a swipe at Rich Hickey, unless it's considered a swipe
to call someone out for repeatedly making misinformed statements about things
they don't understand properly.. which is what Hickey does a lot in his talks,
about Haskell's type system in particular.

~~~
samatman
The author says that Rich Hickey has built a speaking career on emotional
appeals to the superiority of dynamic types.

I would prefer to say that Rich Hickey has built a dynamic programming
language which includes a powerful typing system and that he's largely
responsible for its design.

It's difficult to reconcile those two statements.

~~~
whateveracct
Not really, he's clearly done both.

------
mckinney
The _Manifold_ project [1] nicely illustrates the author's proposition
regarding structural typing and static type systems. Using the Manifold
library you can indeed create structural interfaces [2] in Java similar to
those in TypeScript. Manifold also enables the use of structural interfaces
with Maps, it's pretty amazing.

[1] [https://github.com/manifold-systems/manifold](https://github.com/manifold-systems/manifold)

[2] [https://github.com/manifold-systems/manifold/tree/master/man...](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#structural-interfaces-via-structural)

------
pchiusano
Also maybe of interest: “The advantages of static typing, simply stated”
[https://pchiusano.github.io/2016-09-15/static-vs-dynamic.htm...](https://pchiusano.github.io/2016-09-15/static-vs-dynamic.html)

------
papito
We are seeing a natural evolution where the size and complexity of software is
making strong typing almost a necessity. Python now has a type hint system.
Types can be checked with external tools but Python 4.x may (?) have that
built in.
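A minimal sketch of what those hints look like in practice; at runtime they are ignored, and an external checker such as mypy verifies them (the function here is made up for illustration):

```python
from typing import Optional


def parse_port(raw: str) -> Optional[int]:
    """Return the port number, or None if `raw` is not a valid port."""
    if raw.isdigit() and 0 < int(raw) < 65536:
        return int(raw)
    return None


# The annotations have no runtime effect; a tool like mypy checks,
# for example, that callers handle the None case.
assert parse_port("8080") == 8080
assert parse_port("http") is None
```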

Ruby is also moving in the same direction, as far as I know.

------
falcolas
Every time this comes up, I think about how the pendulum swings back and
forth. For example, if we take static and dynamic type systems in a slightly
larger scope, we’ve just recently moved from a static system to a dynamic
system, because the static system was being mis-used by “libraries”.

I’m speaking of the recent move to deprecate user agent strings, and the
pushing of “feature detection”.

It’s just human nature in the end. We’re not computers. We will never use a
static spec to its full potential; we’ll break a type system wide open the
moment we get to our limit of tolerance for not getting the job done.

We need type inference more. The computer has built the AST, so why do compilers
seem so incapable of traversing that tree and finding the type errors for us?

~~~
haizan
It seems to me that your user agent example fits better on the
structural/nominal typing dichotomy, rather than static/dynamic.

------
skybrian
Protobufs don't follow this model. For example, a User message type will often
be defined in one place and code generated from it in multiple languages.
Every server and client uses essentially the same type, modified as suitable
for that language. As a result, clients are usually working with types that
they don't control and that have many more fields than they actually use.

Maybe clients _should_ declare their own types containing just what they need
and copy the data into them from the protobufs? But this does get tedious. On
the other hand, in a schemaless JSON system, there is nothing statically
checking that the clients and servers have compatible ideas of what a message
should contain.
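A rough sketch of what "clients declare their own types" could look like, with a plain dict standing in for the decoded protobuf message (all names here are hypothetical):

```python
from dataclasses import dataclass

# Stand-in for a decoded wire message: many fields, most unused by this client.
wire_user = {"id": 7, "name": "ada", "email": "ada@example.com",
             "last_login": 1579000000}


@dataclass(frozen=True)
class ClientUser:
    """The client's own view: only the two fields this client actually reads."""
    id: int
    name: str

    @classmethod
    def from_wire(cls, msg: dict) -> "ClientUser":
        # Extra fields in `msg` are simply never looked at.
        return cls(id=msg["id"], name=msg["name"])


user = ClientUser.from_wire(wire_user)
assert user == ClientUser(id=7, name="ada")
```

This is the tedious copying step the parent mentions; the payoff is that the rest of the client only ever sees the narrow type.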

~~~
msangi
With protobufs you get bytes that you have to parse with defined rules on what
to do with unexpected and missing fields.

How’s it different from the proposed model where you start by parsing the data
you receive and, once it’s parsed and you know it’s good, you process it?

~~~
skybrian
It probably would work, provided that each client takes the original proto
file and removes all the fields they don't use, so they are treated as unknown
fields?

I don't know if there's a supported way to do this, though. To leverage it for
refactoring, there would also need to be a way to do a query to find out
which clients use which fields.

------
mckinney
Seldom mentioned in type system discussions like this is the notion of
_openness_. For instance, F# is one of the very few languages addressing
openness where schematic structure in data can be _directly_ and type-safely
reflected by the type system e.g., using "type providers." This general area
of research is relatively untapped and, in my view, offers huge potential
given weak links with conventional type system/data bridging solutions (eg
code generation). Structural types emitted from something like type providers
would be a game changer.

------
jmull
The author doesn’t really address the issue raised in the first (longer)
quoted post.

Given a set of reasonable requirements I think pretty much no one would claim
a general-purpose language couldn’t satisfy them without too much trouble. In
fact, that might be a fair definition of a general-purpose language.

I’m fine with strong and expressive static type-checking, but you need to hold
the main limitation in mind as you use it in a distributed environment: the
guarantees are static. They pertain only to your little binary and don’t say
anything about the rest of the system.

People can tend to make the mistake of assuming other components in the system
will continue to behave as they do on the day their own component was
compiled.

~~~
hopia
Wouldn't that boil down to the communication between the distributed
components? I think this author's solution to that is well detailed in his
post "parse, don't validate".

~~~
jmull
A general principle in a complex system of components is for each component to
accept the widest range of inputs it reasonably can (and generate the
narrowest range).

This improves reliability and minimizes the scope of changes needed for the
system to continue to work when something changes. This is particularly
important in a distributed system, where you can’t generally update the system
coherently and instead need/want to update it piece-meal or progressively.

Strong type systems tend to encourage you to specify strong constraints on the
typed values. While you’re specifying your types you need to be careful to not
over specify, making assertions you don’t need to make, ending up with a
brittle system that will break unnecessarily.

There are nice advantages to strong expressive types, but the advantages are
all within the local component and it takes some extra care to avoid imposing
inflexibility on the system.

------
iovrthoughtthis
I would posit that, the more specific the rules of a system, the fewer
behaviours can be expressed by it.

When a language restricts the usage of things by the type of thing being used,
it restricts the set of behaviours that can be expressed in that language.

You can get around this by adding more rules to specify the behaviours needed.
(Boilerplate?)

In this way, a dynamic language permits more behaviours with fewer rules. It is
more “expressive”.

This can be good and bad, some of those behaviours are probably unwanted!

And this is just a theory.

------
dboreham
Nice that someone put in the time to write up arguments I've been having for
years, or at least since JS rose in popularity.

------
exdsq
Interesting how different the response has been, in general, to those from
r/programming.

[https://www.reddit.com/r/programming/comments/eqv6yd/no_dyna...](https://www.reddit.com/r/programming/comments/eqv6yd/no_dynamic_type_systems_are_not_inherently_more/)

------
millstone
I wrote the pickle.load comment [1]. I want to defend it here.

King (the author) places "dynamically typed" and "statically typed" in
opposition. But keep in mind that many languages have both: Java, ObjC,
TypeScript, C#, etc. I like to say that a language "has" static and/or
dynamic types, but "is" dynamic if it supports runtime features like
reflection - of course this is a sliding scale.

King observes that static types allow you to make explicit what you know.
After calling `pickle.load` you don't know _anything_ about the result, so its
static type is going to be something quite constraining like `String ->
Object`. It seems we are in violent agreement on this point.
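To illustrate in Python: after `pickle.loads`, nothing is known about the result until you branch on its runtime type (the payload here is just an example value):

```python
import pickle

payload = pickle.dumps({"id": 7, "tags": ["a", "b"]})

# The static type of `loaded` is effectively "any object": we learn
# nothing about it until we inspect its runtime type.
loaded = pickle.loads(payload)

if isinstance(loaded, dict):
    result = sorted(loaded)  # sorted keys
else:
    result = None

assert result == ["id", "tags"]
```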

King then goes on to make a different claim: that it is possible to implement
`pickle.load` in Java and also in Haskell. It is possible in Java but not in
Haskell.

Here's the key line for Java:

 _However, nothing is stopping you from passing Serializable.class and
branching on the type later_

Notice the subtle shift in meaning for "type": from static type to runtime
tag. It is possible to branch on the type at runtime, because the type is
available at runtime, because Java has dynamic types. They're awkward and
painful to use, but it is possible.

For Haskell:

 _Can we do this in Haskell, too? Absolutely—we can use the serialise library,
which has a similar API to the Java one mentioned above._

Absolutely you cannot! The Serialise library requires you to list all of the
possible result types up-front, "by providing a set of type class instances."
There is no way for a Haskell program to deserialise a value whose static type
was not visible when the program was compiled. There is no way in Haskell to
"get a list of all types" (it's hard to even formulate this).

Implementing Python-style pickle isn't really about types, it's about the
language's dynamic features. Can you look up a type by name, reflect on a
value at runtime? Dynamic features really do bring new capabilities to
programs.

1:
[https://news.ycombinator.com/item?id=21479933](https://news.ycombinator.com/item?id=21479933)

~~~
choeger
I can easily write you pickle in Haskell. In fact you can easily embed
Python's single type in Haskell and thus do _everything_ you can do in python.

The difference to Java has nothing to do with reflection, really, but rather
with a uniform object representation (in Java you know the shape of all values
by default).

~~~
millstone
I don’t believe one can write pickle.load in Haskell. Please show me how!

A Java pickle.load would need to lean heavily on its dynamic features
including reflection. For example, looking up a class by name.
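That capability is built into Python, and it is roughly what a Java `pickle.load` would reach for via reflection. A minimal sketch (`decimal.Decimal` is just a convenient stand-in class):

```python
import importlib


def class_by_name(module_name: str, class_name: str) -> type:
    """Resolve a class from a (module, name) string pair at runtime."""
    return getattr(importlib.import_module(module_name), class_name)


# Neither "decimal" nor "Decimal" needs to be mentioned statically anywhere.
cls = class_by_name("decimal", "Decimal")
assert cls("1.5") + cls("0.5") == 2
```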

~~~
choeger
Have a look at this typeclass, for instance:

[https://hackage.haskell.org/package/base-4.12.0.0/docs/Type-Reflection.html#t:TypeRep](https://hackage.haskell.org/package/base-4.12.0.0/docs/Type-Reflection.html#t:TypeRep)

If that's too complicated, just create an ADT of all the Python values and
translate Pickle's implementation directly.

------
throwaway17_17
EDIT: I was in a snarky mood earlier when I wrote this comment, I was tempted
to just delete it, but I’ll leave the text as a reminder to myself to find
better wording for comments despite my personal mood in the future.

I clearly do not understand the overwhelming dedication type system
proponents have against dynamic languages, particularly as evidenced by the
previous comments in this thread. I would note that it seems to be the same,
but reversed, for proponents of dynamic languages. Other than two or three
mentions of Rust, all the examples and discussions have been about primarily
interpreted or JIT languages with extensive runtimes. So there is little
argument that these type systems are saving users (and I mean developers) from
“memory safety” bugs or potential vulnerability exposing errors (those would
tend to be problems with the interpreter/runtime implementation).

Maybe I’m missing something, but if we are talking about managed language
runtimes and, again based on the languages dominating the discussion, are
nearly universally garbage collected, who cares what, if any, type system is
in use for a given program?

I don’t know for certain (I don’t do any web based work) but I can’t recall
ever reading about using typescript to boost speed or memory efficiency over
vanilla JS. There is some performance benefit to, say, compiled Haskell vs
interpreted Python, but that is apples and oranges; based on a limited
amount of googling, it looks like Clojure and Haskell are reasonably close in
terms of performance.

Also, most of the type systems being discussed are complex and high level, so
where does C and C++ fit into this topic? They are statically typed languages
(I am aware the type systems used are fundamentally weaker than those of HM
type systems) but no one seems to be arguing that they are good examples of
why static typing is beneficial. Rust, while a more typical functional
programming style type system, is still not really being discussed. I know
that the article doesn’t address these areas or languages, but for those
advocating so strongly for types, why not present something other than
technical debt, easier reading, and organizational benefits. For dynamic
proponents, why not address the lack of user visible benefits, i.e. no real
performance win in common usage scenarios, the lack of assistance weaker type
systems give to protect from differing types of errors, etc.

All I can tell is that some people like types and some people don’t. And since
most of the discussion seems to center on your standard CRUD app or web-based
application, does it even matter?

~~~
whateveracct
Performance is not in any way the reason for static types. A GC does not
render static types less useful.

~~~
throwaway17_17
I think the reason for static types is up to the user of static types. There
may be multiple reasons or motivations for static typing. In particular,
static linear type systems are useful for, in addition to other reasons, the
ability to fully specify the creation, usage, and consumption of resources.
This is a direct performance benefit. Incidentally, this can be said of
substructural types in general. Types equipped with algebraic effects for the
management of ‘side-effects’, and uniqueness types, are also useful for direct
specification of several things, one of them being the soundness of updating
data structures in place. Again, a direct performance benefit.

I agree that there are more reasons than performance to advocate static
typing, I just don’t understand why that would not be a point of advocacy in a
discussion about the pros and cons of typing systems in general.

------
blain_the_train
Types are specifications. Knowing whether and how something doesn't meet a spec is
useful both in development and at run time.

------
ukj
This is so predictable.

1\. Author attacks premise of dynamic type systems argument.

2\. Author presents premise for static type system (which is practically
confluent/equivalent/equifinal to the premise he just undermined).

>static type systems allow specifying exactly how much a component needs to
know about the structure of its inputs, and conversely, how much it doesn’t.
Indeed, in practice static type systems excel at processing data with only a
partially-known structure, as they can be used to ensure application logic
doesn’t accidentally assume too much.

Chances are you are wrong about 'how much you need to know' irrespective of
your type system. Nobody gets it right the first time.

When we figure out that we were wrong about how much we need to know about the
structure of our inputs (e.g we assumed too little) we do this thing we
call... "extending our code" [1].

The question then is simply: which type-system is better at extensibility?

And I have no idea how to define "better".

[1]
[https://en.wikipedia.org/wiki/Extensibility](https://en.wikipedia.org/wiki/Extensibility)

~~~
yakshaving_jgt
This is so predictable.

1\. HN commenter misses the point of the article.

2\. HN commenter even misgenders the article's author.

~~~
dang
Please don't reply to a bad comment with a worse one. That only poisons the
thread further, and the site guidelines explicitly ask you not to do it.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

------
cryptica
>> but they simultaneously advance an implicit belief: that dynamically typed
languages can process data of an unknown shape.

The real difference is that dynamic type systems don't pretend to. Static type
systems have a way of putting developers at ease on auto-pilot. Arguments
which compare dynamic type systems with static type systems should be more
focused on reality rather than hypothetical, idealized use cases where the
developer is a completely rational agent who understands everything about the
project's direction and strategy. The reality is that the vast majority of
developers can't design software properly; they can't even select the best
data structure to solve the simplest problems and they over-engineer and can't
create good abstractions that make sense and aid the project's evolution. Most
developers don't know what a good abstraction looks like because they don't
have a long term vision for the product that they're working on.

All these safety features which static typing supposedly introduces don't
actually protect the project's code from the real threat which is poor design
at an architectural level. I would even argue that dynamic typing can
encourage worse design because it gives developers an incentive to keep
inventing strange abstractions/types that don't correspond to the high level
picture of what the software actually does... Most statically typed projects
I've seen tend to be full of weird and unnecessary abstractions like:
'Interactor', 'Builder', 'PrimaryAdapter'... these kinds of abstractions do
not bring the code any closer to alignment with the end user's perspective of
the software (which should be the primary goal of any abstraction), instead,
they impose constraints on developers working on the project... but as
requirements change, these constraints have a way of becoming redundant which
is why large amounts of logic sometimes need to be rewritten.

Static typing encourages short term development strategies centered around
constraining developers from doing certain things based on speculative and
often completely arbitrary concepts that the lead developer felt in their gut
was important. Maybe if the abstractions were designed around some long-term
vision of the evolution of the project, but this is almost never the case (and
I've worked on many different projects, for many different companies for over
a decade), most of the time, abstractions and the constraints that they impose
are in fact based on nothing but arbitrary technical decisions that may as
well have been the result of a coin toss.

Developers should have an incentive to create fewer (and only necessary)
abstractions, not more. By forcing developers to rely on their memory instead
of their IDE to figure out which variables hold what kinds of objects, there
is a strong incentive for them to keep abstractions as simple and non-confusing
as possible.

Any tool which makes some aspect of life easier for the user will invariably
make the user lazier and complacent in that domain. Abstractions are not
something one should be complacent about, they're very difficult to get right.

~~~
cryptica
>> I would even argue that dynamic typing can encourage worse design

I meant static typing... By making it easy to keep track of interfaces/types,
it creates incentives to create more different kinds of abstractions.

------
kresten
Title is confusing; it refers to open worlds, not open software.

~~~
tsimionescu
I don't think there was any reason to assume a post title starting with
'dynamic type' would use open to refer to open-source - how could there
possibly be a correlation?

~~~
shkkmo
The reason would be an unfamiliarity with the use of the term "open" in a
typing context. Without that knowledge, "open source" would be the next most
plausibly related usage.

------
luord
> We don’t have to respond to an ill-formed value by raising an error! We just
> have to explicitly ignore it.

The point of the article is basically "dynamic type systems are not inherently
more open because we can just ignore type errors in static type languages
[which means, of course, that we're making them behave like dynamic
languages]"... Well, shoot.
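A small Python sketch of "explicitly ignore it" rather than raising (the record shape here is made up for illustration):

```python
def well_formed(record) -> bool:
    """A record is usable if it is a dict with an integer `id` field."""
    return isinstance(record, dict) and isinstance(record.get("id"), int)


incoming = [{"id": 1}, "garbage", {"id": "oops"}, {"id": 3, "extra": True}]

# Ill-formed values are explicitly skipped, not raised on.
ids = [r["id"] for r in incoming if well_formed(r)]
assert ids == [1, 3]
```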

Ultimately, it seems like her previous article was just contributing to that
_six-decade-old_ flamewar. Since this is a flamewar, it got comments
disagreeing, and now she posted an article disagreeing with the disagreements.

Everything worked as expected and nothing that hasn't been said before (over
and over) was said.

Why can't people just be ok with "I like this paradigm and it's ok if you like
another"? Sure, the discussion has led to plenty of advances in programming
language design, but at this point it's pretty clear that, if it were possible
to absolutely prove that one type system is better than another, we would have
done so already. Meaning that everything pretty much comes down to opinion.

