
Diminishing returns of static typing - robgering
https://blog.merovius.de/2017/09/12/diminishing-returns-of-static-typing.html
======
alkonaut
There are 3 main areas of interest in the discussion of benefits of static vs
dynamic typing.

\- Quality (How many bugs)

\- Dev time (How fast to develop)

\- Maintainability (how easy to maintain and adapt for years, by others than
the authors)

The argument is often that there is no formal evidence for static typing one
way or the other. Proponents of dynamic typing often argue that Quality is
_not_ demonstrably worse, while dev time is shorter. Few of these formal
studies however look at software in the longer perspective (10-20 years). They
look at simple defect rates and development hours.

So too much focus is spent on the first two (which might not even be two
separate items as the quality is certainly related to development speed and
time to ship). But in my experience those two factors aren't even important
compared to the third. For any code base that isn't a throwaway like a one-off
script, say one with 10 or 20 years of maintenance ahead of it, the ability to
maintain/change/refactor/adapt the code far outweighs the other factors. My own
experience says it's much (much) easier to make quick and large-scale
refactorings in static code bases than dynamic ones. I doubt there will ever
be any formal evidence of this, because you can't make good experiments with
those time frames.

~~~
irrational
10-20 years?! Holy Cow! Other than huge software projects (like Word or Mac OS
- and even then...) is there really software that still has that kind of
maintenance window? I've worked for a Fortune 150 company for nearly 2
decades. There is not a single piece of software at the company that has not
been rewritten from scratch (usually due to business changes) at least once
every 10 years. I can't even imagine something that would still be useful
after 10 years (honestly, even 5 years seems like a stretch). Just think -
software written 20 years ago would have been written when the WWW was still
soiling its diapers.

~~~
ex_amazon_sde
> I can't even imagine something that would still be useful after 10 years

Ah the HN perception bubble.

Good code lasts longer than that. Bad code gets replaced.

~~~
humanrebar
Good code is replaceable. Bad code is hard to get rid of.

Some code sticks around because it's great at what it does. Some code sticks
around because it works if you don't touch it and is _impossible_ to delete
due to various kinds of dependencies.

~~~
crdoconnor
All code is replaceable. It's bad APIs that are hard to get rid of.

Most POSIX APIs, for instance, are confusing, obtuse and unnecessarily
imperative but still _good enough_ in spite of being 40 odd years old. There's
way too much code that implements or calls them to justify making significant
changes at this point.

------
agentultra
I think what's often missing from these arguments is that statically checking
(or inferring) homogeneous lists is probably one of the most superficial uses
of the type system in Haskell (and indeed not the interesting feature most
power-users of Haskell are interested in as far as I can tell).

What _is_ interesting is using the type system to specify invariants about
data structures and functions at the type level _before_ they are implemented.
This has two effects:

The developer is encouraged to think of the invariants before trying to prove
that their implementation satisfies them. This approach to software
development asks the programmer to consider side-effects, error cases, and
data transformations before committing to writing an implementation. Writing
the implementation proves the invariant if the program type checks.

(Of course Haskell's type system in its lowest-common denominator form is
simply typed but with extensions it can be made to be dependently typed).

The second interesting property is that, given a sufficiently expressive type
system (which means Haskell with a plethora of extensions... or just
Idris/Lean/Agda), it is possible to encode invariants about complex data
structures at the type level. I'm not talking about enforcing homogeneous lists
of record types. I'm talking about ensuring that Red-Black Trees are properly
balanced. This gets much more interesting when embedding DSLs that compile down
to more "unsafe" languages into such a programming language.
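
To make this concrete with something far simpler than a Red-Black Tree, here
is a toy sketch (my own example, not from any particular code base) of a
length-indexed vector in Haskell, where taking the head of an empty vector is
a compile-time error rather than a runtime one:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- The length of the vector is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- vhead only accepts provably non-empty vectors; "head of empty
    -- list" is ruled out by the type checker before the program runs.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x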

~~~
danharaj
List typing isn't as superficial as it seems. The following has happened to me
multiple times, perhaps in the last _month_:

I have a large code base. I want to replace a fundamental data structure to
support more operations/invariants/performance guarantees. I change the type
at the roots of the code base. My instance of ghcid notifies me of the first
type error. I fix it. This repeats until the program compiles again. I run the
tests. All the tests pass.

This is _insane_ in Python/C/Ruby. I've had to do it in C and Python. In
Haskell I do it with impunity.

The type system doesn't just check what my program does, it is the compass,
map, and hiking gear that gets me through the wilderness.
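
A toy sketch of a single round of that loop (hypothetical names, not code from
a real project):

    -- Step 1: change the type at the root of the code base.
    -- newtype UserId = UserId Int      -- before
    newtype UserId = UserId String      -- after

    -- Step 2: every stale use site stops compiling, e.g. anything that
    -- did arithmetic on the old Int, like: UserId (n + 1).
    -- ghcid flags each one; fix it, recompile, repeat until green.
    nextId :: UserId -> UserId
    nextId (UserId n) = UserId (n ++ "'")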

~~~
Sir_Cmpwn
I don't really get the lumping of C in with Python and Ruby here. The C compiler
picks up on that, too. All over this comment section people are calling C
weakly typed, I don't get it. Is it because void* exists? Every language has
something like that.

~~~
mikeash
It's not just that void * exists, but that it's basically mandatory.

C's built-in arrays are super weak, so you need some library to do proper
resizable arrays. Since C doesn't have generics, such a library will use void
* as the type for putting values into the array and getting them back out
again. You'll be casting at every point of use, and nothing will check to make
sure you got the cast right, other than running the code and crashing.

~~~
flukus
> C's built-in arrays are super weak, so you need some library to do proper
> resizable arrays. Since C doesn't have generics, such a library will use
> void * as the type for putting values into the array and getting them back
> out again. You'll be casting at every point of use, and nothing will check
> to make sure you got the cast right, other than running the code and
> crashing.

There are other options though like macros and code generation. Code gen in
particular can give you more options than generics without sacrificing any
type safety.

~~~
mobiletelephone
I use a code generator. It has a great type system, it's composable, mature
and it can even compile pretty fast with the right tooling. It's called C++!

~~~
flukus
Templates are good for some things but they only do a fraction of what code
generators can do. With code generation you can generate types from database
tables, web APIs, etc. You can do things like declaring database views
declaratively and generating huge chunks of an application. It can handle all
sorts of boilerplate code that you can't generate with templates alone.

------
catpolice
Static typing prevents bugs in code to the degree that the programmer can
correctly encode the desired behavior of the program into the type system.
Relatively little behavior can be encoded in inexpressive type systems, so
there's a lot of room for bugs that have nothing to do with types. A lot more
behavior (e.g. the sorts of invariants mentioned in agentultra's top level
comment) can be encoded in a more expressive type system, but you then have
the challenge of encoding it /correctly/. A lot of that kind of thinking is
the same as the kind of thinking you'd have to do writing in a dynamic
language, but you get more assurances when your type system gives you feedback
about whether you're thinking about the problem right.

For my money, I work in a primarily dynamic language and I already have a set
of practices that usually prevent relatively simple type mismatches so I very
rarely see bugs slip into production that involve type mismatches that would
be caught by a Go-level type system, and just that level of type information
would add a lot of overhead to my code.

But if I were already using types, a more expressive system could probably
catch a lot of invariant issues. So I feel like the sweet spot graph is more
bimodal for me: the initial cost of switching to a basic static type system
wouldn't buy me a lot in terms of effort-to-caught-bugs-ratio, but there's a
kind of longer term payout that might make it worth it as the type system
becomes more expressive.

~~~
pdonis
_> Static typing prevents bugs in code to the degree that the programmer can
correctly encode the desired behavior of the program into the type system._

Exactly. The author of the article implicitly equates "statically verified
code" with "bug-free code". But that's not correct. It's quite possible (and
even, dare I say it, fairly common) to have code that expresses, in perfectly
type-correct fashion, an algorithm that doesn't do what the user actually
wants it to do. Static typing doesn't catch that.

~~~
zbobet2012
It depends on your type system. In new languages like Idris or F* you can
encode _in the type_ the correctness of an algorithm and it _will not compile_
if the compiler cannot prove that correctness.

For example, I can prove that my string reverse works in Idris
([https://www.stackbuilders.com/news/reverse-reverse-theorem-proving-with-idris](https://www.stackbuilders.com/news/reverse-reverse-theorem-proving-with-idris)).
Or I could prove that my function squares all elements in a list. Etc.
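
To give a flavor of what such a proof looks like, here is the reverse-reverse
theorem sketched in Lean 4 rather than Idris (my own sketch, not code from the
linked post; the idea is the same):

    theorem rev_rev {α : Type} (xs : List α) : xs.reverse.reverse = xs := by
      induction xs with
      | nil => rfl
      | cons x xs ih =>
        -- reverse (x :: xs) = reverse xs ++ [x]; push reverse through ++
        simp [List.reverse_cons, List.reverse_append, ih]

If the proof does not go through, the module does not compile.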

Now a big part of the problem is expressing with sufficient accuracy what the
properties of the algorithm you want to prove are. For example for string
reverse I may want to show more than that `reverse (reverse s) = s`, since,
after all, if reverse does _nothing_, that would still be true. I would probably
want to express that the first and last chars swap when I just call reverse
xs.

~~~
s17n
There's no such thing as "proving correctness". You can have bugs in the type
definitions. You can have bugs in the English (or whatever your native
language is) description of what you think the algorithm should be doing. You
can prove a program does what the types say it should do but that is not what
"correctness" means.

>Now a big part of the problem is expressing with sufficient accuracy what the
properties of the algorithm you want to prove are. For example for string
reverse I may want to show more than that `reverse (reverse s) = s`. Since
after all if reverse does nothing that would still be true. I would probably
want to express that the first and last chars swap when I just call reverse
xs.

This is no different from writing tests in a dynamic language.

~~~
naasking
> This is no different from writing tests in a dynamic language.

Types and tests are not equivalent. This is a prevalent myth among dynamic
typing enthusiasts. There is no unit test that can ensure, e.g., race and
deadlock freedom, but there are type systems that can do so. There are many
such properties, and tests can't help you there.

Types verify stronger properties than tests will ever be able to, full stop.
You don't always need the stronger properties of types, except when you do.

~~~
zbobet2012
To add some color here on the difference between a type proof and a test:
consider that _you can never test all possible strings_ for reverse.

However, a type proof can show that reverse reverses all possible strings.

It is possible to test that a function on 16 bit integers returns the correct
value for all inputs. Doing so would be a proof by exhaustion.

Type based proofs let us prove things using other methods than exhaustion,
which is the only possible way to prove things with tests. That is an
important property.
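
A small Haskell sketch of the contrast (hypothetical functions):

    import Data.Word (Word16)

    -- Feasible: check that two Word16 functions agree on all 65536
    -- inputs. This test *is* a proof, by exhaustion.
    agreeEverywhere :: (Word16 -> Word16) -> (Word16 -> Word16) -> Bool
    agreeEverywhere f g = all (\x -> f x == g x) [minBound .. maxBound]

    -- Impossible for String -> String: the input space is infinite,
    -- so tests can only ever sample it.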

~~~
naasking
Indeed, or to summarize as a soundbite: tests can only prove existential
properties but types can prove universal properties.

------
simon_o
The biggest issue with claims like "there are only diminishing returns when
using a type system better than the one provided in my blub language" is that
it assumes people keep writing the same style of code, regardless of the
assurances a better type system gives you.

"I don't see the benefit of typed languages if I keep writing code as if it
was PHP/JavaScript/Go" ... OF COURSE YOU DON'T!

This is missing most of the benefits, because the main benefits of a better
type system aren't realized by writing the same code; the benefits are realized
by writing code that leverages the new possibilities.

Another benefit of static typing is that it applies to other peoples' code and
libraries, not only your own.

Being able to look at the signatures and be certain about what some
function _can't_ do is a benefit that untyped languages lack.

I think the failure of "optional" typing in Clojure is a very educational
example in this regard.

The failure of newer languages to retrofit nullability information onto Java
is another one.

~~~
Merovius
The article makes two main points: a) static typing has a cost and b) thus,
any benefit it brings should be examined against that cost.

I am sorry, but I don't really see how you stating more benefits of static
typing really counters either of them.

I recommend reading the article again. But this time, try not to read it as
defending a specific language (I only mentioned my blub language so that it's
a more specific and extensive reference in the cases where I use it - if you
are not using my blub language, you should really just ignore everything I
write about it specifically) and more as trying to talk on a meta-level about
how we discuss these things. Because your comment is an excellent example of
how _not_ to do it and the kind of argument that prompted me to this writeup
in the first place.

~~~
Retra
Those are not really 'points', though; they are far too trivial. Obviously,
nothing counters them, because they are tautologies that could just as well
apply to any subject.

The point is to explore a _comparative_ difference in value, and that is
realized through mastery of the tool, not merely living in a world where it
exists.

~~~
Merovius
> they are far too trivial.

You'd have thunk I didn't have to make them, then. But I did, judging from
literally every argument I had about this.

------
flavio81
What amuses me in all "static typing versus..." discussions is that it is
usually a comparison between two camps:

Camp A: Languages with mediocre static typing facilities, for example:

    
    
         -- C (weakly typed)
         -- C++ (weakly typed in parts, plus over-complicated
            type features) 
         -- TypeScript (the runtime is weakly typed, 
            because it's Javascript all the way down)
    

Camp B: Languages with mediocre dynamic typing facilities, for example:

    
    
         -- Javascript (weakly typed) 
         -- PHP 4/5 (weakly typed) 
         -- Python and Ruby (no powerful macro system to 
            help you keep complexity well under control 
            or take full advantage of dynamism)
    
    
    

Both camps are not the best examples of static or dynamic typing. A good
comparison would be between:

Camp C: Languages with very good static typing facilities, for example:

    
    
         -- Haskell
         -- ML
         -- F#
    

Camp D: Languages with very good dynamic typing facilities, for example:

    
    
         -- Common Lisp
         -- Clojure
         -- Scheme/Racket
         -- Julia
         -- Smalltalk
    
     

I think that as long as you stay in camp (A) or (B), you'll not be entirely
satisfied, and you will get criticism from the other camp.

~~~
andrewla
It's Camp D that I'm least familiar with here; outside of academic projects in
lisp/scheme I've never used them for anything serious.

What exactly does it mean to have "good dynamic typing facilities"?

~~~
Lutia
> What exactly does it mean to have "good dynamic typing facilities"?

To quote Peter Norvig on the difference between Python and Lisp (you could
apply it to most other mainstream dynamic languages vs Lisp):

> Python is more dynamic, does less error-checking. In Python you won't get
> any warnings for undefined functions or fields, or wrong number of arguments
> passed to a function, or most anything else at load time; you have to wait
> until run time. The commercial Lisp implementations will flag many of these
> as warnings; simpler implementations like clisp do not. The one place where
> Python is demonstrably more dangerous is when you do self.feild = 0 when you
> meant to type self.field = 0; the former will dynamically create a new
> field. The equivalent in Lisp, (setf (feild self) 0) will give you an error.
> On the other hand, accessing an undefined field will give you an error in
> both languages.

Common Lisp has a (somewhat) sound, standardized language definition, and
competing compiler/JIT implementations that are much faster than anything that
could ever possibly come from the Python camp, because the latter is actually
_too_ dynamic and ill-defined ("Python is what CPython does"), and making
Python run fast while ensuring 100% compatibility with its existing ecosystem,
without putting further restraints on the language, is akin to chasing a mirage.

~~~
catnaroek
What does “somewhat sound” mean?

~~~
flavio81
I think he refers to some of the usual criticisms of Common Lisp:

1\. The language specification is very big. This is true, it is a very big
specification. On the other hand, this is mostly because the language
spec also includes the spec for its own "standard library", unlike what
happens in C or Java, for example, where the Std. lib is specified elsewhere.
CL's "standard library" is very big, because there are many, many features.

The other reason the spec is so big is that this is a language with a lot of
features - you can do high level programming, low level, complex OOP, design-
your-own OOP, bitwise manipulation, arbitrary precision arithmetic,
disassemble functions to machine language, redefine classes at runtime, etc
etc etc.

Probably the most extreme feature is that there is an SQL-like mini-
programming language built in just for doing loops (!), "the LOOP macro". On
the other hand, you can choose not to use it. And if you use it, it can help
you write highly readable and concise code. More info:

[http://cl-cookbook.sourceforge.net/loop.html](http://cl-cookbook.sourceforge.net/loop.html)

2\. The "cruft"; Common Lisp is basically the unification ("common") of at
least two main Lisp dialects that were in use during the 70s. So there are
some parts (mind you, just _some_) in which some naming or function parameter
orders could have been more consistent; for example here everything is
consistent:

    
    
        ;; access a property list by property
        (getf plist property)
    
        ;; access an array by index
        (aref array index)
    
        ;; access an object's slot
        (slot-value object slot-name)
    

... but here the consistency is broken:

    
    
        ;; gethash: obtain the element from a hash table, by key
        (gethash key hash-table)
    

There are also some things that seem to be redundant, like for
example "setq", where "setf" can do everything you can do with "setq" (and
more); or "defparameter" and "defvar", where in theory "setf"
might be enough. But there are differences, and knowing such differences helps
to write more readable and better code. And it's really nitpicking, for these
are easy to overcome.

3\. Because of the above, CL is often criticized because of being a language
"designed by committee". But, unlike other "committee-designed languages",
this one was designed by selecting, from older Lisps, features that were
already proven to be a Good Thing, and incorporating them into CL without too
many changes. So you can also consider it to be "a curated collection of good
features from older Lisps..."

4\. Scheme, the other main "Lisp dialect", has a much, much smaller and
simpler spec, so it's easier to learn. But on the other hand this also means
that many features are just absent, and will need to be implemented by the
programmer (or by external libs), without any standardization. On the other
hand, due to the extensive standardization, usually Common Lisp code is highly
portable between implementations, and often code will run in various CL
implementations, straight away, with zero change.

Historically, Scheme was more popular inside the academic community while
Common Lisp was more popular with production systems (i.e. science, space,
simulation, CAD/CAM, etc.) Thus, there used to be an animosity between
Schemers and Lispers, although jumping from one language to the other is rather
easy...

~~~
catnaroek
0\. There is nothing wrong with big standard libraries, so long as they are
not redundant and the core language is small.

1\. This is a serious criticism, but it has nothing to do with soundness.

2\. There is absolutely nothing wrong with a language being designed by a
committee, so long as the committee's members are all competent.

3\. Back to 0.

~~~
flavio81
_> 2\. There is absolutely nothing wrong with a language being designed by a
committee_

It sort of has a bad stigma, because two well-known, unloved languages were
designed by committee: COBOL and PL/I.

~~~
kazinator
Today, those roles are played by C++17 and C11.

------
fny
There's one huge benefit to static typing people often forget: self-
documentation.

While, yes, top-quality dynamic code will have documentation and test cases to
make up for this deficiency, it's often still not good enough for me to get my
answer without spelunking the source or StackOverflow.

I feel like I learned this the hard way over the years after having to deal
with my own code. Without types, I spend nearly twice as long to familiarize
myself with whatever atrocity I committed.

~~~
hellofunk
Many dynamically typed languages offer excellent runtime contract systems
(Racket, Clojure) that serve as implicit documentation at least as well as
a statically typed language. Often more so, because you can express a lot of
things in contracts that are not easily expressed in type systems.

~~~
suprfnk
> because you can express a lot of things in contracts that are not easily
> expressed in type systems.

Can you give an example (or a few examples) of this?

~~~
sigstoat
you can put arbitrary functions in a contract. with static typing that
requires dependent types. and while i'm a fan, that's an enormous can of
complexity to bust open.

say you've got a function that takes a list of numbers, and some bounds, and
gives you back a number from the list that is within the bounds (and maybe
meets other criteria, whatever). your contract for the function could require
not only that the list be comprised of numbers, and the bounds are numeric,
but also that the lower bound is <= the upper bound, and that the return value
was actually present in the input list.
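
something like this rough sketch, with the checks written as ordinary runtime
assertions (Haskell here for concreteness; a real Racket contract would attach
the same predicates at the function boundary):

    -- Hypothetical function: pick an element of xs lying within [lo, hi].
    pickInBounds :: [Int] -> Int -> Int -> Int
    pickInBounds xs lo hi
      | lo > hi   = error "contract: lower bound exceeds upper bound"
      | otherwise =
          let r = head [x | x <- xs, lo <= x, x <= hi]  -- the actual logic
          in if r `elem` xs
               then r
               else error "contract: result not drawn from the input list"

the point being that the pre- and post-conditions are arbitrary code, not just
type shapes.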

------
mpartel
Having programmed in languages ranging from Ruby to Coq, for web apps and
games, I feel the sweet spot is somewhere in the neighborhood of Java/C#, i.e.
include generics but maybe leave out stuff like higher kinds and super-
advanced type inference (and null!).

The main use case of generics, making collections and datastructures
convenient and readable, is more than enough to justify the feature in my
view, since virtually all code deals with various kinds of "collections"
almost all of the time. It's a very good place to spend a language's
"complexity budget".

I wrote an appreciable amount of Go recently, with advice and reviews from
several experienced Go users, and the experience pretty much cemented this
view for me. An awful lot of energy was wasted memorizing various tricks and
conventions to make do with loops, slices and maps where in other languages
you'd just call a generic method. Simple concurrency patterns like a worker
pool or a parallel map required many lines of error-prone channel boilerplate.
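
For contrast, in a language with generics a parallel map is essentially a
one-liner; a sketch using Haskell's async library:

    import Control.Concurrent.Async (mapConcurrently)

    -- A typed parallel map: run an IO action over every element
    -- concurrently and collect the results, no channel plumbing needed.
    parallelMap :: (a -> IO b) -> [a] -> IO [b]
    parallelMap = mapConcurrently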

~~~
runT1ME
> An awful lot of energy was wasted memorizing various tricks and conventions
> to make do with loops, slices and maps where in other languages you'd just
> call a generic method.

I feel the same way going from languages with HKTs back to Java/C#...

Not sure why you think they're not as useful, it sounds like you're making the
same argument as OP but just moving the bar one notch over...

~~~
mpartel
I am. I think the OP is fundamentally right about the sweet spot being pretty
far from either extreme, I just disagree slightly about where exactly :)

Subjectively, I use ordinary generics all the time, but see the need for HKTs
only occasionally. It's entirely possible I'm not experienced enough to see
most of their possible use cases, but then I'd wager most programmers aren't.

~~~
willtim
In retrospect, HKTs are arguably Haskell's greatest innovation, enabling
extremely general abstractions and huge amounts of code reuse.

~~~
mpartel
In my subjective opinion, Haskell has taken abstraction way past the point of
diminishing returns, at least for the problems I tend to work on.

A large portion of advanced Haskell type system features seem to be about
emulating things you could do with side-effects. I guess I prefer Rust's
approach to managing side-effects, or even just Scala's implied convention of:
use 'var' very sparingly, and mostly locally. Yes, some guarantees get traded
away, but so much simplicity is gained.

I'm not very experienced with Haskell, but I've written a fair bit of Scala
and I've utterly failed to see the value in scalaz and similar libraries,
despite trying them a few times. They always seem to add lots of complexity
without a tangible benefit.

Coming at it from another angle, I just don't see many cases where I feel I
have to repeat myself due to a shortcoming of, say, Java's or C#'s type
system. If I could add one feature to either, it'd actually be support for
variadic type parameters.

~~~
willtim
As a counterexample, C# needed expensive language extensions to accommodate
both LINQ and Async/await. Both can be implemented in Haskell purely as a
library, thanks to HKTs.

Both Java and C# tend to rely heavily on frameworks such as Spring to
workaround issues with the expressivity of the languages. This causes problems
when one needs two frameworks (they don't in general compose). In Haskell,
HKTs allow one to write polymorphic programs that are parametric with respect
to certain behaviours and dependencies, no dependency injection framework
needed.
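
A small sketch of that style (hypothetical class and names, nothing from a
specific framework):

    -- The dependency is a type-class constraint, not a framework.
    class Monad m => MonadUserStore m where
      lookupUser :: Int -> m String

    -- Business logic stays polymorphic in its effects.
    greeting :: MonadUserStore m => Int -> m String
    greeting uid = do
      name <- lookupUser uid
      pure ("Hello, " ++ name ++ "!")

    -- Production wires in IO; tests can wire in a pure mock instead.
    instance MonadUserStore IO where
      lookupUser uid = pure ("user-" ++ show uid)  -- stand-in for a DB call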

Please don't judge Haskell using Scala and scalaz.

~~~
mpartel
Not sure about LINQ, I thought that was "just" syntactic sugar for a bunch of
collection methods. Are you referring to extension methods as an unfortunate
prerequisite?

But I think I get your general point: things like 'Control.Concurrent.Async'
('async'/'await') and 'Control.Monad.Coroutine' ('yield') are libraries that
implement some very generic type classes: 'Functor', 'Applicative',
'Monad'. This then lets you use features that are generic over those type
classes ('do' syntax, 'fmap', ...).

It's been many years since I had a proper look at Haskell. Maybe it just takes
more practice than I had back then to fully "get it". But I still don't see
those abstractions being that useful in everyday programming. They seem to
have huge potential for hard to follow code as you need to mentally unpack and
remember more layers of abstraction, and the gain is not clear to me. Even the
features that have trickled down to C# are not _that_ crucial I feel. The way
mainstream languages pick the most useful use cases of those abstractions
seems pretty OK to me.

(Also, macros and compiler plugins are another interesting avenue towards very
powerful abstractions, with a different set of problems.)

As for Spring and dependency injection, I don't follow how HKTs would help
there. Could you give an example? Aren't DI frameworks mostly about looking
things up with reflection magic to automate, and arguably just obfuscate, the
task of wiring things up in 'main'?

~~~
runT1ME
> They seem to have huge potential for hard to follow code as you need to
mentally unpack and remember more layers of abstraction

That's the beauty of abstraction without side effects, you don't need to
unpack anything. If you know what the inputs are and the outputs are, you
don't need to know how it works or what type classes are even used to
transform certain things.

People use `sequence` all the time in Scala, not realizing it's only able to
be implemented with HKTs of Applicative and Traverse. FYI, sequence flips a
list of Futures to a Future of List, or a vector of Trys to a Try of Vector,
etc.
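
For reference, the Haskell original is a single definition covering all of
those cases (the examples here are mine):

    import Data.Traversable (sequenceA)

    -- sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
    justBoth, nothing :: Maybe [Int]
    justBoth = sequenceA [Just 1, Just 2]   -- Just [1,2]
    nothing  = sequenceA [Just 1, Nothing]  -- Nothing
    -- The same function also flips [IO a] to IO [a], [Either e a] to
    -- Either e [a], and so on: one implementation, thanks to HKTs.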

~~~
mpartel
Fair point about 'sequence'. There are probably a bunch of these I use
regularly in Scala without realizing it. Though as a counterpoint,
'Future.sequence' wouldn't really lose _that_ much if it didn't return a
collection of the same type. And I haven't yet felt the need for a generic
`sequence`, which I'm sure scalaz has.

I don't buy your point about not needing to unpack side-effectless code,
however. There are _always_ reasons to dig into code, be it bugs, surprising
edge cases, poor documentation, insufficient performance, or even just
curiosity. And those high-level abstractions tend to be visible in module
interfaces too. I remember some Haskell libraries being very hard to figure
out how to use if you didn't know your category theory :)

~~~
hota_mazi
It's a pretty typical symptom I've seen a lot of hardcore FP developers
exhibit: they forget how much time it took them to reach their level of
mastery.

It's like spending ten years learning to speak Russian and then criticizing
anyone who says that learning Russian is difficult.

Puzzling out scalaz code is difficult and requires an enormous investment in
hours and practice, investment that a lot of people prefer to put into
different learnings.

~~~
runT1ME
Yea, puzzling out _some_ scalaz code takes investment. On the other hand, the
library is used for web apps, network servers, database based applications,
streaming libraries etc.

It's incredibly multipurpose, more so than even Spring or Guava or LINQ, and
these are things that developers regularly have to invest serious time in.

The argument is just that FP libraries (like Scalaz) have a bigger payoff in
the investment.

At Verizon Labs we have 20+ microservices that I have touched/looked at.
Some use Akka, some use Play, some use Jetty, some use Http4s but everyone
makes use of Scalaz somehow.

~~~
hota_mazi
> The argument is just that FP libraries (like Scalaz) have a bigger payoff in
> the investment.

It depends on the people, not everybody has the inclination to dive so deep
into hard core FP and they will be more productive using a different approach.

Don't make the mistake of thinking you've found the only software silver
bullet that exists and that people who don't use it "don't get it", which is
another attitude I've seen a lot of hardcore FP advocates embrace.

------
mattnewton
I just don’t buy that Go is some sort of sweet spot because it doesn’t have
generics. Generics pretty much exist for maps and slices, because they are
needed in real programs. The language designers just don’t let you make your
own generic collections.

~~~
namelost
Yeah far from finding a sweet spot, Go exists in some kind of type system
ghetto, because its type system is so crippled users have to resort to code
generation (go generate).

Neither Python nor Java programmers have to do that.

~~~
lopatin
Yeah Go is weird in that its static type system doesn't provide you with
_great static typing power_ but instead it's just there as a sort-of sanity
checker. If there's logic, they say write it with data structures and
functions. Have invariants? Enforce them yourself.

If Go is annoying with how little power it provides, that's fair, but other
type systems can be just as annoying then, because when given the ability to,
type astronauts will blast off into space, purely as a matter of honor or
instinct.

Besides, code generation isn't all that bad. Java programmers will eventually
find some kind of code generation in their build setup (serialization/schema
tools).

~~~
namelost
There's nothing wrong if users independently choose to use code generation.
However when a programming language starts to _rely_ on it, it becomes a major
problem.

We've been here before with the C preprocessor. There's nothing wrong with
having a preprocessor, but in C it is _necessary_ to use the preprocessor and
that causes a lot of problems, like making it especially difficult to write
tools.

------
evmar
In this thread: people will bring out the same tired arguments for or against
static typing, without commenting on the actual content of the post, which was
quite good!

I have come to see that type systems, like many pieces of computer science, can
either be viewed as a math/research problem (in which generally more types =
better) _or_ as an engineering challenge, in which you're more concerned with
understanding and balancing tradeoffs (bugs / velocity / ease of use / etc.,
as described in the post). These two mindsets are at odds and generally talk
past each other because they don't fundamentally agree on which values are
more important (like the great startups vs NASA example at the end).

~~~
mattnewton
I think this post was extremely hand-wavy. It stated the same divide that is
already known, but doesn’t actually make any arguments as to why Go or whatever
lies on some part of the curve, because it assumes that the way you program at
different points on the curve are roughly the same but with more type
boilerplate. Higher kinded types offer entirely new ways to program, and stuff
like optional typing in Python makes it all much more complex than just “how
long do I spend writing and reading type declarations”. I was left with an
impression that the author was content with go, and that’s pretty much it.

~~~
yorwba
I agree. The graph of static checking vs. lines of code should really be
factored into static checking vs. amount of annotations to achieve that level,
amount of annotations to write vs. how much that slows you down, and amount of
annotations _that are already written_ (in your own code or libraries you use)
vs. how much that speeds you up. And those will vary wildly depending both on
the language and the programmer.

------
oldandtired
It has been interesting to see the to-and-froing of arguments for and against
static typing in the discussions here.

Though I am not a type theorist (I only dabble in compilers and language
design), I have noted that many people conflate static typing and dynamic
typing with other additional ideas.

Static typing has certain benefits but also has certain disadvantages, dynamic
typing has certain benefits but also has certain disadvantages.

What I find interesting is that few people fall into the soft typing arena,
using static typing where applicable and advantageous and using dynamic typing
where applicable and advantageous.

Static typing has a tendency in many languages to explode the amount of code
required to get anything done, dynamic typing has a tendency to produce
somewhat brittle code that will only be discovered at runtime. The
implementation of static typing in many languages requires extensive type
annotation which can be problematic.

But what is forgotten by most is that static typing is a dynamic runtime
typing situation for the compiler, even when the compiler is written in a
statically typed language.

Instead of falling into either camp, we need to develop languages that give us
the best of both worlds. Many of the features people here have raised as being
a part of the static typing framework have been rightly pointed out as being
part of the language editors being used and are not specifically part of
the static typing regime.

Many years ago a similar discussion was held on Lambda-the-Ultimate, and the
sensible heads came to the conclusion that soft typing was the best goal to
head for. Yet, in the intervening years, when watching language design
aficionados at work, they head towards full static typing or full dynamic
typing and rarely head in the direction of soft typing (taking advantage of
both worlds).

So, the upshot: this discussion will continue to repeat itself for the
foreseeable future and there will continue to NOT be a meeting of minds over
the subject.

~~~
zzbzq
Maybe part of the problem is I can't picture what you're actually talking
about with soft typing. I can tell you C#/.NET has the DLR which allows you to
do dynamic types whenever you want. Outside of a few gimmicks, you rarely see
these used. I've rarely even seen them for quick prototyping, because
generally you mess around with using them for prototying, then the first time
they go bad, it's _really_ obnoxious, and you realize you're compiling the
code and writing function signatures anyway, might as well save the time later
and do it right the first time.

Then there's the whole tooling aspect of trying to mix type systems. It's
different lifestyles. Dynamic programmers aren't going to start compiling
their code to run it, static programmers aren't going to switch to a language
with weaker tooling around the IDE-ish features, which are mostly built on the
type system.

My conclusion is this: New languages should all be statically typed, because
we shouldn't need new languages at all. We should be fine. The reason we need
new languages at all, is because the trifecta of C++/Java/C# basically
encompassed the entire statically typed world, but they're all infected with
this fully overblown OOP obsession, and the null pointer bug--which newer
languages have fixed, through more static typing. Basically we need to replace
those languages with similar ones and then just stop making languages for a
few decades, until whatever we're doing now looks as dumb as OOP and null
pointers. In the long run, Go/Swift/Kotlin/Rust will take over the statically
typed world and it's going to be great.

~~~
oldandtired
Soft typing could be characterised by having the compiler do static type
analysis where it can, but leaving the type analysis to the runtime when it
can't.

A simple example of this is a list. Now in statically typed languages, lists
are homogeneous (this includes type unions). In dynamically typed
languages, lists can be heterogeneous; essentially anything can be added at
runtime.

In soft typing, we can indicate that a list is homogeneous and the compiler
will ensure that this is true or we can specify no type checking (as such) and
this will be done at runtime.
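
A rough sketch of the two behaviours side by side, using Haskell's
Data.Dynamic as a crude stand-in for the runtime-checked half (real soft
typing would make this seamless rather than requiring an explicit wrapper
type):

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- Homogeneous: checked entirely at compile time.
    ints :: [Int]
    ints = [1, 2, 3]

    -- Heterogeneous: each element carries its type; checks happen at
    -- runtime, when we try to take a value back out.
    mixed :: [Dynamic]
    mixed = [toDyn (1 :: Int), toDyn "two", toDyn (3.0 :: Double)]

    firstAsInt :: Maybe Int
    firstAsInt = fromDynamic (head mixed)  -- Just 1; wrong type gives Nothing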

Contrived, yes, but I regularly use other aggregates (tables and sets) that I
do not want to be homogeneous.

One of the aspects that I like about functional languages is the polymorphism
available, but in all that I have come across, there is no way to make a tree
or list heterogeneous without declaring union types beforehand.

My problem with C#, C++, Java, and their ilk, is that code is multiplied with
their generics.

How the IDE and compiler and type systems interact is a design function and is
not inherent to any type system.

One of the reasons I don't use specific mainstream languages such as C#, C++
or Java is that they don't provide the specific programming features that I
desire.

I have looked at Go, Swift and Rust and I am not at all impressed by the
"relative stupidities" within those languages. For other programmers, what
they consider to be "relative stupidities" is entirely up to their experience
and outlook.

------
willtim
Our industry has not yet even scratched the surface of what types can offer:
Types for enforcing architectures and controlling effects, types for checking
correct use and freeing of scarce resources, types for verifying protocol
implementations etc etc. Currently, half the industry is using schema-less
json and dynamic languages; so really it is far too early to generally talk
about any diminishing returns.

~~~
hwayne
There's a lot of great things our industry doesn't use: contracts, proper fuzz
testing, cleanroom, formal specification, constraint solvers, _checklists_. We
might (not necessarily, but _might_) be in a place where types are diminishing
returns with respect to other low-hanging fruit.

~~~
willtim
Yes it's true that retrofitting better type systems into existing languages
may not be low-hanging fruit. But developers have shown a willingness to adopt
new languages when they see clear benefits.

~~~
hwayne
> Yes it's true that retrofitting better type systems into existing languages
> may not be low-hanging fruit.

Disagree here, actually! Javascript (Typescript) and Python (mypy) are both
seeing pretty big benefits from adding gradual typing.

~~~
willtim
Glad to hear it!

------
solatic
OP draws a false one-dimensional relationship between types vs tests in terms
of code quality. Writing expressive types instead of tests does much more than
affect a quality curve - it changes the way you approach the problem you are
trying to solve. The classic Haskell example is understanding how IO being a
monad allows you to push impurity to the edge of your system.
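
A minimal sketch of that shape: all of the logic lives in a pure function, and
IO only appears in a thin shell around it:

    -- Pure core: no IO in the type, trivially testable.
    summarize :: [Int] -> String
    summarize xs = "count=" ++ show (length xs) ++ " total=" ++ show (sum xs)

    -- Impure shell: the only place IO appears.
    main :: IO ()
    main = interact (summarize . map read . lines)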

Start-ups decide not to write MVPs in languages like Haskell or Idris not
because those languages aren't "rapid" enough, but because it's too difficult
to find programmers experienced in those languages on the labor market. It's
already difficult enough to find competent programmers - no founder wants to
make their hiring woes even more difficult.

~~~
sordina
Sorry to contradict you, but we wrote an MVP in Rails even though we have 3.5
experienced Haskell programmers on staff. We did this because we knew we could
build some web stack apps with all the trimmings much faster in RoR. So there
is at least one counterexample.

~~~
yawaramin
I don't think it's really a contradiction. In a startup you still have to
choose the quickest path that you think will lead to success. It just depends
on what your definition of success is. RoR can be a safe choice even for
Haskell devs if they just want to build an off-the-shelf webapp with all the
trimmings. But if your definition of success is that you want to create a
formally-verified smart contract platform and cryptocurrency, you're going to
use something like Haskell or OCaml:
[https://github.com/tezos/tezos](https://github.com/tezos/tezos)

------
barrkel
There's a point beyond which you spend more time proving things about your
code than writing it, all the way up to the point where your ability to prove
things about your code in your chosen type system starts to affect the kinds
of solutions you can construct, and a different kind of complexity creeps in;
representational complexity rather than implementation complexity. This can be
a source of error, not just inefficiency.

------
mannykannot
Firstly, thank you for wanting to take an open-minded look into the issue,
rather than simply defend a position that you have already committed to.

You write "Why then is it, that we don't all code in Idris, Agda or a
similarly strict language?... The answer, of course, is that static typing has
a cost and that there is no free lunch."

I take it that you wrote "of course" here through assuming that there must be
some objective reason for the choice, and that it depends solely on
strictness, but languages don't differ only in their strictness, so choices
may be made objectively on the basis of their other differences, and we also
know that choices are sometimes made on subjective or extrinsic grounds, such
as familiarity. I don't know what proportion of professional programmers are
familiar enough with Idris or Agda to be able to judge the value proposition of
their strictness, but I would guess that it is rather small.

Now, to look at the sentences I elided in the above quote: "Sure, the graph
above is suggestively drawn to taper off, but it's still monotonically
increasing. You'd think that this implies more is better." As the graph is
speculative, it cannot really be presented as evidence for the proposition you
are making. I could just as well speculate that static program checking does
not do much for program reliability until you are checking almost every aspect
of program behavior, and that simple syntactical type checking is of limited
value. That would be consistent with the fact that there is little empirical
evidence for the benefit of this sort of checking, and explain why most people
aren't motivated to take a close look at Idris or Agda. In this equally-
speculative view of things, current language choices don't necessarily
represent a global optimization, but might be due to a valley of much more
work for little benefit between the status quo and the world of extensive-but-
expensive static checking.

------
tree_of_item
Yeah, actually I'm gonna go ahead and roll my eyes at the idea that parametric
polymorphism is on the wrong side of the "diminishing returns of static
typing". Less than ONE percent of Go code would benefit from type-safe
containers?

------
geokon
I think talking about a sweet spot is correct

I've been thinking about the trajectory of C++ language development recently
and the emphasis has definitely been on making generics more and more powerful.
You
watch CppCon talks and see all this super expressive template spaghetti and
see that while it's definitely a better way to write code - the syntax is just
horrifying and hard to "get over"

Just like when "auto" took off and people starting thinking about having
"const by default" \- I'm starting to think that generic by default is the way
to go. The composability of generic code is incredible powerful and needs to
be more accessible

However the other end of the spectrum: dynamic code leaves a lot of
performance on the table and leads to runtime errors

------
CoolGuySteve
When I went from working at Apple to a language implementation group at
another company, my views on Objective-C's duck typing + warnings for classes
being useful and good were pretty heretical. It's nice to see other people
agree with me.

Especially when it comes to GUI programming, I really don't care if a
BlueButton.Click() got called instead of RedButton.Click().

------
ruskimalooski
These graphs really mean nothing. There is no data behind them. I might as
well make a graph that conveys a nondescript correlation between how much an
article bashes static typing & assertion and how high it is on HN.

~~~
gipp
They're just sketches. That's part of the point, and the article says that
directly. The point isn't the exact shape or slope of the curves, but just
their asymptotic behavior and the relationship of "correct features/day" to
the other two. I.e., as long as the two curves have that general shape, then
the "sweet spot" exists _somewhere_ between 0-100%, the exact location of
which depends on language, developer experience, and business priorities. The
exact numbers are irrelevant to the article's point.

~~~
ruskimalooski
But even the asymptotes are an assumption derived from pure thought
experiment.

~~~
dwaltrip
More realistically, it's an educated guess based off the author's personal
experience as well as their understanding of the experiences of other
developers operating under different constraints.

The author makes it clear that the analysis is not perfectly rigorous. There
is a very wide landscape between perfectly rigorous and completely useless.

Do you think the article fails to hint at any of the fundamental dynamics of
how type systems affect software development? How so?

~~~
raquo
I'm not who you're replying to, but for me the charts didn't make sense
either.

For one example, I don't think it's a given that the green line (velocity vs %
type-checked) should have a negative slope. Maybe in some cases, for some
projects or some people, but certainly not universally. At least part of it
would have been positive on almost all projects that I've worked on, and I'm
not doing rocket science.

Then, the combined chart just looks at the amount of bug-free output,
completely ignoring the amount of bug-ridden output. That latter part doesn't
just get discarded, it needs fixing, and bugs that were only discovered in
production are expensive to discover, debug and fix.

This is in addition to pretty much every other top level comment in this
thread, a lot of which bring up important points that are unaccounted for even
conceptually in the charts.

------
k__
I had the same experience, but I also have to say that the static type systems
of some FP-languages feel really light-weight.

So yeah, static typing doesn't buy you much, but in some languages it's at
least cheap.

~~~
hwayne
> So yeah, static typing doesn't buy you much, but in some languages it's at
> least cheap.

I think this is key. The benefit of static types isn't that they provide
safety, it's that they provide _low-cost_ safety. For a large class of
problems, types are cheaper than tests are. For other classes, tests are
cheaper than types. The main downside of nonstatic languages is that you have
to use tests for everything, even that class where types are a better choice.

------
stephengillie
One of my favorite parts of Powershell is optional typing. Variables are a
generic "Object" type by default, which can hold anything from a string to
array to "Amazon.AWS.Model.EC2.Tag" or other custom types.

Or, type can be specified when setting the variable:

[String]$myString = "Hello World!"

This would generate a type error:

[Int]$myString = "Hello World!"

Often, typed and untyped variables will sit together:

[Int]$EmployeeID,[String]$FullName,$Address = $Input -split ","

~~~
GenericsMotors
Indeed! I think one of my favourites has to be:

    
    
        [xml]$someXmlDocument = Get-Content "path\to\file.xml"
    

And you get a deserialized version of the XML text.

Also the fact that you can use types when declaring function arguments,
removing the need to manually test if an object of the desired type was
passed.

Powershell definitely strikes a good balance on type safety for a scripting
language.

------
coding123
I'm converting a Javascript codebase of about 200+ js files to Typescript
today. I am about 5% complete... already found two places where the argument
list was wrong and was being sent into a void. I also see the code that was
making up for the fact that the third argument was being ignored (basically
patching downstream because they thought the feature was broken).

Now this codebase was written with a high degree of quality (it's pretty good
but not perfect), but the lack of compile-time (and of course run-time) checks
has caused waste.

The second phase of my project is to convert all promises to RX Observables :)

~~~
kjaer
If you're just rewriting these Promises because the syntax is too verbose, you
might be interested in checking out async/await as another alternative; I just
rewrote some Promises to that recently, and it's really, really nice. Of
course, if you prefer RX Observables, go right ahead :)

~~~
coding123
Thanks for the note, I am looking into it right now. One area that may grind
my head with async await however is that there is a lot of Promise.all work in
this codebase. Would you still use async/await constructs when you need to do
a lot of fork/join/merge stuff? (sorry for the derail HN)

~~~
kjaer
If you're using Promise.all to run code in parallel, then async await can't
really replace that, as far as I know. But you can still use `await
Promise.all(...)`, which will free you from having callbacks everywhere;
running parallel code will no longer have to look so different from running it
sequentially, which is quite nice.

------
cm2187
The benefit of static typing isn't just reliability. Tooling is another major
argument. Won't appeal to certain hardcore programmers who think that even
notepad has too many features. But it is great for refactoring, finding all
references to a function or a property or navigating through the code at
design time. Basically all the features Visual Studio excels at for .NET
languages.

And I disagree with the barrier to entry argument. Static typing, by enabling
rich tooling, helps a beginner (like it helped me) a lot more by giving live
feedback on your code, telling you immediately where you have a problem and
why, telling you through a drop down what other options are available from
there, etc. Basically makes the language way more self-discoverable than
having to RTFM to figure out what you can do on a class.

~~~
Eridrus
I think dynamic typing proponents get hung up on the auto-complete aspect. The
real benefit is when you find someone writing a property with a common-ish
name to a data structure and you want to know "who the hell uses this", you
can answer that question pretty easily in statically typed languages. In
dynamically typed languages you kind of just grep and hope the name is not too
common.

~~~
gopalv
> In dynamically typed languages you kind of just grep and hope the name is
> not too common.

I spent four days debugging a Python production script because in one
place I had typo'd ".recived=true" on an object and just couldn't understand
why my state machine wouldn't work.

And very quickly, the whole team became fans of __slots__ in Python.

I still write 90% of my useful code in Python, but that one week of debugging
was exhausting, & the typo basically wouldn't have even compiled in a
statically declared language. Even in Python, the error is at runtime, after I
got the __slots__ in place.

~~~
kazinator
TXR Lisp, a dialect I created:

    
    
      $ txr
      This is the TXR Lisp interactive listener of TXR 185.
      Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
      1> (set a.b 3)
      ** warning: (expr-1:1) qref: symbol b isn't the name of a struct slot
      ** warning: (expr-1:1) unbound variable a
      ** (expr-1:1) unbound variable a
      ** during evaluation of form (slotset a 'b 3)
      ** ... an expansion of (set a.b 3)
      ** which is located at expr-1:1
    

Both warnings are static. If we put that into a function body and put that
function into a file, and then _load_ the file, we get the warnings.

The diagnostics after the warnings are then from evaluation.

Those are nothing; TXR Lisp will get better diagnostics over time. I'm just
starting the background work for a compiler.

There is dynamic and then there is crap dynamic.

Don't confuse the two.

There is crap static too. Shall we use C as the strawman example of static?
Hey look, two argument function called with three arguments; and there's a
buffer overrun ...

~~~
lomnakkus
It seems to me that you've just invented a static type checker. (Combined with
a run-time type checker.) Am I mistaken?

I mean, we can argue the semantics of what, exactly "static type checker"
means, but...

~~~
kazinator
Static checking doesn't make a "static language".

A "static language" occurs when we have a model of program execution that
involves erasing all of the type info before run-time. Or most of it. (Some
static languages support OOP, and so stuff some minimal type info into objects
for dispatch.)

Note how above, my expression executes anyway; the checks produce only
warnings. The warning for the lack of a binding for the _a_ variable is
confirmed upon execution; the non-existence of the slot isn't since evaluation
doesn't get that far.

If we retain the type info, we have dynamic typing. There is no limit to how
much checking we can do in a dynamic setting. The checking can be incomplete,
and it can be placed in an advisory role: we are informed about interesting
facts that we can act on if we want, yet we can run the program anyway as-is.

~~~
lomnakkus
This really sounds like "semantics" to me (not PL semantics! :).

For example, these days it's quite possible to ask GHC to defer type errors to
runtime. Does that mean that the GHC dialect of Haskell is dynamically typed?
This is basically a command line switch away, btw.

Retention of type information does not "dynamic typing" make. As a trivial
example, consider C++ RTTI.

You really have just reinvented static (type) checking and a good runtime.
There's no shame in that, but let's not pretend that these are opposing
forces.

~~~
kazinator
C++ RTTI is only for class objects, and only useful when they are manipulated
by pointer or reference. I believe I covered that sort of thing with my
statement, _" Some static languages support OOP, and so stuff some minimal
type info into objects for dispatch."_. It's a gadget which provides an
alternative to the structure of doing everything via virtual functions on a
base class reference.

~~~
lomnakkus
Ok, fair enough, but what about e.g. "-fdefer-type-errors" for GHC?

I still think you're just arguing semantics.

EDIT: Incidentally, the statically typed crowd can even go the "other way",
namely from runtime -> compile time. For example, it's quite possible to
derive a _static_ proof/type from a runtime value in e.g. Idris by pattern
matching as long as you're meticulous about building up the proof.
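
A rough Haskell sketch of that runtime -> compile time direction (GADTs
rather than Idris's full dependent types; "Vec" and "SomeVec" are
illustrative names): pattern matching on a plain runtime list rebuilds it as
a length-indexed vector, so the length becomes visible to the type checker.

    
    
        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
        
        data Nat = Z | S Nat
        
        -- A vector whose length is part of its type.
        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a
        
        -- The length is only known at runtime, so it is hidden existentially...
        data SomeVec a where
          SomeVec :: Vec n a -> SomeVec a
        
        -- ...and the static structure is recovered by meticulous pattern
        -- matching, just as described above.
        fromList :: [a] -> SomeVec a
        fromList []       = SomeVec VNil
        fromList (x : xs) = case fromList xs of
          SomeVec v -> SomeVec (VCons x v)
    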

------
seasoup
I really enjoyed how the analysis shows that different developers can have
different equally valid opinions on this topic. It's where you place your
values and preferences of programming, modified by what you are programming.
The failure state of a cat photo sharing web app likely isn't as dramatic or
important as that of a financial system or driverless car code. Great article.

~~~
continuational
Static typing reduces the time you spend on debugging. Automatically catching
errors in code is not just about reducing errors in the resulting program; it
also greatly reduces the time you spend hunting bugs, especially if you would
otherwise have a poorly designed type system where errors are reported far
from their origin. Null, interface{}, NaN etc. propagate errors and thus give
you a stack trace that is worthless when things finally fail. It's a waste of
time.
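
A toy sketch of that propagation (Haskell here, but NaN behaves the same way
in any IEEE-754 language):

    
    
        -- The bug (dividing by zero for the empty list) produces NaN silently;
        -- the bogus value only surfaces far away, with no trace of its origin.
        average :: [Double] -> Double
        average xs = sum xs / fromIntegral (length xs)
        
        main :: IO ()
        main = print (sum (map average [[1, 2], [], [3]]))  -- prints NaN
    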

~~~
hellofunk
In my experience, the time saved from writing in a statically typed language
where the compiler catches the bugs for you is offset by having to work more
closely with the compiler, typically writing more code (type annotations and
other things), and in general spending that same time on compile-time rather
than run-time bug hunting. Dynamically typed languages typically involve a lot
less code, which is time gained.

That both forms of languages are popular shows that there are benefits in
overall productivity to each; they are just different benefits.

~~~
whyever
The thing is that errors at compile time get reported almost instantly, but
errors at runtime might be reported hours after you started your program if
you are unlucky.

~~~
hellofunk
That's entirely correct, and one of the tradeoffs.

However, in a statically-typed language, you must satisfy the type checker for
everything, which adds development time. In reality, there might be a small
percentage of functions in your code base for which errors (either compile-
time or run-time) would likely crop up, yet you must pay that cost for 100% of
them.

So that's really where the debate comes from.

Dynamically typed languages can get around this problem with generative
testing (in Clojure's case), which allows very fine-tuned aspects of your
system's requirements to be tested automatically before run-time without hand-
writing test cases, and which offers some of the same confidence as a
compiler.

------
btown
Also depends on your problem domain. If you have good test coverage but you're
parsing strings found in the wild, you're going to spend a lot more time
"debugging" your assumptions than AttributeErrors which would be caught by
typing. Bug free code is not always the same as working code.

Disclaimer: Python user scarred by email header RFC violations

------
noncoml
I think there are two kinds of statically typed languages: the ones where
static typing is there to help the compiler (e.g. C), and the ones where it's
there to help the user (e.g. TypeScript).

I think Go, with its lack of algebraic types, is more of the first kind,
helping the compiler, so I wouldn't use it as a good example of static typing.

Haskell, OCaml and Rust would make excellent case studies, but we have nothing
to compare against.

So IMHO the best way to compare static typing vs dynamic typing is by
comparing Typescript against JS. And in my experience the difference when
writing code is huge. It completely eliminates the code-try-fix cycle during
development.

------
thesz
The effort to fix a defect is proportional to the time between the
introduction of the defect and its discovery.

This is a basic intuition behind all good practices, including CI, QA, etc.

Types allow one to discover program defects (even generalized ones, when
using some programming languages) in (almost) the shortest possible amount of
time.

Types also allow one to constrain effects of various kinds (again, use a good
language for this), a constraint that can make code simpler, safer and, in
the end, more performant.

~~~
millstone
Also, retaining dynamic types at runtime enables you to find type errors that
the static type system could not discover, or that were worked around.
Language implementations that discard dynamic types make it harder to find
defects.

~~~
thesz
Algebraic data types allow you to get any amount of dynamism you need.

Have you familiarized yourself with Haskell?
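
For example, a minimal sketch: a single ADT gives you a "dynamic" value type
exactly where you want one, while everything around it stays statically
checked.

    
    
        -- A universal value type, introduced only where dynamism is wanted.
        data Value
          = VInt Int
          | VStr String
          | VList [Value]
          deriving Show
        
        describe :: Value -> String
        describe (VInt n)  = "an int: " ++ show n
        describe (VStr s)  = "a string: " ++ s
        describe (VList v) = "a list of " ++ show (length v) ++ " values"
    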

------
valuearb
The two languages I develop in are Javascript and Swift. Couldn't be more
different in type safety.

I love everything about Swift except the compile times and occasionally
inscrutable compile error messages.

I love the interactivity of Javascript, but despise the lack of types, it's
like I'm sketching out the idea for a program instead of directly defining
what it is. And the lack of types burns me occasionally.

------
avg_programmer
What are the costs of statically typed languages? The author stated "thinking
about the correct types" and "increases compile times", among some other,
weaker (imo) costs. What is wrong with "thinking about the correct types"? You
are thinking about the same things in a dynamic language, right? For example,
say you need to know about things that are "thennable". Whether you are in a
statically typed language or not, you are still checking for the same thing:
does it have the then() method? The tradeoff is in reading vs implementing
code. With a statically typed language, you can easily search for implementers
of the Thennable interface and you are guaranteed to be shown every
implementer. The downside is that you have to write a few more lines of code
to satisfy the static typing. With a dynamically typed language, you have to
find the implementers yourself, but you can just slap a then() method on
anything and it will work. I am biased toward static typing, so I am
interested to hear counterpoints.
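
For concreteness, here is roughly what I mean, sketched in Haskell (the
"Thennable" class and "andThen" method are made-up names for illustration):

    
    
        -- A hypothetical "thennable" interface. The compiler knows every
        -- instance, so finding all implementers is a mechanical search.
        class Thennable f where
          andThen :: f a -> (a -> f b) -> f b
        
        instance Thennable Maybe where
          andThen = (>>=)
        
        instance Thennable [] where
          andThen = (>>=)
    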

~~~
hellofunk
One very simple and significant cost is developer time. It simply takes less
time to write code in a dynamically-typed language. You don't have a compiler
to please, you don't write extra code to massage types, annotate types, etc,
and most dynamically typed languages are pretty elegant (e.g. Clojure), where
you can pack a lot of punch in just a few characters.

So the trade off is: static typing gives you more compile-time certainty, but
at a cost of spending more time developing your code. Dynamic typing gets you
to a working product or prototype typically much much faster, but with added
run-time debugging.

Each has its benefits and costs.

In my experience, there is no doubt that dynamically typed languages are
faster-to-production than statically-typed. This doesn't mean that I don't
admire static typing, though, because most developers appreciate some degree
of purity in their work.

------
_Codemonkeyism
I like for example Refined

[https://github.com/fthomas/refined](https://github.com/fthomas/refined)

not only for the static checking,

    
    
        scala> val i: Int Refined Positive = -5
        <console>:22: error: Predicate failed: (-5 > 0).
                val i: Int Refined Positive = -5
    

but also for the expressive description of a domain model.

------
hwayne
Sometimes I wonder if we're arguing the wrong thing, where we think we're
arguing static vs dynamic typing but what we're _actually_ arguing is static
vs no-static typing. Haskell is static and not dynamic. Ruby is dynamic but
not static. Python, starting with 3.5, is sorta both. C# is definitely both.

All static typing means is that type information exists at compile time. All
dynamic typing means is that type information exists at runtime. You generally
need _at least_ one of the two, and the benefits each gives you are partially
hobbled by the drawbacks of the other, so most dynamic languages choose not to
have static typing. I also feel that dynamic languages don't really lean into
dynamic typing benefits, though, which is why this becomes more "static versus
no static".

One example of leaning in: J allows for some absolutely crazy array
transformations. I don't really see how it could be easily statically-typed
without losing almost all of its benefits.

~~~
brightball
Honestly, I think you've nailed it.

The key is balance. Pure static creates a lot of extra up-front cruft in
exchange for long-term safety. Pure dynamic creates a much faster path to
features at the expense of a lot of long-term confusion.

The reason we have this conversation is because of web applications where
everything is travelling over the wire as a string, consumed by the web server
as a string, converted by whatever language the server is in...into something
that it can use...9/10 times validated to make sure it reflects what we need
and then stuffed into a database.

In the case that you're using a SQL database, a huge number of people are
enforcing types at the database layer and the validation layer. Since so much
is "consume and store" followed by "read and return" the types at that server
layer end up creating a ton of extra work that in many cases shows little to
no benefit.

At the point that you're doing more in the server layer, suddenly it becomes a
lot more useful. At the point you're working on desktop, mobile, embedded,
console, computational and graphics work...static is going to provide more
value.

At the point you're working on web in front of a database, the value is much
more questionable.

This is really one of the reasons I'm such a huge Elixir fan because IMO it
strikes that perfect balance where I live...on the server in front of a
database. You get basic static types with automatic checking via Dialyzer,
and you can make it stricter as necessary.

------
hellofunk
There is one aspect to this debate that is worth pointing out. What about
generative testing, which is possible in both statically and dynamically typed
languages? The article mentions that testing is perhaps more important in a
dynamically typed language since there is less compiler support. But, for
example, Clojure rolled out the very clever Clojure.spec library that allows
you to precisely specify all details relating to function arguments, data
structures, etc., in an even more fine-grained methodology than just types;
you can specify that the second argument to a function must be larger than the
first, or that a function should only return a value between 5 and 10, etc.
These "specs" have the interesting property of being run-time checked, or
compile-time checked in the form of automatic tests, which can generate inputs
based on the specs.

In such a case, the line between these two type environments narrows.

~~~
yawaramin
Clojure.spec is very clever, but it can be exactly duplicated in a statically-
typed language by unit or property testing. It doesn't bring anything to the
table that makes it a superset of static typing.
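
For instance, a QuickCheck-style sketch of the "value between 5 and 10" spec
from the parent comment ("clamp" is a made-up example function):

    
    
        import Test.QuickCheck
        
        -- A made-up function with a value-level contract: output in [5, 10].
        clamp :: Int -> Int
        clamp x = max 5 (min 10 x)
        
        -- Generative testing of that contract, much as clojure.spec would do:
        -- QuickCheck generates inputs from the property's argument types.
        prop_inRange :: Int -> Bool
        prop_inRange x = clamp x >= 5 && clamp x <= 10
        
        main :: IO ()
        main = quickCheck prop_inRange
    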

> In such a case, the line between these two type environments narrows.

Not really. Static types still offer you total proofs of the properties you
encode as types, not just experimental results of tests.

~~~
hellofunk
Generative testing is just one application of Clojure.spec. It does more than
just aid in testing. It doubles as a runtime contract system, a data coercion
system, and some folks are using it for compile-time checks as well (not in
the testing sense, though I haven't read up on how they are doing that).

It is not a proof-like system, but outside of dependent typing, static typing
does not catch value-related bugs, but Clojure.spec can. In a static type
system, how easy would it be to exactly specify and guarantee that a
function's second parameter is of a higher value than its first, or that a
function's output is an integer between 5 and 50, etc? Clojure.spec is just
predicate functions composed together to define the flow of data in a program,
and those compositions can be used in a variety of ways.

~~~
yawaramin
> ... static typing does not catch value-related bugs, but Clojure.spec can.

Can you provide an example?

> In a static type system, how easy would it be to exactly specify and
> guarantee that a function's second parameter is of a higher value than its
> first, or that a function's output is an integer between 5 and 50, etc?

Scala:

    
    
        def foo(param1: Int, param2: Int): Int = {
          require(param2 > param1, "Param2 must > param1")
    
          param2 - param1 ensuring { result =>
            result >= 5 && result <= 50
          }
        }

------
bad_user
Those line charts are totally made up, with arguments pulled out of thin air
to support this line:

> " _Go reaps probably upwards of 90% of the benefits you can get from static
> typing_ "

That _90%_ number is totally made up as well. I don't see evidence that the
author actually worked with Haskell, or Idris, or Agda, these being the three
static languages mentioned. The article is basically hyperbole.

If I am to pull numbers out of my ass, I would say that Go reaps only 10% of
the benefits you get with static typing. This is an educated guess, because:

1\. it gives you no way to turn a type name into a value (i.e. what you get
with type classes or implicit parameters), therefore many abstractions are out
of reach

2\. no generics means you can't abstract over higher order functions without
dropping all notions of type safety

3\. goes without saying that it has no higher kinded types, meaning that
expressing abstractions over M[_] containers is impossible even with code
generation

So there are many abstractions that Go cannot express without losing all type
safety, so developers simply don't express those abstractions, resorting to
copy/pasting and writing the same freaking for-loop over and over again.
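
To make point 3 concrete, here is the kind of one-liner (sketched in Haskell)
that abstracts over the container M[_] itself:

    
    
        -- Written once, this works for lists, Maybe, Either e, IO, trees, ...
        -- Without higher-kinded types you rewrite it per concrete container.
        doubleAll :: (Functor f, Num a) => f a -> f a
        doubleAll = fmap (* 2)
        
        main :: IO ()
        main = do
          print (doubleAll [1, 2, 3])  -- [2,4,6]
          print (doubleAll (Just 21))  -- Just 42
    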

This is a perfect example of the Blub paradox btw. The author cannot imagine
the abstractions that are impossible in Go, therefore he reaches the
conclusion that the instances in which Go code succumbs to interface{} usage
are acceptable.

> " _It requires more upfront investment in thinking about the correct types._
> "

This is in general a myth. In dynamic languages you still think about the
_shape of the data_ all the time, except that you can't write it down, you
don't have a compiler to check it for you, you don't have an IDE to help you,
so you have to load it in your head and keep it there, which is a real PITA.

Of course, in OOP languages with manifest typing (e.g. Java, C#) you don't get
full type inference, which does make you think about type names. But those are
lesser languages, just like Go, and if you want to see what a static type
system can do, then the minimum should be Haskell or OCaml.

> " _It increases compile times and thus the change-compile-test-repeat
> cycle._ "

This is true, but irrelevant.

With a good static language you don't need to test that often. With a good
static type system you get certain guarantees, increasing your confidence in
the process.

With a dynamic language you really, really need to run your code often,
because remember, the shape of the data and the APIs are all in your head,
there's no compiler to help, so you need to validate that what you have in
your head is valid, for each new line of code.

In other words this is an unfair comparison. With a good static language you
really don't need to run the code that often.

> " _It makes for a steeper learning curve._ "

The actual learning is in fact the same; the curve might be steeper, but
that's only because with dynamic languages people end up being superficial
about the way they work, leading to more defects and effort.

In the long run with a dynamic language you have to learn best practices,
patterns, etc. things that you don't necessarily need with a static type
system because you don't have the same potential for shooting yourself in the
foot.

> " _And more often than we like to admit, the error messages a compiler will
> give us will decline in usefulness as the power of a type system increases._
> "

This is absolutely false. The more static guarantees a type system provides,
the more compile-time errors you get, and a compile-time error happens where
the mistake is actually made, whereas a runtime error can happen far away,
like a freaking butterfly effect, sometimes in production instead of crashing
your build. So whenever you have the choice, always choose compile-time
errors.

~~~
Silhouette
The author addresses that point extensively in the second half of the article,
beginning around this part:

 _Now if we are to accept all of this, that opens up a different question: If
we are indeed searching for that sweet spot, how do we explain the vast
differences in strength of type systems that we use in practice? The answer of
course is simple (and I'm sure many of you have already typed it up in an
angry response). The curves I drew above are completely made up. Given how
hard it is to do empirical research in this space and to actually quantify the
measures I used here, it stands to reason that their shape is very much up for
interpretation._

------
iamleppert
It's far more useful to implement validation and type checking via
introspection and interrogation of type, quantity, structure, size, or some
other property at runtime in a dynamic programming language than to
pedantically have to type all your variables. Most interesting types are far
from the basics of different-size numbers, strings, and objects anyway. It's
better to trade a fast and quick runtime type error than a lengthy compile-
time type checking process, because less code needs to be evaluated at run-
time to expose the type error. See the "Worse is better" principle in language
design.

Wouldn't it be great if we can use the computer to figure out what the types
should be by a runtime evaluation of the code and save precious human time for
things only humans can do?

I don't have to think or decorate my speech with types of noun, verb, pronoun,
adjective etc. when I speak, but I'm still able to communicate very
effectively, because your brain is automatically adding the correct type
information based on context that helps you understand what I'm saying, even
with words that have multiple types. Granted, natural language is different
than programming language but there was once a trend to try and make
programming languages more like human language, not less so.

~~~
yawaramin
> It's far more useful to implement validation and type checking via ...
> runtime in a dynamic programming language than to pedantically have to type
> all your variables.

How is that? I'm not seeing the increased utility.

> It's better to trade a fast and quick runtime type error....

What if the runtime type error crashes your app in production and loses your
company money? What if it's something that slipped through your end-to-end
integration testing because certain unlikely conditions never got covered, but
they happened in production?

> ... than a lengthy compile-time type checking process,...

There are several modern compilers which are quite fast: D, OCaml, Java.

> ... because less code needs to be evaluated at run-time to expose the type
> error.

With static type checking, _no code_ needs to be evaluated at runtime to
expose a type error. Does dynamic typechecking offer a reduction over that?

> Wouldn't it be great if we can use the computer to figure out what the types
> should be by a runtime evaluation of the code and save precious human time
> for things only humans can do?

Wouldn't it be great if the computer would figure out the types at compile
time and save us from having to manually input them? Well, the computer can do
that, thanks to type inference. Several popular languages offer full, powerful
type inference.
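
A small sketch of what that looks like in Haskell:

    
    
        -- No annotation required anywhere; GHC infers the most general type,
        --   pairUp :: [a] -> [b] -> [(a, b)]
        pairUp xs ys = zip xs ys
        
        main :: IO ()
        main = print (pairUp "abc" [1 :: Int, 2, 3])  -- [('a',1),('b',2),('c',3)]
    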

------
platz
[https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/](https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/)

Software failures are failures of understanding, and of imagination.

The problem is that programmers are having a hard time keeping up with their
own creations.

Dynamic typing simply doesn't scale.

------
jon49
Languages like F# give a nice sweet spot between static typing and dynamic
typing. F# has Type Providers that "generate" code on the fly as you are
typing. You don't need to specify all the types; it will infer many types for
you. So you almost feel like you are writing in a dynamic language, but it
tells you if you are writing something incorrectly.

I would not consider a language to be modern unless it has Type Providers; I
consider this to be such an essential feature. I believe Idris and F# are the
only languages that have it. People are trying to push TypeScript to add it -
who knows if it will happen.

Many are saying that if you have a dynamic language, you just need to be
disciplined and write many tests. With good statically typed languages like
F#, you can't even write tests for certain business logic, since the way you
write your code makes "impossible states impossible"; see
[https://www.youtube.com/watch?v=IcgmSRJHu_8](https://www.youtube.com/watch?v=IcgmSRJHu_8)

------
hyperpallium

      1. performance dominates (like 80:20)
      2. tooling
      3. doc (becomes crucial on large projects)
      4. correctness
    

Formal correctness doesn't really matter. Anecdotally (since that's really all
we have), I find that in practice very few bugs are caught by the type-checker.

Further, code is usually not typed as accurately as the language allows, i.e.
the degree of type-checking is a function of the code; the language only
provides a maximum. In a sense, every value has a type, even if it's not
formally specified or even considered by the programmer, in the same sense
that every program has a formal specification, even if one is never written
down.

Upfront design is the price. Which is difficult to pay when the requirements
are changing and/or not yet known.

~~~
nv-vn
What language specifically are you applying this to? I.e., which type checker
is catching so few bugs?

------
js8
Like other commenters, I disagree that there are diminishing returns to
static typing itself; rather, there are diminishing returns to proper
engineering in certain cases (i.e. doing something as perfectly as possible).

By adding types (and in the extreme, dependent types), you're allowing the
compiler to prove more things about the code (to check correctness or generate
more optimal code). If you actually need to prove more things, then it's
better to leave that to a compiler rather than a human.

Of course, if you're writing e.g. a web scraping script, you don't need these
guarantees, and then you don't have to care about types. But the better
engineering you want, the more static typing will help, and there are no
diminishing returns.

------
FranOntanaya
It bothers me that types as a representation of hardware constraints are
mixed up with types as a machine-readable subset of validation.

It makes the higher-level types seem more transcendental than they are, and
also seems to put actual validation on a second-rate level. At the end of the
day, if an argument is the right scalar or interface, you'll get the same
result at runtime whether you hinted it -- for one's quality-of-life
improvements -- or checked it with some boilerplate validation. Worst case
scenario, people will forgo encoding known stricter constraints after
generally hinting the expected type.

------
tabtab
I've generally felt that each shines in different areas. Static typing is best
for lower-level infrastructure and shared API's, while dynamic is better for
gluing these all together toward the "top" of the stack, closer to the UI and
biz logic. The problem is that languages tend to be all one or the other so
that we have to make choice. What's needed is a language (or language
interface convention) that can straddle both. A given class or library can be
"locked down" type-wise to various degrees as needed.

------
cleandreams
My 2 cents: dynamic typing works okay for library consumers. For libraries
themselves, though, or platform code, the disadvantages are real. It is harder
to fix and extend code when you don't know who calls it, how they call it, and
what they get in return. Complex code becomes littered with 'black holes'.
That is a big part of why Facebook implemented Hack. I heard a talk by one of
the developers. Even now there are PHP black holes in the Facebook code base
that they can't migrate to Hack.

------
lisper
100% statically-type-checked code != 100% bug-free code. That would require
solving the halting problem. So you have to test everything anyway if you need
high reliability.

~~~
voidmain
This argument is incorrect. The "halting problem" is the problem of
determining if an _arbitrary_ program halts. It is not impossible to prove,
and verify mechanically, that a particular program halts.

The state of the art is not up to proving every desirable property of every
program that we would like to build. But that has nothing much to do with
computability. And some extremely impressive things have been done, like the
seL4 separation kernel, which has static proofs of, among other things,
confidentiality, integrity, and timeliness, and a proof that its binary code
is a correct translation of its source.

~~~
lisper
> It is not impossible to prove, and verify mechanically, that a particular
> program halts.

OK, let's put that to the test. Here is a particular program:

    
    
        let x = 6
        let y = 3
        while true:
          if y>x then halt
          if is_prime(y) and is_prime(x-y) then
            x = x + 2
            y = 3
          else
            y = y + 2
          endif
    

Can you tell me if it halts or not?

> The state of the art is not up to proving every desirable property of every
> program that we would like to build.

Isn't that exactly the same as what I said?

> But that has nothing much to do with computability.

What does it have to do with then?

> some extremely impressive things have been done

Yes, in some very particular cases. But note that even a proof of correctness
is not a guarantee that the code is bug-free.

[http://spinroot.com/spin/Doc/rax.pdf](http://spinroot.com/spin/Doc/rax.pdf)

~~~
voidmain
I think you have missed my point. I am not saying that humans are able to
solve the halting problem! Nor am I saying that static verification is always
better than testing. I am saying that you don't need a halting oracle to
express and verify arbitrary properties in a static type system, because a
static type system can and will reject programs that would not have type
errors dynamically.

If you write this program in a statically checked language:

    
    
        let x : int = 6
        let y : int = 3
        while true:
          if y>x then break
          if is_prime(y) and is_prime(x-y) then
            x = x + 2
            y = 3
          else
            y = y + 2
          endif
        x = "foobar"
    

I can tell you that it will not type check. And for the same reason, if you
write the same program in a language that can express termination and claim
that it terminates, the program will not type check until you have supplied a
proof of (edit: the negation of!) Goldbach's conjecture in a form that the
type system understands.

~~~
lisper
> you don't need a halting oracle to express and verify arbitrary properties
> in a static type system

Replace the word "arbitrary" with "some" and I'll agree with you. There are
some things a static type system will tell you. Some of those things are even
useful things to know. But there are some things a static type system will not
tell you, and cannot tell you, and some of those things are useful things to
know too.

Furthermore, the way static type systems are used in practice, they don't just
tell you things. They will actually refuse to let you run the program unless
it conforms to some preconceived notion of correctness that is built in to the
type system. Personally, that's the part that rubs me the wrong way. It is
sometimes useful to me to run a program even if I know that it has certain
kinds of errors in it.

> it will not type check

I'm pretty sure it would. Why do you think it would not?

~~~
seanwilson
> I'm pretty sure it would. Why do you think it would not?

Languages like Coq require you to prove a function halts before it will
compile. Yes, for an arbitrary function it can be arbitrarily difficult or
impossible to prove termination. In most cases though, termination proofs
aren't that complex (e.g. "it halts because the collection gets smaller each
recursive call").

Besides, your argument basically sounds like "because you can't prove all
functions halt, it's a waste of time proving any functions halt". See the seL4
OS for an impressive example of what formal proofs can do.

~~~
lisper
> Languages like Coq require you to prove a function halts before it will
> compile.

Well, that's incredibly stupid. That means you can't write, for example, a web
server in Coq unless you intentionally introduce undesirable behavior to
satisfy the compiler.

> because you can't prove all functions halt it's a waste of time proving any
> functions halt

No. That's obviously a straw man. Can you please consider the possibility that
I might not be a complete idiot?

My argument is: because the halting problem is undecidable, there are an
infinite number of properties of programs that are also undecidable. So there
are only two possibilities:

1\. None of the infinite undecidable properties of programs are things we will
ever care about or

2\. There are properties of interest that cannot be decided by static typing

Which of those is the case is an empirical question but I submit that #2 is
much more likely to be the case. Therefore, static typing cannot obviate the
need to be prepared for your program to exhibit unexpected behavior at run
time except in the most trivial cases.

~~~
seanwilson
> Well, that's incredibly stupid. That means you can't write, for example, a
> web server in Coq unless you intentionally introduce undesirable behavior to
> satisfy the compiler.

There are ways around it (e.g. proving progress is always going to be made
instead of termination), and there's a web server in Coq: [http://coq-blog.clarus.me/pluto-a-first-concurrent-web-server-in-gallina.html](http://coq-blog.clarus.me/pluto-a-first-concurrent-web-server-in-gallina.html)

> 2\. There are properties of interest that cannot be decided by static typing
>
> Which of those is the case is an empirical question but I submit that #2 is
> much more likely to be the case. Therefore, static typing cannot obviate the
> need to be prepared for your program to exhibit unexpected behavior at run
> time except in the most trivial cases.

Again, look at the seL4 project. It verifies the correctness of an entire OS,
showing that formal verification is powerful, practical and useful. Google for
all the algorithms that have been formally verified with Coq, Isabelle and
other proof assistants.

Why do you think common properties of interest wouldn't be provable? Do you
think mathematicians have this issue (there's not a lot of difference when you
have expressive enough types)? You yourself must have an intuition about why
the properties would be true, so you should be able to write a formal proof of
them, although that can be very challenging currently.

~~~
lisper
> Why do you think common properties of interest wouldn't be provable?

Because proving all common properties of interest is tantamount to proving all
interesting mathematical theorems.

> Do you think mathematicians have this issue

Yes, obviously. If they didn't they wouldn't have jobs.

> it can be very challenging currently

Yes indeed, and that is exactly my point. Humans just keep finding new and
more complicated things to care about. Math doesn't converge.

------
snambi
Any non-trivial program, meaning 100K+ lines of code involving many
developers over 2+ years, should be written in a statically typed language.

~~~
hellofunk
That really means nothing. 100K+ lines of code is an arbitrary number. For
that many lines of C++, a similar Clojure solution to the same problem would
be a small fraction of that. And many widely-used Clojure libraries have been
in production all over the industry for many years.

------
tiuPapa
So the article does praise Go, but how is Rust? Does it strike that sweetish
spot? Is it a language a startup should use?

~~~
djur
Rust's type system is much closer to Haskell's than Go's, and even advocates of
the language will admit that it can sometimes be very difficult to convince
the compiler that your program is valid. Compile speed isn't great either,
although it's been improving. I would say that Rust is pretty much on the
other end of the scale from the author's supposed "sweet spot".

------
ratherbefuddled
I guess the only bit I don't really agree with is this:

> upfront investment in thinking about the correct types

being a cost. Surely you have to do this whether the compiler will check your
work or not, and if you just don't do the thinking you'll end up with bugs?
Isn't this a benefit?

------
zengid
Couldn't these discussions benefit from an inclusion of actual empirical
evidence? Here's a list of some such studies: [http://danluu.com/empirical-pl/](http://danluu.com/empirical-pl/)

------
z3t4
While the made-up graphs might help in understanding his reasoning, I think
it's way too abstract/philosophical. It's like walking into a dark room and
making assumptions and arguments based on your belief about what color the
walls are.

------
magice
[https://dl.acm.org/citation.cfm?id=2635922](https://dl.acm.org/citation.cfm?id=2635922)

Just ONE study, so don't take too much heed. That said, apparently:

* Strongly typed, statically compiled, functional, and managed-memory languages are the least buggy.

* Perl is INVERSELY correlated with bugs. Interestingly, Python is positively correlated with bugs. There goes the theory about how Python code looks like running pseudo-code... Snake (python's, to be more precise) oil?

* Interestingly, unmanaged-memory languages (C/C++) have a high association with bugs across the board, rather than just memory bugs.

* Erlang and Go are more prone to concurrency bugs than Javascript ¯\\_(ツ)_/¯. Lesson: if you ain't gonna do something well, just ban it.

All in all, interesting paper.

------
shalabhc
Question for all static or dynamic typing proponents: do you see your
language/type-system as a great and scalable way to program large distributed
systems in 10 years? 20 years?

------
amelius
Can't we have tools that automatically perform the static typing for us,
perhaps in an interactive way?

(I'm not talking about systems which just infer types automatically).

------
vhiremath4
> “And more often than we like to admit, the error messages a compiler will
> give us will decline in usefulness as the power of a type system increases.”

Can someone explain this?

------
woolvalley
I would like lots of static typing, even more than we have now, but with the
ability to turn it off for faster compile times during some parts of
development.
------
jugg1es
In my experience with growing companies, even business-critical code bases get
rewritten within 3-4 years to account for flexibility that the previous
strongly-typed system just can't handle. A well designed system uses strong
types for the "knowns" but allows changes via dynamic types for the
"unknowns". Those are the systems that last.

------
danharaj
Just a technical point that hints at a significant philosophical idea: The
asymptote cannot reach 100% of program behavior in any finitary way. That
would solve the halting problem. The x-axis should go off to infinity. Also,
it's not a smooth progression. There are huge jumps in expressivity involved
here. Going from Java-style types to Hindley-Milner to full System F are all
massive jumps in expressivity. There are also _incompatible features_ of type
theories. Type theories are a fractal of utility and complexity.

A type system doesn't only describe the behavior of the program you write. It
also informs you of _how_ to write a program that does what you want. That's
why functional programming pairs so well with static typing, and in my opinion
why typed functional languages are gaining more traction than lisp.

How many ways are there to do something in lisp? Pose a feature request to 10
lispers and they'll come back with 11 macros. God knows how those macros
compose together. On the other hand, once you have a good abstraction in ML or
Haskell it's probably adhering to some simple, composable idea which can be
reused again and again. In lisp, it's not so easy.

A static type system that's typing an inexpressive programming construct is
kind of a pain because it just gets in the way of whatever simple thing you're
trying to do. A powerful programming construct without a type system is
difficult to compose because the user will have to understand its dynamics
with no help from the compiler and no logical framework in which to reason
about the construct.

So, a static type system should be molded to fit the power of what it's
typing.

The fact that every Go programmer I talk to has something to say about their
company's boilerplate factory for getting around the lack of generics tells me
something. This is only a matter of taste up to a point. In mathematics there
is a vast space of abstract concepts that could be studied, but very few are.
That's because there's some difficult-to-grasp idea of what is good,
_natural_ mathematics. The same is true in programming: there is a panoply of
programming constructs that could be devised, but only some of them are worth
investigating. Furthermore, for every programming construct you can think of
there's only going to be a relatively small set of _natural_ type systems for
it in the whole space of possible type systems.

Generics are a _natural_ type system for interfaces. The idea that interfaces
can be abstracted over certain constituents is powerful even if your compiler
doesn't support it. If it doesn't, it just means that you have to write your
own automated tools for working with generics. It's not pretty.

~~~
tom_mellior
> The asymptote cannot reach 100% of program behavior in any finitary way.
> That would solve the halting problem.

There are languages that enforce termination. They only accept programs that
can be shown to terminate through syntactic reasoning (e.g., when processing
lists, you only recurse on the tail), or where you can prove termination by
other means.

Coq is like this, as is Isabelle, as is F*, as are others. They also provide
different kinds of escape hatches if you _really_ want non-terminating things,
like processing infinite streams.

This "we can never be sure of anything, because the halting problem" meme is
getting boring. Yes, you cannot write the Collatz function in Coq. No, that is
not a limitation in the real world.

~~~
danharaj
I'm aware of strongly normalizing systems and the escape hatch of coinductive
programming. But when we're talking about the space of _all_ programs, the
fundamental limit of incompleteness is important. How else do we judge the
merit of a type system except by seeing how it fits into the overall space of
computable processes?

There are two ways to see type systems. In the first way you construct terms
along with their types, this is called Church style. In the second way, the
terms exist before their types and you use types to describe their behavior,
this is called Curry style. In particular take System F. In Church style the
terms of System F come with their types. In Curry style we see System F types
as a way to describe the behavior of untyped lambda terms.

I used to think Church style was more important but lately I've been more
partial to Curry style. Programs exist before you type them, type systems tell
you how they behave. They also tell you how to construct programs but this is
subordinate to the more fundamental descriptive capacity.

~~~
tom_mellior
> But when we're talking about the space of all programs, the fundamental
> limit of incompleteness is important.

I agree with this. But I think that usually we are _not_ really talking about
all programs. We are talking about _useful_ programs, and those are usually
terminating. In theory, not always, see first-order theorem provers; but in
practice, we always call a prover with a time limit because nontermination
isn't useful.

> Programs exist before you type them, type systems tell you how they behave.
> They also tell you how to construct programs but this is subordinate to the
> more fundamental descriptive capacity.

That's an interesting point, and I agree in many respects. I think when
programming in a dynamically typed language, I approach things in one style,
and in statically typed languages in another style. But specifically for
termination, I don't think so. I never want to write a nonterminating program;
the termination property for my programs exists (as a requirement) before the
program does.

------
201709User
If I don't have to maintain the thing you can give me any Python, JS or Go you
want!

------
katastic
This site has a strange fascination with hatred of static languages. I really
don't get it. My only guess is that modern colleges teach dynamic languages to
students and so they're more familiar with them. Perhaps their teachers even
stress that static languages are inferior.

To me, it's the right tool for the right job. I have no problem spinning up a
static language for performance and outsourcing the scripting to a dynamic
language like Python, for the best of both worlds in terms of speed and rapid
development.

------
guicho271828
Correct and useless programs are useless. Quite simple.

------
brango
Why my favorite color is red not blue...

------
nwellinghoff
Time and time again I can make a well-written, functioning program in Java or
C# at least twice as fast as using JS and brothers. Sure, it might have more
"lines". Who freaking cares. My team and I square off all the time: "K, you
use node, I will use Java." And the Java dev always wins. It's just so much
faster, cleaner and more mature. It's NO CONTEST.

------
zzzcpan
"I don't think it's particularly controversial, that static typing in general
has advantages"

That's not really true; it's just a belief. Here's an example to illustrate:
the exact same program written in a very high-level and very expressive
language like Perl, instead of Go, is going to have at least 3 times less
code, and since defect rates per line of code are comparable, you would end up
with at least 3 times fewer bugs. Suddenly the reliability argument for static
typing doesn't make any sense. That's because in PL research there is a huge
gap in understanding of how programmers actually think.

~~~
scarmig
Although I'm skeptical about the 1 to 3 ratio, let's run with it.

Given a million line codebase written in Perl vs a three million line codebase
written in Go, which do you think most engineers would prefer?

~~~
3pt14159
Honestly the Ruby or Python one, but I've never seen them because you don't
need a million lines in Ruby or Python to get something productive built.

