
The advantages of static typing, simply stated - ingve
http://pchiusano.github.io/2016-09-15/static-vs-dynamic.html
======
StefanKarpinski
Interestingly, most of these advantages are not specific to static typing, but
derive from having a language that talks about types – even a dynamic one. For
example, most of these advantages apply to Julia as well, a dynamic language
that has type declarations, which:

\- Lets you create typed collections so that if you insert the wrong kind of
value, you get an error immediately, albeit only when code runs, not before;

\- Has abstract types that serve the same function as Haskell's type classes,
and allow you to inherit a huge amount of functionality for a very brief type
definition;

\- Serves as documentation of APIs;

\- Allows performance similar to C/C++.

Another advantage that wasn't listed is that in languages that don't allow
type annotations, you end up writing a lot of boilerplate type-checking code
in libraries to give better error messages.
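To make the typed-collection point concrete, here is a Python sketch (the `TypedList` class is illustrative, not a real library; Julia's `Vector{Int}` gives you this built in). The error fires at the erroneous insertion, not later where the bad value is used:

```python
class TypedList:
    """Hypothetical runtime-checked list, analogous to Julia's Vector{Int}."""

    def __init__(self, elem_type):
        self.elem_type = elem_type
        self._items = []

    def append(self, item):
        # The check happens at the insertion site, so the traceback
        # points at the actual mistake, not at some later use.
        if not isinstance(item, self.elem_type):
            raise TypeError(
                f"expected {self.elem_type.__name__}, got {type(item).__name__}")
        self._items.append(item)

    def __getitem__(self, i):
        return self._items[i]


ints = TypedList(int)
ints.append(1)          # fine
try:
    ints.append("two")  # fails here, at the insertion
except TypeError as e:
    print(e)            # expected int, got str
```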

~~~
tigershark
How can you count throwing an error at runtime instead of at compile time as
an advantage?

~~~
StefanKarpinski
I didn't claim it was. That's one of the tradeoffs you make in exchange for a
simpler, more forgiving programming model (and the ability to do serious work
in a REPL or notebook). However, given that Julia programs can be mostly type
inferred before execution, it _is_ possible to check for this kind of error
before running a program. There have been some projects to do exactly that [1]
[2], and I suspect that once Julia reaches 1.0, we'll focus more effort on
static analysis tools.

The point is that you don't need a static language to get many of the benefits
of types; you just need a language that allows you to express type-based
properties. One way to do that is to have a set of formal rules for deriving
the type of every expression in a program, but that's not actually necessary.

[1]
[https://github.com/astrieanna/TypeCheck.jl](https://github.com/astrieanna/TypeCheck.jl)

[2]
[https://github.com/tonyhffong/Lint.jl](https://github.com/tonyhffong/Lint.jl)

~~~
jblow
Being able to get these errors without running the code is almost all the
benefit.

It enables a whole class of highly effective program manipulations that are
just unavailable to a non-statically-checked language.

~~~
StefanKarpinski
The claimed benefit is getting the error earlier and "at the location the
erroneous value is inserted." That's exactly what happens in a dynamic
language with typed collections – but, as I admitted, not at compile time. Is
it better to get an error at compile time? Sure, but most of the benefit comes
from the error being raised _at_ the erroneous insertion and not later when
the value is used by some completely unrelated code. Hence, my claim of
getting _most_ of the benefit. Now there are certainly other arguments to be
made about "highly effective program manipulations", but that's not the case
being presented by the author here.

~~~
jdmichal
Those other "highly effective program manipulations" include refactoring and
modifications which involve changing parameter and return types. While I
understand what you're saying, to myself it's pretty clear that there's a wide
gap between these two scenarios:

* Changing the collection's type and not knowing until runtime -- when your program crashes -- that it's being used wrong.

* Changing the collection's type and the program refusing to compile until all usage is corrected.

Because of this, I don't agree with the _most_ modifier you're attempting to
apply. The difference to me is _confidence_. In the former, how confident am I
that I didn't miss a use-case on some branch? Considering that the program
will crash or otherwise fail to operate, that seems like something I want to
be pretty confident about.

It's very useful to have something telling you that everything looks OK. That
something can be testing, but why rely on writing good tests when we have
formal proof systems available?

------
hacker_9
It's frustrating that this static vs dynamic 'battle' still goes on. The
longer everyone thinks this is actually a problem, the longer we have to wait
for innovations to happen. Look at the web in 2016: the technology is in
complete disarray. Look at the game industry in 2016, where C++ is thrown
around as the cause of and solution to all of life's problems. It's a language
created 33 years ago with no intention of being used as it is today.

The advantage of static typing is obvious to me; the more my computer
understands of my code, the _less_ work I have to do. I can offload my memory
and instead think about things that are more interesting, not be looking up
Python obscurities on Stackoverflow.

But static typing is not perfect, and there is still more the computer could
be doing if it had more understanding. That is where we should be focusing,
that is where the real problem is.

~~~
sesm
With static typing the computer doesn't "understand" your code. It just
automatically checks the correctness proof you've presented to it. And that
correctness is limited to only those properties of the program that the given
type system supports.

~~~
hacker_9
We can argue about the definition of 'understand' all day, but it's a simple
fact that the computer has _more_ of an understanding of the structure of your
program with a type system, and thus can help out more.

~~~
bb88
I still wouldn't use 'understanding' in that context. It makes it sound as if
a static typing compiler were "magical".

I could say "python is better at understanding what the programmer wants to
do". But I think you'd disagree with me.

The compiler is doing lexical analysis of the program being fed into it.
That's it.

------
panic
It's important to remember that the terms "statically typed" and "dynamically
typed" cover a huge range of different language features. To have a good
discussion about the tradeoffs, it's better to talk about individual features
rather than "statically typed" or "dynamically typed".

For example,

 _Like puzzle pieces with shapes that we can observe fit together, we can
think of types as specifying a grammar for programs that ‘make sense’._

All programming languages have a grammar for programs that make sense (it's
specified by the parser). Algebraic data types give you a grammar for _values_
that make sense. Separately, a type checker assigns types to expressions and
checks that they're consistent. You can have algebraic data types without a
type checker (e.g. Racket's 2htdp/abstraction), and you can have a type
checker without algebraic data types.

You can also have constraints on values that are too complex to be easily
checked at compile time (e.g. clojure.spec) or that can be checked at compile
time but only incompletely (e.g. Erlang's -type and Dialyzer).

Anyway, I don't mean to be critical of the article: it does a good job
covering the high-level tradeoffs. I just think people are too quick to
generalize their experience with particular languages to entire classes of
language features.

------
kbanman
I've recently joined a team writing primarily in Clojure, whose proponents
often tout repl-driven programming as a unique benefit to the language.

In a statically typed language (the stronger the better), aided by a good IDE,
I don't need to be constantly executing my code against data during
development; my editor is constantly validating my code, and when it stops
complaining, my code will work. And months later, when I or someone else uses
that code in another part of the system, we won't need to execute that code
to see how it behaves, as the types themselves provide documentation and
as-you-code feedback.

~~~
tigershark
With F# you have a REPL; it is not solely the prerogative of dynamically typed
languages.

~~~
jdmichal
You could argue that with PowerShell you have an entire .NET REPL. And I have
actually used it that way, particularly when I run into some function that has
bad MSDN documentation.

------
jondubois
I started programming with PHP and JavaScript (both dynamically typed) then I
started writing games with ActionScript 2 (dynamically typed) then I switched
to ActionScript 3 (statically typed), then I got into Java (statically typed)
and later C/C++ (statically typed) - So I spent a lot of time with both.

For the past few years I've been coding almost exclusively in dynamically
typed languages - I did some Python (dynamically typed) but mostly a lot of
JavaScript/Node.js (dynamically typed). I understand all the pros and cons,
but for me personally, I am much more productive with dynamically typed
languages than statically typed ones.

It was a long time ago, but I still remember clearly when I switched from AS2
to AS3 - I was writing games for Flash at the time; I did feel an improvement
and I really liked the additional structure which types brought to my code.
There was a certain satisfaction that came with defining fixed classes and
interfaces and making use of polymorphism and various formal 'design
patterns'. It gave me extra 'confidence' in my code.

In retrospect, after having spent years praising statically-typed languages,
and then later switching back to dynamically typed languages, I think a lot of
the benefits that I felt during my static typing phase came down to one simple
fact:

"Statically typed languages force you to think more before you do things" \- This
was really valuable early in my career when I had a tendency to rush things.
However, now that I more fully appreciate how complex programming is (and how
easy it is to break stuff), I am always very careful (regardless of the
language).

Static typing for me has become a tedious process through which I no longer
derive much value - Though it was really useful at a specific point in my
career.

That said, I think there is some stuff (anything to do with low-level
hardware/systems and optimizations) where statically typed languages cannot be
avoided.

Also I wouldn't say that people who like statically typed languages are
inexperienced - I know some very experienced engineers who are just addicted
to that extra feeling of 'confidence' and structure which statically typed
languages give you.

~~~
jdmichal
Forgive me, because I only have what you have said to work with. It sounds
like this was a lot of smaller projects with smaller teams? If so, static
typing increases value as project size and head count increases.

The more pieces of the project you didn't write, or whose inner workings you
know little about, the more dangerous modifying code becomes.
It's very useful to have _something_ telling you that everything looks OK.
That something can be testing, but why rely on writing good tests when we have
formal proof systems available?

------
contextfree
"This is sort of a subtle point: when programming in a static language, you
always have a choice about what information you encode in the types, and how
you encode things. ... In a static language, you do always have the option of
building a less typeful API, where less is enforced by the types, but it is
often tempting to spend more time encoding things statically (and then proving
things to the typechecker) than would be saved by avoidance of potential
future bugs. With experience, you develop a good sense for what is worth
tracking statically and what to keep dynamic, but newcomers to static
languages can make bad tradeoffs here, which in turn contributes to needless
complexity in the language’s library ecosystem. "

This is a very important point that I've rarely seen talked about explicitly
though I think all programmers develop a tacit sense of it.

~~~
jdmichal
It's an important point, but it can also be fairly well generalized to a lot
of the "softer" points of program design. For instance, when to introduce an
abstraction layer, and where the boundaries of the abstraction are. You could
find-and-replace instances of "static typing" with "abstraction layer" into
the quoted paragraph and it still makes perfect sense.

------
jondubois
I think the concept of refactoring in dynamically typed vs statically typed
languages is a double-edged sword comparison.

On one hand, refactoring code written in a statically typed language is less
error-prone - But on the other hand, such refactorings tend to affect more
code than those of dynamically-typed code.

If your business requirements change often and refactorings are common, it can
be a pain to keep having to rethink your code structure.

Dynamically-typed code is often easier to extend and modify. I find that with
statically typed code, if you start messing with a small part of your code,
sometimes you have to rethink your entire class hierarchy. With dynamic
languages, your code can handle quite a few changes before it gets to a point
where you need to rethink the overall structure.

~~~
talideon
However if you're dealing with a language with type inference, you can get a
good chunk of the ease benefits of dynamic languages.

------
xapata
> Is it safe to call f(x)?

The author asserts that static typing allows the compiler to answer this
question, but this only allows the compiler to spot type errors in advance.
There are many other kinds of errors that are completely invisible to the
compiler.

In a dynamically typed language, if I don't spot the error from reading the
code, I must wait until runtime/testing to discover the error. This is also
true for a statically typed language for every kind of error except type
errors. Personally, type errors haven't been the kind of errors that haunt my
dreams. I guess that's why I'm not enthusiastic about static typing.

~~~
NickPollard
You are right - in a tautological way - that type systems only catch type
errors. However, in modern languages (including Haskell and Scala, as well as
newer, more experimental languages like Idris), those type errors can be
extremely powerful.

Many people assume that 'types' are simply primitives like Int and String, and
that a type checker just makes sure you don't pass an Int to a function
expecting String. However, it is possible to express far more powerful
statements about your data using a good type system.

For example, you can express the idea of non-emptiness of a container, as
mentioned in the article. Then you know that, say, taking the max element of a
non-empty container is guaranteed to give you an element, whereas with a
possibly-empty container you might not have any element at all, causing a
null, or exception, or at least requiring an Optional type.
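As a sketch of the non-emptiness idea in Python (the class name is assumed; in Haskell this is `Data.List.NonEmpty`, enforced at compile time rather than at construction):

```python
class NonEmpty:
    """A container that cannot be empty: the signature itself demands
    at least one element."""

    def __init__(self, first, *rest):
        self._items = [first, *rest]

    def max(self):
        # Total: max() on the underlying list can never raise ValueError,
        # because the list is guaranteed non-empty by construction.
        return max(self._items)


xs = NonEmpty(3, 1, 4)
print(xs.max())  # 4
```

In Python, calling `NonEmpty()` with no arguments is still a runtime TypeError; a static checker would flag that call before the program runs.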

You can express safety properties such as a sanitized string vs. unsanitized.
You can have a Sanitized type that can only be created by calling a sanitize
function - which carefully escapes/handles any invalid characters - and then
functions that might, say, pass a value into an SQL instruction can be typed
to only take Sanitized strings. Now the representation in memory of Strings
and Sanitized strings is identical, but by using different types and a certain
set of allowed functions on those types, you can encode the invariant that a
string cannot be inserted into an SQL query until it has been sanitized. Now
your type checker can catch SQL injection vulnerabilities for you. How's that
for a type error?
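A minimal Python rendering of this pattern uses `typing.NewType` (the names are mine, and the escaping below is a placeholder, not a real sanitizer):

```python
from typing import NewType

# Sanitized has the same runtime representation as str, but a static
# checker such as mypy treats it as a distinct type that only
# sanitize() can produce.
Sanitized = NewType("Sanitized", str)


def sanitize(raw: str) -> Sanitized:
    # Placeholder escaping only -- do not use this for real SQL.
    return Sanitized(raw.replace("'", "''"))


def run_query(fragment: Sanitized) -> str:
    # Typed to accept only sanitized strings.
    return f"SELECT * FROM users WHERE name = '{fragment}'"


print(run_query(sanitize("O'Brien")))  # fine
# run_query("O'Brien")                 # mypy error: expected Sanitized, got str
```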

~~~
xapata
You make a good point, so I'll answer in parts.

First, when most people talk about static typing, they're talking about the
near-useless version -- just types like Int and String. I think we agree
there, so I won't mention it further.

Second, a dynamically typed language like Python has more typing information
than some folks first assume. Python's AttributeError is quite similar to a
TypeError. In fact, with old-style classes (v2.1 and earlier), many errors
that are now TypeErrors were AttributeErrors. Calling len() on an
inappropriate object would raise "AttributeError: no __len__". In many cases
where folks talk about wanting a static type system, they really just want
interfaces.
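For example, in current Python the interface violation surfaces as a TypeError, but the mechanism is still duck typing on `__len__`:

```python
class Playlist:
    def __init__(self, songs):
        self.songs = songs

    def __len__(self):
        # Implementing __len__ is the whole "interface": len() now works.
        return len(self.songs)


print(len(Playlist(["a", "b"])))  # 2

try:
    len(3.14)  # float has no __len__
except TypeError as e:
    print(e)   # object of type 'float' has no len()
```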

The Sanitized string example is a good counter-point because the interface
needs to be near-identical to a regular string. I'm not certain a more complex
memory representation (caused by defining a different class) would cause
noticeable inefficiency. We're probably not doing vectorized operations on
strings.

This brings me to my third point, that Python 3 has a similar split between
two types: bytes and str. The memory representation is slightly different,
bytes vs unicode, but the interfaces are nearly identical. Two differences
would be decode vs encode and that getting an element from bytes (annoyingly)
gives an int. The distinction between the two types is enforced mostly inside
builtin functions, implemented in C. This was a big deal, causing backwards
incompatibility and many flamewars; we're still resolving it, though I think
it's clear to most people now that Python 3 is the future.
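The differences mentioned are easy to see directly:

```python
# bytes and str have near-identical interfaces; the visible seams are
# encode/decode and the element type you get back when indexing.
s = "héllo"
b = s.encode("utf-8")

print(type(b[0]))              # <class 'int'>: indexing bytes gives ints
print(type(s[0]))              # <class 'str'>: indexing str gives 1-char strings
print(b.decode("utf-8") == s)  # True: a round trip through the boundary
```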

Is it possible that the Python 2/3 split could have been avoided if we had a
static type system? Perhaps, if we had multiple dispatch, the function
signatures could have remained the same, avoiding backwards incompatibility...
I'm just speculating here. My guess is no, getting rigorous about unicode
would cause incompatibility regardless of the type system. I'll get back to
the main topic now.

> Now your type checker can catch SQL injection vulnerabilities for you.

This sounds useful, but a good interface solves the problem just as well. I'm
a Pythonista (if you haven't noticed), so my example is PEP 249 that specifies
a DB API for all database wrapper implementers to follow. It states that it's
the wrapper dev's responsibility to implement a sanitizing string
interpolation for the cursor's execute method.

My conclusion is that designing a good interface is important whether you have
dynamic or static typing. Static typing errs on the side of safety, dynamic
typing errs on the side of flexibility. Both can mimic the other. Arguing that
one is better is like saying linear regression is better/worse than k-nearest-
neighbors.

~~~
the_af
> _First, when most people talk about static typing, they're talking about
> the near-useless version -- just types like Int and String. I think we agree
> there, so I won't mention it further._

I don't agree. Who is "most people"? Certainly not PL designers and not most
of what I've seen here in HN. More importantly, it's also not what the article
under discussion is saying, either.

> _Static typing errs on the side of safety, dynamic typing errs on the side
> of flexibility. Both can mimic the other. Arguing that one is better is like
> saying linear regression is better/worse than k-nearest-neighbors._

In my experience, this isn't true. Modern statically typed languages have all
the convenience of dynamically typed ones, such as REPLs and elegance, plus
the safety of early warnings _and_ the guidance that static types give you
while writing your code (if you've ever written code like this, you'll know
the feeling of working with building blocks that "fit" with each other). So
you can have your cake and eat it, too.

Also in my experience, not having experience with these languages is what
leads some people to think their type systems can only state trivial things
such as "this is a String". They can do more. They can say things such as
"this expression/function doesn't write to disk as a hidden side effect",
which is useful!

~~~
xapata
> you can have your cake and eat it, too.

Like a dynamic language with optional type hints? As I said, both techniques
can mimic each other, with the corresponding tradeoffs. As you use more
generics in a statically typed language, you're sacrificing safety. As you use
more type hints in a dynamically typed language, you're increasing syntax
clutter and decreasing flexibility.

~~~
wz1000
> As you use more generics in a statically typed language, you're sacrificing
> safety

Actually, in languages like Haskell, the more generic your type, the more
"safe" you can expect it to be.

As an example, consider a function that gives you the first element of the
tuple you pass to it.

The most generic type of this function is

    fst :: (a,b) -> a

However, it can also have the type

    fst1 :: (Int, Int) -> Int

Now, you can be sure of the behaviour of fst immediately by looking at its
type, but that doesn't hold for fst1.

Pretty much the only definition of fst that the compiler will accept is

    fst (x,y) = x

However, the compiler will accept all the following definitions of fst1

    fst1 (x,y) = x+y
    fst1 (x,y) = x*y
    fst1 (x,y) = 2^x
    fst1 (x,y) = 7
    ...

------
gaius
Why are we even still having this debate in 2016? The question is not if
strong types are better but how to migrate existing codebases to those
languages. I'm an OCaml guy myself but Swift looks pretty nice...

~~~
crdoconnor
Partly because people still get mixed up between strong typing (something
without any clear definition) and static typing...

~~~
sesm
I'm not aware of any universally accepted definition of "strong typing".

~~~
goatlover
No implicit conversions?

------
TheAceOfHearts
One of my problems with static types in a lot of languages is that they're
usually very limited in what they allow you to express.

For example: a function that takes an integer between 1 and 100. That's a
straightforward constraint! Elixir is an example of a language that lets you
express some of those kinds of constraints, by using guard clauses. Along with
its powerful pattern matching, you end up with very nice code.

What are other languages that are known for these kinds of constraints?

~~~
hepta
Even though that may be factually correct, it doesn't seem to be related to
static typing. Haskell has guards but they are slightly different.

~~~
TheAceOfHearts
I figured it was relevant, since it relates to types as a whole.

Consider the following: I have a "rating" type, which is a whole number from 0
to 5. If you try rendering a view with a number outside of that range, that's
a bug! Which means you probably made a mistake somewhere.

I want software to help me catch bugs. If something can't be confirmed with a
linter or compiler, getting a good error message during runtime is also fine.
Once I've established that I expect some value, I don't want to write extra
checks to confirm that my expectations are met.

~~~
hepta
It relates to typing as a method of encoding invariants, yes. I was trying to
say that it is unrelated to which "type" of typing. That is, I can perfectly
imagine a statically typed language where you could encode such restrictions
as easily as you do with Elixir and you would get the same result, a runtime
error.

Regarding your rating example, you could encode that restriction in a type (or
class in OO) and have a guarantee that once you have an instance of that type
you no longer need to check for its validity. Rendering a Rating would never
fail at runtime; you'd get a type error first.
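In Python terms, the rating example might look like this (class and message are illustrative): the check runs once at construction, so rendering code never needs to re-validate.

```python
class Rating:
    """A whole number from 0 to 5, validated once at construction."""

    def __init__(self, value):
        if not (isinstance(value, int) and 0 <= value <= 5):
            raise ValueError(
                f"rating must be a whole number from 0 to 5, got {value!r}")
        self.value = value


def render(rating):
    # No range check needed: any Rating instance is known to be valid.
    return "*" * rating.value + "." * (5 - rating.value)


print(render(Rating(3)))  # ***..
try:
    Rating(7)             # the bug is caught at its source
except ValueError as e:
    print(e)
```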

------
flyx86
> Some tasks, especially around generic programming, can be very easily
> expressed in a dynamic language, but require more machinery in a static
> language.

I think this is not generally true and not a point against static typing, but
rather against statically typed languages with poor support for generics. Java
stands out as the poorest implementation of generics I have ever seen.

> For instance, a generic serialization library can be written in a dynamic
> language, without anything fancy, but providing the same thing in a static
> language requires more machinery, and is sometimes more complicated to use.

As a counterexample, have a look at NimYAML (my work):

[http://flyx.github.io/NimYAML/](http://flyx.github.io/NimYAML/)

The examples there show how easy it is to provide the user with a generic
interface for serialization in a statically typed language. The implementation
does not differ much from what you'd do in a dynamic language: Provide a pair
of serialization/deserialization handlers for each of [simple types (string,
int, float, enums), array/sequence types, tuple/object/struct types, dict/map
types, pointer/reference types].
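A rough Python analogue of that handler scheme (the function is a sketch; in Nim the dispatch happens at compile time via generics, here it happens at runtime):

```python
def serialize(value):
    """Dispatch to one handler per category of type, as described above."""
    if isinstance(value, (str, int, float, bool)):
        return value                                   # simple types
    if isinstance(value, dict):                        # dict/map types
        return {str(k): serialize(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):               # array/sequence types
        return [serialize(v) for v in value]
    raise TypeError(f"no serialization handler for {type(value).__name__}")


print(serialize({"name": "NimYAML", "tags": ("yaml", "nim"), "stars": 42}))
```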

~~~
jdmichal
Type-erasure was a crime against the Java community, perpetrated to make the
JVM writers' job easier. Backwards compatibility was a poor excuse, as I can
think of at least two ways to mitigate that without type-erasure:

* Introduce new generic types in a new namespace. .NET did this with `System.Collections` and `System.Collections.Generic`.

* Default unspecified generics to `Object`, including usage of generic types in bytecode tagged with pre-1.5 versions.

------
rer
> the programmer ends up with less to specify

I don't see how this can be true. Wouldn't there be less to specify if the
programmer didn't have to specify types at all?

~~~
dllthomas
Sometimes behavior follows from the types. Combined with inference you
sometimes wind up writing substantially less code (because it's basically
being generated for you by a prolog program).

------
sesm
My personal favorite blog post on this topic:
[http://blogs.perl.org/users/ovid/2010/08/what-to-know-
before...](http://blogs.perl.org/users/ovid/2010/08/what-to-know-before-
debating-type-systems.html)

------
thedonkeycometh
tldr; knows haskell

------
susan_hall
My disagreement starts with the beginning of this:

"A large class of errors are caught, earlier in the development process,
closer to the location where they are introduced."

I would re-state this as:

"A large class of errors are introduced, which otherwise would not exist."

Consider dealing with JSON in Java. Every element, however deeply nested,
needs to be cast, and miscasting leads to endless errors, elsewhere in the
code. Given JSON whose structure changes (because you draw from an API which
leaves out fields if they don't have data for that field) your only option is
to cast to Object, and then you have to guess your way forward, figuring out
what the Object might be.

Consider the Salesforce clone of Java, Apex, which I have had to work in this
month.

The if() statements here are the same ones that I would have to write in Ruby
or Python or PHP, but meanwhile I've had to do a bunch of other, useless work:

    public Object deserializeJson(String sandi_data) {
        System.debug(sandi_data);
        Object objResponse = JSON.deserializeUntyped(sandi_data);

        if (objResponse instanceof Map<String, Object>) {
            Map<String, Object> mapResponse = (Map<String, Object>)objResponse;
            List<Object> dataList = (List<Object>)mapResponse.get('data');
            if (dataList == null) {
                String err = 'The Sandi API field for data was null';
                System.debug(err);
                ApexPages.Message msgErr = new ApexPages.Message(ApexPages.Severity.ERROR, err);
                ApexPages.addmessage(msgErr);
                return null;
            } else if (dataList.isEmpty()) {
                String err = 'The Sandi API field for data was empty';
                System.debug(err);
                ApexPages.Message msgErr = new ApexPages.Message(ApexPages.Severity.ERROR, err);
                ApexPages.addmessage(msgErr);
                return null;
            } else {
                System.debug('dataList:');
                System.debug(dataList);
                return dataList;
            }
        }
        return sandi_data;
    }

And then, downstream of this:

    List<Object> dataList = (List<Object>)deserializeJson(sandi_data);

    for (Integer i = 0; i < dataList.size(); i++) {
        Map<String, Object> dataMap = (Map<String, Object>)dataList[i];
        System.debug('dataMap:');
        System.debug(dataMap);
        String response = fetchCompany(dataMap);
        SearchResult__c profile = saveProfileResult(response);
        cr.add(profile);
    }

I'm leaving out the code that is downstream of this function, but it is full
of more of the same: guessing at fields, guessing at how they should be cast,
using if() to guard against null or empty. Tons of unnecessary bloat. Lots of
easy errors to make.

Again, some of the if() statements need to be made in Ruby or Python or PHP,
but the rest of it is just pure bloat. Verbose, unneeded and unhelpful.

In a dynamic language I could simply work with a deeply nested data structure
of maps and lists, and I'd handle the casting at the very end of the process.
In a dynamic language, I could treat everything as a string till the very end,
and then cast to integers or dates or floats or strings as needed. In a
dynamic language, I could write the code faster, with fewer errors, and with
less code.

Static typing does not live up to its promises.

[ Edit to add ]

We have no control over the API that we draw from. We are drawing from the API
of a different company. I wish they didn't use JSON. If they have to use JSON,
I wish they at least enforced a consistent schema. But they don't. And that is
why static type checking fails: because the real world is chaotic, and when
you have to interact with that real world, you are often forced to do so
dynamically, because of the mistakes that other companies have made. The real
world is dynamic.

The idea that you can know an external API perfectly is a fantasy. The real
world is messy. The real world does not always conform to a strict schema.

The notion that An External API Is Reliable is as stupid as the notion The
Network Is Reliable:

[https://blog.fogcreek.com/eight-fallacies-of-distributed-
com...](https://blog.fogcreek.com/eight-fallacies-of-distributed-computing-
tech-talk/)

[[ Further edit to add ]]

the_af wrote:

"Using dynamic typing will just hide the problems under the rug, and they will
explode in your face later on. Static typing just made those problems
explicit."

What I wrote was:

"In a dynamic language I could simply work with a deeply nested data structure
of maps and lists, and I'd handle the casting at the very end of the process"

I'll simplify this: there are 3 times when we can enforce a schema:

1.) when the API call returns with a string

2.) on every line, scattered through dozens of functions

3.) at the end, when I have the data that I want

In my original comment, I advocated for #3. Here are the reasons I don't like
the first 2 options:

#1 - the external API is bloated, so writing a schema for the whole thing
would be difficult to justify in terms of business. We only need a tiny slice
of the data.

#2 - having casting discovery information scattered through dozens of
functions makes the code brittle and refactoring difficult.

With Ruby or Python or PHP or any dynamic language I have the option of #3:
grab the data, cast everything as a string, grab the tiny sliver of data I
actually need, and then enforce the schema on that tiny sliver. This is the
data that I can cast to integers, floats, dates, etc -- whatever is actually
needed.
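Option #3 in Python might look like this (the field names are made up for illustration):

```python
import json

raw = '{"data": [{"company": "Acme", "founded": "1999", "extra": null}]}'

# Deserialize untyped: just nested maps and lists, no casting yet.
payload = json.loads(raw)

# Grab only the sliver of data we actually need.
first = payload.get("data", [{}])[0]

# Enforce the schema once, at the very end of the process.
company = str(first["company"])
founded = int(first["founded"])

print(company, founded)  # Acme 1999
```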

In static-type languages such as Java, I'm forced to go with either #1 or #2,
and they are both bad options.

About this, from tigershark:

"it is only the usage of an awful JSON library in a not so nice language"

Bad JSON is part of the real world. If your static-type language cannot handle
bad JSON, then it cannot handle the real world. That is my point: static-type
checking is too academic, too pure, for the real world.

As to "not so nice language", you are engaging in the No True Scotsman
fallacy, which goes like this: no True statically typed language would be this
bad! But following the No True Scotsman illogic, the rest of your unstated
assumptions amount to: It's only the statically typed languages that most
programmers actually use that are this bad! But somewhere there is a
statically-typed language of such unbelievable purity, it overcomes all of
these problems!

~~~
sidlls
Have you considered the possibility instead that JSON is not an appropriate
serialization protocol or storage mechanism for the problem to be solved? Or
that it isn't being used effectively/correctly (that is, it isn't being parsed
correctly, e.g. with a library or some other mechanism that makes checking
underlying types less cumbersome)? I noted in a comment downstream from here
that plenty of C and C++ JSON libraries offer very easy parsing (e.g.
"json_get_int(element, key)") that fails early. I almost don't even think
about JSON twice if I have to use it as a data format in a C++ program I'm
working on. It's just another library to link against and use almost
effortlessly. Doesn't Java, or this Salesforce implementation of it, have
something similar?
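In Python, such a fail-early typed getter is a few lines (the helper name mirrors the C-style `json_get_int` mentioned above and is my own invention):

```python
def get_typed(mapping, key, expected_type):
    """Fetch mapping[key] and fail immediately if the type is wrong."""
    value = mapping[key]  # KeyError right here if the field is missing
    if not isinstance(value, expected_type):
        raise TypeError(f"{key!r}: expected {expected_type.__name__}, "
                        f"got {type(value).__name__}")
    return value


doc = {"count": 3, "name": "widget"}
print(get_typed(doc, "count", int))  # 3
try:
    get_typed(doc, "name", int)      # fails at the access, not downstream
except TypeError as e:
    print(e)
```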

Furthermore every dynamic language I've used has had tooling spring up, e.g.
in the form of linters, that includes at least some basic checking for type
mismatch where possible. I'm not so sure static typing leads to problems. It
isn't problem free--no programming language or paradigm is--but I do think the
lack of it leads to some problems that are easily avoided without too much
extra overhead.

~~~
susan_hall
We have no control over the API that we draw from. We are drawing from the API
of a different company. I wish they didn't use JSON. If they have to use JSON,
I wish they at least enforced a consistent schema. But they don't. And that is
why static type checking fails: because the real world is chaotic, and when
you have to interact with that real world, you are often forced to do so
dynamically, because of the mistakes that other companies have made. The real
world is dynamic.

~~~
sidlls
The real world being dynamic has almost literally nothing to do with this
issue. The concepts are completely unrelated.

Although I'd agree that I find development with dynamically typed languages a
bit more chaotic than static or strongly typed ones. And not necessarily in a
fun, good way, either.

