
Learning Elm has gotten me confused about static vs. dynamic typing - mrundberget
http://rundis.github.io/blog/2016/type_confused.html
======
kazinator
> _It’s hard to describe the experience of doing a major refactoring, having a
> really helpful and friendly compiler guide you along step by step and when
> finally everything compiles it just works._

A truly large refactoring is one that you want to try long before getting
everything to compile.

"Hmm, I have 75 operations involving data structure A. I want it to be B.
Let's try it with five operations; I know that such and such features of the
program only use those. Let's build the inconsistent program and try it out
with the B structure to the extent that it is possible."

The dynamic world lets us have inconsistent code that partially works, rather
than forcing us to follow up on all the places reached by the ripple effect of
a change. Maybe the change turns out to be a bad idea in the end; then it was
a waste of effort.

Psychologically, having to deal with a big ripple effect is discouraging; it
creates an "activation potential barrier" which opposes change.

~~~
tikhonj
This is why Haskell recently got a flag (`-fdefer-type-errors`) to turn type
errors into warnings: now you can make a change, get a bunch of type errors
(as warnings!), and fix them one at a time, loading and testing each part as
you change it. You don't have to update everything just to play around with
your module.

This isn't dynamic typing though: if you try to use an expression that did
not typecheck, you get a runtime error immediately. It's the moral
equivalent of automatically replacing the broken expression with an
`undefined` exception.

~~~
talles
That's very, very nice. Is this exclusive to Haskell?

~~~
jerf
Well, Haskell got it from Agda [1], but Agda is even farther out in Haskell's
direction than Haskell itself is relative to mainstream programming
languages, and lies beyond the "currently practical" language horizon.

[1]:
[https://wiki.haskell.org/GHC/Typed_holes](https://wiki.haskell.org/GHC/Typed_holes)

~~~
dllthomas
Typed holes are quite a different thing from deferred type errors. Does Agda
actually have something like the latter? (Results seem negative from the tiny
bit of searching I've just done, but that's awfully low confidence...)

~~~
jerf
Whoops, you're right! I was too hasty.

------
gelisam
> Does this mean I’m a static typing zealot now ?

That part made me chuckle, because it made me realize that a typing zealot is
exactly what I've become! I used to be a huge fan of Ruby and used it for
everything; now, 10 years later, I'm a huge fan of Haskell instead. I give
talks about how JavaScript is horribly error-prone and how Elm is so much
better because not only is it statically typed, it's also purely functional,
and yadi-yadi-ya.

The transition occurred so smoothly that I didn't notice it. Beware! You might
not be a zealot now, but if you continue on this path, you might
unintentionally become one too :)

~~~
cutler
Whatever Scala devotees may believe, I think FP and OOP are irreconcilable
opposites, which means that if you really get into FP you're going to hate
doing what the industry demands of you in your day job, i.e., OOP. Maybe FP
will one day become the dominant paradigm, but until then treat it like
alcohol: too much and you cease to function in the "real" world. I exposed
myself to Rich Hickey's Sermons From The Hammock a few years ago and Clojure
just blew my mind. Since then I get a churning feeling in my stomach when I
have to deal with OOP, i.e., Ruby, PHP, Python. The effects are irreversible,
so beware: immutable data mixed with pure functions is a potent drug.

~~~
acjohnson55
FP and OOP are totally not irreconcilable. Most modern languages offer both,
including the most highly used dynamic languages. JavaScript really cracked
this nut with its function expressions. Scala's main contribution here is
applying static typing to that mixture in a largely coherent way.
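As a minimal sketch of the blend described above (the Account type and its
functions are my own invention, not from any of the languages mentioned):
an ordinary class holds the data, while pure, first-class functions do the
work, FP-style.

```python
from dataclasses import dataclass
from functools import reduce

# An ordinary "OOP" value type...
@dataclass(frozen=True)  # frozen makes instances immutable, FP-style
class Account:
    owner: str
    balance: int

# ...manipulated with pure, first-class functions.
def deposit(account, amount):
    # Returns a new Account instead of mutating the old one.
    return Account(account.owner, account.balance + amount)

accounts = [Account("ada", 100), Account("bob", 50)]
total = reduce(lambda acc, a: acc + a.balance, accounts, 0)
richer = [deposit(a, 10) for a in accounts]
```

Nothing stops the two styles from coexisting in one program; the class gives
you a named, typed bundle of data, and the functions stay referentially
transparent.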

~~~
cutler
As soon as you create a class you're binding data and methods inside a black
box. It's the very antithesis of FP. Of course you can freeze and fiddle with
your class to make it "more functional" but it doesn't wash because in FP
functions are supposed to be "free", ie. unboxed.

~~~
knucklesandwich
I wouldn't necessarily agree with this. Many functional programmers actually
prefer some of the advantages of black boxes. Even methods as a calling
interface vs. plain functions is more of a stylistic difference (when you
aren't factoring in subtype polymorphism). OO languages tend to follow the
convention of not keeping effectful methods referentially transparent (by
returning effectful values), but there's nothing inherent to the idea of OO
that says this must be the case. Scala could just as easily have something
like Haskell's IORefs instead of vars and make use of an IO monad to achieve
a similar effect.

I'd argue the bigger difference between the two styles is in their mechanisms
for polymorphism, but even functional programmers in different camps probably
don't agree on why they prefer not to have subtype polymorphism.

Your opinions about encapsulation probably have more to do with your
preferences wrt typing discipline. Functional programmers who prefer static
types often view black boxes as an asset. In fact black boxes form the basis
of the free theorems, which can provide some nice assurances about the
behavior of code that has few assumptions about the data it receives:
[https://bartoszmilewski.com/2014/09/22/parametricity-money-for-nothing-and-theorems-for-free/](https://bartoszmilewski.com/2014/09/22/parametricity-money-for-nothing-and-theorems-for-free/)

Similarly, data without exposed constructors can be used to enforce strong
invariants about code using techniques like smart constructors:
[https://wiki.haskell.org/Smart_constructors](https://wiki.haskell.org/Smart_constructors)
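To make the smart-constructor idea concrete outside Haskell, here's a rough
Python analogue (the resistor example mirrors the one on that wiki page; the
class itself is my sketch): validation happens once, at construction, so any
value that exists afterwards is guaranteed to satisfy the invariant.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resistor:
    bands: int

    def __post_init__(self):
        # Enforce the invariant once, at the door, instead of
        # re-checking it everywhere the value is used.
        if not 4 <= self.bands <= 8:
            raise ValueError("resistor bands must be between 4 and 8")

ok = Resistor(bands=6)    # fine
try:
    Resistor(bands=12)    # invariant violation, rejected at construction
except ValueError:
    pass
```

Haskell enforces this by not exporting the real constructor; Python can only
approximate it by convention, but the payoff is the same: downstream code
never has to re-validate.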

Black-boxed data also conveys advantages for library maintainers. It makes it
much easier to change implementation details, because you're only required to
preserve the semantics of the functions operating over the data rather than
its exact innards. Not doing this for the String data type in Haskell is
arguably the largest reason why it won't be phased out in favor of the
superior Text type:
[http://dev.stephendiehl.com/hask/#string](http://dev.stephendiehl.com/hask/#string)

------
Programmatic
> It’s hard to describe the experience of doing a major refactoring, having a
> really helpful and friendly compiler guide you along step by step and when
> finally everything compiles it just works.

This is my experience in a nutshell. My "home language" has been Python since
high school, after I tried to learn C/C++, and it persisted through learning
Java in college. It was easy to use and get started with, and it didn't
require a ton of extraneous typing in Vim or looking up class types and
return values every time you tried to use something.

For simple projects as a beginner dynamic/implicit typing is wonderful because
you need fewer references. Having gone the way of the IDE, however, having
good autocomplete, safe refactoring, error flagging before
compilation/runtime, and an easy reference for your libraries has made me
incredibly efficient in explicitly typed languages. I'm not sure how much
longer I'll be using Python as my home language...

~~~
mordocai
To be fair, at least good autocomplete + easy reference is quite possible in
an IDE with dynamic languages.

I have it for Common Lisp in Emacs, for example (it requires running a live
REPL and having the editor connect to it).

Pre-compilation errors are harder, and I personally don't feel safe
refactoring in any language unless there are good automated tests. Static
typing makes it better, but nowhere near foolproof.

~~~
Programmatic
It is, but the IDE has a much harder time deciding what type of object I'm
working with inside a function, vs. being able to specify the object's class
in the function arguments. You can get that with docstrings, but it's not as
straightforward as just saying "def frobozinator(FooBar thing):"

~~~
pekk
Either it's your code, in which case why don't you know what type of object
you are passing, or it's a documented part of a framework or something.

With polymorphism and inheritance in the mainstream languages, the method
signature still doesn't exactly tell you what you are going to get, so you
have all of the pain of micromanaging type annotations and you still don't
really get an invariant.

Also, Python has function annotations if your purpose is to document the
signature without using docstrings. I'm unsure why you don't mention these
because they are exactly analogous to what you say is straightforward.
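For reference, a function annotation spelling of the hypothetical
`frobozinator`/`FooBar` example from upthread might look like this (the
`frob` method is my own addition to make the sketch runnable):

```python
class FooBar:
    def frob(self):
        return "frobbed"

# The annotations state the intended parameter and return types inline,
# much like the "def frobozinator(FooBar thing)" syntax wished for above.
def frobozinator(thing: FooBar) -> str:
    # CPython does not enforce annotations at runtime; they exist for
    # documentation, IDEs, and external type checkers.
    return thing.frob()

print(frobozinator.__annotations__)  # annotations are introspectable
```

That introspectability is what lets IDEs and tools offer the autocomplete
and error flagging discussed earlier in the thread.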

~~~
jdmichal
> With polymorphism and inheritance in the mainstream languages, the method
> signature still doesn't exactly tell you what you are going to get, so you
> have all of the pain of micromanaging type annotations and you still don't
> really get an invariant.

This is exactly what is addressed by the Liskov substitution principle [0] --
the "L" in SOLID [1]. If your subtype does not have the same semantics as the
type it extends, then it fails this criterion: it violates the (implicit)
contract of the type it is extending.

[0]
[https://en.wikipedia.org/wiki/Liskov_substitution_principle](https://en.wikipedia.org/wiki/Liskov_substitution_principle)

[1] [https://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29](https://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29)
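The classic illustration of a Liskov violation is the square/rectangle pair;
here is a small Python sketch of it (my example, not from either article):

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def set_width(self, w):
        # Keeps the square square, but silently changes the height too,
        # breaking Rectangle's implicit contract that width and height
        # vary independently.
        self.w = self.h = w

def stretch(r: Rectangle):
    r.set_width(10)  # a caller coded against Rectangle expects h untouched
    return r.area()

assert stretch(Rectangle(2, 3)) == 30   # 10 * 3, as the contract promises
assert stretch(Square(2, 2)) == 100     # 10 * 10: substitution changed behavior
```

The method signature type-checks either way; it's the semantics that differ,
which is exactly why the signature alone "doesn't tell you what you are going
to get."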

------
pron
> I’m convinced that functional programming vs imperative programming is a
> much more important concern...

Except that you can be functional _and_ imperative, like Clojure, Erlang and
even OCaml. Those are functional-imperative languages (you can even be _pure_
and imperative yet not functional, like Esterel and maybe Céu).

