
Idioms of Dynamic Languages - wcrichton
http://willcrichton.net/notes/idioms-of-dynamic-languages/
======
glv
The point about partial programs is a particularly strong one, especially when
it comes to test-driven development. Being able to write a test for a method
or function before that method or function even exists may seem like a small
thing, or even pointless ... but that flow and rhythm of "write the test, make
it pass, refactor" is so strong and natural (once you've grown accustomed to
it) that having to break that flow for brand new methods feels quite
disruptive.

~~~
Jweb_Guru
It's pretty easy to add typed holes to a statically typed language in many
cases, which will simply cause the program to panic if they end up being
instantiated. I'm not really sure what issue the author takes with inserting
panics / bottom until you're ready to actually deal with the problematic cases
-- the only way to make it more ergonomic would be to make it the default
behavior, which pretty much means you're not getting any benefits out of type
checking.
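
For concreteness, here's a minimal sketch of that approach in Rust, using the built-in `todo!()` macro as the "hole" (the `parse_config` function and its behavior are invented for illustration). The program type-checks as a whole, and the hole only panics if it is actually reached at runtime:

```rust
// A function whose body is only partially written: the program still
// type-checks, and the hole only panics if that path is actually taken.
fn parse_config(raw: &str) -> Vec<String> {
    if raw.is_empty() {
        return Vec::new(); // the case we've handled so far
    }
    todo!("deal with non-empty input") // typed hole: panics if reached
}

fn main() {
    // Only the handled case is exercised, so the hole never fires.
    println!("{:?}", parse_config(""));
}
```

`todo!()` has type `!` (never), so it unifies with any expected return type, which is exactly what makes it usable as a placeholder anywhere in a partially written program.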

------
MageSlayer
My main reason for using _proper_ dynamic languages (or better, dynamic
language VMs: Lisps, APLs, etc.) is the REPL/interactive debugger (stack
restarts, walking the stack freely, changing/fixing functions while
debugging).

The overall idea is keeping state across changes. That's of utmost importance
for doing complex things.

Conversely: restarting/recompiling to fix a small error in the middle of a
heavy calculation? No, thanks.

BTW, the Rust guys promised a REPL by the end of 2018 :)

------
sitkack
I'd specifically ask the Racket or Erlang folks. Racket because of academic
rigor and Erlang because of serious production usage. That said, every large
Erlang codebase gets the Dialyzer treatment [1] at some point.

The two systems are converging; it is inevitable.

[1] http://erlang.org/doc/man/dialyzer.html

------
doomrobo

        // By contrast, Rust (and I believe Haskell?) require exhaustive matching:
        let MyValue::String(ref s) = map[&200]; // Invalid!
    

In Rust terminology, this is because a `let` binding requires an irrefutable
pattern. Haskell actually doesn't enforce this at compile time. Consider the
program

    
    
        data Foo = A Int | B Int
    
        x :: Foo
        x = A 10
    
        -- Do an invalid pattern match
        y :: Int
        y = let (B i) = x in i
    
        main :: IO ()
        main = do
          print y
    

This compiles without warning and gives the error at runtime:

    
    
        foo: foo.hs:8:9-17: Irrefutable pattern failed for pattern B i
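
For contrast, here's a rough Rust equivalent of the same program (the enum mirrors the Haskell `Foo`; the `try_b` helper is invented for illustration). The direct translation of `let (B i) = x` is rejected at compile time, and `if let` is how you opt into the partial match explicitly:

```rust
#[derive(Debug, PartialEq)]
enum Foo {
    A(i32),
    B(i32),
}

// A plain `let` with a refutable pattern does not compile in Rust:
//     let Foo::B(i) = x;  // error[E0005]: refutable pattern in local binding
// `if let` makes the partiality explicit instead:
fn try_b(x: &Foo) -> Option<i32> {
    if let Foo::B(i) = x { Some(*i) } else { None }
}

fn main() {
    println!("{:?}", try_b(&Foo::A(10))); // the case that crashes in Haskell
    println!("{:?}", try_b(&Foo::B(10)));
}
```

So where GHC happily compiles the partial `let` and crashes at runtime, Rust forces the programmer to handle (or explicitly acknowledge) the missing case up front.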

------
yogthos
I've worked with many kinds of statically typed languages, including Java,
Haskell, and Scala for nearly a decade. I ultimately found that static typing
introduces a lot of mental overhead in practice, and limits the way you're
able to express yourself. Many dynamic patterns such as Ring middleware
(https://github.com/ring-clojure/ring/wiki/Middleware-Patterns) become
difficult in static languages. I've been working with Clojure for about 8
years now, and I don't miss types in the slightest. If I did, I would've gone
back to a typed language a long time ago.
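
For readers unfamiliar with the pattern: a Ring handler is just a function from request to response, and a middleware is a function from handler to handler. A very rough sketch of that shape in a static language (all types and names invented; real Ring handlers take and return Clojure maps, not strings):

```rust
// Stand-in types for Ring's request/response maps.
type Request = String;
type Response = String;
type Handler = Box<dyn Fn(&Request) -> Response>;

// A middleware is just a function from handler to handler.
fn wrap_greet(inner: Handler) -> Handler {
    Box::new(move |req: &Request| format!("hello, {}", inner(req)))
}

fn wrap_shout(inner: Handler) -> Handler {
    Box::new(move |req: &Request| inner(req).to_uppercase())
}

fn main() {
    let base: Handler = Box::new(|req: &Request| req.clone());
    // Compose by wrapping, as in Ring: outer middleware runs last.
    let app = wrap_shout(wrap_greet(base));
    println!("{}", app(&"world".to_string())); // HELLO, WORLD
}
```

The boxing and explicit type aliases hint at the commenter's point: the pattern is expressible, but it carries noticeably more ceremony than the plain function composition it is in Clojure.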

Dynamic typing tends to be problematic in imperative/OO languages. One problem
is that the data is mutable, and you pass things around by reference. Even if
you knew the shape of the data originally, there's no way to tell whether it's
been changed elsewhere via side effects. The other problem is that OO
encourages proliferation of types in your code. Keeping track of that quickly
gets out of hand.

What I find to be of highest importance is the ability to reason about parts
of the application in isolation, and types don't provide much help in that
regard. When you have shared mutable state, it becomes impossible to track it
in your head as application size grows. Knowing the types of the data does not
reduce the complexity of understanding how different parts of the application
affect its overall state.

My experience is that immutability plays a far bigger role than types in
addressing this problem. Immutability as the default makes it natural to
structure applications using independent components. This indirectly helps
with the problem of tracking types in large applications as well. You don't
need to track types across your entire application, and you're able to do
local reasoning within the scope of each component. Meanwhile, you make bigger
components by composing smaller ones together, and you only need to know the
types at the level of composition which is the public API for the components.

REPL driven development
(http://blog.jayfields.com/2014/01/repl-driven-development.html) also plays a
big role in the workflow. Any code I write, I
evaluate in the REPL straight from the editor. The REPL has the full
application state, so I have access to things like database connections,
queues, etc. I can even connect to the REPL in production. Say I'm writing a
function to get some data from the database: I'll write the code, then run it
to see exactly the shape of the data I have. Then I might write a function to
transform it, and so on. At each step I know exactly what my data is and what
my code is doing.

Where I typically care about having a formalism is at component boundaries.
Clojure's Spec provides a much better way to do that than types, mainly
because it focuses on ensuring semantic correctness. For example, consider a
sort
function. The types can tell me that I passed in a collection of a particular
type and I got a collection of the same type back. However, what I really want
to know is that the collection contains the same elements, and that they're in
order. This is difficult to express using most type systems out there, while
trivial to do using Spec.
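
The sort contract described here (same elements, in order) can be written down directly as a runtime check. A sketch of that check in Rust rather than Spec (the function name is invented):

```rust
// Check the semantic contract of a sort: the output is ordered and is a
// permutation of the input (same elements, same multiplicities).
fn is_valid_sort(input: &[i32], output: &[i32]) -> bool {
    let ordered = output.windows(2).all(|w| w[0] <= w[1]);
    // Compare multisets by sorting copies of both sides.
    let mut a = input.to_vec();
    let mut b = output.to_vec();
    a.sort();
    b.sort();
    ordered && a == b
}

fn main() {
    println!("{}", is_valid_sort(&[3, 1, 2], &[1, 2, 3])); // valid sort
    println!("{}", is_valid_sort(&[3, 1, 2], &[1, 2, 2])); // wrong elements
    println!("{}", is_valid_sort(&[3, 1, 2], &[3, 1, 2])); // not ordered
}
```

This is the kind of property Spec checks at runtime (and can use to generate tests); expressing it statically, in the type of the sort function itself, requires a dependently typed language.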

