
The End of Dynamic Languages - elbenshira
http://elbenshira.com/blog/the-end-of-dynamic-languages/
======
azakai
> This is my bet: the age of dynamic languages is over.

I doubt it.

Yes, all the points the author makes are valid benefits of static typing. But
the benefits of dynamic languages still exist. There are reasons people like
both types of languages.

However, I think a more relevant point was made in the article: that
statically-typed languages are looking more and more dynamic. To that we can
add that dynamic languages are adding more 'static' features, like optional
types. Those are signs of convergence, of both major classes of languages
learning from each other and improving.

But I still don't think we'll end up in the middle with "static, but feels
totally dynamic". We'll still have both types of languages around.
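Python's gradual typing is one concrete example of that convergence. A minimal sketch (the `greet` function is made up for illustration):

```python
from typing import Optional

# Type hints are optional metadata in Python: the interpreter ignores them
# at runtime, but a static checker such as mypy can use them to flag
# errors before the program ever runs.
def greet(name: Optional[str] = None) -> str:
    if name is None:
        return "Hello, stranger"
    return "Hello, " + name

print(greet())       # works exactly as it would without the annotations
print(greet("Ada"))
```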

~~~
elbenshira
Yes, I may have failed in getting it across, but a big point is this: I don't
think dynamic languages will suddenly die out. Instead, strongly-typed
languages will become more convincing. We are seeing better tools, and they
are becoming more ergonomic.

Elm's latest blog post said it really well:

> Compilers should be assistants, not adversaries. A compiler should not just
> detect bugs, it should then help you understand why there is a bug.

[http://elm-lang.org/blog/compilers-as-assistants](http://elm-lang.org/blog/compilers-as-assistants)

~~~
lispm
Please call it 'dynamically-typed' languages. You don't say 'strong language'.
You say 'strongly-typed language'.

------
emidln
The author seems to have missed the Ring Spec [1]. It provides a complete
breakdown of what is in a ring response and a ring request for the core spec.

The middleware (at least those that ship with ring) are all documented in
docstrings explaining the keys they add or work on. This is available in the
REPL via `clojure.repl/doc` or by inspecting `(:doc (meta foo))`. It's
available in your editor if your editor has clojure support (I only use vim
and emacs, and both of these can search/display docs on clojure code). There
are also nicely formatted API docs at [2].

[1] - [https://github.com/ring-clojure/ring/blob/master/SPEC](https://github.com/ring-clojure/ring/blob/master/SPEC)

[2] - [http://ring-clojure.github.io/ring/index.html](http://ring-clojure.github.io/ring/index.html)

That said, it would be nice if more of clojure was documented using an
executable schema or type system, ala Schema or core.typed. The author would
have done well to pick less well-documented libraries though.
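For comparison, the same kind of docstring introspection works in any dynamic language's REPL. A sketch in Python (the `wrap_params` middleware here is invented, loosely modeled on a ring-style docstring):

```python
import inspect

# Hypothetical middleware function, made up to illustrate that docstrings
# describing added/consumed keys are machine-readable metadata.
def wrap_params(handler):
    """Middleware that parses urlencoded parameters from the request.

    Adds the :params key to the request map."""
    return handler

# The rough equivalents of clojure.repl/doc and (:doc (meta foo)):
print(inspect.getdoc(wrap_params))
print(wrap_params.__doc__)
```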

~~~
elbenshira
Yes, I did miss the ring spec. But as a beginner to ring, I'm not sure how I
was supposed to know that such a document exists.

For the most part, Ring is well-documented. But, as you noted, human-typed doc
strings go only so far. I applaud the writers of ring for their discipline,
but surely we can do better than depending on human discipline.

And yes, I picked "less well-documented libraries." I didn't pick them because
they were poorly documented; I picked them because I'm using them!

~~~
emidln
I suspect you might have encountered the SPEC if you opened the README of the
ring project or visited its github page (which displays the README by
default). It's mentioned and linked in that README:

From [https://github.com/ring-clojure/ring](https://github.com/ring-clojure/ring):

    Ring is a Clojure web applications library inspired by Python's WSGI and
    Ruby's Rack. By abstracting the details of HTTP into a simple, unified
    API, Ring allows web applications to be constructed of modular components
    that can be shared among a variety of applications, web servers, and web
    frameworks.

    The SPEC file at the root of this distribution provides a complete
    description of the Ring interface.

------
ZLeviathan
Totally agreed. With dynamic languages, everything becomes a guessing game:
the IDE is guessing, and the programmer is guessing, "what's inside this
variable?"

~~~
lispm
Strange. I'm just asking the Lisp system:

    
    
        CL-USER 1 > (class-of *standard-output*)
        #<STANDARD-CLASS EDITOR::RUBBER-STREAM 40E035A2E3>
    

Oh, the class of the value of

    
    
        *standard-output*
    

is EDITOR::RUBBER-STREAM... From there I can find the source, find the
applicable methods, the slots, the superclasses...
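The same interrogation works in other dynamic languages too. A Python sketch of the idea:

```python
import sys

# Runtime introspection in a dynamic language: ask the live object directly.
stream = sys.stdout
print(type(stream))          # the class of the value
print(type(stream).__mro__)  # its superclasses, in method resolution order
print([name for name in dir(stream) if name.startswith("write")])
```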

------
ngrilly
I agree with the author. Even in a "statically typed" language like Go, which
has a simple type system compared to languages like Scala or Haskell, the
benefits of static typing are very real and enable a lot of extremely useful
tools (code completion, refactoring, linting, whole-program analysis, etc.).

~~~
emidln
Code completion, refactoring aids, linting, and whole-program analysis also
exist for dynamic languages. Take a look at emacs with anaconda-mode for
python or cider for clojure. IntelliJ, in the forms of Cursive and PyCharm (or
the Python IDEA plugin), also supports all of these. Even vim can do this
stuff (python-mode, fireplace.vim). I don't use Ruby very often, but I suspect
these tools exist there too.

~~~
jskulski
    def some_function(some_obj): ...

How does any IDE or editor know what some_function would be passed without a
severe amount of inspection?

I might be wrong on my definitions, but almost by definition, there's just not
enough information before runtime in dynamic systems to infer enough to do any
real refactoring or smart completion.

I use PyCharm every day, and I like it, but its refactor menu is limited, can
fail, and pales in comparison to my experience with Java in IntelliJ or even a
well-hinted PHP app in PhpStorm.
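The difference a hint makes can be sketched in a few lines (the `Invoice` class here is invented for illustration):

```python
# Hypothetical class, made up to illustrate the point.
class Invoice:
    def total(self) -> float:
        return 42.0

# Without a hint, a tool must guess what some_obj is before runtime:
def untyped_handler(some_obj):
    return some_obj.total()   # .total() could be anything, or a typo

# With a hint, an editor can complete .total() and flag misspellings
# statically, because the parameter's type is declared:
def typed_handler(some_obj: Invoice) -> float:
    return some_obj.total()

print(typed_handler(Invoice()))
```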

~~~
crashedsnow
I agree. I haven't yet encountered an IDE that has any hope of doing anything
more than enumerating the defs in a module or the functions in an object.
JavaScript is one of the worst because there is no canonical "include"
semantic, so you really have no idea what's going on. The corollary seems to
be that many of these languages are fast to develop in (and run), so excessive
console.logs appear to be the "solution".

------
rbanffy
A completely contrary case can be made: that static typing is doomed because
hardware is getting more and more powerful, JIT compilation and tracing are
getting better, and dynamic typing is convenient for the programmer, while
static typing is mostly convenient for the compiler and the processor.

It's nonsense, both ways. Both static and dynamic type systems have their uses
and, unless we substantially change the way we use computers (which is
plausible) I don't see either side wiping out the other.

~~~
jskulski
I think the main point is that it's very convenient for the IDE and the
developer, not for performance. You know what things are before runtime. You
can explore and discover APIs much more easily.

Most dynamic languages I've used in my career (javascript, python, php) have
either metadata for type hints, a hinted superset of the language, or new
language support for typing.

------
elbenshira
A note on Typed Clojure: Ambrose is doing excellent work with it, and I'm
excited to see it develop.

But I wish there were a bigger push from the top, from Rich himself, on
"first-classing" typed Clojure: aim for 100% annotation coverage of the most
popular libraries. Start with core (maybe it already is? I haven't kept up)
and ring, then move outwards from there.

Library writers will be more motivated to add annotations if the big libraries
are doing it.

~~~
oberstein
I doubt Rich will ever push for it, but stranger things have happened. Types
get in the way of data. When you ask a request "What's in you?" in Clojure,
you can just look at the data and find out. In Java, you can ctrl+space in
your IDE and look for public methods that by convention start with 'get', and
then do the same for the return values of those methods, but that's a pretty
impoverished way to get at the data. Try printing it out and you're likely to
just get a memory address. Great.

So what do types buy you? Sometimes performance, but if that were always true
then all static languages would be about as fast as C. The other thing is this
"certainty" mentioned, and that certainty is just that you didn't make a typo
or spelling error. Woop-dee-doo: those are among the rarest errors dynamic
language users run into, especially when you change your workflow to take
advantage of the nature of dynamic languages. A more valuable certainty is
that this piece of data you have isn't going to change underneath you. You
have to go all the way to Haskell (I'd argue Shen) to get real benefits of
typing beyond performance and typo-protection, and most programmers are
unwilling to do that, for very good reasons.

~~~
nrinaudo
I fear I must be misunderstanding your points. Are you really stating that, in
order to know what you can do with data, you just need to inspect it at
_runtime_? Or that types only protect you from typos?

Types allow the compiler to do a lot of validation before your code is even
executed, removing whole classes of bugs before they even have a chance to
manifest themselves. You thought that value was an int and divided it by two?
It was a string. I'm glad I didn't have to wait until runtime to find out
about it, or to write tests to make sure that _every single code path_ to that
division results in the value being, in fact, an int.
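A toy illustration of that int-vs-string case (names invented; assumes a checker like mypy):

```python
# With annotations, a static checker rejects halve("12") before execution;
# without them the failure only shows up at runtime.
def halve(n: int) -> float:
    return n / 2

value = "12"  # looked like an int, was actually a string

try:
    halve(value)  # a checker would have flagged this line statically
except TypeError:
    print("discovered at runtime instead")

print(halve(int(value)))  # 6.0
```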

Types allow developers to trust whatever data they receive without feeling the
need to protect against hostile, or less experienced, application programmers
passing incorrect data. They allow you to know, with absolute certainty, that
what you wrote can only be executed the way you meant it to be executed
(whether that's correct or not is another question altogether). A function in
a dynamic language is never complete: you never know what data, or shape of
data, you will receive. Types buy you that certainty.

Code efficiency is not the point - developer efficiency is. When you can trust
your data, you can focus on what your code needs to do - that's usually
complicated enough without adding the complexity of not being able to trust _a
single value_.

The main argument against static languages is that, in their struggle to be
sound, they end up not being complete - and it's absolutely true: there are
perfectly legal programs that you can't write with a static language, but that
a dynamic one would run without batting an eyelash. Duck-typing comes to mind,
I'm sure there are other examples.

Another argument is that type systems get in the way of what you want to do.
There are a few cases where it's true - see the point just above. But in my
entirely anecdotal experience, the vast majority of times someone complains
that the compiler won't let him do what he wants, it's because _what he wants
to do is not correct_. The complaint is not that the compiler is too
restrictive; it's that your bugs are shoved in your face much, much more
frequently and quickly than they would be if you had to wait for runtime. And
that's a _good_ thing.

~~~
oberstein
"Runtime", or "REPL" time, or "doc time" are all ways you can just "look at
it". Do you think knowing "x" is type "Foo" tells you anything about what you
can do with "x", other than use it where you can use "Foo"s? You need to
figure out what you can do with "Foo" by reading the docs around "Foo". Yeah,
I know a priori I can call a method on Foo that returns an int and pass that
to other int-taking methods, but is that what I want to do? Which method
returning an int do I want to call? You need to look at what "Foo" is to
determine this. Rich's example is HttpServletRequest in
[https://www.youtube.com/watch?v=VSdnJDO-xdg](https://www.youtube.com/watch?v=VSdnJDO-xdg).
I'd recommend watching that and ignoring me. A lot of the issues he raises, as
contextfree points out, are around OOP rather than static typing in general,
so take these as my views and not necessarily his...

If "Foo" is just dynamic data, perhaps in a map, composed of other data, and
it's values all the way down, any of those three points in time are easy ways
to look at "Foo", and can generally be faster than reading a JavaDoc page or
ctrl+spacing to figure out what you have access to, what it all means, and
what you want to do with the data and code associated with "Foo". Since you're
going to be looking at the thing you want anyway, I find it very uncommon in
practice that I accidentally divide a string by 2. Static typing would let me
know before I run the thing that I was an idiot (maybe; see the next
paragraph), but I'll find out almost as quickly when I verify that the code I
just worked on (either in the REPL, which is very nice, but even from the
starting state) actually does what I want. You do that regardless of static or
dynamic typing, right? You don't just write code and never verify it does what
you want, trusting that "it compiled" is good enough? Anyway, when I run it,
I'll immediately hit a runtime error (which can be handled in Lisp in ways far
more flexible than exceptions in other languages) and realize my stupidity. So
I don't find that particular situation or class of bugs very compelling.

Nor does the presence of declared types alone guarantee my safety: I need to
know about any conversion rules, too, that may let a String be converted to an
int for division purposes, or whether there is operator overloading that could
be in play. Conversion rules and operator overloading are interesting
regardless of the static/dynamic typing situation, because too many or too few
can lead to annoyances and bugs. Java lets you concatenate ints into strings
with "+"; Python doesn't, but Python lets you duplicate strings with "*".
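Both of those Python claims are easy to check in a REPL:

```python
# Python refuses implicit int-to-string concatenation with +...
try:
    result = "count: " + 3
except TypeError:
    print("no implicit conversion of int to str")

# ...but overloads * to repeat strings:
print("ab" * 3)  # ababab
```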

What I have run into in both static and dynamic languages is division-by-zero
errors, which, unless you go all the way to something like Shen that supports
dependent typing, your type system with "int" declarations will not help you
with. I've run into errors (in both static and dynamic languages) parsing
string contents to extract bits of data. I've run into errors around the
timings of async operations. I've run into null pointer exceptions (though
much less frequently in dynamic languages; maybe once the Option type is in
wider use I can rely on that too, but its issue is poisoning the AST). I've
run into logic errors where I thought the system should be in state C but
instead it transitioned to state E. The bugs that take the longest to diagnose
and fix are simply due to not understanding the problem well enough and having
coded an incorrect routine, so I have to figure out whether it's a logic
error, an off-by-one error, or simply a case of using the wrong function
because it had a suggestive name and I didn't read through the doc all the
way. Rich made an interesting point in one of his talks that all bugs have
passed both your type checker (if you have one) and your test suite. My point
is that I'm not convinced static typing, as I'm given it in the languages I
work in, is worth it for any reason beyond performance, because I don't see
any other improvements, like making me more productive or having me write
fewer bugs. I'm not opposed to types and schemas themselves in general, just
static typing. I'd rather see pushes towards full dependent typing and systems
like TLA+ (which has seen more industry usage than Haskell) and Coq than
things like core.typed in Clojure.

The last point I'd raise is that I agree with you that developer efficiency is
more important than code efficiency. But this has been one of the mainstay
arguments from the pro-dynamic-typing camp for many decades, with static
typing advocates conceding that, yes, the developer will be more productive in
a dynamic language, but the performance is terrible for anything serious. If
you want to argue that static typing was actually more efficient all along
from a developer perspective, you have a lot of work to do. I'm not
particularly interested in that argument (at least today), but the point still
needs to be raised.

------
lispm
I don't like it when people don't understand the difference between dynamic
languages and so-called dynamic typing.

For example, a dynamic feature of Java is its class loaders, which make it
possible to load classes into a running Java program. This way a program can
be updated and extended at runtime.

Other dynamic features are 'late binding', runtime evaluation, code as data,
and introspection/reflection, up to a full-blown meta-object protocol, which
enables things like modifying inheritance, slot allocation, instance creation,
method dispatch, and so on.

One of the features of Erlang for example is that it enables application
updates, while providing zero downtime. The demand for that in the telco
business is far from 'end'ing.
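A tiny Python sketch of that kind of dynamism (the `Service` class is invented): code loaded into a running program is picked up by existing instances, which is the essence of a hot update.

```python
class Service:
    def handle(self):
        return "v1"

instance = Service()
print(instance.handle())  # v1

# "Load new code" into the running program by rebinding the method;
# a class loader or meta-object protocol enables the same kind of move.
def handle_v2(self):
    return "v2"

Service.handle = handle_v2

# Method lookup goes through the class, so the live instance sees v2:
print(instance.handle())  # v2
```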

What actually 'dynamic typing' is and if it is seeing its end is a totally
different question.

------
likeclockwork
I think Hiccup is way better than separate HTML templates, and much less noisy.

~~~
emidln
Hiccup is different from HTML templates. Hiccup is trivially composable
because it's a library based around native data structures. If you want
something your designers don't have to think about, Enlive (which uses HTML as
templates and provides a jQuery-esque transform system) and Selmer (like
Jinja2 or Handlebars, mixing HTML and a template language) both exist and are
well-tested.

------
realharo
I agree. Dynamic languages had their appeal when the only mainstream
alternatives were Java and C++.

But in today's world, with verbosity-eliminating type inference, more
expressive type systems, and tools that can make use of all the extra
information to greatly increase productivity, it's really hard to think of any
meaningful benefit dynamic languages can provide (that is related to their
choice to have dynamic typing, not other parts of the language).

------
jack9
> that we disdain verbosity

Dynamic languages show there's a tradeoff. Repeating yourself is often useful
when maintenance is expected and code is cheap to produce, and you avoid
having to worry about strict typing.

