
The sky is not falling – about the supposed end of dynamic languages - dgellow
http://yogthos.net/posts/2015-11-28-TheSkyIsNotFalling.html
======
sanxiyn
All the examples in the post are Haskell. I think static typing is good, but
Haskell is bad. Let me quote Ben Lippmeier's words from his PhD thesis:

"I am thinking of a data structure for managing collections of objects. It
provides O(1) insert and update operations. It has native hardware support on
all modern platforms. It has a long history of use. It's proven, and it's
lightning fast.

Unfortunately, support for it in my favourite language, Haskell, appears to be
somewhat lacking. There are people that would tell me that it's not needed,
that there are other options, that it's bad for parallelism and bad for
computer science in general. They say that without it, programs are easier to
understand and reason about. Yet, it seems that every time I start to write a
program I find myself wanting for it. It's called the store.

The mutable store, that is. I want for real destructive update in a real
functional language. I wanted it for long enough that I decided I should take
a PhD position and spend the next several years of my life trying to get it."

[http://benl.ouroborus.net/papers/thesis/lippmeier-impure-
wor...](http://benl.ouroborus.net/papers/thesis/lippmeier-impure-world.pdf)

While I admire him, I think the better solution is to give up Haskell.

~~~
tome
Well, er, Haskell does have destructive update. You are somewhat
misunderstanding Ben's point: "We present a type based mutability, effect and
data sharing analysis for reasoning about such programs [containing mutable data]".

[https://hackage.haskell.org/package/base-4.8.1.0/docs/Data-I...](https://hackage.haskell.org/package/base-4.8.1.0/docs/Data-
IORef.html)

[https://hackage.haskell.org/package/base-4.7.0.2/docs/Data-S...](https://hackage.haskell.org/package/base-4.7.0.2/docs/Data-
STRef.html)
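For a concrete illustration, here's a minimal sketch of destructive update via IORef, using only the base library (my own example, not from the thesis):

```haskell
import Data.IORef

-- Destructive update in Haskell: allocate a mutable cell,
-- update it in place repeatedly, and read the final value.
main :: IO ()
main = do
  ref <- newIORef (0 :: Int)                       -- a mutable cell
  mapM_ (\n -> modifyIORef' ref (+ n)) [1 .. 10]   -- in-place updates
  total <- readIORef ref
  print total                                      -- 55
```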

~~~
sanxiyn
How am I misunderstanding? I am quoting exact words, after all. Re IORef and
STRef, let's quote some more:

"Representing effects in value types is a double edged sword (p. 42). Haskell
has fractured into monadic and non-monadic sub-languages (p. 44). Monad
transformers produce a layered structure (p. 45)." These are section titles;
be sure to read the entire sections, which explain in detail what each title
means.
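To illustrate the "layered structure" criticism, here's a minimal StateT-over-IO sketch (my own example, assuming the mtl package; not from the thesis):

```haskell
import Control.Monad.State  -- from the mtl package

-- Each effect lives in its own layer: State Int stacked on top of IO.
-- Reaching the IO layer from inside StateT requires an explicit lift.
counter :: StateT Int IO ()
counter = do
  modify (+ 1)     -- touch the State layer
  n <- get
  lift (print n)   -- lift through to the IO layer underneath

main :: IO ()
main = evalStateT (counter >> counter) 0
```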

~~~
RyanZAG
I can't work out why you're being downvoted. Kneejerk reaction for anything
that is critical of Haskell maybe? Unfortunately that does seem to be very
common on HN.

Ben Lippmeier makes a very good point and is being quoted in full here.
Haskell does have issues, and people are trying to fix those issues. Whether
they can be resolved isn't known yet, and the resulting language may be
different from Haskell. Haskell is a long way from some pinnacle of
perfection, and it is very much not a silver bullet for solving problems. Not
that anything in computer software is a silver bullet [1].

[1]
[https://www.youtube.com/watch?v=3wyd6J3yjcs](https://www.youtube.com/watch?v=3wyd6J3yjcs)

------
bollockitis
Erlang is probably not the best example. Programmers using Erlang are almost
certainly of a different caliber than those using Ruby and Python. I don't
think it's very productive to study dynamic versus static typing without also
controlling for the experience and discipline of the programmer. This is an
element that many of the academics want to ignore because it's difficult to
quantify, but I wouldn't trust any argument that excluded the human element
because it's ultimately humans who use these tools. I still haven't seen
anyone address Elben Shira's original point about the difficulty involved in
understanding the return values of a function: what's really in that immutable
request map? How do I use it?

Given experienced and disciplined programmers, the type system probably isn't
as important. But given the quality of programmers who are using these
languages in the real world, a good type system does seem to help.

~~~
yogthos
Yeah, but do you seriously expect an inexperienced programmer to be able to
drive a language like Haskell?

~~~
nrinaudo
I've heard various CS teachers say that, when presented with Haskell as a
first language, students took to it like a fish to water and could start
writing useful code really fast.

My personal experience is that Haskell isn't really all that complex - when
you stick to the core, it's actually fairly simple, with a small set of well
thought out features (compared to Scala, for example, which I love but has a
_lot_ of features). Strangely enough, what I find hard when dealing with
Haskell is the syntax, because it's so different from all the languages I've
been taught or taught myself in the past. That shouldn't be a problem for
people who have little to no previous experience and tackle it with a virgin
mind.

~~~
yokohummer7
What you said is purely anecdotal. Here's another anecdote I think is worth
mentioning (it is from a Haskeller who has spent years teaching Haskell to
kids):

[https://www.reddit.com/r/haskell/comments/3pfg7x/either_and_...](https://www.reddit.com/r/haskell/comments/3pfg7x/either_and_in_haskell_are_not_arbitrary/cw6hlw7)

> this experience has taught me that whenever functional programmers claim
> that certain things are only unintuitive because programmers have had their
> minds polluted by non-Haskell programming languages... they are usually
> wrong.

> the more common claims you see about how purely functional programming is
> easier and more intuitive for students without previous imperative
> programming languages. Surprisingly, that's not the case. Even in situations
> where I don't expect it, I'm constantly fighting everyone's urge to think
> about situations first and foremost in imperative terms.

I think it's safer to assume that functional approaches have inherent
complexity compared to imperative approaches. YMMV, of course.

~~~
wtetzner
> I think it's safer to assume that functional approaches have inherent
> complexity compared to imperative approaches. YMMV, of course.

Complexity and intuition aren't the same thing. I would argue that the _point_
of functional programming is to reduce and control complexity.

------
lisper11
The age of dynamic languages will be over when you see a modern static
language that supports self-modifying runtime environments as well as Common
Lisp or Smalltalk presently do.

This is an area where the state of the art in static languages (Haskell,
Idris, Agda) falls flat, and I suspect that's why (other than ignorance) we
see so many comparisons from Haskell advocates between Haskell and the likes
of Python, Ruby, or worse. Those comparisons paint dynamic languages as
offering only brevity through the omission of type signatures as a major
advantage over common static languages (an advantage for which Haskell can
largely compensate) while completely ignoring the capabilities more
sophisticated dynamic languages offer.

~~~
incepted
The JVM has supported the kind of environment you describe for more than a
decade. Fire up IDEA or Eclipse, put a breakpoint in your code and modify it
at runtime to your heart's content.

You can even rerun portions of your code after you've modified it (rewind it),
something that even Smalltalk never supported.

~~~
lisper11
> The JVM has supported the kind of environment you describe for more than a
> decade. Fire up IDEA or Eclipse, put a breakpoint in your code and modify
> it at runtime to your heart's content.

The JVM is a virtual machine targeted by many languages, some of which are
dynamic, and Java isn't a "modern" static language from the standpoint of type
theory and provability.

In Pharo Smalltalk, for example, you can pause running code in the debugger
and modify it using syntax the language doesn't actually support, and then, in
the same runtime, open a class browser and modify the compiler to support the
new syntax before finally accepting your changes in the debugger and resuming
execution. You can swap every reference to one object with references to
another object using become:, effectively swapping the identities of two
objects globally at runtime.

Perhaps all of this is possible in Haskell and the like, but it's still to be
demonstrated.

> You can even rerun portions of your code after you've modified it (rewind
> it), something that even Smalltalk never supported.

Smalltalk and Lisp both support this.

~~~
incepted
> Smalltalk and Lisp both support this.

Interesting, do you have a few pointers?

~~~
lisper11
For Smalltalk, look at Pharo. For Lisp:

[http://malisper.me/2015/07/07/debugging-lisp-
part-1-recompil...](http://malisper.me/2015/07/07/debugging-lisp-
part-1-recompilation/)

[http://malisper.me/2015/07/14/debugging-lisp-
part-2-inspecti...](http://malisper.me/2015/07/14/debugging-lisp-
part-2-inspecting/)

[http://malisper.me/2015/07/22/debugging-lisp-
part-3-redefini...](http://malisper.me/2015/07/22/debugging-lisp-
part-3-redefining-classes/)

[http://malisper.me/2015/08/05/debugging-lisp-
part-4-restarts...](http://malisper.me/2015/08/05/debugging-lisp-
part-4-restarts/)

[http://malisper.me/2015/08/19/debugging-lisp-
part-5-miscella...](http://malisper.me/2015/08/19/debugging-lisp-
part-5-miscellaneous/)

~~~
incepted
None of these links cover what I was discussing. The closest is part 4
("restarting") but this section only describes code that can retry after a
failure.

I was specifically talking about debuggers that let you rerun code after
you've modified it without having to restart your application, something that
Java debuggers have allowed for almost ten years now.

I continue to think that neither Lisp nor Smalltalk offers this functionality;
can someone prove this wrong?

~~~
lisper11
> I was specifically talking about debuggers that let you rerun code after
> you've modified it without having to restart your application, something
> that Java debuggers have allowed for almost ten years now.

Java got this from Lisp and Smalltalk. The first and third links cover what
you're talking about. For Pharo, which is graphical, just open the image, open
the debugger on some code and modify it to your liking before clicking
"Proceed," or alternatively, pick an earlier stack frame and do the same.

Did you even read this paragraph:

> In Pharo Smalltalk, for example, you can pause running code in the debugger
> and modify it using syntax the language doesn't actually support, and then,
> in the same runtime, open a class browser and modify the compiler to support
> the new syntax _before finally accepting your changes in the debugger and
> resuming execution_. You can swap every reference to one object with
> references to another object using become:, effectively swapping the
> identities of two objects globally at runtime.

------
jasode
Just in case anyone is reading the blogs in a hurry, keep in mind that neither
the Dmitri Sotnikov post nor the post he mentions by Maxime Chevalier-Boisvert
actually tries to refute Elben Shira's carefully worded prediction: _"This is
my bet: the age of dynamic languages is over. There will be no new successful
ones."_

Instead of making a case for "new successful dynamic languages" and _why_ such
new ones would emerge, both authors defend the _existing benefits of
today's dynamic languages_.

Elben Shira is saying that he predicts the next wave of successful languages
will have static checking _but also look & feel like a "dynamic" one_ made
possible by smarter compiler technologies such as type inferencing. (Maxime
Chevalier-Boisvert also mentions the new Crystal language as an example of
this.)

If one is itemizing all the great points of Python/Ruby/Javascript, that's
orthogonal to what Shira is betting on.

~~~
yogthos
Oh, I definitely think the prediction is absurd. I didn't realize that wasn't
obvious from my post.

The problem I outline is not that static typing involves additional
boilerplate. It's the inherent additional cognitive load it introduces. A
smarter compiler isn't going to help you there.

The fallacy of Elben's argument is in the assumption that there will be only
advances in the field of static typing.

Meanwhile, people working with dynamic languages are improving their tools at
an astonishing pace. For example, working with an editor like Cursive provides
many refactoring options you could only expect from a static language a few
years ago.

The biggest benefit, though, comes from interactivity. Dynamic languages
lend themselves extremely well to providing live coding environments. This
allows for tools like the REPL and things like Figwheel
[https://github.com/bhauman/lein-figwheel](https://github.com/bhauman/lein-
figwheel), and this kind of workflow will be difficult to match for static
languages.

~~~
incepted
It's obvious you think it's absurd and given your heavy involvement in
Clojure, your reaction is not unexpected. But you are still not addressing the
main claim of the articles (that no new dynamically typed language will ever
emerge again and be successful) and also missing why statically typed
languages have won (it's not just correctness guarantees, it's also the
automatic refactorings they enable, which are impossible to achieve with
dynamically typed languages, and performance).

Clojure and its family of languages will be a thing of the past a few years
from now (Clojure is not even a thing of the present, actually, given its
minuscule mind share on the JVM).

~~~
ane
Elixir isn't that old and it'll be interesting to see if it catches on. It's
basically Erlang with a nicer syntax and some new ideas.

~~~
sanderjd
Elixir (and Erlang itself, actually) are definitely the vanguard of
dynamically typed languages at the moment. I think this is because they aren't
_just_ convenient to write "normal" code in (like Ruby, Python, JS, etc.),
they also come with major advantages in writing concurrent and distributed
code, which is increasingly important. BEAM is also a particularly good
runtime with better performance (especially memory) characteristics than many
of its dynamically typed competitors.

I think this is the counter-argument to the original claim; new dynamically
typed languages can be successful by offering convenience for important
functionality. Even so, I think type annotations to aid static analysis (which
Elixir has and JS will likely have in the future) will be an important
feature.

I think the bar has simply been raised for new languages with _both_ static
and dynamic type systems. The former can no longer be boilerplate-y,
inflexible, and inconvenient, and the latter can no longer be statically
uncertain and hard to reason about. This is a good thing.

------
rdudekul
I love Ruby, Python and JavaScript. I believe I am more productive with
dynamic languages than with static languages like C# or Java. The abstractions
in dynamic languages (at least for me) allow me to write less code with less
confusion and fewer classes to remember.

With so many libraries supporting my development, I usually tend to write much
less code using a dynamic language. Also, most of the code tends to be isolated,
with clear interfaces exposed to other parts of the code. Writing tests is
simpler too, compared to writing them in static languages.

Maybe large projects with hundreds of thousands of lines of code could
benefit from statically typed languages. But for the majority of projects
(80-20 rule), dynamic languages offer cleaner syntax, a simpler coding
experience and faster time to market (my opinion).

~~~
stephen
> dynamic languages offer cleaner syntax

Solely for syntax, my assertion is that this is just because dynamic languages
evolve faster, and they evolve faster because they are easier to build.

E.g. one person can buy the dragon book(s), use parser/lexer libraries, get an
AST, and then start an interpreter that just stores objects as maps and evals
things dynamically.

But very few people are going to write a new/novel type system on their
weekends.

So that's why, IMO, all the sexy/modern syntax shows up in dynamic languages
first, and then takes ~a generation or two to reach static languages, where
you need a dedicated team to put the effort into the type system + compiler +
expected tool set (IDEs/etc.).

(E.g. see Scala, Kotlin, Rust, etc., which have all, IMO, achieved the "syntax
as nice as Ruby" level, but took large teams of people to do it).

~~~
Veedrac
> see Scala, Kotlin, Rust, etc., which have all, IMO, achieved the "syntax as
> nice as Ruby" level

I've only got reasonable experience with Rust on that list, but... no. I've
seen enough Scala to know that's not true, and I know it's not true for Rust.

That's not to say that static languages can't have nice syntax - Crystal is a
great example if you're trying to match Ruby - but I don't think your examples
are great.

------
solomatov
I think the current place of dynamic languages will, in the future, be
dominated by hybrid languages.

Having no type annotations, even in highly dynamic languages, leads to lower
maintainability. However, some type systems are too complicated and over-
engineered, the best example of which is Haskell (compare it to Agda or Idris,
which can do more but have a much, much simpler type system).

~~~
yogthos
I think gradual typing is definitely likely to become the norm. It allows you
to develop things using the dynamic style and then add type annotations where
appropriate. It also avoids the problem of mixing the proof into your
solution.

~~~
solomatov
I think the main problem with types isn't that they slow you down. In my
experience they actually don't, and the reverse is actually true. The main
problem is that static languages are often too complicated for non-
professional software developers, i.e. designers, system administrators, data
scientists, etc., to be motivated to learn them. They just need to write the
code, and when they do it themselves, the stuff gets done much faster and much
more efficiently.

~~~
yogthos
The inherent complexity of static typing is precisely my argument. You have to
demonstrate that the additional complexity actually adds value in the long
run. This seems like a prerequisite before you start extolling this approach.

------
lisper
When people write Lisp code, they invent new dynamic languages all the time.
Inventing and implementing new languages (a.k.a. writing macros) is just part
of the regular course of doing business in Lisp. And these languages that
people invent when writing Lisp tend to be dynamic, because they tend to
compile to Lisp. But of course they don't have to be.

------
nitrobeast
Some people may disagree, but Steve Yegge's "political axis" seems relevant
here:
[https://plus.google.com/110981030061712822816/posts/KaSKeg4v...](https://plus.google.com/110981030061712822816/posts/KaSKeg4vQtz)
[https://plus.google.com/110981030061712822816/posts/iuRbQe6E...](https://plus.google.com/110981030061712822816/posts/iuRbQe6EoiK)

I suppose no rational person believes dynamic languages will disappear
completely. Also, I think both dynamic and statically typed languages exist on
a spectrum. It is not very productive to talk in generalized terms like "the
end of dynamic languages".

------
js8
The post is kinda like saying you don't really need to understand the notions
of convergence or measure to be able to do integration. It's true, in a sense,
and it's faster to just learn Newton's formula and get the job done. But you
may get into problems without good foundations.

I like dynamic languages, especially Python and Clojure. But programming with
types is important to get things right, on a correct, formal basis. (I think
functional programming is a misnomer; it should really be called algebraic-
oriented programming, but I digress.) Once you see how to do it correctly,
it's perfectly OK to mimic the same abstraction in a dynamic language.

~~~
yogthos
You can reason about code and prove it correct in a dynamic language. What
static typing gives you is compiler-assisted proving. The cost of the
compiler's assistance is that you have to express the code in a way that often
makes it more difficult for a human to understand.
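A small sketch of what that compiler assistance looks like in practice: the Maybe return type of a lookup forces every caller to handle the missing case before the program will even compile (the names here are my own hypothetical example):

```haskell
import Data.Maybe (fromMaybe)

-- lookup returns Maybe Int, so the compiler rejects any caller
-- that tries to use the result without handling absence.
portOf :: [(String, Int)] -> Int
portOf config = fromMaybe 8080 (lookup "port" config)  -- explicit default

main :: IO ()
main = do
  print (portOf [("port", 9000)])  -- found: 9000
  print (portOf [])                -- absent: falls back to 8080
```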

~~~
js8
You can do it, but you don't. It's not a language thing, it's a culture thing.

A language like Haskell forces you to do it. It is more effort, but this
effort pays off in getting composability and edge cases right. The end result
is not actually harder to understand; it's rather that you did not fully
understand the problem in the beginning (by glossing over technical details).

~~~
yogthos
Ah but the thing is that the payoff is simply assumed. Nobody has actually
shown this claim to be true. You spend more effort and you hope that there is
a payoff. Yet you have things like Cabal that's as flaky as anything written
in PHP. Clearly static typing isn't some magic pixie dust that all of a sudden
makes all your code work correctly.

~~~
js8
> Clearly static typing isn't some magic pixie dust that all of a sudden makes
> all your code work correctly.

Static typing itself - no. What is important about Haskell is the equational
reasoning (algebraic thinking, if you will) you can do, not static typing.
Static typing is just a vehicle to be able to pull it off.

There is a payoff already - see my comment to Lisp guy above. We could also
talk about how monads made LINQ great, or how streams in Java have to be more
complicated than necessary because the language wasn't very well formally
defined.

And there will be a bigger payoff in the future, when we learn, for example,
how to type a whole big software component (like a web service) correctly. So
far, it's an art driven by engineering intuition; but formalization will help
to make more robust systems.

The payoff is also assumed based on experience with other branches of
mathematics (as I was alluding to in my first comment), where, historically,
formalization always paid off. It doesn't mean you cannot get things right
informally, but it's usually fiendishly difficult.

~~~
yogthos
Again, you're making a lot of assertions in absence of any evidence to support
them. One would think that this would be easily demonstrable seeing how long
Haskell has been around, and yet no such evidence exists to my knowledge.

~~~
js8
I think there is plenty of evidence (of how types help to design better APIs),
you just don't want to accept it. I already mentioned LINQ and lens; others
could be parser combinators and FRP (Observables in particular).

Recently there was an article about a whole application:
[http://abailly.github.io/posts/cm-arch-
design.html](http://abailly.github.io/posts/cm-arch-design.html)

I can also recommend book Functional Programming in Scala, which talks about
how FP helps develop good API in more detail.

------
birchb
"This is my bet: the age of dynamic languages is over. There will be no new
successful ones."

I wonder what Shira counts as "successful"? Popular?

Sigh.

