
Death of a Language Dilettante - doty
http://prog21.dadgum.com/219.html
======
agentgt
I find there are roughly two categories of languages that are worthwhile
(based on my opinion of course).

1. Easy to write/produce: Languages/frameworks that are easy to write because
they are highly expressive (Haskell/Scala) or are very opinionated (Rails).

2. Easy to read/maintain: Verbose languages with excellent tools... cough...
Java/C#.

As for reading code, I don't know what it is about crappy verbose languages,
but I have yet to see Java/C# code where I couldn't figure out what was going
on. Sure, I have more experience with these languages, and the tools
(particularly code browsing with an IDE) make it so much easier... but so do
most people.

The reality is that language dilettantes think writing code is painful (as
mentioned in the first paragraph by the author), but the real bitch is
maintaining it.

I feel like there must be some diminishing returns on making a language too
expressive, implicit, and/or convenient, but I don't have any real evidence to
back that up.

~~~
Zak
I would add a third category, which is not exclusive with the other two:
languages that provide good tools for abstraction.

The pitfall with #1 is leaky and obscure abstractions. It's easy to write code
that has performance problems or requires a lot of understanding of moving
parts not actually related to the problem at hand. _Where's the code
responsible for putting the current state on a web page? All I see is a bunch
of monad transformations and I don't know what they're for!_ Sure, I can
figure out what's going on eventually, but I'll have to read a lot of CS
papers first.

The pitfall with #2 is lack of ability to write a suitable abstraction for the
problem. Instead, the problem has to be fit to the language. You end up with
either something relatively simple but inflexible, or a large amount of
incidental complexity. _Why do I need to implement
AbstractThingPutterOnPageGenerator and generate a ThingPutterOnPage before I
can put a thing on the page? Couldn't this just be called putThingOnPage()
and use some optional args when the default behavior doesn't cut it?_ Sure, I
can figure out what's going on eventually, but I'll have to read a lot of code
first.
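
To make that concrete, here's a minimal Java-flavored sketch of the two styles
(all names are hypothetical, riffing on the joke above):

```java
// Invented stand-in types so the sketch is self-contained.
class Thing {}
class RenderOptions {
    static RenderOptions defaults() { return new RenderOptions(); }
}
class Page {
    void render(Thing thing, RenderOptions opts) {
        System.out.println("rendered " + thing + " with " + opts);
    }
}

// Pitfall #2: ceremony forces a class hierarchy before anything happens.
interface ThingPutterOnPage {
    void put(Thing thing, Page page);
}
abstract class AbstractThingPutterOnPageGenerator {
    abstract ThingPutterOnPage generate();
}

// The direct alternative: one method, with an overload for the default case.
final class Pages {
    static void putThingOnPage(Thing thing, Page page) {
        putThingOnPage(thing, page, RenderOptions.defaults());
    }
    static void putThingOnPage(Thing thing, Page page, RenderOptions opts) {
        page.render(thing, opts);
    }
}
```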

I think Lisp has always been strong in the third category, and that Clojure is
a Lisp especially suited to real-world use right now. The heavy emphasis on
defining code in terms of generic operations on generic data structures is a
particular strength. For something more mainstream, Python does pretty well
here. That's largely cultural, though; Python has a feature set very
comparable to Ruby's, but Ruby's community doesn't have "explicit is better
than implicit", the lack of which can lead to code that is impenetrable rather
than merely dense.

~~~
tormeh
The problem with Lisps is that, generally speaking, they're all interpreted,
which means type errors are discovered at runtime. Which sucks for
maintenance.

~~~
Zak
Most implementations of Common Lisp have ahead-of-time compilation, at least
as an option, but _also_ have the compiler, or sometimes a different compiler
or an interpreter available at runtime. Clojure is also typically AOT-compiled
to JVM bytecode.

Did you mean that Lisps are dynamically-typed? That's true, and whether it's
mostly good or mostly bad is a religious topic that almost certainly lacks one
true answer. My own take on it is that I program very interactively and static
typing feels like an impediment to that most of the time. Furthermore, type
errors are usually a small subset of the possible errors and many static type
systems allow any type to be null anyway, drastically reducing the benefit.
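
For instance, a tiny Java sketch of that last point (NullDemo is an invented
name):

```java
// The static type system is satisfied here, yet the "type error" still
// surfaces at runtime, because every reference type includes null.
public class NullDemo {
    static int length(String s) {
        return s.length(); // NullPointerException when s is null
    }

    public static void main(String[] args) {
        System.out.println(length("hello")); // prints 5
        System.out.println(length(null));    // compiles fine, throws NPE
    }
}
```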

------
EdwardCoffin
There are a couple of implicit assumptions in the final paragraphs that I
think should be made explicit. One was that we can evaluate these languages
based on this experiment _with a single programmer_. Another was the not-
clearly-defined term _strong work ethic_, by which I _think_ he means someone
who will strive to make the program work properly, not have horrible kludges,
will avoid known problematic aspects of the language, etc.

The problem with these assumptions is that you don't run into situations like
that often. You're far more likely to run into a team of people of mixed
abilities, and with some languages, one or two of them will be able to inflict
horrors on the whole codebase.

~~~
lolc
I just tried Elm and I can assure you that bad programmers can write bad code
in any language, good or bad.

~~~
EdwardCoffin
I'm not disagreeing with that; what I am saying is that some languages offer
up the power to perpetrate real horrors that some other languages don't. For
instance, much as I love Common Lisp, I'd hate to work in a group with people
who don't know how to write macros but do so anyway. There are all sorts of
things one can get up to in C or assembly that you can't get up to in Java
(deliberately treating a chunk of memory as if it were a type other than it
really is, for example). Some languages like Smalltalk don't have a way of
enforcing privacy on APIs, unlike say Java, so on a large project you can find
out that someone you don't even know has just started using your private APIs
and you are now obligated to support them as if they were public APIs,
restricting your own freedom to change internals. These are all problems I
have run into.

~~~
lolc
To me "bad programmers can write bad code in any language" is a snarky
critique on favoring laws over conventions. There is this hope that given
strong laws (safeguards), collaborating with unreasonable people is easier.
Then the discussion devolves into the use of dangerous features versus the
need to work with unreasonable people.

In my view, the discussion should be about comfort. At what abstraction levels
will we work? If you put in a lot of safeguards, you can work comfortably at a
certain level. But if you want to move out of this band, be it to write some
low-level glue-code, or some higher abstractions, you find the safeguards to
be a barrier. If those safeguards are conventions, you can agree to break
them; if they are laws, you must subvert them or not work at those levels.

I prefer conventions as much as I prefer working with reasonable people. And
sometimes turning conventions into law has few downsides, like with the
private keyword.
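
A minimal Java sketch of that last point (Account and balanceCents are
invented names):

```java
// The convention "don't touch my internals" becomes a law the compiler
// enforces.
class Account {
    private long balanceCents; // internal representation, free to change

    void deposit(long cents) { balanceCents += cents; }
    long balance() { return balanceCents; }
}

class Caller {
    void tryIt(Account a) {
        a.deposit(100);
        // a.balanceCents = -1; // compile error: balanceCents has private access
        System.out.println(a.balance());
    }
}
```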

------
jpt4
Temper the soldier rather than steel, and a club becomes a sword. A fairer
example might be to compare the Nix and Guix package manager codebases, which
aim to implement the same model of declarative system-wide dependency
management. The former is written by a university team in C++ and Perl, the
latter by GNUcolytes in Guile Scheme and a touch of C.

~~~
andreasvc
Well, which do you say is better?

~~~
1_player
Neither, sadly; one will be relegated to the "just an academic curiosity"
phase, the other to obscurity. [1]

1:
[https://news.ycombinator.com/item?id=10005646](https://news.ycombinator.com/item?id=10005646)

------
sdenton4
My takeaway here is that it only pays to be a programming language dilettante
if you are actually building a programming language, especially if it's a
dedicated language for a new platform. Otherwise, you're mostly going to be
subject to whatever's already in use on your platform of choice, up to minor
tweaks in that language over time.

------
hyperpallium
Unfortunately, the real value of anything is almost entirely due to extrinsic
factors. Air? Very valuable if you're underwater, on the moon etc.

Which human language? The one spoken by the people you need to communicate
with is most valuable.

The first iPhone? Very valuable then; not today.

But some people love intrinsic value. And it's what they create that ends up
having real value. They would say that intrinsic value is the only "real"
value. They aren't very practical.

~~~
pron
> They would say that intrinsic value is the only "real" value.

Thing is, this is a testable hypothesis (at least in theory): measure whether
those "intrinsically valued" languages make a true impact on software cost.
Often, it is the very same people who tout this intrinsic value who
deliberately shy away from testing this hypothesis empirically.

It's interesting that when Java's original designers analyzed customer needs
vs. features offered by academic languages, they discovered that most value in
those languages wasn't in the linguistic features but in the extra-linguistic
features, so they deliberately put all the good stuff in the VM, and packaged
it in a language designed to be as unthreatening and as familiar as possible.
It was designed to be a wolf in sheep's clothing:

 _It was clear from talking to customers that they all needed GC, JIT, dynamic
linkage, threading, etc, but these things always came wrapped in languages
that scared them._ -- James Gosling[1]

[1]:
[https://www.youtube.com/watch?v=Dq2WQuWVrgQ](https://www.youtube.com/watch?v=Dq2WQuWVrgQ)

~~~
hyperpallium
Thanks, I've just watched it (only up to your quote).

I agree, but how did customers need JIT? I thought it was just to compensate
for the inherent performance penalty of an extra layer, of a VM.

BTW: Android Java now uses ahead-of-time compilation (5.0 switched from the
JIT-based Dalvik to the AOT-based ART).

Just on the "wolf" part: not only was the language familiar (sheep's
clothing), but features were removed. Not just for familiarity, they actually
caused problems and so removing them was an improvement.

e.g. removing operator overloading (which C++ had): apart from very
fundamental maths (such as complex numbers and matrices), overloaded operators
caused a lot of problems, probably because they were algebras designed by
non-mathematicians. Removing them hurt matrix algebra etc., but it was by far
a net benefit. (BTW, a nice thing about shader languages is matrices as
first-class values.)

Finally, taking this back to my GP comment on intrinsic vs extrinsic: I was
thinking of the language as a "product" which would include the VM... but I
think your approach is better. Along those lines, performance, bugginess,
portability of the runtime etc are also non-linguistic features, strictly
speaking.

Actually testing that hypothesis is really difficult. I've only heard of a few
attempts to measure productivity in different languages, and their
experimental design is not very compelling. Of course, in practice, all those
non-linguistic factors dominate.

Yet, some of James' non-linguistic "wolf" features have linguistic
counterparts: e.g. no manual memory management; threading. Therefore, they are
examples of linguistic features with "intrinsic value". I'd also include
references (instead of pointers), and array-bounds checking (um... is that
last one a "linguistic" feature?)

I think some language features have real value - though, as with java's
inspirations, probably not the whole language, just particular features.

[ But I meant by my statement, that these folks think intrinsic value is the
only "real" value, that they dispute extrinsic value entirely! i.e. that ideas
are eternal, valuable absolutely and despite context; and contingent
fluctuations in supply and demand, progress of technology and install base etc
are 100% meaningless.

Like the truth of a theorem, regardless of its usefulness. ]

~~~
pron
> I agree, but how did customers need JIT? I thought it was just to compensate
> for the inherent performance penalty of an extra layer, of a VM.

Good -- and simple[1] -- abstractions require lots of dynamic dispatch, which
can be optimized away by a JIT. The problem is far simpler (and may be partly
solved AOT) if you don't have dynamic linking, and especially no dynamic code
loading, as is the case on Android.

> and array-bounds checking (um... is that last one a "linguistic" feature?)

I would definitely classify that as extra-linguistic.

> But I meant by my statement, that these folks think intrinsic value is the
> only "real" value, that they dispute extrinsic value entirely! i.e. that
> ideas are eternal, valuable absolutely and despite context; and contingent
> fluctuations in supply and demand, progress of technology and install base
> etc are 100% meaningless.

I understand and completely agree.

[1]: You can do away with a lot of dynamic dispatch at the cost of a larger
number of (and therefore less simple) abstractions, as in the case of Rust.

~~~
hyperpallium
> Good -- and simple[1] -- abstractions require lots of dynamic dispatch

So they don't need the JIT per se, but they need it for linguistic
abstractions to be performant, which is non-linguistic. That's splitting
hairs, though. So, James' customers needed those abstractions, and _he_ said
they needed a JIT.

Incidentally, I've been doing dynamic code loading on Android 5.0, with its
AOT. So it works. For my use, it's hard to tell if its startup is slower,
though I'd expect it to be.

I hadn't heard that about Rust.

~~~
pron
> Incidentally, I've been doing dynamic code loading on Android 5.0, with its
> AOT.

How does that work? Or is their AOT really a JIT that works all at once,
rather than collecting a profile first?

> I hadn't heard that about Rust.

Basically, in Rust (as in C++) you can pick either static-dispatch
abstractions, which are "zero cost", or dynamic-dispatch abstractions, which
are more costly. On the JVM you get dynamic dispatch as the abstraction, and
the JIT figures out per call site whether static dispatch suffices, compiling
the dynamic abstraction down to static-dispatch machine code.
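
Roughly, a minimal Java sketch of the JVM side (names invented):

```java
// shape.area() is dynamic dispatch in the source. If the JIT observes
// only Circle arriving at that call site (a monomorphic call site), it
// can devirtualize and inline the call, i.e. emit static-dispatch
// machine code.
interface Shape {
    double area();
}

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class DispatchDemo {
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape shape : shapes) {
            sum += shape.area(); // virtual call, often devirtualized
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(total(new Shape[] { new Circle(1), new Circle(2) }));
    }
}
```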

~~~
hyperpallium
Sorry for the delay. It works just the same as dalvikvm, using DexClassLoader.
I guess it must just compile then run - like a JIT without profiling, as you
say. But I don't know the innards.

Thanks for the info on Rust.

------
lmm
Give me Scala and a real-world problem vs someone using PHP or JavaScript
and I will beat them on the initial write, and destroy them on the
maintenance. I wouldn't use Scala professionally if I didn't believe this.

In the short term practical concerns can be more important than PLT ones - in
five years' time I'm sure Idris will be a better language than Scala, but for
some tasks it isn't yet - apart from anything else, you need a strong
library/tool ecosystem before a language is truly useful. But that's a
temporary state of affairs. If you were making this kind of judgement 20 years
ago, and chose a popular language like Perl or Tcl or C++ over a
theoretically-nice language like OCaml or Haskell, how would you be feeling
about that decision today?

~~~
pron
Give me Java and a real-world problem vs someone using Scala and I will
beat them on the initial write, and destroy them on the maintenance. :)

> in five years' time I'm sure Idris will be a better language than Scala

Idris? In the entire history of computing there has been a single[1] complete
non-trivial (though still rather small) real-world program (CompCert) written
in a dependently typed language. Even though the program is small and the
programmer (Xavier Leroy) is one of the leading dependent-type-based-
verification experts, the effort was big (and that's an understatement), and
the termination
proofs proved too hard/tedious for him, so he just used a simple counter for
termination and had a runtime exception if it ran out. Idris is a very
interesting experiment, I'll give you that. But I don't see how anyone can be
sure that it would work (although you didn't say it can work, only that it
would "be a better language than Scala", so I'm not sure what your success
metrics are).

[1]: Approximately, though I don't know of any other.

~~~
lmm
> Give me Java and a real-world problem vs someone using Scala and I will
> beat them on the initial write, and destroy them on the maintenance. :)

Seems we have a true disagreement.

> Idris? In the entire history of computing there has been a single[1]
> complete non-trivial (though still rather small) real-world program
> (CompCert) written in a dependently typed language.

Five or ten years ago how many such programs were there in a language with
higher-kinded types? Thirty years ago how many with type inference? Innovation
is slow - perhaps five years was too optimistic, looking at the history - but
it does happen; PLT ideas do eventually make their way into mainstream
languages.

> although you didn't say it can work, only that it would "be a better
> language than Scala", so I'm not sure what your success metrics are

I think it will be the most effective (for real-world problems) general-
purpose programming language - a spot I think Scala currently holds. Hard to
define objectively of course.

~~~
pron
First, higher-kinded types is a feature; dependent types is an entire
philosophy. Second, I think you're misjudging the adoption of those
languages/ideas: the percentage of real-world programs using HM type-systems
(and similar) has hardly changed in the past two or even three decades. Scala
is special because it's a single language with lots of paradigms. If you count
only those who make good use of sophisticated typed abstractions in Scala and
add those to the HM languages, there would still be a very small uptick. The
major change in the past few years, I think, has to do with mainstream
adoption of higher-order functions. That idea took fifty years to break into
the mainstream.

As to language effectiveness, I can't argue with you because neither of us has
any real data, but I can say that there's a lot of religion surrounding the
question of how much linguistic features (as opposed to extra-linguistic ones,
like GC) actually increase productivity. What is certain is that we still
haven't broken the 10x productivity boost Brooks said wouldn't happen between
1986 and 1996, and it's been thirty years -- not ten -- and it seems like we
won't do it in another decade.

~~~
lmm
> higher-kinded types is a feature; dependent types is an entire philosophy

Totality is a philosophy; you can have dependent types as a feature without
it. Maybe immutability or purity are better comparisons for what Idris brings
to the table, but if you're just talking about dependent types then I'm using
them already.

> the percentage of real-world programs using HM type-systems (and similar)
> has hardly changed in the past two or even three decades.

Is that really true? I can't imagine a recruiter asking about Haskell, or a
Facebook-sized company talking about their OCaml strategy, ten years ago.

> As to language effectiveness, I can't argue with you because neither of us
> has any real data, but I can say that there's a lot of religion surrounding
> the question of how much linguistic features (as opposed to extra-linguistic
> ones, like GC) actually increase productivity.

That feels like gerrymandering your definitions to me. GC is usually a
language-level feature.

> What is certain is that we still haven't broken the 10x productivity boost
> Brooks said wouldn't happen between 1986 and 1996, and it's been thirty
> years -- not ten -- and it seems like we won't do it in another decade.

How would we tell? The productivity of the technology industry as a whole has
certainly risen enormously. My general impression is that coding is the
bottleneck a lot less often - even for a technology company - than it was five
or ten years ago.

~~~
pron
> Is that really true? I can't imagine a recruiter asking about Haskell, or a
> Facebook-sized company talking about their OCaml strategy, ten years ago.

I think it may have risen a tiny bit, but here's why I think it appears
larger than it is: I don't think a recruiter would ask about Haskell today
outside a very small section of the ecosystem, but that section just seems
larger than it is. What has changed in the past 20 years is the cultural
prominence of startup culture (SV startups make up a minuscule percentage of
the software industry, yet they get a significant portion of the media
coverage), and even then only if you're involved in communities like Reddit
and HN.

I think it is more an artifact of those communities that using certain
languages lends a certain prestige, which is then used as a marketing effort
(the CTO of a well known SV startup once told me that they have a small team
using Scala only so they could attract a certain crowd). Similarly, in
Facebook, it is my understanding that Haskell isn't really spreading, but is
more of a marketing gimmick to developers of a certain sort in an environment
with very unique characteristics. The entire software development industry
consists of, I'd make an educated guess, 20 million developers or more
(extrapolating from the known number of ~10M Java developers). I would be
immensely surprised if more than a million of them can tell you what Haskell
is (and by that I mean "a pure functional language").

So there's definitely an uptick in numbers and a rather strong uptick in
exposure -- assuming you're following the right online communities -- but I
don't think there's a real uptick in portion of production systems. Some
people used Lisp and ML in production 20 years ago, too. They just didn't have
HN to tell everyone about it. I'm not even sure we're currently at '80s level.
Haskell is certainly talked about and used less than Smalltalk in the '80s,
and see where it is today.

What is _definitely_ true that some ideas that had previously been associated
with functional programming, most notably higher-order functions, have now
finally made it into nearly all mainstream languages.

> That feels like gerrymandering your definitions to me. GC is usually a
> language-level feature.

I think the distinction between language-level abstractions and something like
a GC (or dynamic linking) is rather clear, but I won't insist.

> How would we tell? The productivity of the technology industry as a whole
> has certainly risen enormously. My general impression is that coding is the
> bottleneck a lot less often - even for a technology company - than it was
> five or ten years ago.

Is it? I've been a professional developer for 20 years and while I agree that
productivity has gone up considerably (maybe 2-3x) it can be attributed nearly
entirely to automated tests and GCs.

~~~
lmm
Ten years ago even startups were largely Java-only when I looked (at least
here in London). Five years ago nowhere was advertising pure-
Scala/Haskell/OCaml/F# jobs at all. I heard about OCaml at Facebook from
friends working there before the big public announcements about it. It's hard
to separate general trends from my own trajectory of course.

> Is it? I've been a professional developer for 20 years and while I agree
> that productivity has gone up considerably (maybe 2-3x) it can be attributed
> nearly entirely to automated tests and GCs.

I think productivity is noticeably higher than even five years ago, when we
already had GC and widespread testing.

------
andreasvc
It seems to me that if everyone followed the kind of pragmatism that this post
argues for, nothing new would ever be adopted.

~~~
angersock
That would be a feature, not a bug.

Every developer should be forced, I believe, to read Arthur C. Clarke's story
_Superiority_ (
[https://en.wikipedia.org/wiki/Superiority_%28short_story%29](https://en.wikipedia.org/wiki/Superiority_%28short_story%29)
) and to reflect on its application to their profession.

EDIT: Story can be found here...
[http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html](http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html)

~~~
B1FF_PSUVM
> Story can be found here

Well worth a few minutes. Not that teaching it would help much; wisdom rolls
off people's minds like water off a duck's back.

------
Zak
This article isn't exactly wrong. Certainly, running on your target platform
and having library support for the things you're trying to do are critical
features for getting anything done, and a great language that lacks these
things is the wrong tool for the job. That doesn't mean criticism of bad
design choices in, say, JavaScript is mistaken, or, as the author describes
it, "troubling". It just means you probably have to use JavaScript anyway[0].

It also leaves out another reason for learning languages and using them for
pet projects: it makes you a better programmer. The more good languages you
know, and idioms from those languages, the more likely you are to recognize
when an ad-hoc implementation of one of those idioms is the right solution to
a problem in the language you're actually using.

[0] Though possibly only as a compilation target.

------
xigency
When doing serious development and hitting these issues, the workaround isn't
to continue using a broken language. The workaround is to use a different
language. The first question, 'Does this language run on the target system
that I need it to?', isn't a yes-or-no question.

Take a look at this example -
[http://blog.fogcreek.com/the-origin-of-wasabi/](http://blog.fogcreek.com/the-origin-of-wasabi/)

A language that compiles to PHP and ASP, what a relief.

And for the contemporary result -
[http://blog.fogcreek.com/killing-off-wasabi-part-1/](http://blog.fogcreek.com/killing-off-wasabi-part-1/)

When the platform catches up, you can go back to mainstream development with
a useful language.

~~~
spdegabrielle
Sometimes you need to write a compiler.

------
ScottBurson
_Even something as blatantly broken as the pre-ES6 scoping rules in
JavaScript isn't the fundamental problem it's made out to be. It hasn't been
stopping people from making great things with the language._

No, it hasn't been stopping them, but I guarantee you it's been slowing them
down, at least a little. If nothing else, it makes the language a little bit
harder to learn than it needed to be. I'll wager it also causes actual bugs
that people have to spend time tracking down. It's true that those bugs can be
avoided by proper discipline, but the brain cells required for enforcing that
discipline could have been used for something else.

ETA: I agree with the author that a certain pragmatism is useful in selecting
a language for a particular project, but I still think it's important to raise
people's consciousness about warts in language designs. Doing so improves the
odds that the next language someone designs to scratch their personal itch,
but that happens to catch on for some reason, will have fewer such warts.

~~~
xiaoma
Over 90% of the first 100k LOC I wrote was ES5 JavaScript or CoffeeScript. I
don't actually recall even a single instance when I was bitten by a bug
related to lexical scope on the job. Maybe the problem is people expecting
JavaScript to work like Java or some other block-scoped language.

Async bugs, on the other hand, were nightmarish at times.

~~~
x0x0
Did you write the same 100 klocs of code in a language with lexical scoping
before coming to JS?

~~~
xiaoma
Another 100 klocs of code before my _first_ 100 klocs of code? That wouldn't
have been possible...

------
clitmouse
Matlab? Custom functions were (or still are?) one-function-per-file. I had a
bunch of function-files in a project directory. My mind was literally
scattered.

Then I moved on to languages that allow binding a function to a variable. I
had far fewer files. Simpler.

FP with anonymous functions further frees my mind from naming things, so I
have zero chance of mistyped function names. Easier maintenance? Sure.

Those didn't stop me from getting work done; however, I prefer not to waste
time on weaker programming languages, although coding in those languages did
broaden my mind (yeah, now I know they suck).
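
For example, a minimal Java sketch of binding functions to variables (names
invented):

```java
import java.util.function.DoubleUnaryOperator;

public class LambdaDemo {
    public static void main(String[] args) {
        // Several "functions" live in one file, bound to variables.
        DoubleUnaryOperator square = x -> x * x;
        DoubleUnaryOperator half = x -> x / 2.0;

        // Compose them without naming an intermediate function at all.
        DoubleUnaryOperator squareThenHalf = square.andThen(half);

        System.out.println(squareThenHalf.applyAsDouble(4.0)); // 8.0
    }
}
```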

------
sgt101
Kinda mean to Ruby.

Also misses the point: it's not the job at hand that matters, it's the 10,000
jobs keeping the thing alive that matter.

~~~
Chris2048
Maybe. But actually what's important is investment cost versus the ratio of
supply vs demand.

There are _lots_ of Java jobs, yes, but there are also lots of Java
programmers. I'm a Python dev (now), and while I have to search a little
longer for jobs, I still get good pay, since I'm also rarer.

~~~
sgt101
I think that what is important is how easy code is to maintain and how easy
it is to move from 60 modules to 6000 modules (and beyond). Running a big code
base in PHP is very, very difficult (I have done this; it was not a good
experience); running a big code base in Java is better (I have done this too;
it wasn't that great, but some of the information hiding and abstractions in
Java seemed to help partition the issues from one another). I have never run a
large code base over a number of years in Python; I expect it's rather like
doing things in Java. I feel strongly that if I had to run a large Julia code
base (in about 2 years, when it goes gold) then things might be much improved
and much less make-work might arise.

~~~
tanlermin
What do you mean by "goes gold"? Like 1.0?

~~~
sgt101
Yeah - when they (the devs) underwrite that they believe their original
intent is fulfilled - 1.0

