
Research in programming languages - ltratt
http://tagide.com/blog/2012/03/research-in-programming-languages/
======
raganwald
Alan Kay’s observation is that any time a field grows faster than education,
it becomes a pop culture. The “problem” with PL is exactly that: The use of
programming languages grows faster than education about programming languages,
so it’s a pop culture. It’s a lot like music, really. Is there research into
music? Absolutely. Is there innovation in music coming out of universities?
Absolutely. Does this have any effect on the top ten iTunes singles? Nope, or
at least, not directly.

Music (even “classical” music) is a pop culture, just like programming. Music
is also a science with a body of knowledge and well-formed theories. But it’s
a pop culture. Just like programming languages.

~~~
swannodette
Of course it's rare that Alan Kay says something not worth revisiting, and
bringing up pop music kind of gives me a different take on this quote. It's no
secret that pop music (aka folk music) has long informed serious music which
has long informed folk music which has long informed serious music ...

I think the current state of programming languages clearly reflects this
pattern. I'm happy that the mainstream is becoming interested in something
other than the strange variants of Smalltalk OOP we've been using for 30 some
years.

In fact there seems to be a real grassroots PL renaissance happening these
days :)

~~~
puredanger
I'm looking forward to Richard Gabriel's keynote at Clojure/West
(<http://clojurewest.org/sessions#gabriel>). I think it will delve directly
into some of this. Fortunately, it will be recorded and released.

------
pcwalton
This is something we talk about all the time here at Mozilla Research.
Languages do not become popular primarily because of technical innovation;
they become popular because they were in the right place at the right time.
(You might draw an analogy to startups here.) Often the designers of the
popular languages didn't have much of a background in programming languages
research, so they didn't break new ground. (That's not to say that the current
languages aren't _good_, though. On the contrary, Python, Ruby, Perl, etc.
are all awesome languages! They just weren't particularly innovative from a PL
design standpoint.)

Given that, what is the place for PL research in industry? There are two main
ways that we can see for PL research to make a practical difference:

(1) Apply PL research to new features of existing popular languages, and push
for those features to be standardized if they're successful. This is what
we're doing with JavaScript with, for example, the PJs project (parallel
extensions for JavaScript). It's always trickier to fit new research into
paradigms of existing languages, but if it can be done it's very rewarding.
This has been done with Java in the past, for example; generics and
flow-sensitive initialization checking are examples of this.

(2) Create new languages designed for a specific, concrete high-value project.
This is what we're doing with Rust -- Mozilla is investing in Rust
specifically to be the language that we write a new parallel layout engine in.
This constraint helps us focus on keeping the language usable, practical, and
feature-rich. Features like the uniqueness typing and (in the future) bounded
task lifetimes that allow us to avoid concurrent GC are driven by the pain
points that we struggle with in Gecko.

~~~
raganwald
I can’t remember the exact quotation, but it goes along the lines of, “ _Every
successful language began as the scripting language for something popular._ ”
(Anyone able to provide the correct quote?)

If that’s the case, “The right place at the right time” is when something new
is taking off, like C for Unix, Ruby for Rails, Objective-C for iOS, or
recently JavaScript for Node.

~~~
chancho
JavaScript for _Node_? Isn't that a bit like saying "Java for Minecraft"? I
can think of at least one popular use of JavaScript that predates Node.

------
carterschonwald
This article starts off with some interesting albeit vague points, and then
meanders into anecdote and unsubstantiated claims. To wit: there seems to be
a 20-year lag between a PL idea first being explored in an academic context
and its widespread use in "mainstream" languages. Parametric polymorphism and
first-class functions have only been integrated into languages like C# and
Java in the past half decade or so. It is normal and typical for ideas to
take time to spread! (Whether or not that's a good thing is a whole separate
discussion.) Maybe I'm missing the point of the article. But science is
science because it deepens our understanding, not because it will be used by
some engineers in a business next week. Arguing for the superiority of
anything by dint of its ubiquity/popularity is about as reasonable and healthy
as using radium/thorium-infused toothpaste twice a day. (So, not particularly
healthy.)
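A quick Python sketch of the two ideas named above, both long predating their
arrival in C# and Java (the function here is invented purely for
illustration):

```python
# Parametric polymorphism: apply_twice works uniformly at any type T,
# and it takes a first-class function as its argument.
from typing import Callable, TypeVar

T = TypeVar("T")

def apply_twice(f: Callable[[T], T], x: T) -> T:
    return f(f(x))

print(apply_twice(lambda n: n + 1, 0))   # T = int
print(apply_twice(str.upper, "ml"))      # T = str
```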

------
chwahoo
It's also worth noting that most of PL research does not focus on simply
designing new languages. For work that does propose new language
features/constructs, it is typically done within the context of a small
lambda-calculus/ML-style language to allow the authors to easily explain the
implications of and prove properties about the feature. If the authors do take
the next step, they'll often add the feature to an existing language
(Java/C/Scheme) rather than design a new one from scratch. It's up to
practitioners to take the most useful of these "idea-nuggets" and include them
in new languages. Most academic researchers view their job as generating these
new nuggets, not building and supporting tools based on them for wider
consumption.
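As a toy illustration of that workflow, here is a miniature ML-style
expression language (tuples as syntax) with a type checker that doubles as a
bug finder; the language and checker are entirely invented for this sketch:

```python
# A miniature expression language: ints, bools, ("+", e1, e2),
# and ("if", cond, then, else). The checker rejects ill-typed terms.
def typecheck(expr):
    """Return 'int' or 'bool', or raise TypeError for ill-typed terms."""
    if isinstance(expr, bool):      # check bool before int: bool is a subtype
        return "bool"
    if isinstance(expr, int):
        return "int"
    op = expr[0]
    if op == "+":
        if typecheck(expr[1]) == typecheck(expr[2]) == "int":
            return "int"
        raise TypeError("'+' expects two ints")
    if op == "if":
        if typecheck(expr[1]) != "bool":
            raise TypeError("condition must be bool")
        t1, t2 = typecheck(expr[2]), typecheck(expr[3])
        if t1 != t2:
            raise TypeError("branches must have the same type")
        return t1
    raise TypeError(f"unknown expression: {expr!r}")

print(typecheck(("if", True, ("+", 1, 2), 0)))   # a well-typed term
```

Proving properties (e.g. that well-typed terms don't get stuck) stays
tractable precisely because the language is this small.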

Work on new type systems is a big part of the field; however, these type
systems aren't always intended to be deployed as part of languages. Rather,
they can often be viewed as bug-finding program analyses. Indeed, program
analysis (including type inference/checking) is a much more common research
topic than "a new language for X".

I think many in the field agree that more work should be done on language
usability, although it is a hard topic to research. The PLATEAU workshops (on
the Evaluation and Usability of Programming Languages and Tools) were efforts
in this direction. The Software Engineering field is much more focused on
usability and would be the most likely publication target for PL usability
research. (The top PL conferences focus on theoretical concerns (POPL, ICFP)
and implementation/systems (PLDI, ASPLOS)). Although usability studies are
rare, the PL field has gotten tougher about requiring strong arguments about
the usefulness of research: the evaluation and motivation sections of papers
really need to be convincing.

------
swannodette
Some missing context: this post is by Cristina Lopes, one of the co-inventors
of Aspect-Oriented Programming. For those who don't know, Gregor Kiczales and
others worked on the CLOS MOP, culminating in their book The Art of the
Metaobject Protocol. I still think this is the only success story I'm aware of
where multiple incompatible programming language dialects were unified under a
protocol malleable enough to express them all. This is William Cook's point
about the flexibility of OOP taken to the extreme.

When Kiczales and others realized the Common Lisp community wasn't going
anywhere, they took their ideas and created AspectJ.

Anyways, her blog seems like a lively read in general - I found this blog post
pretty entertaining and I agree wholeheartedly -
[http://tagide.com/blog/2011/05/programming-is-math-
apparentl...](http://tagide.com/blog/2011/05/programming-is-math-apparently/)

------
thurn
I think academic languages don't succeed because the incentives aren't
aligned. A mass-market language has very different requirements from a
research language: performance, similarity to existing languages,
compatibility with existing code, good marketing. Research languages need to
be novel, to have genuinely new ideas. They don't have to worry about how hard
it will be to teach a million Java programmers to use this thing.

Especially at big companies, we're very risk-averse. We know you can build
high-scale applications in Java and C++ because it's been done. You can
probably build a high-scale Haskell application too, but would you bet
millions of dollars on it?

~~~
larsberg
> academic languages don't succeed because the incentives aren't aligned

A much harder problem is that compilers are shockingly time-consuming to fully
implement and test, much less to integrate with a full set of libraries and
tools that you need to use them in practice. Even if you have a new language
design, implementing all of the "basic" optimizations to bring your compiler
from hopeless to merely embarrassing requires person-years of heroic effort,
none of which results in publications or other recognition required to keep
your NSF funding coming (they're "basic").

For example, it has taken us since 2007 to get Manticore
(<http://manticore.cs.uchicago.edu/> ) to a point where our sequential
performance is within a factor of 2-4 of C (depending on the benchmark), and
closing that last bit is probably going to take another couple of years unless
we have some magical windfall of stunning undergraduate and graduate student
candidates. Further, over the _entire_ lifetime of the project, I doubt that
we will be able to put in even half the people-hours that I saw committed to
improving the template error messages in Visual C++ during my first two years
working at MSFT.

That said, we've been able to do some truly great things with scheduling
computations on multicores, building a GC that works amazingly on NUMA
computers, actually getting real speedups on > 36 core machines, etc. But I
wouldn't expect to see any of the language features or implementation tricks
in mainstream languages for many years. That time lag is just sort of the
nature of things in PL research.

~~~
haberman
Doesn't a modular compiler framework like LLVM significantly reduce this
barrier-to-entry?

~~~
larsberg
Not significantly. If you check out the sources for most functional language
compilers (GHC, any of the ML family, Scheme, etc.), code generation and
micro-optimizations make up a tiny fraction of the code and developer time
investment. LLVM is a fabulous project, but many of the optimizations that it
performs are already handled at a higher level of the compiler (loop
unrolling, inlining, etc.) in a type-preserving functional language. Richer
type information (particularly for algebraic datatypes) enables things like
loop unrolling over recursive data structures, which you really can't do
anymore once it's a bunch of typecasts and tagged union discriminations.

That said, we're looking to move over to it because our old code generation
library (MLRISC) is beginning to show its age, and porting to LLVM is probably
going to be about as much work as fixing the spill code bug we recently hit
and exposing more SSE instructions. Most of the work will probably be in
porting our calling convention, which does not resemble C's in any way. Like
many frameworks, it's a modular compiler framework for building C compilers,
not really a modular compiler framework for all types of compilers.

That's not necessarily a bad thing; many people have tried and failed to make
more general ones. I'd rather have to do work to shoehorn our compiler into
something that works and is widely used in industry than rely on something
that's an easier fit but only likely to be around as long as the group is
still publishing papers on it.

~~~
sanxiyn
> Most of the work will probably be in porting our calling convention, which
> does not resemble C's in any way. Like many frameworks, it's a modular
> compiler framework for building C compilers, not really a modular compiler
> framework for all types of compilers.

GHC folks succeeded in including GHC calling convention in LLVM, so there is
hope.

[http://blog.llvm.org/2010/05/glasgow-haskell-compiler-and-
ll...](http://blog.llvm.org/2010/05/glasgow-haskell-compiler-and-llvm.html)

~~~
larsberg
Yes, we're familiar with that work. One technical bit of interest is that we
don't use the C stack at all (we have heap-allocated continuation records), so
we're curious how that will play out with the calling convention even beyond
the issue with registers.

That said, we're still hopeful that we can make things work, though we're not
optimistic that it will come about without some significant tweaking.

------
evmar
In linguistics, the study of human languages, a big deal is made about how
while there are supposed rules about how languages work ("you must never end a
sentence with a preposition"), the real object of study is how humans actually
_use_ language, which is frequently pretty far removed from how people even
self-report their behavior.

Programming languages as a subset of computer science are a purely
mathematical thing, where we can use Turing's ideas from the 1930s today to
inform type systems. But they're used by humans and humans are fuzzy; they
choose "objectively bad" languages like PHP. That isn't to say that
science no longer applies -- it just means whatever metric we're using to
judge PHP as bad is not the metric that causes a language to succeed. That is
still an area worthy of research.

(Not a direct response to the article, sorry. Got a bit carried away.)

~~~
tikhonj
This is just a wider reflection of a very common pattern: success is only
vaguely correlated with quality. If anything, success depends heavily on
qualities of the environment rather than of the thing in question, so there is
probably no way to just look at something in a vacuum and decide whether it
will succeed. Really, the metrics aren't necessarily at fault; the "market"
is: people choose an inferior product for whatever reason and then stick to
it.

You can see a division like this in many other fields, particularly the arts
and literature--most popular literature isn't "great" and most "great"
literature isn't (as) popular. So the natural parallel is that PHP, Python,
etc. are like the thrillers at the top of the best-sellers list and Scheme
and Haskell are like what you would read in an English class.

And really, this makes sense--whenever anybody talks about the quality of a
programming language, they are talking about whether they would use it
_themselves_ rather than whether the public at large would use it. So it is
completely reasonable to have a "lower quality" language be more popular than
a "higher quality" one.

Incidentally, while I talk about high quality and low quality, I do not mean
to denigrate popular languages. After all, sometimes a thriller is all I want
on a plane ride! And it could be a perfectly fine book indeed. But that does
not mean it's a better book than _Ulysses_.

------
mkn
I think the entire discussion would be helped by the simple realization that
programming is not really all that glamorous nor scientific. It seems like a
lot of programmers have math envy, but programming is much more like managing
an office staffed with savants than it is like discovering a proof; you tell
the workers in the office what to do, using very specific instructions because
they can't figure out what you mean, no matter how blindingly obvious it is to
you, even though any one of them can add two numbers together in a billionth
of a second.

Given that, we can probably look to progress in the "science" of management to
get a feel for what progress in the "science" of language design is going to
look like. That is to say, we probably can't expect anything at all in the way
of progress. It's funny that there's a parallel between the conclusions in
William Whyte's "The Organization Man" and "progress" in language design.
Whyte concludes, one, that management in the abstract doesn't actually exist
and, two, that "management" taken as organizational oppressiveness and
intrusion is actually a parasitic load on people trying to get work done, and
ought to be minimized. Look at the success of dynamically typed scripting
languages like Perl, JS, Ruby, and Python: the fewer strictures they impose
on the data, the more work you can get done!

Researchers are just going to have to get over math and physics envy. The
"truths" they discover are very unlikely to be anything like nearly as
universal as physical truths. Structured programming, OOP, AOP, functional
programming, or whatever else aren't ever going to fit into a proposition like
"If we adopt ____, we find that blah," where blah is any kind of contingent
claim relating to bugs or productivity. All we'll ever get are notions that
whatever paradigm worked well in one context and poorly in another. Again,
this is parallel to management. You can't manage programmers like auto workers
like farm workers like service workers. Outside of algorithmic analysis,
computer "science" has as little to say about programmer efficiency as
management science has to say about how many weeks of parental leave you
should give your employees.

------
archgoon
An aside: Python doesn't really fit in with Ruby, PHP, and JavaScript.
Although Python was not created for research purposes (it was more of a
server-glue language), it was influenced by Guido's experience working on ABC,
a research language designed for teaching children. So Python is much more a
result of research than the others.

That being said, the ironic part is that the relevant research was not type
theory, but rather "How can we teach programming to children?"

------
KaeseEs
It would seem that, even though these languages weren't made from soup to nuts
in academia, they mostly incorporated ideas therefrom; for instance, Ruby and
(eventually) Python got closures that had been pioneered by Scheme. There's
still plenty of valuable work to be done trying out ideas that will eventually
be incorporated and popularized elsewhere.
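The closure case is concrete enough to show: an idea worked out in Scheme
decades ago now reads as everyday Python (the function names here are
invented for illustration):

```python
# A closure: make_counter's local variable `count` outlives the call,
# captured by the inner function -- the construct Scheme pioneered.
def make_counter():
    count = 0
    def increment():
        nonlocal count  # rebind the captured variable, not a global
        count += 1
        return count
    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
```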

And of course, there are still languages like Haskell that have carved out a
nice niche, despite their recent vintage and academic roots.

~~~
chubot
Yeah, in Python design discussions I see academic papers cited with some
frequency. So all's not lost -- the research is doing something. There is a
lot of useless PL research but that's just the nature of exploring new ideas
(though, not to ignore real problems with academia, some of which the field of
PL research is particularly afflicted with).

Related (long but good) article: [http://unqualified-
reservations.blogspot.com/2007/08/whats-w...](http://unqualified-
reservations.blogspot.com/2007/08/whats-wrong-with-cs-research.html)

"No field has been more infested by irrelevant formalisms than that of
programming languages - also known as "PL research." So when I say Guy Steele
isn't a PL researcher, what I mean is that he's not a bureaucrat."

I read basically all of the Lua papers and I think they're a great model for
how to do programming language research. They use a real language for a
testbed of their ideas. For example, there was a great paper that went over
the history of coroutines and different options for designing and implementing
them. There was also a great one about PEGs and a formal treatment of the
implementation of the LPeg parsing VM. Somehow I feel that this kind of
research wouldn't be respected at big name American CS universities, which is
a shame, because it's exceedingly valuable.

~~~
larsberg
That linked article is from 2007. Interestingly, the stuff that the author
mentions as useless (proof carrying code) is now basically The Way that
developers handle code that has security, reliability, or robustness
requirements.

\- For example, Google Native Client uses a formal verifier to prove the
safety of binaries (<http://sos.cse.lehigh.edu/gonative/index.html> )

\- Microsoft has long used a formal driver verifier to prove liveness and
protocol properties associated with device drivers

One amazing piece of work going on right now in compilers is by Xavier Leroy,
who cares a lot about formally proving that your compiler and its
optimizations respect the semantics of the language (i.e. that its execution
on hardware is within the range of possible executions specified by the
original input language). Without the decades of work on formalization,
semantics work, theorem provers, etc. the community wouldn't have a chance of
tackling those problems today.

Certainly, if you read the proceedings of POPL or even some ICFP papers, you
might wonder where it's going. And even the authors might admit to the same.
But until you've fully explored the issues that come up when you try to merge
(shameless example from my own work) effect types, region types, concurrency,
parallelism, and transactions, it's hard to know what sets of language
features can be safely combined in a way that programmers can modularly reason
about and a toolchain can implement efficiently and correctly.

~~~
chubot
I don't think what Native Client does is the same as "proof carrying code".
From what I gather, if it used proof carrying code, then it would do some
verification on the receiving side at "compile time", and no sandbox for
runtime checks would be necessary.

My understanding is that Native Client works with a combination of a special
compiler toolchain (on the sending side) and runtime checks on the receiving
side.

[https://developers.google.com/native-
client/pepper16/overvie...](https://developers.google.com/native-
client/pepper16/overview#how-nacl-works)

Happy to see any corrections. Your link seems to show a project that is
related to NativeClient, but is not the core technology behind NativeClient.

EDIT: Also, I would be interested in details about the Microsoft driver thing,
but that doesn't seem related to proof carrying code either.

~~~
larsberg
You are correct --- PCC (AFAIK) has not come about in the sense of carrying
along a proof object that a formal verifier can check is both valid and
corresponds to the code payload.

As the other commenter pointed out, the SLAM tools are part of the Windows
Device Driver Development Kit. The last time I talked to the kit's dev manager
(~2003), they were talking about making it mandatory that you pass the formal
verification in order to have your driver signed by Microsoft. Since those
signatures are then verified at driver installation time, that feels very
close to it!

I have to confess I'm only familiar with the publications on Native Client and
not the actual product. From what I'd read, I understood that the verifier did
some basic static analysis to prove that all possible executions did not
violate certain properties. In that case, no proof object is required, as the
source code itself is the proof object. Assuming, of course, that they're
actually doing the stuff talked about in the papers and in practice don't just
"grep for dangerous instructions."

------
tikhonj
I'm taking a programming language class right now (not very impressive
credentials, I know :P) and I think its focus is telling: the primary goal is
not to design and implement a general-purpose compiler but rather to design
and implement different _DSLs_.

Now, this obviously involves understanding how compilers and general purpose
languages work, but it also involves some other skills and ideas both in
design and implementation. The biggest difference is, of course, in scope:
rather than thinking about languages good for _anything_, we think about very
narrow languages heavily optimized to do _one_ thing. These languages may
stand alone or they may be embedded in bigger languages, but each language
itself is distinct from other languages (including the "host" language).

These languages are also not always aimed at programmers--one of the examples
(a past final project) was a language aimed at _tailors_, of all people, to
help them work with patterns. Another language we looked at was designed for
musicians to combine different inputs into one output.
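As a rough sketch of what such a narrow language might look like when
embedded in a host language, here is a toy Python DSL in the spirit of the
musicians' language mentioned above; the `Track` class and its operator are
invented for illustration:

```python
# A toy embedded DSL: musical "inputs" combined with an overloaded operator,
# so the user's whole program is a single expression.
class Track:
    def __init__(self, name, parts=None):
        self.parts = parts or [name]

    def __add__(self, other):          # `+` layers two tracks together
        return Track(None, self.parts + other.parts)

    def __repr__(self):
        return " + ".join(self.parts)

# The "program" a musician writes:
mix = Track("guitar") + Track("bass") + Track("drums")
print(mix)  # guitar + bass + drums
```

The point is that the user manipulates domain objects (tracks), not
general-purpose programming constructs.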

These sort of languages _are_ programming languages and have the same ideas,
but they serve a different purpose. In a lot of ways, languages like this
replace _programs_ and _GUIs_, letting people work with text rather than
pointing and clicking. I think there are very many domains where using text is
preferable to a GUI--there is a reason I still use a CLI, after all--and this
is exactly the sort of thing we're looking into, except not necessarily for
programmers.

I've wandered a bit off topic, but I think these ideas are interesting. It's
another direction for PL research--focusing on _very_ narrow fields and
potentially non-programmers. Just something to think about.

------
archgoon
The author is entirely correct that evaluating the effectiveness of
programming languages on programmer productivity is damned hard. This is
largely because evaluating programmer productivity objectively in the first
place is damned hard.

<tongue in cheek> Perhaps developing useful metrics of productivity, rather
than strong AI, should have been the real Holy Grail of Computer Science.

~~~
_delirium
There's a longitudinal problem as well, in that most of the interesting
aspects of productivity improvements come over longer time scales, with large
projects and people who have already gotten up to speed on them. Those are
extremely difficult/expensive to do controlled studies with. Medicine does
them, but 5-year studies in medicine are quite expensive, logistically
complex, and have a lot of institutional support because it's considered so
important to run them. In HCI, user studies are typically of a shorter length,
like A/B-testing one UI paradigm versus another in 30-minute user sessions.
Applied to PLs, it's feasible to do user studies looking at learning curves,
but much less feasible to answer how Haskell compares to C++ on a large
project. Instead, like in economics, the best you can do often ends up being
to look for "natural experiments" where almost-comparable things happened in
different languages, and try to compare them.

A few software-engineering researchers have told me that that's one major
reason that recent "tools" type SE-research happens outside academia: if
someone in academia had invented git, it's not clear how they would design a
user study to evaluate it, especially within the constraints of, say, a PhD
thesis timescale/budget. The typical/simple study design is you recruit N
participants, randomly assign N/2 to your tool and N/2 to the control tool,
have them perform a task, and then try to show with p<0.05 that the group
using your tool did better than control. But in this case, the "perform a
task" step has to be non-trivial, and it tends not to be feasible to recruit
people to participate in a random study that involves them developing serious
software over several years, which would be the equivalent of the kinds of
randomized studies that are done with medical devices.
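The study design described above can be sketched concretely; all of the data
below (scores, effect size, group assignment) is invented, and a normal
approximation stands in for a proper t-test since Python's standard library
has no t-distribution CDF:

```python
# Sketch of the typical study: N participants, N/2 randomly assigned to the
# new tool and N/2 to control, then a two-sample test on task scores.
import random
from statistics import mean, stdev, NormalDist

random.seed(0)
N = 200
participants = list(range(N))
random.shuffle(participants)
tool_group = participants[: N // 2]
control_group = participants[N // 2:]

# Hypothetical task-completion scores; pretend the tool adds a small boost.
def score(boost):
    return random.gauss(50 + boost, 10)

tool_scores = [score(3) for _ in tool_group]
control_scores = [score(0) for _ in control_group]

# Welch-style statistic under a normal approximation (reasonable at this N).
se = (stdev(tool_scores) ** 2 / len(tool_scores)
      + stdev(control_scores) ** 2 / len(control_scores)) ** 0.5
z = (mean(tool_scores) - mean(control_scores)) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

The statistics are the easy part; the comment's point stands that the
"perform a task" step is what can't be made non-trivial on a thesis budget.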

I don't actually find the case-study-based approach particularly bad. Start
from cases that are awkward or error-prone to handle in a language (either
constructed or derived from data about real-world errors), and propose a
solution that captures the underlying computation more directly, or in a more
checkable way, etc. There are other areas that make progress in that manner;
for example, symbolic logic develops with a case-study and counter-example-
driven methodology, where someone will propose a case that either can't be
represented in Logic X, or at least can't easily be represented, or maybe
produces incorrect inferences when encoded in the obvious way, and this will
drive development of a Logic X'.

------
jashkenas

        > It appears that deep thoughts, consistency, rigor 
        > and all other things we value as scientists aren’t 
        > that important for mass adoption of programming languages.
    

Perhaps the premise is confused, and programming languages are at least as
much about comprehension and readability to humans as they are about
theoretical purity from a mathematical perspective -- I don't think it's very
surprising that many of the most popular programming languages of the last ten
years were designed by individual hackers. Fortunately, academia doesn't have
a monopoly on "deep thoughts".

~~~
swannodette

      > Fortunately, academia doesn't have a monopoly on "deep thoughts"
    

Are there many examples of language features / designs where expressive power
isn't derived from prior academic research?

I've used quite a few popular programming languages and they certainly feel
"useful" because of syntax, libraries, ease of running on Unix, etc. Most of
the "deep thoughts" in these programming languages can be easily found in
prior academic literature.

Glad to hear of some examples if you have any! :)

There's plenty of evidence that good engineering can be accomplished with
popular programming languages. But I think we're still a long ways off from
_beautiful_ engineering. Any way forward needs to elegantly intertwingle
Theory & Praxis.

~~~
jashkenas
Sure.

PHP's "deep thought" was that in the context of building dynamic web pages,
perhaps it made sense to embed a scripting language into HTML itself --
instead of calling out to an external program. There's an example of the
initial idea being responsible for much of PHP's enduring success.

And here's an anecdote that explains how it was (lack of) readability in
significant indentation that led to Python getting its colon:

    
    
        > In 1978, in a design session in a mansion in Jabłonna (Poland), 
        > Robert Dewar, Peter King, Jack Schwartz and Lambert were 
        > comparing various alternative proposed syntaxes for B, by 
        > comparing (buggy) bubble sort implementations written down in 
        > each alternative. Since they couldn't agree, Robert Dewar's wife 
        > was called from her room and asked for her opinion, like a 
        > modern-day Paris asked to compare the beauty of Hera, Athena, 
        > and Aphrodite. But after the first version was explained to her, 
        > she remarked: "You mean, in the line where it says: 'FOR i ... ', 
        > that it has to be done for the lines that follow; not just for 
        > that line?!" And here the scientists realized that the 
        > misunderstanding would have been avoided if there had been a colon 
        > at the end of that line.
    

(from <http://python-history.blogspot.com/>)
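The misunderstanding in the anecdote is easy to reconstruct in modern Python,
where the colon (together with indentation) signals that the following lines
all belong to the loop:

```python
# The colon at the end of the `for` line marks that the indented block
# below it -- not just one line -- is repeated on every iteration.
total = 0
for i in [1, 2, 3]:
    total += i
    print("added", i)
print("total:", total)  # total: 6
```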

~~~
swannodette
Perhaps for me "deep thought" means something particular which ends up having
wider ramifications. These just seem like anecdotes about known "deep
thoughts": the first shows the power of templating/macros, the second the
power of good syntax.

------
rogerbinns
What is needed is a programming language where you can get the same things
done but by writing less code. There will obviously be diminishing returns in
this. On one project I was able to use 10 lines of Python as an alternative to
400 lines of C, but it seems unlikely that those 10 lines could be replaced by
much less.

I do still find that I write far too much code that handles not the normal
flow but errors, unusual values, rare interactions, etc. Exceptions cover
some of that (except in Java, with its annoying checked-exceptions model).
But there is still a lot of code that falls between exceptions and what gets
executed most of the time. If only I could leave that rarely executed stuff
out.

There is also a massive sore spot with functionality split across multiple
processes/machines. Writing synchronous code is clear but has issues, while
writing asynchronous code reflects what is really happening but involves a lot
of babysitting. I do think the functional world has immutable values right as
a good approach (less locking to worry about, values can be transferred
between processes, easier to debug, etc.).

I look forward to the day when my programs look like this, all of one line
long:

    
    
        DWIM
    

(Stands for Do What I mean)

~~~
msutherl
Abstraction comes at the cost of reduced specificity.

~~~
rogerbinns
Make the normal things easy and the hard things possible. It has been a very
long time since I cared how memory was allocated (eg pools, mmap versus heap).
It has been even longer since I could determine how data was put on disk
(choosing which sectors).

Using Python I can say a list should be sorted and have no say over the
algorithm. If I use the STL I have to make a few decisions. Java appears to
have 7 different list implementations that I have to pick from.
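Concretely (a trivial sketch of the point, not from any particular project):

```python
# Python: one built-in call; the sorting algorithm (Timsort) is the
# library's concern, not mine.
words = ["pear", "apple", "banana"]
print(sorted(words))  # ['apple', 'banana', 'pear']

# Contrast with Java/STL, where you first choose among several containers
# (ArrayList, LinkedList, vector, deque, ...) before you can even sort.
```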

Working code that was quick to write is far more important than being very
specific about every operation. In most cases it is sufficient. With profiling
and real world usage you can get an idea of where more specificity is needed,
but in many cases even that can be dealt with automatically (JIT, type
inference, calling out to a different language). Even if you write new, more
specific code you can still use the existing code to test against.

------
ataggart
This reads as another argument about intentional systems versus emergent
systems. The author wants to intentionally build a language which is provably
"better". Instead we have an ecosystem where many languages (and libraries,
and frameworks, and IDEs, etc.) are created, each with varying attributes, and
of varying quality; some emerge as preferred for various applications, others
fall by the wayside. And then the cycle begins anew. The author seems to miss
the important lessons from the "market process" at work.

Perhaps languages are no longer at the right granularity to be "solved". This
again echoes the market process: too many variables and too many different
preferences lead would-be problem solvers to rely on the pretense of
knowledge.

Languages combine a host of disparate features, and sometimes simply combining
previously known features in novel ways is sufficient iterative improvement.
Clojure is one good example of this, where immutability, functional
programming, and STM were all known and used to varying degrees, but combining
them in a clever way allowed for something greater to emerge, particularly how
the first two allowed for a new form of the third. From this in turn emerged a
(new?) well-thought-out model of time, state, and identity.

In the end, perhaps languages are now really engineering problems, not the
science/math problems they used to be. Perhaps the academics should embrace
this. To continue the economic parallel, if one wishes to examine the
qualitative aspects of PLs, perhaps research should approach from the
historical perspective, asking why certain languages emerged to solve certain
problems.

------
shirro
There is probably some good research in programming languages out there though
every time someone goes looking they encounter the ACM paywall. Back to PHP I
guess.

------
andrewflnr
So ideally, academia would do the "innovation" and the practical people would
incorporate the innovative ideas into their "mashups of concepts that already
existed", right? But I guess the problem is that academia is not set up to
handle that kind of innovation, because it's hard to quantify the benefits.

There's obviously lots of room to improve in programming languages. It seems
the only question is getting people to do it.

~~~
Gryftr
This. It seems like new features are developed, but it takes decades for the
good ones to rise to the top and end up being implemented. And the academic
model of applications for grant funding works quite well for just this sort
of advancements, at least in other fields. Somebody who has the money and time
to put together a grant program targeting improvements in both programming
languages and, perhaps more importantly, real world user experiments in the
design of programming languages, could really make a difference. There is no
inherent reason that the decades-long lag in useful feature implementation
couldn't be reduced significantly, given a moderate source of funding, a good
journal community, and better research methodologies.

------
pnathan
My experience in academia suggests that this desire for pseudoscience isn't
confined to programming languages.

As much as the ideas of Science and Engineering are popular sacred cows in
"Computer Science", I think that the essence of the software world is much
closer to some sort of mathematical craftsman.

------
stewbrew
I think a PL built to explore novel concepts will hardly ever make it into the
mainstream. The languages of the 90s reused concepts that were developed
decades earlier, but put them into a form accessible to the masses. New
problems will arise as the way computers are used changes, and it will take
decades to filter out which approach is most suitable for practitioners; the
resulting languages will have little in common with the academic PLs whose
ideas they reuse. I thus find it funny
have different target domains but they are from different planets). From the
point of view of an academic PL researcher/teacher, that's somehow frustrating
of course.

------
praptak
It might be true that the last 30 years brought very few new ideas in
language design. But "no innovation" is something quite different from "no
improvement".

In my opinion the languages of the present improve by a) skillfully combining
the existing ideas and b) implementing the other 95% of things that are not
big ideas but are still essential to a language being good: a consistent and
broad standard library, streamlined syntax, core concepts playing well with
each other, and so on.

Both of the above are engineering rather than science, so the author seems
right in his views. Maybe when we reach the local optimum with the current set
of the fundamental ideas, we will see the need for new ones more clearly and
the necessity of fundamental research will reappear.

------
theaeolist
A language is defined by syntax and semantics, and there is nothing
intrinsically in there on which to make quantitative judgements. So
performance is not, and cannot be, a property of a language, but of a
compiler/interpreter together with a runtime/virtual machine.

------
aufreak3
It seems useful to address the question of _why_ people create new languages -
either DSLs or full languages. Would anyone know of work in such an area? The
answer to the "why" is not the reasons people _say_ they make a language. I
mean the question as a probe into the psychological urges to "linguate" (is
that a word?).

One possible kind of answer could be that, given that formal systems are
powerful tools for modelling some interesting corner of reality, the urge to
create a DSL could be related to an urge to understand a domain by creating an
automaton that can be mapped to aspects of that domain.

------
jon6
The second comment on that page says python owes its list comprehensions to
haskell. Is that true? Did Guido (or someone from python land) look at haskell
and say 'hm, list comprehensions are cool, let's put them in python!' or was it
more like python rediscovered list comprehensions and the design in both
haskell and python is just the obvious way to do it.

~~~
sunqiang
from <http://wiki.python.org/moin/PythonVsHaskell>

List Comprehension Syntax:

Python's list comprehension syntax is taken (with trivial keyword/symbol
modifications) directly from Haskell. The idea was just too good to pass up.
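For comparison (a standard textbook example, not from either language's docs), the two forms are nearly identical:

```python
# Haskell:  [x * x | x <- [0..9], even x]
# Python swaps the bar and arrow for 'for' and 'if':
evens_squared = [x * x for x in range(10) if x % 2 == 0]
print(evens_squared)  # [0, 4, 16, 36, 64]
```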

------
jimbokun
It took 50 years for popular programming languages to catch up to Lisp, and in
some ways we're still not there yet.

So the point of programming language research is to come up with ideas for the
practical programming languages of the next 50 years to steal.

~~~
pgbovine
ah, the old lisp argument again ;)

 _So the point of programming language research is to come up with ideas for
the practical programming languages of the next 50 years to steal._

More generally, one point of research is to come up with ideas for practical
technologists of the next 50 years to steal.

------
msutherl
[repost from blog] – This is really spot on. I would like to refer you to a
couple of things that come to mind that you might find useful for advancing
this line of thinking.

(1) I saw a talk by Jonathan Edwards that was very much along the lines of
what you wrote here: <http://alarmingdevelopment.org/?p=5>

(2) Second, Christopher Alexander’s early work on patterns in architecture and
urban design have been referenced quite a bit in computer science, but seldom
is his ‘magnum opus’, a four-book series on the ‘nature of order’, referenced.
These texts move far beyond the early work. You would do well to have a look
at the first book, which tries to establish an objective theory of design not
based on scientific principles:
[http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-
alias%3D...](http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-
alias%3Daps&field-keywords=the+nature+of+order&x=0&y=0)

(3) You might be interested to read some discussion on the history of music
programming languages. Max/MSP and Pd, both dataflow-oriented, offer what I
would estimate to be an order of magnitude of productivity gain for certain
tasks in building one-off multi-media systems. They’re a bit like a UNIX for
real-time multi-media + control signals. This essay reminded me a bit of the
anti-academic and organic approach that Miller Puckette took in building them
despite being trained as a mathematician and developing them in an academic
setting. This serves as a good lesson that successful software isn't
necessarily designed by having good principles, but rather by having the
proper environment: one with energy and a need.

Check out two papers in the Computer Music Journal where this is discussed:

    2002. Miller Puckette, “Max at Seventeen”. Computer Music Journal, 26(4)
    2002. Eric Lyon, “Dartmouth Symposium on the Future of Computer Music Software: A Panel Discussion”. Computer Music Journal, 26(4)

Generally, computer music is one of the more interesting fields to look at if
you’re interested in ascertaining the future of HCI, computer science and
psychological research, since from the beginning it has not been accorded the
luxury of forgoing certain constraints: everything must happen in real time,
data must be of a certain resolution (in time and ‘space’), and
non-tech-savvy practitioners from other fields (musicians) must be able to use
the tools as experts.

--

Oh, and I would add that if you are not familiar with Bill Buxton’s career, it
may prove interesting reading for you. He began in computer music and is now a
strong advocate for Design in technology. One insight that he often
emphasizes, which I don’t claim is his originally, is that new technologies
take 20-30 years to be adopted. According to this view, new ideas in software
design should expect to lie dormant for at least 20 years, echoing what @Ben
wrote above.

