

Ask HN: When was the last really transformational idea in programming languages? - gruseom

This question came up in another thread (http://news.ycombinator.com/item?id=470838) and got me thinking. Smalltalk dates from the early 70s; Lisp, APL, and Forth are all earlier. When was the last innovation that can truly be called fundamental?

We're not talking language features here. Obviously a great deal of refinement and elaboration has taken place since 1971. To qualify, an idea has to be deeper than that. It needs to have the paradigmatic quality that the above do. (Edit: I'm not looking for historical arguments so much as asking what ideas people _feel_ have this quality.)

The only other candidate I can think of off the top of my head is Backus' 1977 work on FP, which certainly struck me as fundamental when I read it, though now I'm not so sure.

Others?
======
dfranke
The ability to securely load and execute arbitrary untrusted code dynamically,
within the same address space as trusted code, à la the Java bytecode verifier.
I'm not certain whether the creators of Java invented this idea, but they were
certainly the first to popularize it.

------
pg
When was any? I find these sorts of things crumble in your hands, because as
soon as you pick a candidate you find yourself thinking "but that was just a
variant of such-and-such earlier idea."

~~~
sctb
I think that such transformational ideas, although possibly variations on
previous ideas as you say, can have a certain distinct paradigmatic quality of
the kind the OP is referring to. I would offer up Miranda (1985) as an example:
non-strict, purely functional.
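
The flavor is easy to show in Haskell, Miranda's direct descendant. A minimal
sketch of the non-strict, purely functional combination:

    -- Non-strictness: an infinite list is fine, as long as we only
    -- ever demand a finite prefix of it.
    nats :: [Integer]
    nats = [0 ..]

    -- Purity: same arguments, same result; there is no way to
    -- sneak in a side effect.
    squares :: [Integer]
    squares = map (^ 2) nats

    main :: IO ()
    main = print (take 5 squares)  -- [0,1,4,9,16]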

~~~
sctb
It appears that Turner has languages in this paradigm dating back to 1972 with
SASL. Are there others?

------
apu
Alan Kay has been asking this question for a long time now. His "Viewpoints
Research Institute" is doing some interesting work in this direction.

<http://vpri.org/>

One of their projects is to see if they can build a complete software stack --
from the OS through the GUI and networking/graphics libraries to end-user
software -- all in 20,000 lines of code total.

They have already made some major progress in the past few years. I highly
recommend reading (or skimming) their progress report from last year. In
addition to being quite impressive, it also has several cool ideas that I had
not seen directly before:

<http://www.vpri.org/pdf/tr2007008_steps.pdf>

~~~
david927
I really applaud their effort, and I'm a huge fan of Alan Kay, but I think
they're jumping the gun. They weren't required to change the stack. It's
helpful, sure, but I think they should have waited until they hit technology
beautiful enough to require such a change (as beautiful enough technology
will).

And focusing on code size is possibly misleading, as code size does not
exactly equal orthogonality.

~~~
apu
I have to disagree on all counts. In roughly reverse order:

 _focusing on code size is possibly misleading_

It's not the best metric, but it's close enough in this case. The whole
problem is that large commercial (and some open-source) systems are in the
_millions_ of lines of code. What is all that code doing?

 _code size does not exactly equal orthogonality_

One implies the other, at least in the way they've phrased the problem. Their
goal is to build _everything_ using only 20,000 lines of code. This means they
have a strict overall budget and therefore cannot afford any duplication
anywhere.

 _They weren't required to change the stack_

You can't go from millions of lines to 20,000 by making incremental changes.
The whole system has to be rebuilt from the ground up with the code budget in
mind.

 _they should have waited until they hit beautiful enough technology that
would require such a change_

They would be waiting forever. At some point, you have to dive in. Nothing
good ever gets built without several iterations. Combine this with the fact
that frameworks and languages are best built simultaneously with the
applications that will use them (so that the levels of abstraction are
correctly tuned), and I think their approach is perfect. By being forced to
consider all levels of the stack, they achieve all of these goals.

Incidentally, they're not hoping to get the system built in one go. They're
"building [at least] one to throw away", as Brooks said. They are building a
rough version of the system (over-budget on lines) to see where the problems
are. Then, they will use this first system to build the real thing.

At its heart, this project seems like an ideal way to do research into
computing systems: take on a daring project which will require many innovative
ideas -- some small, some large -- while making sure that you're always tied
to reality by concrete goals and the need to have some sort of working system
at all times.

~~~
david927
Thanks, Apu. I understood that these were their reasons and I certainly find
them valid; we're talking about an amazing group of people here.

What I was trying to say was: be brave. It wouldn't be about waiting forever.
Jumping in is important, but keep a wide perspective and be willing to throw
it all out, over and over again. What they're looking for isn't going to come
incrementally from code size. They've been given a golden chance here, and I
think they should consider being brave enough to lose sight of the shore -- to
go for broke. I think the Alan Kay of the 1970s would have done that.

------
davo11
Type inference as used in Haskell et al. is a fairly recent development --
late 80s/early 90s?

Category theory is another, perhaps -- it's from the 40s, but its application
to programming is newer -- 80s perhaps?
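
On the former, a minimal sketch in Haskell of what the inference buys you: no
annotation is written, and the compiler still finds the most general
polymorphic type.

    -- GHC infers the most general type by itself:
    --   swapPair :: (a, b) -> (b, a)
    swapPair (x, y) = (y, x)

    main :: IO ()
    main = do
      print (swapPair (1 :: Int, "one"))  -- ("one",1)
      print (swapPair (True, 'c'))        -- ('c',True)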

~~~
watmough
I suspect the Hindley-Milner type inference algorithm probably showed up
around the late 70s or early 80s, since ML had it, and I was taught Standard
ML in the '84-ish timeframe.

Of course there may be work predating even that.

------
nostrademons
Most really fundamental ideas are only recognizable in hindsight, after people
have built other ideas on top of them. If I remember the early/mid 80s
correctly, the hot language was BASIC because it came with most microcomputers
and would supposedly enable a new generation of hobbyist programmers. (Which
it did, but they grew up to program in C++, Python, and JavaScript, not
Basic.)

There are a lot of really interesting ideas going on in the programming
language research community right now: Subtext, Epigram, associated types,
Goo(ze) (which unfortunately seems to have been abandoned), JoCaml, STM, etc.
Unfortunately, it probably won't be possible to judge the worth of these ideas
for another 20 years.
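
Of those, STM at least can already be played with in GHC. A minimal sketch
using Control.Concurrent.STM:

    import Control.Concurrent.STM

    -- Move money between accounts atomically: no locks, and the
    -- transaction retries automatically if another thread interferes.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      fromBal <- readTVar from
      toBal   <- readTVar to
      writeTVar from (fromBal - amount)
      writeTVar to   (toBal + amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)
      readTVarIO a >>= print  -- 70
      readTVarIO b >>= print  -- 30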

~~~
cubix
There is a demo video of Subtext here:
<http://www.subtextual.org/subtext2.html>. It looks like it would be fun to
play with. Too bad we can't download it yet.

~~~
MaysonL
You can download the code for that demo: <http://subtextual.org/subtext2.zip>

------
jlouis
Operational semantics has transformed language theory quite a lot, in my
opinion. It has set a new precedent for precision in describing programming
languages. I would hope that more new languages begin to pick that idea up,
and then use formal, machine-verified methods for verifying their meta-theory.

------
shailesh
The question reminds me of the notion of processes and channels in Occam, with
constructs for parallelism based on Tony Hoare's CSP theory.
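
For flavor, a rough sketch of the channel idea -- not Occam itself, but GHC's
Control.Concurrent standing in for CHAN and PAR:

    import Control.Concurrent (forkIO, newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM_)

    -- A producer process and a consumer process communicating
    -- over a typed channel.
    main :: IO ()
    main = do
      chan <- newChan
      _ <- forkIO (forM_ [1 .. 5 :: Int] (writeChan chan))  -- producer
      replicateM_ 5 (readChan chan >>= print)               -- consumer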

~~~
jacquesm
It's a good thing I read the thread before commenting because that was
_exactly_ what I was going to write!

Naturally you've got my vote.

Let me add a bit of history to reduce the 'me too' content: there was an
interesting hardware development at the same time which was specifically
tailored to these channels, the 'transputer' by INMOS.

The whole upshot of this idea was that processors with a bit of support
structure would live on small DIMM-like packages that you could stick into a
base platform to create a mini cluster on a card.

I think this idea was well ahead of its time, and that eventually we'll see
something very much like it as our future hardware architecture. After all,
the current trends towards clustering and miniaturization both point in that
direction.

~~~
halo
You might be interested in XMOS (<https://www.xmos.com/>), a start-up run by
David May, formerly the architect of the Transputer and the designer of Occam
at INMOS, which is creating new parallel processors based around a parallel
variant of C called XC.

------
njharman
I'm not a language connoisseur, so I don't know if this meets your
paradigmatic-quality test, and it's kind of old (early 80s), but: significant
indentation, as popularized but not invented by Python.
<http://python-history.blogspot.com/2009/02/early-language-design-and-development.html>

I believe it is a large factor in why Python has gained, and is gaining,
mindshare in technical but non-programmer fields: education, science. If
that's true and it continues, it seems transformational to me.

------
tlb
One place where big new ideas are badly needed is in robotics. No existing
language is a good match for programming continuous, dynamic movement. I've
been working on a bunch of techniques to make it easier, but they're not ready
for general use yet.

------
tome
Monadic models of computation, as in Haskell etc.?
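
A minimal sketch of why the idea feels fundamental: the same do-notation
covers failure, state, I/O, and more, because each is just a monad. Here
Maybe models a computation that can fail at any step:

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = do
      a <- safeDiv 10 2
      b <- safeDiv a 0   -- fails here, so the whole computation is Nothing
      return (a + b)

    main :: IO ()
    main = print calc  -- Nothing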

------
watmough
Practical implementations of languages based on lambda calculus and graph
reduction.

KRC, SASL, Miranda. Simon Peyton Jones was a key driver of some of this work,
and obviously he's still a major force behind Haskell.
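
The practical payoff of graph reduction is sharing: a named expression is a
single node in the program graph, so it is reduced at most once. A tiny
illustration, observing evaluation with GHC's Debug.Trace (GHC is not a naive
graph reducer, but the sharing behaviour is the same):

    import Debug.Trace (trace)

    main :: IO ()
    main =
      let x = trace "reduced once" (2 + 2 :: Int)
      in print (x + x)  -- the trace message appears a single time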

------
david927
I've been working on something for a while that I think could represent a
fundamental shift in constructing software. It's essentially an environment
that takes collections of nodes and assembles them into self-defined
constructs. Since there's nothing more than these simple key-value nodes,
anything that is created is a matter of cloning an existing node, removing
one, or setting the node's state. There's no textual syntax besides operators
in expressions. I haven't published the results yet, but if you're interested
in finding out more, feel free to email me.

------
DaniFong
I would say that programming aesthetics really do count for something. The
idea has influenced Python, Mathematica, Arc, Fortress -- and the list goes
on. Yet it's fairly new.

------
stuki
Don't know how transformational it has been yet, but I feel the idea that all
side-effecting code needs to be explicitly marked by the programmer, à la
Haskell, will one day be seen as a watershed.

We're only at the very early stages of the parallelisation of hardware, but as
that trend intensifies, I can't imagine compiling efficient code without the
compiler knowing explicitly which code might have side effects.
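
A minimal Haskell sketch of what that marking looks like:

    -- Pure: the type promises no side effects, so a compiler (or a
    -- parallel runtime) is free to evaluate this in any order.
    double :: Int -> Int
    double x = x * 2

    -- Side-effecting: the IO in the type marks code whose ordering
    -- matters and which can't be freely duplicated or reordered.
    doubleAndLog :: Int -> IO Int
    doubleAndLog x = do
      putStrLn ("doubling " ++ show x)
      return (x * 2)

    main :: IO ()
    main = do
      print (double 21)
      result <- doubleAndLog 21
      print result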

------
cpr
What's funny is that even the latest trace-compiling (V8, TraceMonkey) and
JITting (lots of others) implementation technologies for dynamic languages
date from the early 80s (the Deutsch-Schiffman JITting Smalltalk compiler,
specifically).

Though there was also a fair bit of Self optimization work going on around the
same time -- I don't know if Deutsch and Schiffman's work was the very first
of its kind.

------
wheels
Two things that I'd probably call just features of OO languages, but seem
significant:

\- Type-safe generic programming

\- Introspection

~~~
GeoJawDguJin
Type-safe generics have been in ML from the get-go (1970s), from what I
remember. People like to talk about how type inference is useful, but it's
really the generic polymorphism that makes it shine.
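
A minimal sketch of the kind of polymorphism I mean, in Haskell (which
inherits it from ML):

    -- One definition, type-checked once, works for every element
    -- type: no casts, no code duplication.
    pairUp :: [a] -> [(a, a)]
    pairUp (x:y:rest) = (x, y) : pairUp rest
    pairUp _          = []

    main :: IO ()
    main = do
      print (pairUp [1, 2, 3, 4 :: Int])  -- [(1,2),(3,4)]
      print (pairUp "abcdef")             -- [('a','b'),('c','d'),('e','f')]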

I was hacking together an Arc interpreter in Haskell the other day, and after
refactoring some of the hairier monadic code, I was horrified to see that the
type signature for a particular function was twice as long as the function
body. And that's _with_ the compiler figuring it out for me... I don't know
what I'd have done if it was C++ instead.

~~~
nostrademons
It's really amusing (in an oh-my-God-I'm-fucked way) to try and debug type
errors in Happy-generated parsers. There have been times when the _type_ of
the function (not even the whole error message) has been a page long.

C++ STL code can be similarly (un)fun.

~~~
wheels
Intel's C++ compiler produces much more readable error messages, I find.
Though, almost perversely, I've gotten used enough to GCC's cryptic template
error output that I'm more comfortable with it.

------
davo11
What about JavaScript's dynamic object model, where objects can be changed at
run time? I don't think this was possible earlier on -- I can't remember
whether it was possible in Smalltalk or Self.

The idea of the DOM?

Both are mid-90s.

~~~
BigZaphod
JavaScript's design owes a lot to Self. Modifying an object's slots at runtime
was pretty much the whole point of Self, and what led to the concept of a
prototype language. (Well, technically, I think Self was simply an
implementation of the already-existing prototype-language idea... but is an
idea actually useful before there's an implementation of it to test theories
with? Chicken and egg... :))

The DOM (as in Document Object Model defined by W3C) has nothing to do with
Javascript - it's just an API originally designed for accessing the various
parts of HTML. It was set up so that it could be implemented for almost any
language. In fact, had it actually used some of the more advanced (Self-
inspired) features of Javascript within the definition of the DOM, the DOM may
not have been such a pain in the ass. :)

What Javascript did manage to do that was very important is that it ended up
becoming not only the most installed programming language of all time, but
unlike BASIC, it's actually a pretty damn good language under the covers. So
if anything, the fundamental innovation of Javascript within every browser has
basically yielded another BASIC-like inspiration to a whole new generation of
programmers. Except this time, their intro language was much more abstractly
powerful.

------
gaerfield
At some point, fundamental ideas just evolve or merge. Democracy is one
example of this, just from another domain. A future improvement would be
massive parallelism, and languages would evolve around it.

------
cchooper
How about regular expressions as an essential part of a modern,
general-purpose language (rather than a separate mini-language)?

------
jderick
Model checking.

------
vlisivka
The Inversion of Control principle and Dependency Injection -- 2002-2005. That
is the single possible answer.

~~~
apage43
Is there a difference between IoC and callbacks? In any case, I don't think it
can be said to be _quite_ that recent.
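
A sketch of the comparison in Haskell, with hypothetical names just to make it
concrete -- a callback injects one behaviour, and DI scales that up to a
record of them:

    -- A callback: a single behaviour injected as a function argument.
    withRetry :: Int -> IO Bool -> IO Bool
    withRetry 0 _      = return False
    withRetry n action = do
      ok <- action
      if ok then return True else withRetry (n - 1) action

    -- "Dependency injection" is the same idea scaled up: the caller
    -- supplies a whole record of operations instead of one function.
    newtype Logger = Logger { logMsg :: String -> IO () }

    runJob :: Logger -> IO ()
    runJob deps = do
      logMsg deps "starting job"
      _ <- withRetry 3 (return True)
      logMsg deps "done"

    main :: IO ()
    main = runJob (Logger { logMsg = putStrLn })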

------
jimfl
XSLT. Hahahahahaha!

