Hacker News
Ask HN: When was the last really transformational idea in programming languages?
30 points by gruseom on Feb 7, 2009 | 50 comments
This question came up in another thread (http://news.ycombinator.com/item?id=470838) and got me thinking. Smalltalk dates from the early 70s; Lisp, APL, and Forth are all earlier. When was the last innovation that can truly be called fundamental?

We're not talking language features here. Obviously a great deal of refinement and elaboration has taken place since 1971. To qualify, an idea has to be deeper than that. It needs to have the paradigmatic quality that the above do. (Edit: I'm not looking for historical arguments so much as asking what ideas people feel have this quality.)

The only other candidate I can think of off the top of my head is Backus' 1977 work on FP, which certainly struck me as fundamental when I read it, though now I'm not so sure.


The ability to securely load and execute arbitrary untrusted code dynamically within the same address space as trusted code, a la the Java bytecode verifier. I'm not certain whether the creators of Java invented this idea, but they were certainly the first to popularize it.

When was any? I find these sorts of things crumble in your hands, because as soon as you pick a candidate you find yourself thinking "but that was just a variant of such-and-such earlier idea."

Thus does the creator of Arc manage to compress the entire career of James Burke into a single sentence.


Not that the long-form version isn't entertaining as well.

That was my favorite TV series of all when I was a kid.

I know what you mean - Smalltalk was Lisp + Simula and so on. Yet even if everything's a hybrid, some hybrids feel plainly derivative while others seem magically deep. In order to avoid tedious arguments about historical primacy, perhaps a better way to phrase the question is: what languages (or language ideas) feel most fundamental to people? I've given my list. What strikes me is how there isn't anything more recent on it. This may just be my ignorance.

Even more broadly, when was the last big idea in academia? Yes, they've mapped the human genome and all that, but that's just grunt work, not a big idea that led to new fruitful research. I don't know a lot about the different disciplines, but it seems like math is running out of steam, or so I've heard from a PhD student who switched to CS.

I think that such transformational ideas, although possibly variations on previous ideas like you say, can have a certain distinct paradigmatic quality that the OP is referring to. I would offer up Miranda (1985) as an example: non-strict, purely functional.

It appears that Turner has languages in this paradigm dating back to 1972 with SASL. Are there others?

I think you are right - all that we have now is ultimately derived from the structure of the underlying machine and that model has not undergone any fundamental change since the earliest iterations.

For a truly transformational step we need a new machine.

That's interesting. I'd like to see a machine that uses continuous rather than discrete logic. I think there would be less of an impedance mismatch with humans.

Alan Kay has been asking this question for a long time now. His "Viewpoints Research Institute" is doing some interesting work in this direction.


One of their projects is to see if they can build a complete software stack -- from the OS through the GUI and networking/graphics libraries to end-user software -- all in 20,000 lines of code total.

They have already made some major progress in the past few years. I highly recommend reading (or skimming) their progress report from last year. In addition to being quite impressive, it also has several cool ideas that I had not seen directly before:


I really applaud their effort, and I'm a huge fan of Alan Kay, but I think they're jumping the gun. They weren't required to change the stack. It's helpful, sure, but I think they should have waited until they hit technology beautiful enough to require such a change (as beautiful enough technology will).

And focusing on code size is possibly misleading, as code size does not exactly equal orthogonality.

I have to disagree on all counts. In roughly reverse order:

focusing on code size is possibly misleading

It's not the best metric, but it's close enough in this case. The whole problem is that large commercial (and some open-source) systems are in the millions of lines of code. What is all that code doing?

code size does not exactly equal orthogonality

One implies the other, at least in the way they've phrased the problem. Their goal is to build everything using only 20,000 lines of code. This means they have a strict overall budget and therefore cannot afford any duplication anywhere.

They weren't required to change the stack

You can't go from millions of lines to 20,000 by making incremental changes. The whole system has to be rebuilt from the ground up with the code budget in mind.

they should have waited until they hit beautiful enough technology that would require such a change

They would be waiting forever. At some point, you have to dive in. Nothing good ever gets built without several iterations. Combine this with the fact that frameworks and languages are best built simultaneously with the applications that will use them (so that the levels of abstraction are correctly tuned), and I think their approach is perfect. By being forced to consider all levels of the stack, they achieve all of these goals.

Incidentally, they're not hoping to get the system built in one go. They're "building [at least] one to throw away", as Brooks said. They are building a rough version of the system (over-budget on lines) to see where the problems are. Then, they will use this first system to build the real thing.

At its heart, this project seems like an ideal way to do research into computing systems. Take on a daring project which will require many innovative ideas -- some small, some large -- while making sure that you're always tied to reality by having concrete goals and some sort of working system at all times.

Thanks, Apu. I understood that these were their reasons and I certainly find them valid; we're talking about an amazing group of people here.

What I was trying to say was: be brave. It wouldn't be about waiting forever. Jumping in is important, but keep a wide perspective and be able to throw it all out, over and over again. What they're looking for isn't going to come incrementally from code size. They've been given a golden chance here, and I think they should consider being brave enough to lose sight of shore and go for broke. I think the Alan of the 1970s would have done that.

Yes he does. He recently asked the question, "Significant new inventions in computing since 1980", on Stack Overflow:


I like how he refuted most of the responses, usually stating that a given invention had already been invented at Xerox PARC in the 70s.

Type inferencing as used in Haskell et al. is a fairly recent development: late 80s/early 90s?

Category theory is another, perhaps: it's from the 40s, but its application to programming is new. 80s, perhaps?

I suspect the Hindley-Milner type inference algorithm probably showed up in the early 80s, since ML had it, and I was taught Standard ML in the '84-ish timeframe.

Of course there may be pre-ML work predating even that.

"Type inferencing as used in Haskell et al. is a fairly recent development: late 80s/early 90s?"

According to the Wikipedia page (http://en.wikipedia.org/wiki/Type_inference#Hindley.E2.80.93...), it dates back to 1978.


It seems like that might be the next transformational idea, since it hasn't transformed mainstream languages yet.
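To make the Hindley-Milner discussion concrete: the heart of the algorithm is unification of type terms. Below is a toy, illustrative sketch in Python (all names invented) of inference for a tiny lambda language. It is deliberately not Milner's real Algorithm W: there's no let-polymorphism, no occurs check, and nested types aren't fully resolved.

```python
# Toy sketch of unification-based type inference (illustrative only).
# Types are tuples: ("int",), ("var", n), or ("fun", arg_type, result_type).
import itertools

_fresh = itertools.count()

def fresh():
    """Make a fresh type variable."""
    return ("var", next(_fresh))

def prune(t, subst):
    """Follow substitution chains for type variables."""
    while t[0] == "var" and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Make two types equal by extending the substitution."""
    a, b = prune(a, subst), prune(b, subst)
    if a == b:
        return
    if a[0] == "var":
        subst[a] = b
    elif b[0] == "var":
        subst[b] = a
    elif a[0] == "fun" and b[0] == "fun":
        unify(a[1], b[1], subst)
        unify(a[2], b[2], subst)
    else:
        raise TypeError(f"cannot unify {a} and {b}")

def infer(expr, env, subst):
    """Infer the type of an expression: ("lit", n), ("ref", name),
    ("lam", name, body), or ("app", f, x)."""
    kind = expr[0]
    if kind == "lit":
        return ("int",)
    if kind == "ref":
        return env[expr[1]]
    if kind == "lam":
        arg = fresh()
        body = infer(expr[2], {**env, expr[1]: arg}, subst)
        return ("fun", arg, body)
    if kind == "app":
        f = infer(expr[1], env, subst)
        x = infer(expr[2], env, subst)
        result = fresh()
        unify(f, ("fun", x, result), subst)
        return prune(result, subst)

# Example: the identity function infers to (a -> a) with no annotations.
ident_type = infer(("lam", "x", ("ref", "x")), {}, {})
```

The key point, visible even in this toy: the programmer writes no type annotations at all, yet applications like `(lam x. x) 42` still get a concrete type.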

Most really fundamental ideas are only recognizable in hindsight, after people have built other ideas on top of them. If I remember the early/mid 80s correctly, the hot language was BASIC, because it came with most microcomputers and would supposedly enable a new generation of hobbyist programmers. (Which it did, but they grew up to program in C++, Python, and JavaScript, not BASIC.)

There're a lot of really interesting ideas going on in the programming language research community right now. Subtext, Epigram, associated types, Goo(ze) (which unfortunately seems to have been abandoned), JoCaml, STM, etc. Unfortunately, it probably won't be possible to judge the worth of these ideas for another 20 years.

There is a demo video of Subtext here: http://www.subtextual.org/subtext2.html. It looks like it would be fun to play with. Too bad we can't download it yet.

You can download the code for that demo: http://subtextual.org/subtext2.zip

Operational semantics has transformed language theory quite a lot, in my opinion. It has set a new precedent for precision in describing programming languages. I would hope that more new languages begin to pick that idea up, and then use formal, machine-verified methods for verifying their meta-theory.

The question reminds me of the notion of processes and channels in Occam, with constructs for parallelizing based on Tony Hoare's CSP theory.

It's a good thing I read the thread before commenting because that was exactly what I was going to write!

Naturally you've got my vote.

Let me add a bit of history to reduce the 'me too' content: There was an interesting hardware development at the same time which was specifically tailored to these channels, the 'transputer' by INMOS.

The whole upshot of this idea was that processors and a bit of support structure would live on a small DIMM-like package that you could stick into a base platform to create a mini cluster on a card.

I think this idea was well ahead of its time and that eventually we'll see something very much like it as our future hardware architecture. After all, the current trends towards clustering and miniaturization all point in that direction.

You might be interested in XMOS (https://www.xmos.com/), a start-up run by David May, formerly the architect of the Transputer and the designer of Occam at INMOS, which is creating new parallel processors based around a parallel variant of C called XC.

Thanks for the transputer recollections. One of the interesting aspects of the T800 transputer was its 3-register constraint and the use of those registers as the stack.

XMOS looks interesting.
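The Occam/CSP idea of processes communicating only over channels can be sketched in any modern language. Here's a rough Python approximation (illustrative names, not any real Occam API), using threads as processes and a bounded queue as a loose stand-in for a synchronous channel:

```python
# CSP-style sketch: two "processes" (threads) share nothing and
# communicate only over a channel (a bounded queue).
import queue
import threading

def producer(chan):
    for i in range(5):
        chan.put(i * i)       # roughly occam's  chan ! i*i  (send)
    chan.put(None)            # sentinel: end of stream

def consumer(chan, results):
    while True:
        v = chan.get()        # roughly occam's  chan ? v   (receive)
        if v is None:
            break
        results.append(v)

chan = queue.Queue(maxsize=1)  # capacity 1 only approximates a synchronous channel
results = []
threads = [threading.Thread(target=producer, args=(chan,)),
           threading.Thread(target=consumer, args=(chan, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now holds [0, 1, 4, 9, 16]
```

Occam's real channels are unbuffered rendezvous points with compiler support (PAR, ALT), which no library sketch fully captures; Go's channels are probably the closest mainstream descendant.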

I'm not a language connoisseur, so I don't know if this meets your paradigmatic-quality test, and it's kind of old (early 80s), but: significant indentation, as popularized but not invented by Python. http://python-history.blogspot.com/2009/02/early-language-de...

I believe it is a large factor in why Python has been gaining mindshare in technical but non-programmer fields: education, science. If that's true and it continues, it seems transformational to me.

One place where big new ideas are badly needed is in robotics. No existing language is a good match for programming continuous, dynamic movement. I've been working on a bunch of techniques to make it easier, but they're not ready for general use yet.

Monadic models of computation, in Haskell etc?
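For readers unfamiliar with the idea, here's a minimal sketch of one monadic pattern (Haskell's Maybe) transplanted into Python. The helper names are invented, and this elides the monad laws; the point is just that "computation that may fail" becomes a value you can chain:

```python
# A minimal Maybe-monad sketch: None plays the role of Nothing,
# and bind threads failure through a chain of steps implicitly.
def bind(value, f):
    """Apply f to value, unless the computation has already failed (None)."""
    return None if value is None else f(value)

def safe_div(a, b):
    """Division that signals failure instead of raising."""
    return None if b == 0 else a / b

# (10 / 2) / 5 -- each step may fail, but no explicit error checks appear.
ok = bind(safe_div(10, 2), lambda x: safe_div(x, 5))    # 1.0
bad = bind(safe_div(10, 0), lambda x: safe_div(x, 5))   # None: short-circuits
```

Haskell's do-notation is essentially syntactic sugar for such chains of bind, and swapping in a different bind (lists, state, IO) gives a different model of computation.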

I've been working on something for a while that I think could represent a fundamental shift in constructing software. It's essentially an environment that takes collections of nodes and assembles them into self-defined constructs. Since there's nothing more than these simple key-value nodes, anything that is created is a matter of cloning an existing node, removing one, or setting a node's state. There's no textual syntax besides operators in expressions. I haven't published the results yet, but if you're interested in finding out more, feel free to email me.

Practical implementations of languages based on lambda calculus and graph-reduction.

KRC, SASL, Miranda. Simon Peyton-Jones was a key driver of some of this work, and obviously he's still a major force behind Haskell.

I would say the programming aesthetics really do count for something. This has influenced Python, Mathematica, Arc, Fortress, and the list goes on. Yet it's fairly new.

Don't know how transformational it has been yet, but I feel the idea that all side-effecting code needs to be specifically noted by the programmer, a la Haskell, will one day be seen as a watershed.

We're only at the very early stages of the parallelisation of hardware, but as that trend intensifies, I can't imagine compiling efficient code without explicit compiler knowledge of which code might have side effects.
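A toy sketch of the principle, with entirely invented names: if side-effecting code must be explicitly marked, tooling can refuse to treat it as pure. (Haskell enforces this in the type system via IO; the runtime check below is only a crude analogy.)

```python
# Sketch: effectful functions are explicitly marked, so a hypothetical
# parallelizer can reject code it cannot safely reorder.
EFFECTFUL = set()

def io(fn):
    """Mark fn as side-effecting, loosely analogous to returning IO a."""
    EFFECTFUL.add(fn)
    return fn

def parallel_map(fn, xs):
    """Map that pretends it may reorder or parallelize calls, which is
    only safe if fn is known to be pure."""
    if fn in EFFECTFUL:
        raise ValueError("cannot safely parallelize side-effecting code")
    return [fn(x) for x in xs]

@io
def log(x):
    print(x)
    return x

squares = parallel_map(lambda x: x * x, [1, 2, 3])  # fine: [1, 4, 9]
# parallel_map(log, [1, 2, 3])                      # would raise ValueError
```

The real payoff comes when the compiler, not the runtime, knows this: pure calls can be reordered, fused, or run on other cores without changing the program's meaning.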

What's funny is that even all the latest trace-compiling (V8, TraceMonkey) and JITting (lots of others) implementation technologies for dynamic languages date from the early 80s (the Deutsch-Schiffman JITting Smalltalk compiler, specifically).

Though there was also a fair bit of Self optimization work going on at the same time--don't know if Alan's and Peter's work was the very first of its kind.

Two things that I'd probably call just features of OO languages, but seem significant:

- Type-safe generic programming

- Introspection

Typesafe generics have been in ML from the get-go (1970s) from what I remember. People like to talk about how type inference is useful, but it's really the generic polymorphism that makes it shine.

I was hacking together an Arc interpreter in Haskell the other day, and after refactoring some of the hairier monadic code, I was horrified to see that the type signature for a particular function was twice as long as the function body. And that's with the compiler figuring it out for me... I don't know what I'd have done if it was C++ instead.

It's really amusing (in an oh-my-God-I'm-fucked way) to try and debug type errors in Happy-generated parsers. There have been times when the type of the function (not even the whole error message) has been a page long.

C++ STL code can be similarly (un)fun.

Intel's C++ compiler produces much more readable error messages, I find. Though, almost perversely, I've gotten used enough to GCC's cryptic template error output that I'm more comfortable with it.

Both of these are outgrowths of C++ features (templates and RTTI), which I first saw in the early 80s but which were probably around earlier.

I'm not sure when type-safe generics came on the scene (perhaps that was C++), but introspection/reflection has been around since Smalltalk at least.

At some point, fundamental ideas just evolve or merge. Democracy is one example of this, just from another domain. A future improvement would be massive parallelism, and languages would evolve around it.

What about JavaScript's dynamic object model, where objects can be changed at run time? I don't think this was possible earlier on; I can't remember if it was possible in Smalltalk or Self.

The idea of the DOM?

Both are from the mid-90s.

Javascript's design owes a lot to Self. Modifying an object's slots in realtime was pretty much the whole point of Self and what led to the concept of a prototype language. (Well, technically, I think Self was simply an implementation of the already-existent prototype-language idea... but is an idea actually useful before there's an implementation of it to test theories with? Chicken-and-egg... :))

The DOM (as in Document Object Model, defined by the W3C) has nothing to do with Javascript - it's just an API originally designed for accessing the various parts of an HTML document. It was set up so that it could be implemented for almost any language. In fact, had it actually used some of the more advanced (Self-inspired) features of Javascript within the definition of the DOM, the DOM might not have been such a pain in the ass. :)

What Javascript did manage to do that was very important is that it ended up becoming not only the most installed programming language of all time, but unlike BASIC, it's actually a pretty damn good language under the covers. So if anything, the fundamental innovation of Javascript within every browser has basically yielded another BASIC-like inspiration to a whole new generation of programmers. Except this time, their intro language was much more abstractly powerful.
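The open object model under discussion isn't unique to JavaScript; Smalltalk and Self had it first, and Python's objects work much the same way. A small sketch of the idea, using a class as a stand-in for a prototype:

```python
# Slots and methods can be added to objects (and their "prototype",
# the class) while the program is running.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(3, 4)

# Add a slot to one instance at run time.
p.label = "origin-ish"

# Add a method to the class at run time; every instance sees it immediately.
Point.norm = lambda self: (self.x ** 2 + self.y ** 2) ** 0.5

length = p.norm()  # 5.0, from a method that didn't exist at class-definition time
```

The JavaScript equivalent mutates the prototype object directly (`Point.prototype.norm = ...`); in both languages the essential trick is that an object's shape is data, not a compile-time declaration.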

How about regular expressions as an essential part of a modern, general purpose language (rather than a separate mini-language)?
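Perl went furthest here, building regex syntax into the language itself; Python keeps it in a standard module but makes it feel native. A small example with an invented log format:

```python
# Regexes as an everyday tool in a general-purpose language:
# pulling structured fields out of a (made-up) log line.
import re

log_line = "2009-02-07 12:34:56 ERROR disk full"
m = re.match(r"(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (\w+) (.*)", log_line)
date, time, level, message = m.groups()
```

Whether this counts as transformational or just a very good feature is exactly the question: the underlying theory (Kleene's regular languages) long predates Smalltalk.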

Model checking.

Inversion of Control principle and Dependency Injection, 2002-2005. That is the only possible answer.

Is there a difference between IoC and callbacks? In any case, I don't think it can be said to be -quite- that recent.

XSLT. Hahahahahaha!

