Clojure tries to address this problem. First and rest are still fundamental concepts in its data structures, but under the hood is properly written concurrent Java code. Hickey wrote it once, and now you don't have to.
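A quick illustration of that uniformity at the REPL (a minimal sketch; `first` and `rest` go through the seq abstraction, so they work on any collection):

    (first [1 2 3])      ;=> 1
    (rest  [1 2 3])      ;=> (2 3)
    (first '(:a :b :c))  ;=> :a
    (first {:a 1 :b 2})  ;=> [:a 1]  ; a map entry, via seq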
I highly recommend the Clojure videos, available at: http://clojure.blip.tv/. Watch them online or download them via iTunes. There are two videos on Clojure data structures, and a set of introductions for both Lisp and Java programmers.
This confuses concurrency with parallelism, the latter of which Clojure doesn't yet address.
If your program is divided into threads, having a language like Clojure that makes concurrency easy and lock-free is a wonderful thing, and of course you get some benefit from running on multiple cores, though I don't think that's entirely the point.
If, on the other hand, you're looking for a faster way to map a function across eight million lines in a flat file, using Clojure's map function won't magically make your program run faster (and pmap might not run at all -- you may run out of memory, since pmap isn't truly lazy).
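For example (a sketch; `process-line` and `big.txt` are hypothetical stand-ins):

    (require '[clojure.java.io :as io])

    ;; Sequential: lazy all the way through.
    ;; process-line is a hypothetical per-line function.
    (with-open [rdr (io/reader "big.txt")]
      (doall (map process-line (line-seq rdr))))

    ;; Same shape with pmap, but pmap eagerly realizes a window of
    ;; futures ahead of consumption, so it's only semi-lazy. With a
    ;; cheap process-line the coordination overhead can dominate, and
    ;; holding eight million results can exhaust memory.
    (with-open [rdr (io/reader "big.txt")]
      (doall (pmap process-line (line-seq rdr))))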
The fact that a Clojure PersistentVector (not a PersistentList) is implemented with a 32-way tree (not 64-way) does not imply that Clojure has abandoned the car/cdr paradigm in favor of conc lists -- it still uses cons, in a more generic form, but still with the same first/rest concept. The trees allow vectors to be inexpensively persistent, so you can have actual practical immutable vectors.
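The practical upshot at the REPL (a sketch):

    (def v (vec (range 10)))
    (def v2 (conj v 10))     ; a new vector...
    (def v3 (assoc v 0 :x))  ; ...and another
    v  ;=> [0 1 2 3 4 5 6 7 8 9] -- the original is untouched
    ;; v, v2, and v3 share structure via the 32-way tree, so each
    ;; "copy" costs roughly O(log32 n) rather than O(n).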
Ah yes, a silly mistake on my part. I am hoping that eventually Clojure will help the programmer write parallel algorithms, but you are correct that such constructs don't exist in the language at this moment.
But Steele is talking about introducing fundamental new constructs all the way up to the application level. This is not a problem that can be abstracted away by better language implementations, and certainly not by better implementations of familiar constructs.
> and certainly not by better implementations of familiar constructs.
Why not? I sort of took the slides as just that. For a good chunk of the slides he talks about the simple singly-linked list found in Lisp. Then he shows that, implemented as a tree (a conc list), the familiar operations still exist, but we get improvements by making them more parallelizable.
Slide 24 starts showing how `first`, `rest`, `append` can be implemented with conc lists, and 31 shows `map`, `reduce`, `mapreduce` as well. Then, he goes on to show `length`, `filter` and `reverse` in terms of `mapreduce`.
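To make that concrete, here's a rough transcription of the idea into Clojure (my own names and representation, not the slides'; empty concs are ignored for brevity):

    ;; A toy conc list: a leaf holding one element, or a node with
    ;; two subtrees.
    (defn leaf [x]   {:leaf x})
    (defn conc [a b] {:left a :right b})
    (defn leaf? [t]  (contains? t :leaf))

    (defn append [xs ys] (conc xs ys))  ; O(1), vs O(n) for cons lists

    (defn mapconc [f t]
      (if (leaf? t)
        (leaf (f (:leaf t)))
        ;; the two recursive calls are independent -- this is the
        ;; parallelism opportunity Steele is pointing at
        (conc (mapconc f (:left t)) (mapconc f (:right t)))))

    (defn reduceconc [g t]
      (if (leaf? t)
        (:leaf t)
        (g (reduceconc g (:left t)) (reduceconc g (:right t)))))

    ;; length in terms of map + reduce, as on the later slides:
    (defn length [t]
      (reduceconc + (mapconc (constantly 1) t)))

    (length (conc (leaf :a) (conc (leaf :b) (leaf :c))))  ;=> 3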
Well, if first and rest are unsuitable for parallelism as Steele says, a programming edifice built on top of better implementations of those might be beside the point.
My understanding of the slides is that we don't need better language runtimes to exploit parallelism for us while we continue to write the same kind of code as before; what we need is a new set of primitives (modeled by conc lists in this talk) that would allow us to compose programs differently, with more parallelism. And that far from being insulated from this, programmers will have to be thinking about it throughout.
It's possible that Clojure's implementation is already such that these primitives, and a programming style built on them, are feasible today. If so, that's awesome. But this is a different thing than providing better implementations of cons-based abstractions.
Edit: another thing - the transformation from cons to conc is nontrivial in the Lisp context because Lisp's code=data is so tied to cons. It might be interesting to think about what programs would look like if you applied code=data rigorously in a conc model. Would a language whose expressions are represented as concs look or feel significantly different than a cons-based one?
> It might be interesting to think about what programs would look like if you applied code=data rigorously in a conc model. Would a language whose expressions are represented as concs look or feel significantly different than a cons-based one?
> My understanding of the slides is that we don't need better language runtimes to exploit parallelism for us while we continue to write the same kind of code as before;
Yes, I agree he was saying that, but that doesn't mean we can't change the implementations of what we use now to actually make it easier to accomplish this.
> another thing - the transformation from cons to conc is nontrivial in the Lisp context because Lisp's code=data is so tied to cons. It might be interesting to think about what programs would look like if you applied code=data rigorously in a conc model. Would a language whose expressions are represented as concs look or feel significantly different than a cons-based one?
But you could still implement car and cdr in terms of conc, so this is not really an issue. It might not be the most efficient thing, but that's sort of the benefit of macros: expand at compile time, and avoid the overhead. So, in summary, write macros in terms of an emulated cons cell, and use conc-optimized primitives elsewhere.
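Something like this, reusing the toy representation from the sketch above ({:leaf x} for a leaf, {:left a :right b} for a node) -- slow, but fine if it only runs at macro-expansion time:

    (defn car [t]
      (if (contains? t :leaf)
        (:leaf t)
        (recur (:left t))))  ; leftmost leaf

    (defn cdr [t]
      (cond
        (contains? t :leaf)         nil         ; single leaf: rest is empty
        (contains? (:left t) :leaf) (:right t)  ; drop the head leaf
        :else {:left (cdr (:left t)) :right (:right t)}))

    (car (cdr {:left {:leaf :a} :right {:leaf :b}}))  ;=> :b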
> But you could still implement car and cdr in terms of conc, so this is not really an issue.
It might still be an issue. The more distance there is between how code is represented and how data and lower levels of code are represented, the less fluid programming in Lisp becomes. Yes, you can translate one to the other. But the sweet spot of Lisp is the space where no such mapping is needed or it's trivial; the magic drops off pretty sharply as you leave that space. It's not obvious (to me anyway) what the effect of what we're talking about would be. Small changes can have surprisingly big impacts, and one has to include the psychology of this.