
Guy Steele: "How to Think about Parallel Programming: Not" - puredanger
http://www.infoq.com/presentations/Thinking-Parallel-Programming
======
guns
When I watched Guy Steele's earlier talk on the same subject (below), the
scales fell from my PL eyes.

When he showed that user-definable, verifiably associative combination
operators can provide magnificent opportunities for automatic parallelization,
I was ripped out of my head and found myself unable to squabble about the
different features of my favorite languages. Even Lisp (the prototype [1])
falls significantly short, since linked lists are most naturally processed by
a single thread of execution.

It's clear to me now that when the massively multicore era hits, there will be
a significant opportunity for a totally new breed of PLs and for programmers
to (re)implement a new wave of software. I hope to be ready when that time
arrives.

"Organizing Functional Code for Parallel Execution; or, foldl and foldr
Considered Slightly Harmful" <http://vimeo.com/6624203>

[1]: Clojure does seem to be anticipating this future, however

~~~
pjscott
The operators don't have to be _verifiably_ associative -- I'm pretty sure
that's impossible to verify, in general, thanks to the halting problem -- but
it would be nice to be able to declare operators associative.

Clojure looks like it could easily be made to support the kind of thing Steele
is talking about: its immutable vectors are implemented as trees of array
chunks, to which you could apply an associative operation in parallel.

~~~
riffraff
aren't verification and definition of associative operators two different
problems?

I mean, while you can't generally prove that an operator is associative, you
can probably tell that one is associative by construction.

~~~
btilly
_I mean, while you can't generally prove that an operator is associative, you
can probably tell that one is associative by construction._

From the construction of matrix multiplication, it is far from obvious that it
is associative. And this is a simple example.

------
cpr
The first part is fun! Brought back great memories of hacking IBM 1130's at
Teledyne Ryan while in high school.

He's wrong when he says that machines of that vintage didn't know about
stacks: the whole Burroughs B5000 series of machines was entirely stack-based
(and programmed in an extended Algol--decades ahead of their time).

------
pohl
I'm so happy to live in a world where I can stream a talk like this that
happened yesterday at a conference I'd not be able to attend.

The motivation at the beginning with the punchcards and the following PDP-10
"poem" for computing the sin() function left me skeptical about what was to
follow, but it turned out to set an excellent stage for the remainder of the
talk. By the time he got to the "big deal" ideas at the end, I was well
prepared to accept his thesis.

Don't miss this talk. Take the time to watch it.

One nice bit: he admits that if he knew 7 years ago what he knows today,
Fortress might have started with Haskell and been pushed 1/10th of the way
towards Fortran, rather than starting with an object-based foundation and
writing side-effect-free functional domain-specific languages on top of it.

------
ghotli
This was the best keynote at Strange Loop in St. Louis this year. It was wild
sitting in a room with that many people all laughing about the unbelievable
metaprogramming he had to do on punch cards. I'm hoping 2011 Strange Loop is
as good as this year.

------
DanielRibeiro
Guy Steele has given some really great presentations (this one included) over
the last few years: <http://bit.ly/hy5n0S>

------
tieTYT
I didn't see the point of the talk (and was very annoyed by it) until minute
31. That's when the topic starts, in my opinion.

~~~
barrkel
You'd probably prefer the talk linked to by guns, then. It's an expanded and
deeper version of the latter half of this talk.

------
aidenn0
Yeah, I kind of agree. Saying there should be better parallel programming
tools is only slightly more useful than saying that the hardware people should
come up with a way to improve single-threaded performance.

------
wccrawford
"Guy L. Steele Jr. believes that it should not be the programmer’s job to
think about parallelism, but languages should provide ways to transparently
run tasks in parallel."

And who makes the languages? Programmers!

I get what he's trying to say... He's trying to say that someone should stop
and make it easy for the rest of us to do parallel programming.

And I think someone will eventually do just that. But as usual, I don't think
someone writing a blog post about it is going to push someone into it. Either
they have the drive to do it, or they don't. "Somebody else should do
something about this!" never works.

~~~
rtghnthyjnm
Even in C++ it's pretty trivial to parallelize a loop with OpenMP.

The compiler could do it automatically, but it's only a single-line
preprocessor directive.

~~~
Someone
Is it? Adding the directive is easy, but verifying that it does not change
program semantics can be far from easy.

A parallelizing compiler must do the latter before doing the former.

