1. Given that the underlying operations are order- and grouping-independent (associative and commutative), of course.
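To see why the footnote's condition matters, here's a minimal sketch (mine, not from the talk): addition can be regrouped freely, but a non-associative operation like subtraction cannot, so a loop over it can't be naively parallelized.

```java
public class WhyAssociativity {
    public static void main(String[] args) {
        // Addition is associative: regrouping does not change the result,
        // so partial sums can be computed in any order on any core.
        System.out.println((3 + 4) + 5 == 3 + (4 + 5)); // true

        // Subtraction is not associative: a parallel regrouping would
        // change the answer, which is why the condition above matters.
        System.out.println((3 - 4) - 5 == 3 - (4 - 5)); // false: -6 vs 4
    }
}
```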
This concept may be familiar to you, but talks like this will be necessary until parallelization is as seamless as garbage collection.
That's a better summary than mine, but I still think it's too little substance for a 60-minute talk. If you want a parallel(izable) implementation, then divide-and-conquer is a well-known schoolbook approach. The problem, as I see it, is that it's not easy to build (or understand) algorithms like that. I saw very little in this talk about how to address or solve that, except the obvious: better education.
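For anyone who wants the schoolbook approach spelled out, here's a minimal divide-and-conquer sum in Java (my sketch, not code from the talk): the two recursive halves are independent subproblems, so they could be handed to separate cores, e.g. via a ForkJoinPool.

```java
public class DivideAndConquer {
    // Sum xs[lo..hi) by splitting in half; the recursive calls are
    // independent of each other and could run in parallel.
    static long sum(int[] xs, int lo, int hi) {
        if (hi - lo <= 1) return hi > lo ? xs[lo] : 0;
        int mid = (lo + hi) >>> 1;
        long left = sum(xs, lo, mid);   // independent subproblem
        long right = sum(xs, mid, hi);  // independent subproblem
        return left + right;            // cheap combine: + is associative
    }

    public static void main(String[] args) {
        int[] xs = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println(sum(xs, 0, xs.length)); // 31
    }
}
```

Note that for summation the combine step is a single addition, which is what makes this problem easy; the thread's later point about "bitonic" merging is exactly that the combine step is not always this cheap.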
I think it was also a bit irritating that he pretended there was nothing a compiler could do about the accumulation loop he presented as the first example. Isn't it obvious that a smart compiler could (in principle) optimize that loop so that it could run in parallel on multiple cores...?
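To make that concrete, here's the shape of such an accumulation loop and the parallel reduction a compiler could in principle rewrite it into, once it knows the operator is associative with a known identity (a sketch in Java; the talk's example used different syntax):

```java
import java.util.stream.IntStream;

public class AccumulationLoop {
    public static void main(String[] args) {
        int[] xs = IntStream.rangeClosed(1, 1000).toArray();

        // The sequential accumulation loop (shape only):
        int acc = 0;
        for (int x : xs) acc += x;

        // What a smart compiler could rewrite it to, knowing that + is
        // associative with identity 0: a parallel reduction over chunks.
        int parallel = IntStream.of(xs).parallel().reduce(0, Integer::sum);

        System.out.println(acc == parallel && acc == 500500); // true
    }
}
```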
Just assume for a second that I'm not a complete idiot. Assume that I've been around for a while, that I know how to perform summations, that I can do running averages. Assume that I've seen the complications that come with solving some of these basic problems. Assume that I've solved problems before only to be confronted by subsequent problems in the pipeline, and assume that I've come to expect that kind of development.
What in the world could my first impression mean to anyone?
For someone who's familiar with parallel programming topics and terminology, is this talk easy to follow? I'm trying to figure out if the fault lies with me or the presenter. The vibe I'm getting is that showing slides full of code in an obscure language isn't a good teaching method, but I also don't have the prerequisite background here, so my opinion isn't worth much. I also don't do work that involves these classes of problems.
But the code was pretty useless. I didn't know that language, and within the short time it was on screen I couldn't understand the code.
Clearly this would be better presented as a paper, so you could take the time to go through it, but even ignoring the actual code completely, I still understood (or at least I think I did) the talk.
I think it was easy to follow if you focused on the aspects of general interest, i.e. what he was doing in principle (e.g. divide-and-conquer). But sure, if you wanted to understand exactly how the merging of "bitonic globs" worked, then no, that's not easy to follow.
I remember thinking, when he started talking about merging the "bitonic blobs": "Yeah, yeah, sure, you can divide-and-conquer, but it's just about as much work combining two of those partial solutions as solving the original problem! There's got to be a better way..." and I basically stopped listening to the details.
Guy's example code uses a language that looks like mathematical notation, but I think that if you could tell the compiler what's associative and what the identity value is, you could get there with procedural code, provided you use for-each instead of a classic for loop. There's also probably a better way to phrase the first line than "var x = operator.identity;" so that more natural code ends up parallelizable, but I don't know what it is.
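Java's stream library already comes close to this shape: Stream.reduce takes exactly an identity value and an operator that the library is allowed to assume is associative, and the same call runs sequentially or in parallel. A sketch (example data is mine):

```java
import java.util.List;

public class ReduceSketch {
    public static void main(String[] args) {
        List<String> words = List.of("to", "be", "or", "not");

        // reduce(identity, op): the programmer declares the identity ("")
        // and an operator assumed associative, so the runtime is free to
        // regroup the work across threads. Concatenation is associative
        // but not commutative, and associativity alone is enough here:
        // encounter order is preserved, so the result is deterministic.
        String joined = words.parallelStream().reduce("", (a, b) -> a + b);

        System.out.println(joined); // tobeornot
    }
}
```

The "var x = operator.identity;" worry largely disappears in this style: the identity is an argument to the reduction rather than the initial value of a mutable accumulator, so there is no sequential loop for the compiler to untangle in the first place.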