
Four Solutions to a Trivial Problem [video] - dang
https://www.youtube.com/watch?v=ftcIcn8AmSY
======
bjornsing
I watched the whole thing and was a bit disappointed, I must say. The TL;DR
(or TL;DW perhaps) is essentially: _if we write our programs in a map/reduce
(functional) style, then the compiler/runtime can sometimes[1] choose between
a parallel and a sequential implementation at runtime, depending on what
computational resources are available_. Seems fairly obvious to me... I would
have much preferred the 10-minute version of this talk.

1. Given that the underlying operations are grouping- and order-independent
(associative and commutative), of course.
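
In Python-ish pseudocode (my own sketch, nothing from the talk), the whole
idea boils down to something like this: express the computation as a
reduction over an associative operation with an identity, and the choice of
sequential vs. parallel execution becomes a runtime decision.

    from functools import reduce
    from multiprocessing import Pool

    def sum_chunk(xs):
        # Sequential strategy: a plain left fold.
        return reduce(lambda a, b: a + b, xs, 0)

    def sum_any(xs, pool=None, chunks=4):
        if pool is None:
            return sum_chunk(xs)   # no spare resources: run serially
        # Associativity lets us regroup: sum each chunk independently,
        # then combine the partial sums.
        n = max(1, len(xs) // chunks)
        parts = [xs[i:i + n] for i in range(0, len(xs), n)]
        return sum_chunk(pool.map(sum_chunk, parts))

    # usage:
    #     with Pool(4) as pool:
    #         print(sum_any(list(range(1000)), pool))  # -> 499500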

~~~
deckar01
I think calling it a "map/reduce (functional) style" is an understatement.
The presenter is suggesting that when a problem must be solved in parallel,
you shouldn't just code the low-level parallel implementation. Analyze the
problem, divide and conquer, then use high-level abstractions that make
parallelism a responsibility of the underlying implementation.
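
Roughly this shape, as a hand-written Python sketch (the names are mine, not
the presenter's): the structure exposes the independence of the two halves,
and whether they actually run on separate cores is left to whatever runtime
sits underneath.

    def dc_sum(xs):
        # Divide and conquer: split the problem in half and combine the
        # sub-results. The two recursive calls are independent, so an
        # underlying implementation is free to run them in parallel --
        # or not -- without this code changing.
        if len(xs) <= 1:
            return xs[0] if xs else 0
        mid = len(xs) // 2
        return dc_sum(xs[:mid]) + dc_sum(xs[mid:])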

This concept may be familiar to you, but talks like this will be necessary
until parallelization is as seamless as garbage collection.

~~~
bjornsing
> Analyze the problem, divide and conquer, then use high-level abstractions
> that make parallelism a responsibility of the underlying implementation.

That's a better summary than mine, but I still think it's too little
substance for a 60-minute talk. If you want a parallel(izable)
implementation, then divide-and-conquer is a well-known textbook approach.
The problem as I see it is that it's not easy to build (or understand)
algorithms like that. I saw very little in this talk about how to address
that, except the obvious: better education.

I also found it a bit irritating that he pretended there was nothing a
compiler could do about the accumulation loop he presented as the first
example. Isn't it obvious that a smart compiler could (in principle)
optimize that loop to run in parallel on multiple cores...?
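
For example, since the accumulation is associative, the loop could in
principle be rewritten into a tree-shaped reduction. A Python sketch of the
transformation a compiler could apply (my own illustration):

    def tree_sum(xs):
        # What a parallelizing compiler could do with
        #     total = 0
        #     for x in xs: total += x
        # Combine neighbours pairwise, halving the data each round.
        # Every combine within a round is independent, so each round
        # could run across multiple cores, giving log(n) depth.
        xs = list(xs)
        if not xs:
            return 0
        while len(xs) > 1:
            nxt = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
            if len(xs) % 2:        # odd element carries over
                nxt.append(xs[-1])
            xs = nxt
        return xs[0]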

------
stevebmark
I'm trying very hard to follow along, but this quickly devolves into
language that isn't easy to follow. The point where he starts combining
"bitonic globs" is where I start having trouble focusing.

For someone who's familiar with parallel programming topics and terminology,
is this talk easy to follow? I'm trying to figure out whether the fault lies
with me or the presenter. The vibe I'm getting is that showing slides full of
code in an obscure language isn't a good teaching method, but I also don't
have the prerequisite background here, so my opinion isn't useful. I also
don't do work that involves these classes of problems.

~~~
diskcat
I could see the gist of the procedure, i.e. representing the data as globs
that let you split it in two, so you can process it in binary-tree-like
steps.
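
Here's a rough Python sketch of what I think the glob representation was
(assuming the running example is splitting a string into words; the names
and code are my own, not the talk's). A piece of input is summarized as
either a space-free fragment ("chunk") or a segment: a dangling left
fragment, the complete words inside, and a dangling right fragment.

    def chunk(s):      return ("chunk", s)        # no space seen yet
    def seg(l, ws, r): return ("seg", l, ws, r)   # left frag, words, right frag
    def maybe_word(s): return [s] if s else []

    def combine(a, b):
        # Associative merge of two summaries: the dangling right
        # fragment of `a` joins the dangling left fragment of `b`.
        if a[0] == "chunk" and b[0] == "chunk": return chunk(a[1] + b[1])
        if a[0] == "chunk":                     return seg(a[1] + b[1], b[2], b[3])
        if b[0] == "chunk":                     return seg(a[1], a[2], a[3] + b[1])
        return seg(a[1], a[2] + maybe_word(a[3] + b[1]) + b[2], b[3])

    def words(s):
        # combine() is associative, so this left-to-right fold could
        # just as well be a balanced binary tree over the characters.
        acc = chunk("")
        for c in s:
            acc = combine(acc, seg("", [], "") if c == " " else chunk(c))
        if acc[0] == "chunk":
            return maybe_word(acc[1])
        return maybe_word(acc[1]) + acc[2] + maybe_word(acc[3])

    # words("hello world")  ->  ["hello", "world"]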

But the actual code in the talk was pretty useless to me: I didn't know the
language, and in the short time it was on screen I couldn't understand it.

Clearly this would work better as a paper, where you'd have time to go
through it. But even ignoring the code completely, I still understood (or at
least I think I did) the talk.

------
jtolmar
My key takeaway was that mathematical properties like associativity,
commutativity, idempotency, and having an identity value are good tools for
communicating with compilers. An associative binary operator plus an identity
is enough to automatically turn your + into a Σ, with the compiler being able
to choose a parallel or sequential implementation.

Guy's example code uses a language that looks like mathematical notation,
but I think that if you could tell the compiler what's associative and what
the identity value is, you could get there with procedural code, provided
you use for-each instead of a classic for loop. There's also probably a
better way to phrase the first line than "var x = operator.identity;" so
that more natural code ends up parallelizable, but I don't know what it is.
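
Something like this hypothetical sketch, in Python rather than anything from
the talk (the @associative decorator and reduce_any helper are invented for
illustration):

    def associative(identity):
        # Invented annotation: record the algebraic facts a compiler
        # (here, just a library function) would need to regroup work.
        def mark(op):
            op.identity = identity
            return op
        return mark

    @associative(identity=0)
    def add(a, b):
        return a + b

    def reduce_any(op, xs):
        # Free to regroup, since op is declared associative with an
        # identity; a real runtime could swap in a parallel tree
        # reduction here without changing the caller.
        acc = op.identity
        for x in xs:       # for-each, not an index-based classic for
            acc = op(acc, x)
        return acc

    # reduce_any(add, range(5))  ->  10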

