Hacker News
Four Solutions to a Trivial Problem [video] (youtube.com)
48 points by dang on Feb 28, 2016 | 8 comments



I watched the whole thing and, I must say, was a bit disappointed. The TL;DR (or TL;DW, perhaps) is essentially: if we write our programs in a map/reduce (functional) style, then the compiler/runtime can sometimes[1] choose between a parallel and a sequential implementation at runtime, depending on what computational resources are available. Seems fairly obvious to me... I would have much preferred the 10-minute version of this talk.

1. Provided, of course, that the underlying operations are grouping- and order-independent (associative and commutative, respectively).
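To make footnote [1] concrete, here's a minimal Python sketch (my own illustration, not from the talk): associativity is what licenses regrouping a fold, and commutativity is what additionally licenses reordering it.

```python
from functools import reduce

xs = [3, 1, 4, 1, 5, 9]

# Associativity lets a runtime regroup: fold each half, then combine.
left = reduce(lambda a, b: a + b, xs[:3], 0)
right = reduce(lambda a, b: a + b, xs[3:], 0)
assert left + right == sum(xs)  # same answer as the one-pass fold

# Commutativity is what additionally allows reordering (e.g. taking
# whichever worker finishes first). String concatenation is associative
# but NOT commutative, so reordering its chunks changes the answer:
assert ("ab" + "cd") == "abcd"
assert ("cd" + "ab") != "abcd"
```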


I think calling it a "map/reduce (functional) style" is an understatement. The presenter is suggesting that when a problem must be solved in parallel, you shouldn't just hand-code the low-level parallel implementation. Analyze the problem, divide-and-conquer, then use high-level abstractions that make parallelism a responsibility of the underlying implementation.

This concept may be familiar to you, but talks like this will be necessary until parallelization is as seamless as garbage collection.


> Analyze the problem, divide-and-conquer, then use high level abstractions that make parallelism a responsibility of the underlying implementation.

That's a better summary than mine, but I still think it's too little substance for a 60-minute talk. If you want a parallel(izable) implementation, then divide-and-conquer is a well-known textbook approach. The problem, as I see it, is that it's not easy to build (or understand) algorithms like that. I saw very little in this talk about how to address that, except the obvious: better education.

I think it was also a bit irritating that he pretended there was nothing a compiler could do about the accumulation loop he presented as the first example. Isn't it obvious that a smart compiler could (in principle) optimize that loop so that it could run in parallel on multiple cores...?
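For what it's worth, this is roughly the rewrite such a compiler would have to discover, sketched by hand in Python (the chunk size and thread pool are my own choices, not anything from the talk; CPython's GIL means no real speedup for pure-Python arithmetic, so the point is only that the rewrite preserves the answer):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))

# The sequential accumulation loop, as in the talk's first example:
total = 0
for x in data:
    total += x

# The hand-parallelized rewrite: sum independent chunks, then combine.
# Legal only because + is associative with identity 0.
def chunk_sum(lo, hi):
    return sum(data[lo:hi])

n, step = len(data), 250_000
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = pool.map(chunk_sum, range(0, n, step),
                        range(step, n + step, step))
parallel_total = sum(partials)

assert parallel_total == total
```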


The part where the guy chuckles and says that some of us might recognize this problem as part of their employment application at Google was a bit disturbing. It sounded innocent, but ominous, really.

Just assume for a second that I'm not a complete idiot. Assume that I've been around for a while, that I know how to perform summations, that I can do running averages. Assume that I've seen the complications that come with solving some of these basic problems. Assume that I've solved problems before only to be confronted by subsequent problems in the pipeline, and assume that I've come to expect that kind of development.

What in the world could my first impression mean to anyone?


I'm trying very hard to follow along, but this quickly devolves into language that isn't easy to follow. The point where he starts going into combining "bitonic globs" is where I start having trouble focusing.

For someone that's familiar with parallel programming topics and language, is this talk easy to follow? I'm trying to figure out if the fault lies with myself or the presenter. The vibe I'm getting is that showing slides full of code in an obscure language isn't a good teaching method, but I also don't have the prerequisite background here so my opinion isn't useful. I also don't do work that involves these classes of problems.


I could see the gist of the procedure, i.e. representing the data as globs that allow you to break it in two, so you can run it in binary-tree-like steps.

But the code was pretty useless. I didn't know the language, and in the short time it was on screen I couldn't understand the code.

Clearly this would be better presented as a paper, so you'd have time to go through it, but if you ignore the actual code completely I think I still understood (or at least I think I did) the talk.
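For anyone who, like me, couldn't read the code on the slides, here's a crude Python reconstruction of what I took the glob idea to be for the word-splitting example (the names and representation are mine, not the talk's): each character becomes a trivial partial result, and an associative combine fuses words at the seams, so any binary-tree grouping gives the same word list.

```python
# Each piece of the string maps to either a "chunk" (no space seen yet)
# or a "seg": a dangling left part, finished middle words, dangling right.
def glob(ch):
    if ch == " ":
        return ("seg", "", [], "")
    return ("chunk", ch)

def combine(a, b):
    # Associative merge of two partial results; this is the cheap
    # step that makes the binary-tree reduction work.
    if a[0] == "chunk" and b[0] == "chunk":
        return ("chunk", a[1] + b[1])
    if a[0] == "chunk":
        _, l, mid, r = b
        return ("seg", a[1] + l, mid, r)
    if b[0] == "chunk":
        _, l, mid, r = a
        return ("seg", l, mid, r + b[1])
    _, l1, m1, r1 = a
    _, l2, m2, r2 = b
    joined = [w for w in [r1 + l2] if w]  # fuse the word at the seam
    return ("seg", l1, m1 + joined + m2, r2)

def words(s):
    if not s:
        return []
    # Reduced left-to-right here, but associativity means a runtime
    # could just as well do this as a parallel tree.
    acc = glob(s[0])
    for ch in s[1:]:
        acc = combine(acc, glob(ch))
    if acc[0] == "chunk":
        return [acc[1]] if acc[1] else []
    _, l, mid, r = acc
    return [w for w in [l] if w] + mid + [w for w in [r] if w]

assert words("This is a test") == ["This", "is", "a", "test"]
```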


> For someone that's familiar with parallel programming topics and language, is this talk easy to follow? I'm trying to figure out if the fault lies with myself or the presenter.

I think it was easy to follow if you focused on the aspects of general interest (= what he was doing in principle, e.g. divide-and-conquer). But sure, if you wanted to understand exactly how the merging of "bitonic globs" worked, then no, that's not easy to follow.

I remember thinking, when he started talking about merging the "bitonic globs", "Yea, yea, sure, you can divide-and-conquer, but it's just about as much work combining two of those partial solutions as solving the original problem! There's got to be a better way...", and I basically stopped listening to the details.


My key takeaway was that mathematical properties like associativity, commutativity, idempotency, and having an identity value are good tools for communicating with compilers. An associative binary operator plus an identity is enough to automatically turn your + into a Σ, with the compiler being able to choose a parallel or sequential implementation.

Guy's example code uses a language that looks like mathematical notation, but I think that if you could tell the compiler what's associative and what the identity value is, you could get there with procedural code, provided you use for-each instead of a classic for loop. There's also probably a better way to phrase the first line than "var x = operator.identity;" so that more natural code ends up parallelizable, but I don't know what it is.
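Here's one way to imagine that annotation, as a library-level sketch in Python (entirely hypothetical; the Monoid/smart_reduce names are my own, not any real compiler's): pair the operator with its declared identity, and let the runtime choose between the classic accumulation loop and a tree-shaped reduction.

```python
from functools import reduce

class Monoid:
    """An operator bundled with its declared identity. Associativity is
    a promise the programmer makes, which licenses any grouping."""
    def __init__(self, op, identity):
        self.op, self.identity = op, identity

def parallel_style(m, xs):
    # Tree-shaped reduction a runtime could hand to multiple cores.
    if not xs:
        return m.identity
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return m.op(parallel_style(m, xs[:mid]), parallel_style(m, xs[mid:]))

def sequential_style(m, xs):
    # The classic "var x = operator.identity;" accumulation loop.
    return reduce(m.op, xs, m.identity)

def smart_reduce(m, xs, cores=1):
    # The runtime's choice point that the talk describes.
    return parallel_style(m, xs) if cores > 1 else sequential_style(m, xs)

add = Monoid(lambda a, b: a + b, 0)
xs = list(range(100))
assert smart_reduce(add, xs, cores=4) == smart_reduce(add, xs, cores=1)
```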



