
If you're parallelizing algorithms with divide-and-conquer behavior (recursive algorithms, or anything that forms a tree of tasks and sub-tasks), I think it's a very natural form of parallelism.
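To make the shape concrete, here's a minimal sketch of that fork/join pattern using std::async, with a recursive sum as a stand-in problem (the problem, cutoff, and depth cap are purely illustrative, not from any particular library): each call splits its range, spawns one half as a sub-task, recurses on the other, and joins at the end.

    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    // Each call splits its range in half, spawns one half as an async sub-task,
    // and recurses on the other; the depth cap keeps the task tree bounded.
    long parallel_sum(const std::vector<long>& v, std::size_t lo, std::size_t hi, int depth) {
        if (hi - lo < 10000 || depth > 4)   // small ranges: plain sequential work
            return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
        std::size_t mid = lo + (hi - lo) / 2;
        auto left = std::async(std::launch::async,
                               parallel_sum, std::cref(v), lo, mid, depth + 1);
        long right = parallel_sum(v, mid, hi, depth + 1);   // this task keeps the right half
        return left.get() + right;                          // join: wait for the sub-task
    }

    // usage: long total = parallel_sum(data, 0, data.size(), 0);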

I did something similar as a C++ library for my Master's: http://people.cs.vt.edu/~scschnei/factory/ I felt it was an intuitive way of expressing task parallelism, but if the dependent tasks don't operate on strict subsets of each other's data, I agree it's not much help. If that's the case, then you've successfully expressed the parallelism, but the larger problem remains: synchronized access to data structures.
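For instance, when sibling tasks all update one shared structure rather than disjoint pieces of it, the task decomposition alone doesn't buy you safety; something like the (purely illustrative) mutex-guarded word count below is still needed, and the lock quickly becomes the bottleneck:

    #include <cstddef>
    #include <future>
    #include <map>
    #include <mutex>
    #include <string>
    #include <vector>

    std::map<std::string, long> counts;   // shared across all tasks
    std::mutex counts_mutex;

    // Same divide-and-conquer task tree as before, but every leaf writes into
    // the same map, so each update has to take the lock.
    void count_words(const std::vector<std::string>& words, std::size_t lo, std::size_t hi) {
        if (hi - lo < 1024) {
            for (std::size_t i = lo; i < hi; ++i) {
                std::lock_guard<std::mutex> g(counts_mutex);
                ++counts[words[i]];
            }
            return;
        }
        std::size_t mid = lo + (hi - lo) / 2;
        auto left = std::async(std::launch::async,
                               count_words, std::cref(words), lo, mid);
        count_words(words, mid, hi);
        left.get();   // join the sub-task
    }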

Of course, transactional memory could help there. (Hey, full circle.)
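A rough sketch of what that could look like, assuming a compiler with GCC's -fgnu-tm extension (the shared counter is just an illustration; syntax and support vary, and most standard containers don't offer transaction-safe operations):

    long shared_total = 0;

    // Assumes compilation with GCC's -fgnu-tm: conflicting writers are
    // detected and retried instead of blocking on a lock.
    void add_to_total(long x) {
        __transaction_atomic {
            shared_total += x;
        }
    }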
