
> While standardization and support for [threaded] APIs has come a long way, their use is still predominantly restricted to system programmers as opposed to application programmers. One of the reasons for this is that APIs such as Pthreads are considered to be low-level primitives. Conventional wisdom indicates that a large class of applications can be efficiently supported by higher level constructs (or directives) which rid the programmer of the mechanics of manipulating threads. Such directive-based languages have existed for a long time, but only recently have standardization efforts succeeded in the form of OpenMP. OpenMP is an API that can be used with FORTRAN, C, and C++ for programming shared address space machines. OpenMP directives provide support for concurrency, synchronization, and data handling while obviating the need for explicitly setting up mutexes, condition variables, data scope, and initialization.

From Introduction to Parallel Computing

OpenMP: http://www.openmp.org/
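
For anyone who hasn't seen it, a minimal sketch of what that directive style looks like in C (a toy example, not from the book): one pragma parallelizes the loop, and the reduction clause takes the place of the explicit mutex you would otherwise need around the shared sum. Compiles with e.g. gcc -fopenmp.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* One directive: the loop iterations are split across threads,
           and each thread accumulates into a private copy of sum that
           OpenMP combines at the end -- no hand-written locking. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1);
        }

        printf("partial harmonic sum = %f (max threads: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }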




Yes, every time I hear people talking about how evil or difficult threads are, I just think back to my frequent use of OpenMP. It is really quite easy to take a single-threaded program and make it multithreaded with OMP -- so long as the jobs only read, and do not write, shared state.

Usually I will write a program that does the following:

1. Initialize global read-only data structures

2. Parallelize jobs across STDIN lines (or whatever)

3. Output results within a #pragma omp critical block

It works wonders, and it is literally five additional lines of code to make a single-threaded program multithreaded with OMP -- but only for some types of program. The concept is very similar to what GNU parallel does.
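
Roughly, the skeleton looks like this (a sketch only; process_line() is a made-up stand-in for whatever the real per-line job is):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static double lookup[256];                      /* 1. global read-only data */

    static double process_line(const char *line) {  /* hypothetical per-line job */
        double score = 0.0;
        for (const char *p = line; *p; p++)
            score += lookup[(unsigned char)*p];
        return score;
    }

    int main(void) {
        for (int i = 0; i < 256; i++)               /* fill the read-only table once */
            lookup[i] = (double)i;

        /* Slurp STDIN into memory so the loop below has an index range to split. */
        char **lines = NULL;
        int n = 0, cap = 0;
        char buf[4096];
        while (fgets(buf, sizeof buf, stdin)) {
            if (n == cap) {
                cap = cap ? cap * 2 : 1024;
                lines = realloc(lines, cap * sizeof *lines);
            }
            lines[n++] = strdup(buf);
        }

        /* 2. parallelize the jobs; lookup[] is only read, so no locking needed */
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < n; i++) {
            double r = process_line(lines[i]);

            /* 3. serialize output so results do not interleave */
            #pragma omp critical
            printf("%d\t%f\n", i, r);
        }
        return 0;
    }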

In short, threads are not the problem per se, they are just the wrong tool for the job if you have a multiple-readers multiple-writers situation. Message passing, databases, or something else are more appropriate in those cases. But you will pry my read only shared memory space out of my cold, dead hands.


Using OMP is incredibly easy, but in my experience it sometimes leaves up to 20% of the performance on the table compared with writing and tuning the threading manually.


Since OpenMP got tasks, it has been kinda useful for non-trivial, non-numerical applications as well, though still kinda... odd. Odd but neat.
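
For reference, a rough sketch of the task style, using the standard recursive example rather than anything specific to this thread:

    #include <stdio.h>

    /* Naive recursive Fibonacci with OpenMP tasks: each recursive call above
       the cutoff becomes a task that any idle thread may pick up. */
    static long fib(int n) {
        if (n < 2) return n;
        if (n < 20) return fib(n - 1) + fib(n - 2);  /* serial cutoff: tiny tasks
                                                        aren't worth spawning */
        long a, b;
        #pragma omp task shared(a)
        a = fib(n - 1);
        #pragma omp task shared(b)
        b = fib(n - 2);
        #pragma omp taskwait          /* wait for both child tasks before combining */
        return a + b;
    }

    int main(void) {
        long result;
        #pragma omp parallel
        #pragma omp single            /* one thread seeds the task tree */
        result = fib(35);
        printf("fib(35) = %ld\n", result);
        return 0;
    }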



