From Introduction to Parallel Computing
Usually I will write a program that does the following:
1. Initialize global read-only data structures
2. Parallelize jobs across STDIN lines (or whatever)
3. Output results within a #pragma omp critical block
It works wonders: it takes literally five additional lines of code to make a single-threaded program multithreaded with OpenMP. But it only fits certain types of programs. The concept is very similar to what GNU parallel does.
In short, threads are not the problem per se; they are just the wrong tool for the job if you have a multiple-readers, multiple-writers situation. Message passing, a database, or some other coordination mechanism is more appropriate in those cases. But you will pry my read-only shared memory space out of my cold, dead hands.