I personally feel that using light-weight processes for intra-application multitasking is far superior to having concurrently running threads that share the same block of memory.
Light-weight processes are far more secure than threads in the sense that they don't share memory, and thus avoid a whole host of problems associated with it.
IMO, they are also easier to work with (while programming); I find the message-passing IPC model simpler and more manageable.
Additionally, even in parallel computing, light-weight processes are a win. There's no need for complex algorithms to manage memory shared between CPUs when each CPU can be assigned one or more L.W.-processes that interact purely by message passing.
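A minimal sketch of that model in Python, using the `multiprocessing` module: each worker is a separate OS process with its own memory, and all interaction happens over queues rather than shared state (the `worker` function and queue names here are illustrative, not from any particular library):

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # Each process owns its own state; nothing here is shared,
    # so no locks are needed.
    for n in iter(inbox.get, None):  # None is the shutdown message
        outbox.put(n * n)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(10):
        inbox.put(n)                 # hand out work by message
    results = sorted(outbox.get() for _ in range(10))
    for p in procs:
        inbox.put(None)              # tell each worker to exit
    for p in procs:
        p.join()
    print(results)                   # squares of 0..9
```

The same structure maps directly onto multiple CPUs: the OS schedules each process on whatever core is free, and the queues are the only point of contact between them.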
I think that on a well-designed OS, L.W.-procs should be about as efficient as threads.
Some applications like Google Chrome already use L.W.-procs (a separate process is launched for each tab the user opens). It surprises me that more people don't use them already, given their many advantages.
Which model of multitasking do you think is better? (especially in terms of programmer efficiency)
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1....
Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems. Languages require little or no syntactic changes to support threads, and operating systems and architectures have evolved to efficiently support them. Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures. In this paper, I argue that this is not a good idea. Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly nondeterministic, and the job of the programmer becomes one of pruning that nondeterminism. Although many research techniques improve the model by offering more effective pruning, I argue that this is approaching the problem backwards. Rather than pruning nondeterminism, we should build from essentially deterministic, composable components. Nondeterminism should be explicitly and judiciously introduced where needed, rather than removed where not needed. The consequences of this principle are profound. I argue for the development of concurrent coordination languages based on sound, composable formalisms. I believe that such languages will yield much more reliable, and more concurrent programs.