Concurrency is not Parallelism. Rob Pike at Waza 2012 [video] (heroku.com)
65 points by DanielRibeiro 1607 days ago | 48 comments



You have four tasks A, B, C, and D. B and C both depend on the result of A, and D depends on the results of B and C. Now B and C are concurrent because there is no dependency between them - you can execute these tasks in the order ABCD or ACBD. It is only important that A executes before B and C, and D after them. And there is a third option - you can execute B and C at the same time, making the execution parallel. So concurrency is about dependencies between tasks or the lack of them while parallelism is about actually executing independent tasks at the same time.
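To make that concrete, here's a minimal Go sketch of the same structure (the task bodies are placeholders invented for illustration): B and C are launched as goroutines, so the runtime is free to interleave them or run them at the same time, while A always runs first and D always runs last.

    package main

    import (
        "fmt"
        "sync"
    )

    // Hypothetical tasks; only the dependency structure matters here.
    func taskA() int         { return 1 }
    func taskB(a int) int    { return a + 10 }
    func taskC(a int) int    { return a + 100 }
    func taskD(b, c int) int { return b + c }

    func main() {
        a := taskA() // A must finish first

        // B and C are concurrent: there is no dependency between them,
        // so the runtime may interleave them or run them in parallel.
        var b, c int
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); b = taskB(a) }()
        go func() { defer wg.Done(); c = taskC(a) }()
        wg.Wait()

        // D depends on both B and C, so it runs last.
        fmt.Println(taskD(b, c))
    }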


> So concurrency is about dependencies between tasks or the lack of them while parallelism is about actually executing independent tasks at the same time.

I understand what you're getting at, but I don't think that's a great summary of concurrency vs. parallelism. My version would be: concurrency is about non-determinism and multiple threads of control, while parallelism is about doing multiple computations at once for performance reasons (possibly deterministically).

EDIT: sorry, I see this has already been posted almost verbatim in a few other places in the thread


I personally would not draw a (strong) connection between concurrency and nondeterminism - while concurrent (real-world) systems usually show a fair amount of nondeterminism, there is no inherent requirement for it. The dependencies are static, and one can always schedule task execution in a fully deterministic way.


Well, sure, we're ultimately assuming some sort of deterministic result of our computation, but we're discussing PL semantics and properties of the computation itself, and non-deterministic semantics are one of the most important characteristics of concurrent programming, no?


We definitely want a deterministic result for every task execution, even if the execution of some subtasks happens nondeterministically. But after rethinking what I wrote last time, I would now even strengthen my statement. Concurrency is a structural property of a system; parallelism is a property of a (specific) schedule of subtasks and therefore a property of the runtime behavior of a task. Nondeterminism is also a property of the runtime behavior, and therefore nondeterminism MAY be associated with parallelism but CANNOT be associated with concurrency.


The irony was not lost on me... Heroku posts a video of Rob Pike discussing intelligent routing (load balancing) using concurrent Go a week after the "random load balancing fiasco" [1]. I chuckled.

[1] http://news.ycombinator.com/item?id=5215884


Yes, although the example given by Rob Pike highlights all the information that would have to be shared between the load balancer and the web processes in real time in order to make intelligent routing work well.

That's very hard to achieve when you're running a heterogeneous network, trying to handle routing for any kind of app answering requests. If there were an easy solution to this, Heroku would evidently have put it in already, but I'd hope they're working on it.

Sounds like some of their customers have outgrown them, though; given the amount they are paying per month, they could easily host themselves and hire someone to look after their servers - at some point it becomes necessary to control your entire stack if you care about performance.


The difference, as I understand it, is actually pretty simple... This suggests to me that the confusion is due to the synonymy of the terms in common usage. Without making any claims about the quality of this presentation specifically, I wonder why anyone would insist on using them and inevitably have to lecture perfectly smart people about why they don't mean the same thing...


I listened to his presentation. IMHO the presenter laid too much stress on parallelism and concurrency being different, but never explained the difference clearly and succinctly. From the presenter's side, though, I can see that he needs to sell his product (Go). Therefore he assumes that these definitions are often confused, that it is a big issue, and that Go is 42.


These terms have been confused since the early 90s at least. It is one of the first things we systems people learn: concurrency is doing multiple things at once because you must, parallelism is doing multiple things at once because you can.

Infrastructure and techniques that support concurrency (e.g. actors) are almost always totally unsuitable for parallelism (e.g. SIMD a la GPUs), and the other way around. Likewise, skills don't transfer very well between the two fields. So when someone talks about a technology being useful for your use case and they've confused the terms, it's very annoying ("you should use actors to get more parallelism" - WTF????).


You're taking a pretty narrow view of parallel computing. For example, there are plenty of systems that use essentially actor models for large-scale parallel computation, e.g. Charm++ and HPX.


Isn't Charm++ infrastructure comparable to MPI? In that case, it's not clear what the goals are. I think it was during the MPI phase that we began to realize that we had two very different problems on our hands. Fine-grained message passing has been obsolete in the parallel field for a while now.


Are you saying that MPI is obsolete? I thought most legacy code in HPC is based on MPI?


You know, it's rude to say something is obsolete in this field, so we just say... there are other choices, and hope the hint is taken. Nothing ever dies quickly; it just fades away into the sunset.

There is a lot of legacy code running with MPI for sure, but the big data and almost big data trends are clear, and CUDA has been absolutely disruptive in the scientific computing field (though some people are beginning to use MapReduce here also).


yeah, I always map CUDA blocks to racks and CUDA threads to blades ;)

GPUs, Xeon Phi, FPGAs and other accelerators still need somebody managing them. Nobody said it should be MPI, but there are still many hybrid architectures with MPI handling distribution and actual compute done using OpenMP or accelerators.

In my system I use Erlang/OTP to handle distribution and concurrency and OpenCL for data-parallel compute.


> yeah, I always map CUDA blocks to racks and CUDA threads to blades ;)

Sounds like Jeff Dean.

> Nobody said it should be MPI, but there are still many hybrid architectures with MPI handling distribution and actual compute done using OpenMP or accelerators.

MPI or even RPC works fine as a control mechanism, just not as a critical performance-sensitive abstraction, where we care about the width and speed of the pipe, and MPI is nothing like a pipe!

> In my system I use Erlang/OTP to handle distribution and concurrency and OpenCL for data-parallel compute.

This is quite reasonable. Once one understands the difference between concurrency and parallelism, they can pick appropriate tools to deal with each. As long as they confuse the issues, they'll make bad choices.


Rob's been trying to get people to understand the difference since the early 1990s (Alef, etc.). It's not something new, nor unique to Go. In fact Go is a response to the desire for concurrency; it's not that Go popped out of nothing and then Rob decided that he consequently needed to "sell" it by pushing concurrency.


I think the roots of Go are in the Limbo programming language, which ran on the Dis VM on the Inferno OS. Inferno itself is based on Plan 9 ideas.

Essentially, Go is just Limbo, but compiled to native code instead of running on a VM, and with more conventional syntax.

Inferno also had built-in distribution and hot code loading: i.e. you could load a module from a remote location and run a function locally, or you could execute a function remotely as an RPC.

Erlang/OTP never had the level of integration that Inferno had, but now there are efforts to run Erlang on bare metal, e.g. Erlang-on-Xen.


I really love this talk, but man oh man the gophers are seriously creepy.


They're not so creepy compared to some of the artist's other work: http://reneefrench.blogspot.com/


Thanks for sharing, everything makes more sense now.


That has Courage the Cowardly Dog written all over it! Very creative, though.



You can order it at the Google store: http://www.googlestore.com/Fun/Go+Gopher+Squishable.axd


Yeah, I know - there's one on like every desk at my office. So ugly; it's the worst thing about golang. Which, as far as worst things go, isn't so bad, but man. Ick.


I'm trying to wrap my head around this, not being familiar with Go yet... is this analogous to the difference between a Task and a Thread in the .net world?


They are language-agnostic, generic concepts.

Concurrency deals with dependencies, or the lack of them, among tasks and their running order. Task A must run after task B, B after C, etc., so A, B, and C are not concurrent tasks. D can run at any time: before, after, or at the same time as A/B/C. The ordering doesn't matter, so D is concurrent with A/B/C. The concurrent tasks MIGHT run at the same time, but it's not necessary; that's up to the OS/system scheduler. Concurrent tasks can run serially on a single-core CPU and they are still concurrent tasks.

Parallelism has to do with efficiently getting as many tasks as possible to actually run in parallel. E.g. given 10 CPUs, how do I fully utilize all of them at the same time? Mostly it comes down to data partitioning, to make sure each CPU gets an even share of independent data.

Concurrency and parallelism can be mixed at the same time. E.g. in the map-reduce pattern, the map part utilizes parallelism to distribute the data to be processed across all the CPUs. The reduce task has a dependency on the map tasks; that ordering constraint is where concurrency comes into play.
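As a rough Go sketch of that idea (the partitioning scheme here is made up for illustration): the "map" phase partitions the data and runs one worker per partition in parallel, and the "reduce" phase depends on all of the map results, so it can only run after they complete.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        data := make([]int, 1000)
        for i := range data {
            data[i] = i
        }

        // "Map" phase: partition the data and give each worker its own chunk,
        // so the workers can run in parallel without sharing anything.
        const workers = 4
        partial := make([]int, workers)
        chunk := (len(data) + workers - 1) / workers

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            lo, hi := w*chunk, (w+1)*chunk
            if hi > len(data) {
                hi = len(data)
            }
            wg.Add(1)
            go func(w, lo, hi int) {
                defer wg.Done()
                for _, v := range data[lo:hi] {
                    partial[w] += v // each worker writes only its own slot
                }
            }(w, lo, hi)
        }
        wg.Wait() // the reduce step depends on all map tasks: a concurrency constraint

        // "Reduce" phase: runs only after every map task has finished.
        total := 0
        for _, p := range partial {
            total += p
        }
        fmt.Println(total) // 499500
    }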


A task is a future or a promise which can deliver a result later. A goroutine in Go is more general than a task because it doesn't have to terminate when it delivers a result; goroutines subsume tasks.

Goroutines are mapped onto (kernel-level) threads in order to make it possible to run more of them at once and thus provide parallel execution. Otherwise, you have a single scheduler of goroutines in a single process. You can still switch context between the goroutines, so the system is concurrent, but it will not be parallel: only one goroutine runs at a time.

For a system in which most goroutines wait on IO or the network, running many goroutines on a single core may not be a problem.
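A small sketch of that point, using only the standard runtime package: with GOMAXPROCS set to 1, the goroutines below are still concurrent (the scheduler interleaves them) but never parallel; raising GOMAXPROCS lets the runtime map them onto multiple OS threads.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(1) // only one goroutine executes at any instant

        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for j := 0; j < 3; j++ {
                    fmt.Println("goroutine", id, "step", j)
                    runtime.Gosched() // yield so the scheduler can interleave the others
                }
            }(i)
        }
        wg.Wait()
    }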


I still don't quite follow. What do you mean by "terminate" a task? From my understanding, when a Task returns, the underlying Thread is returned to the common pool for use by other Tasks. Does the Go scheduler detect when a task is blocked on IO, and reuse the thread for other goroutines?


"to make it possible to run more of them at once and thus provide parallel execution."

I guess you meant "concurrent execution" here ;)

"Otherwise, you have a single scheduler of goroutines in a single process. ... But it will not be parallel: One goroutine only at a time."

So Go doesn't have something equivalent to the SMP scheduler in Erlang?


Suppose you have a process which is I/O-bound on multiple I/O resources. If, instead of blocking on each one, you suspend that sequence of execution and move on to the next, resuming when you get the signal that the call is unblocked, you are handling the resources concurrently, but not in parallel.

If you alter the program to spawn a separate thread for each, and let them all block, you will have a parallel solution.
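In Go terms, that second version might look roughly like this (the blocking reads are simulated with a sleep): each resource gets its own goroutine and is allowed to block, and whether the handling is also parallel depends on how many threads and cores the runtime has available.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // readResource stands in for a blocking read on some I/O resource.
    func readResource(name string) string {
        time.Sleep(100 * time.Millisecond)
        return "data from " + name
    }

    func main() {
        resources := []string{"disk", "socket-a", "socket-b"}

        // One goroutine per resource; each is allowed to block.
        var wg sync.WaitGroup
        results := make([]string, len(resources))
        for i, r := range resources {
            wg.Add(1)
            go func(i int, r string) {
                defer wg.Done()
                results[i] = readResource(r)
            }(i, r)
        }
        wg.Wait()

        for _, res := range results {
            fmt.Println(res)
        }
    }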


Concurrency: appears to be running at the same time (possibly using the same core)

Parallelism: actually running at the same time (using different cores)


tl;dr Parallelism implies concurrency but concurrency doesn't imply parallelism.


It's a bit more nuanced than that. Concurrency implies non-determinism. For example, if you have multiple threads, then the result of the program is non-deterministic: it may depend on the particular interleaving of the threads over time. Parallelism is about executing multiple things at the same time to improve performance.

You can have parallelism without concurrency. For example, a parallel map operation is parallel, but not non-deterministic: it always gives the same result. Under the hood, the implementation of parallel map probably involves some concurrency, but that is not exposed in the interface of map.

You can have concurrency without multiple threads. For example, Node with asynchronous callbacks is concurrent, because the order in which those callbacks get called is non-deterministic, but it does not necessarily need to run on multiple threads.

Usually we don't want the end result of our program to be non-deterministic, so we use synchronization primitives like locks and Go's channels to limit the non-determinism in such a way that it does not affect the end result.
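To make the parallel-map point above concrete, here's a minimal Go sketch (spawning a goroutine per element purely for illustration): the goroutines are scheduled non-deterministically, but each writes only its own output slot, so the result is deterministic and the concurrency stays hidden behind the function's interface.

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelMap runs f on every element, one goroutine per element.
    // The scheduling is non-deterministic, but each goroutine writes only
    // its own index, so the result is the same on every run.
    func parallelMap(xs []int, f func(int) int) []int {
        out := make([]int, len(xs))
        var wg sync.WaitGroup
        for i, x := range xs {
            wg.Add(1)
            go func(i, x int) {
                defer wg.Done()
                out[i] = f(x)
            }(i, x)
        }
        wg.Wait()
        return out
    }

    func main() {
        squares := parallelMap([]int{1, 2, 3, 4}, func(x int) int { return x * x })
        fmt.Println(squares) // always [1 4 9 16], whatever the interleaving was
    }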

What most people don't realize is that for concurrency to be useful, there must be parallelism. But wait, didn't I just say that in Node you have concurrency without multiple threads? Yes, but the parallelism does not need to be in your CPU! If you do an asynchronous disk read, or an asynchronous network request, the disk and other computers on your network are working in parallel with your program. Concurrency allows us to exploit that parallelism, so that the CPU can do useful work while the disk is busy reading our data.

Now, you can write your program with concurrency, but if there is no parallelism anywhere in the system, that's not useful. So in practice it is the reverse: concurrency implies parallelism (but not necessarily on the CPU), but parallelism does not imply visible concurrency (though it might be hidden under the hood of a parallel construct like parallel map).


> What most people don't realize is that for concurrency to be useful, there must be parallelism.

No, as far as I understand from this and other talks, Rob Pike argues that concurrency is useful even without parallelism: it's a better way to structure software. A nice example for this is shown in his talk about lexical scanning in Go: http://www.youtube.com/watch?v=HxaD_trXwRE


I'd say concurrency is a bit overkill for that problem (i.e. threads or processes scheduled by the runtime system in a way that appears non-deterministic to the programmer). Deterministic coroutines work fine. Even Python generators are sufficient.

Though perhaps "useful" is a bit too broad: maybe it should be replaced by:

> What most people don't realize is that for concurrency to improve performance, there must be parallelism.


I actually would disagree with the statement that "for concurrency to be useful, there must be parallelism." In my experience with handling asynchronous events (such as input from the user, system, or network) it's much easier to handle the input in separate threads instead of polling some buffer in your top-level logic loop. That way the time slicing problem is offloaded to the OS (or whatever thread library you're using) instead of you having to worry about getting stuck in another part of your top-level loop for too long.


The reason you don't want to get stuck is that there is parallelism: between the other computers on the network and your computer, or between the person using the computer and your program. (This isn't really different from a disk or a network doing things in parallel with your program; only in this instance the other thing runs on flesh instead of on silicon.) If the disk couldn't do things in parallel with your CPU, there would be no point in avoiding getting stuck for too long in your main loop, since either one would have to wait for the other anyway.


Cooperative multitasking on a single CPU is concurrent, but not parallel. It's still very useful.


Yes, but only because there is parallelism between the CPU and the peripherals, like disk, network and the brain of the person using the computer.


The realm of the programming language is the CPU. In the context of this talk, the terms concurrency and parallelism should refer to tasks running on CPUs. It confuses the issue greatly when you introduce the outside world, which is inherently parallel.


While parallel map is always deterministic, parallel reduce is not. E.g. if you try to calculate the sum of a vector of real (floating-point) values in parallel, you can get different results each time.


Sort of, but this explanation loses too much information. Concurrency is a binary relation on tasks, which looks like Task -> Task -> Bool; if true, the order in which the tasks may execute relative to one another does not matter. If false, the tasks must execute in a defined order. Astute readers will note the above function is isomorphic to the function defining whether there is an edge between two nodes in a graph (although the Bool value is flipped); indeed, task dependency graphs are a useful way of modelling concurrent systems.

Concurrency has absolutely nothing to say about parallelism. If you have a computer system which is able to execute tasks in parallel, it can take advantage of information from the concurrency relation to execute two concurrent tasks in parallel.
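A tiny Go sketch of that relation, using the A/B/C/D dependency graph from upthread (the representation is made up for illustration): two tasks are concurrent iff neither transitively depends on the other.

    package main

    import "fmt"

    // Dependency edges for the A/B/C/D example; made up for illustration.
    var deps = map[string][]string{
        "B": {"A"},
        "C": {"A"},
        "D": {"B", "C"},
    }

    // dependsOn reports whether x transitively depends on y.
    func dependsOn(x, y string) bool {
        for _, d := range deps[x] {
            if d == y || dependsOn(d, y) {
                return true
            }
        }
        return false
    }

    // concurrent is the Task -> Task -> Bool relation: true means the two
    // tasks may execute in either order, or in parallel if the hardware allows.
    func concurrent(x, y string) bool {
        return x != y && !dependsOn(x, y) && !dependsOn(y, x)
    }

    func main() {
        fmt.Println(concurrent("B", "C")) // true: a scheduler may run them in parallel
        fmt.Println(concurrent("A", "D")) // false: A must complete before D
    }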

For the best implementation of this idea I've seen, I recommend taking a look at Intel's Threading Building Blocks template library.


Concurrency is when you have multiple threads of control. Parallelism is when you are doing more than one thing at the same time.

Multiple threads of control may do things in parallel.

Another good explanation: http://ghcmutterings.wordpress.com/2009/10/06/parallelism-co...


Yep.. I don't understand.

Concurrency is not parallelism, yet.. parallelism requires concurrency...?

I read through your linked article and didn't understand, I read through the linked article and... I still don't understand!

>Now I can hear you object, but isn’t concurrency required to implement parallelism? Well, yes, it is, but concurrency is also required to implement sequentiality too!

What am I missing here?

Is parallelism just well-structured concurrency? Or is the sole distinction determinism vs. non-determinism? How is non-determinism avoided in a parallel process (which, to my thinking, means a thread)?

From my Java/Python mind, the load balancer dispatcher/worker/channel-listener seemed very similar in construction to what you would do with a Thread (or Process) and a synchronized Queue. What makes one concurrent, and the other not?

I suppose my main question is: what prevents parallelism from being non-deterministic?


> Or is the sole distinction determinism vs non-determinism? How is non-determinism avoided in a parallel process (which, to my thinking, means a thread)?

The distinction is much easier to understand in Haskell because the language is expressive enough, but that blog post might have been too much information. Here's the standard Haskell concurrency library:

http://hackage.haskell.org/packages/archive/base/latest/doc/...

It provides computations that can be forked on new threads, and primitives for communicating. The threads are scheduled by the runtime (non-deterministic semantics!) and I am free to write and run such a program on the single-core laptop in front of me (no parallelism!).

In contrast take a look at Control.Parallel:

http://hackage.haskell.org/packages/archive/parallel/latest/...

Here all the `par` function does is tell the runtime "it might be helpful to compute these in parallel" (parallelism!), but semantically `par x y` is simply `y` (the type or definition of `par` might not make sense to you, but that's not really important); there have been no new threads of control spawned (deterministic!).

Now if I wanted to, I could write a library for a parallel function `map :: (a -> b) -> Array a -> Array b`, say (parallelism!), and I might implement that in terms of the concurrency primitives from Control.Concurrent. I would consider that `map` to be an example of deterministic parallelism, even though the implementation might be utilizing multiple non-deterministic threads of control (in the same way that `par` is parallelism despite being implemented on top of the runtime, OS/green threads, and all other sorts of magic).


Concurrency is a statement about a duration of time. Parallelism is a statement about a moment in time.

During a one second window, did both of the tasks execute? If yes, then they are concurrent. If no, then they are not concurrent.

Was there any moment in time when both tasks were executing simultaneously? If yes, then they executed in parallel. If no, then they did not execute in parallel.

From this, it's easy to see that if something is parallel, it must also be concurrent - in order for two tasks to execute during the same moment in time, they must also both have executed during a window of time.

Concurrency without parallelism happens on machines with a single core. Processes and threads must time-share the only core. Frequently, some of these processes will have multiple threads, so the threads need to synchronize with each other, even though they don't execute at the same time.

The definitions of concurrency and parallelism have nothing to do with determinism, so I don't know why others have brought it up. Concurrency and parallelism imply non-deterministic interleavings of threads and processes, but whether or not the result is deterministic is up to how we program.


Parallelism does not require concurrency (at least not in your code). Parallelism can be handled by a library allowing your code to speed up with often minimal changes. One such example is the replacement of a sequential map function by a parallel map.



