You may have a different perspective than the author, but I think his explanation is clear.
Well, not being someone from the functional programming world, I don't think the explanation is clear at all.
Edit: to add a concrete example, I am currently working on a concurrent Haskell program that simulates a system of communicating nodes. However, I am running it on my single-core netbook. So we have indeterminacy and concurrent semantics without parallelism.
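The scenario above (concurrent semantics without parallelism) can be sketched in a few lines of GHC Haskell. This is not the commenter's actual program, just a minimal illustration: two `forkIO` threads write to a shared `Chan`, and even on a single core (i.e., compiled without `-threaded` or run without `+RTS -N`) the interleaving of their messages is nondeterministic, because the runtime schedules the threads concurrently without ever running them in parallel.

```haskell
import Control.Concurrent (forkIO, newChan, writeChan, readChan)
import Control.Monad (replicateM, replicateM_)

main :: IO ()
main = do
  ch <- newChan
  -- Two lightweight threads, each sending three messages.
  _ <- forkIO (replicateM_ 3 (writeChan ch "node A"))
  _ <- forkIO (replicateM_ 3 (writeChan ch "node B"))
  -- The main thread collects all six messages; the order in
  -- which "node A" and "node B" interleave may vary from run
  -- to run, even though nothing executes in parallel.
  msgs <- replicateM 6 (readChan ch)
  mapM_ putStrLn msgs
```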
So you're saying that concurrency is a property of the semantics used in the program, while parallelism is about the underlying execution?
I might be able to agree with that, but if that's the argument, that is what the author of the blog post should have stated.
Anyway, please let me know if I'm understanding your point.