"Parallelism is for performance, Concurrency is for correctness."
It reminds me of how, technically, there is space between everything if you go small enough (i.e. at the cellular or atomic scale), and how some elementary school kids would use that to convince you that they aren't touching you :)
This is not true. Unix commands are designed to be used in pipelines. Unix is actually the easiest way to run things in parallel, so easy that people don't even notice.
The overall task has multiple things running concurrently, but it doesn't have to, and the individual components are doing no concurrency of their own. In many cases it has the same effect because of the buffering built into how pipes are handled: for instance, passing the output of gzip through gzip through gzip will happily use most of three cores (if that much CPU time is free at that moment). But try that with commands that need to block, such as invocations of sort, and you'll see it break down: one sort command in your pipeline will keep the whole lot from scaling much beyond one core. Also, pipes and other trickery like xargs/parallel don't help if the task is something like "grep over this large file"; that will only scale over multiple cores at all if your implementation of grep supports a multi-threaded search (and your data is in cache or otherwise in RAM, so the process isn't IO-bound).
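To make the point concrete, here's a minimal Python sketch of a multi-stage pipeline (a stand-in for the gzip example, not anyone's actual code). Each stage is a separate OS process, so the kernel is free to schedule them on different cores, and the pipe buffers let an upstream stage run ahead of a downstream one:

```python
import subprocess
import sys

# Hypothetical "stage" program: reads lines from stdin, appends '!' to each.
# In the real-world example this would be gzip; a tiny Python child process
# keeps the sketch self-contained.
stage = [sys.executable, "-c",
         "import sys\n"
         "for line in sys.stdin:\n"
         "    sys.stdout.write(line.rstrip() + '!\\n')"]

# Two stages chained by a pipe, exactly as the shell would do with `a | b`.
p1 = subprocess.Popen(stage, stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE, text=True)
p2 = subprocess.Popen(stage, stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # parent drops its copy; p2 owns the read end now

p1.stdin.write("hello\n")
p1.stdin.close()
out, _ = p2.communicate()
p1.wait()
print(out)  # each stage appended one '!', so "hello!!"
```

Nothing in either stage is itself concurrent; the parallelism falls out of running independent processes connected by kernel-buffered pipes.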
It could queue each command and spool the output between them to temporary storage, if it so chose.
Note that some implementations of sort, such as the GNU version found in most (all?) Linux distributions, do support multi-core operation on their own.
To address another comment, I agree this distinction has limited utility (assuming I understand it correctly). This context, discussing architectures that promote abstract parallelism, is probably about the only place it's useful.
I agree that a strict/true interpretation of 'concurrency' (as distinct from Pike's definition) would not include time-slice multi-tasking, since it depends on the limits of human perception to appear truly concurrent/executed in parallel when it's actually fine-grained sequential processing.
Edit: compare a batch system (no concurrency and no parallelism) vs. a time-sharing system on a uniprocessor (concurrency) vs. a time-sharing system on a multiprocessor (parallelism).
The other angle is that concurrent programs are still concurrent on a uni-processor. Parallel programs are not.
A uniprocessor may be pipelined, superscalar, support SIMD, etc., all of which would seem to satisfy the definition of parallelism. So isn't parallelism achievable on a uniprocessor?
The more I think about this stuff the less sense the distinction makes.
If you look at what happens inside a CPU then you probably could say a uniprocessor has some degree of parallelism, but this is not a common point of view AFAIK; it is too low-level. Also, the execution contexts inside a pipelined CPU are not entirely independent: you can have pipeline stalls due to dependencies. This is not the case with, say, two application threads running independent tasks.
Of course! Normally unqualified parallelism refers to multiprocessor parallelism, but instruction-level parallelism is also a thing. So is overlapping I/O or memory access with computation. Even memory accesses themselves can be parallelized (caches are multi-ported, CPUs allow multiple outstanding cache misses), as can I/O (multiple disks or network cards).
The discussion existed around languages like Python, long before Golang showed up on the scene.
If you ever ask Racket-lang folks about threads they will lecture you about concurrency vs parallelism endlessly, it's like a mantra they repeat to avoid realizing there is a problem.
Being super careful about 'parallel' and 'concurrent' is super, super pedantic and pointless. They are synonyms. If two things are running concurrently, they are running in parallel. If two things are running in parallel, they are running concurrently. These are terrible words to overload with very careful definitions.
When people talk about 'concurrency' using the super careful pedantic definition, they just mean that it's possible to execute code while at the same time blocking on system calls. This isn't some kind of great feat and any language that can't do it is a toy. There is absolutely no reason to act like 'concurrency' using this definition is in any way special or worth even mentioning.
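For what it's worth, here is what that "not a great feat" looks like as a minimal Python sketch: two threads each block on a call (time.sleep standing in for a blocking system call), and the blocks overlap, so the total wall time is roughly one sleep, not two. This works even under CPython's GIL, because a blocked thread releases it:

```python
import threading
import time

def worker():
    # Stand-in for a blocking system call (network read, disk I/O, ...).
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
# The two 0.2 s blocks overlap, so elapsed is ~0.2 s rather than ~0.4 s.
print(round(elapsed, 2))
```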
But that's not the point, is it? People get defensive about their favorite languages, so when faced with valid criticisms like "the GIL prevents you from running Python code in parallel" (which is completely valid, and actually a pretty huge defect when modern computers have many cores on average), instead of just admitting the deficiency they respond with stupid lectures about 'concurrency' and 'parallelism'.
My first run-in with this was when I wrote a toy brute-force birthday attack as part of a course I was doing, and realised it was only utilising one core.
The multiprocessing module works as expected, sure.