I have a little hunch that any parallelism "answer" is going to be derived primarily from ever-more-grand exploitation of its strengths, not patching of its weaknesses.
We know that in a formal reasoning sense, an asynchronous system is less powerful than a synchronous one, because you can implement any async design in a sync system, but not the other way around. Yet in practical use, asynchronous, "hard" decoupled systems are the ones that scale better and are easier to maintain. So we keep telling ourselves, "decouple everything" and have invented a gross array of patterns and techniques that add "decoupling pixie dust."
We know this, but usually don't extrapolate it to its natural conclusion: we're using brute-force engineering to attack our needs from the wrong direction, trying to break arbitrary sequential computations into parallel ones via sufficient smartness, instead of folding parallel ones back into sequential ordering where it produces wins, and breaking out the "sequential toolbox" only if the problem really taxes our abilities of algorithm design, as in the cases you have outlined.
Of course, adhering to parallel designs from the beginning is hardly a silver bullet either, but there are real wins to be had by exploring the possibility, and a good number of them can be experienced today simply by working that style into "ordinary" imperative or functional code with abstractions like actors and message-passing.
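That "work the style into ordinary code" point can be made concrete with a few lines of Python. This is a minimal actor sketch (names and structure are my own, not from any particular library): each actor owns a mailbox and a worker thread, and processes messages one at a time, so callers never touch its state directly.

```python
import queue
import threading

class Actor:
    """A toy actor: one mailbox, one thread, strictly sequential handling."""

    def __init__(self, handler):
        self._handler = handler
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # Senders never block on the actor's work; they just enqueue.
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:  # poison pill: shut the actor down
                break
            self._handler(msg)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

# Usage: an actor that serializes all updates through its mailbox.
results = []
doubler = Actor(lambda n: results.append(n * 2))
for i in range(3):
    doubler.send(i)
doubler.stop()
print(results)  # [0, 2, 4]
```

The win is exactly the decoupling the parent describes: senders and the actor share nothing but the queue, so the sequential reasoning all lives inside `_run`.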
Correct me if I'm wrong, but in a formal sense you get the synchronous system inside the asynchronous one trivially. So the two are of equivalent power.
Take threading systems as an example: you can implement a multi-threaded system with your own green-threads library, which is async on top of sync. You can get sync on top of async with native threads by simply using one thread, or, if you want to be complicated, by using multiple threads with locks that serialize everything to a single synchronous ordering.
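The "async on top of sync" half is easy to sketch. Here is a toy green-threads scheduler (my own illustration, not a real library): each cooperative "thread" is a generator, and a round-robin loop on a single OS thread interleaves them, giving asynchronous-looking behavior from a purely synchronous host.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based green threads on one OS thread."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run the task up to its next yield
            ready.append(task)        # reschedule it behind the others
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield (name, i)  # cooperative yield point

trace = scheduler([worker("a", 2), worker("b", 2)])
print(trace)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Note the interleaving is fully deterministic, which is the tell that the asynchrony here is simulated on a synchronous substrate.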
> We know that in a formal reasoning sense, an asynchronous system is less powerful than a synchronous one, because you can implement any async design in a sync system, but not the other way around.
As someone else pointed out as well, I just don't see it. If components of an asynchronous system can wait on an event, you can make the system synchronous by driving it with deterministic synchronous events. Imagine a bunch of independent actors that always wait for a clock signal, process one piece of data, then wait for the next, and so on.
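That clock-driven construction can be sketched directly with a barrier (this is my own minimal illustration, using Python's stdlib): each actor blocks at the barrier, does exactly one step per tick, and blocks again, so independent threads end up in lock-step, synchronous behavior.

```python
import threading

TICKS = 3
NUM_ACTORS = 2
# Actors plus the clock driver all meet at the barrier on each "clock edge".
clock = threading.Barrier(NUM_ACTORS + 1)
results = {i: [] for i in range(NUM_ACTORS)}

def actor(ident):
    for tick in range(TICKS):
        clock.wait()                 # block until the clock fires
        results[ident].append(tick)  # process one step, then re-wait

threads = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in threads:
    t.start()
for _ in range(TICKS):
    clock.wait()  # the driver releases every actor once per tick
for t in threads:
    t.join()

print(results)  # {0: [0, 1, 2], 1: [0, 1, 2]}
```

Despite running on genuinely concurrent threads, the output is deterministic, which is exactly the claim: the synchronous system is the asynchronous one with its freedom to interleave taken away.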
So I actually see a synchronous system as a restricted case of an asynchronous one.
If you think about it, the world is inherently asynchronous. If you have two agents in the real world, they process and do things asynchronously; there is no global event or clock system that drives everything.