Hacker News

Alright, so I do computational plasma physics simulations for a living. Our problems often reach sizes that require teraflop-level parallelism to finish before I retire (that's my scale; I'm sure you're aware of the petaflop sims the fluid/engineering folks do), so we do need concurrency, but it's done by chopping up the domain, literally the volume of space we simulate, across processors. Now, we could possibly do async at some level (sub-node level), but we absolutely need to be synchronized globally, because we are simulating real-life physics and must maintain causality, especially when we pass information between nodes. Say a wave front crosses one node boundary into an adjacent one: waves must move in a finite and causal manner, of course, just as if you watched a ripple on the surface of a puddle.
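To make the "synchronize before the wave crosses the boundary" point concrete, here is a minimal sketch of a ghost-cell (halo) exchange on a toy 1D upwind advection problem split across two "ranks" (plain lists here; real codes would use MPI). The scheme and all names are illustrative, not from any actual plasma code.

```python
# Toy 1D rightward advection split across two subdomains ("ranks").
# Before each time step, the ranks exchange boundary (ghost) values at
# a global synchronization point, so the wave front crosses the rank
# boundary causally instead of whenever a task happens to run.

def step(u, ghost_left, c=0.5):
    """One upwind step for rightward advection; c is the CFL number (< 1)."""
    padded = [ghost_left] + u
    # u_new[i] = u[i] - c * (u[i] - u[i-1])
    return [padded[i] - c * (padded[i] - padded[i - 1])
            for i in range(1, len(padded))]

# A pulse sits at the right edge of rank 0; it can only enter rank 1
# through the ghost-cell exchange below.
rank0 = [0.0, 0.0, 0.0, 1.0]
rank1 = [0.0, 0.0, 0.0, 0.0]

# Global synchronization point: exchange boundary data *before* either
# rank advances, so both ranks step forward in lockstep.
ghost_for_rank1 = rank0[-1]
rank0 = step(rank0, 0.0)             # inflow boundary on the far left
rank1 = step(rank1, ghost_for_rank1)
# After one step, half the pulse has crossed into rank 1's first cell.
```

Repeating the exchange-then-step pair advances the pulse one lockstep at a time, which is exactly the behavior an async-by-default execution would have to reconstruct with extra bookkeeping.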

If we did the whole problem async (async by default, as I interpret it), it would require so much bookkeeping to preserve causality that it would be unreasonably difficult. For problems that require causality like this, we're lucky that the default mode of computation is synchronous, because it fits my problem domain perfectly.

That's why I say "not everything is web": while async maps well to problems like a server-client application, it doesn't map well to all problems, mine for example. Also, as someone else said, sync is easier to reason about for problems that don't require a lot of parallelism.




At worst, you'll have to explicitly wait for all your async calls to complete and produce results. Normally, I suppose, the implicit wait occurs on the attempt to access the result. I also suppose you have to do that anyway on a multi-processor system, and you can't be doing it on a single core.
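A short sketch of both waits using Python's `concurrent.futures`: `wait()` is the explicit barrier, while `.result()` is the implicit wait on result access. The function name is invented for illustration.

```python
# Explicit vs. implicit waiting on a batch of async calls.
from concurrent.futures import ThreadPoolExecutor, wait

def integrate_block(block_id):
    # Stand-in for advancing one block of the simulated volume.
    return block_id * block_id

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(integrate_block, b) for b in range(8)]
    # Explicit barrier: nothing below runs until every block is done.
    wait(futures)
    # Implicit wait: .result() would also have blocked until each
    # future finished, even without the wait() above.
    results = [f.result() for f in futures]
```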

At best, some of your CPUs would be able to run calculations for two especially fast-to-compute pieces of the volume, while other CPUs were busy computing a particularly gnarly block of it.


The problem is that async adds a lot of overhead, and while the mythical sufficiently good compiler could remove it, in practice it is a lot of work for little benefit.

For those CPU-bound jobs that don't fit the classical OpenMP-style scheduling, more dynamic async-style scheduling might be appropriate (Cilk-style work stealing, for example), but the async granularity is hardly ever the function boundary.
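A minimal sketch of the work-stealing idea: each worker owns a deque, pops work LIFO from its own end, and steals FIFO from a victim when it runs dry. Note the task granularity is a *chunk* of cells, not a function call, which is the point about granularity above. This is a toy under simplifying assumptions (tasks never spawn new tasks, so an empty-everywhere check is a safe exit), not a real Cilk runtime.

```python
# Cilk-style work stealing in miniature: per-worker deques, LIFO local
# pop, FIFO steal from a random victim. All work is seeded onto worker
# 0, so the others make progress only by stealing.
import random
import threading
from collections import deque

NUM_WORKERS = 3
deques = [deque() for _ in range(NUM_WORKERS)]
locks = [threading.Lock() for _ in range(NUM_WORKERS)]
results = []
results_lock = threading.Lock()

def worker(wid):
    while True:
        task = None
        with locks[wid]:
            if deques[wid]:
                task = deques[wid].pop()          # LIFO from own deque
        if task is None:
            victims = [v for v in range(NUM_WORKERS) if v != wid]
            random.shuffle(victims)
            for v in victims:
                with locks[v]:
                    if deques[v]:
                        task = deques[v].popleft()  # FIFO steal
                        break
        if task is None:
            return  # safe exit: no task here spawns further tasks
        total = sum(x * x for x in task)          # stand-in compute
        with results_lock:
            results.append(total)

# Seed all work onto worker 0's deque as chunks of 4 cells each.
cells = list(range(16))
for i in range(0, 16, 4):
    deques[0].append(cells[i:i + 4])

threads = [threading.Thread(target=worker, args=(w,)) for w in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The LIFO-local/FIFO-steal asymmetry is the classic design choice: local pops keep cache-warm work on the same worker, while thieves take the oldest (usually largest) pending chunk.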



