I would suggest that it's never a good idea to be cooking several dishes at the same time in the world of software. Software and cooking are completely different in terms of the cost of making a mistake: a single bug in production can take more time to find and fix than the original work took in the first place.
I can see that in a commercial kitchen it makes sense to prepare all the ingredients beforehand, because you can do this before your customers arrive, and it's better to chop 100 carrots in one batch than at a hundred different times. But that's very different from software development.
Depends what you're doing. If a small team supports multiple products (internal or external) concurrently, it's almost inevitable.
Obviously, all analogies break down eventually, but I would say something similar to the article: if you anticipate needing to swap some component out, or you're concerned that some activity carrying substantial project risk will land during a period of high pressure (the lead-up to a release, say), try to front-load that effort into a more forgiving timeframe. In the analogy, the risky version is chopping peppers while the onions are frying and hoping that no one orders another dish; the front-loaded version is chopping them before the restaurant starts taking orders. Do the experiments, write the documentation, make sure everyone knows what's going on; then, hopefully, there's less project risk at a time when you can't afford it. You're saving risk, not necessarily time. And you can't eliminate all the risk. (A rough sketch of what front-loading a component swap can look like in code follows below.)
If that's just not possible because you're too busy 100% of the time, you're probably on a death march and it's already too late: once the orders are coming in, if you haven't prepped the peppers, you just have to deal with it, and if that makes any dishes late and customers angry, tough.
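To make the component-swap point concrete, here's a minimal sketch in Python (the payment-gateway scenario and all names are invented for illustration, not from the article): define the seam up front, during the forgiving timeframe, so that the risky swap later is a wiring change rather than a scramble.

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """The seam, defined up front while there is still slack to think it through."""

    def charge(self, amount_cents: int, token: str) -> str: ...


class LegacyGateway:
    """The existing integration (imagine a real HTTP call here)."""

    def charge(self, amount_cents: int, token: str) -> str:
        return f"legacy-receipt-{token}"


class NewGateway:
    """The replacement, experimented with during the quiet period."""

    def charge(self, amount_cents: int, token: str) -> str:
        return f"new-receipt-{token}"


def checkout(gateway: PaymentGateway, amount_cents: int, token: str) -> str:
    # Call sites depend only on the Protocol, so swapping implementations
    # under release pressure is a one-line change, not a rewrite.
    return gateway.charge(amount_cents, token)


if __name__ == "__main__":
    print(checkout(LegacyGateway(), 1999, "tok-1"))  # today
    print(checkout(NewGateway(), 1999, "tok-1"))     # after the swap
```

None of this removes the risk of the swap itself, of course; it just moves the thinking to a time when you can afford to get it wrong.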
That's a good point. In a commercial kitchen one can declare "bankruptcy" by throwing out a single dish (or perhaps a set of dishes if one component was bad and the badness wasn't detected before use). But in software, declaring "bankruptcy" is the dreaded ground-up rewrite. It's like burning the whole restaurant down and starting over.
I think another important difference is that software is much less predictable. Thanks to low/zero-cost replication, we spend much more time doing something novel than a commercial chef does. If a developer is doing the same thing over and over, that's an opportunity to extract a service, a framework, or a library. Whereas restaurants have much more predictable workloads, making it much safer to, say, chop 100 carrots than to write 100 classes in advance of need.
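As a toy illustration of that "extract when you repeat yourself" point (Python, with hypothetical endpoints; nothing here is from the article): once the same shape of code shows up for the third time, the repetition is the signal that a helper, and eventually maybe a library, wants to exist.

```python
import json
from urllib.request import urlopen

# Before: three near-identical functions, each fetching a URL, decoding JSON,
# and picking out one field. After: one extracted helper.


def fetch_field(url: str, field: str, timeout: float = 5.0):
    """The reusable piece distilled out of the repetition."""
    with urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read())[field]


# Former copy-pasted call sites collapse to one-liners (endpoints are made up):
# user_name  = fetch_field("https://api.example.com/user/1", "name")
# team_size  = fetch_field("https://api.example.com/team/7", "size")
# repo_stars = fetch_field("https://api.example.com/repo/x", "stars")
```

The kitchen never gets to do this: a chopped carrot can't be parameterized and reused across dishes the way a function can.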
Isn't a bug in production more like food poisoning your customers in this analogy? Both take far more work to fix than the original effort put in. Whereas a bug caught by automated tests is more akin to noticing you overcooked the steak before sending it out?