I find the following analogy interesting:
"I don't want to harvest vegetables, boil down some bone to make chicken stock, collect herbs & spices, throw it all in a pot and simmer for 4 hours - I'll just go to the store and buy some soup instead!"
Soup used to be a very good way of extracting additional nutritional value from kitchen scraps, as well as assuaging someone's hunger with relatively few resources. In the industrialized world it is no longer needed for these purposes, yet the pattern persists. Is soup still near-optimal for something, or does it persist merely because it's good enough to survive, even though other patterns are clearly better?
To the article's point that levels of complexity can be hidden behind an external production pyramid:
Encapsulating complexity through a 'pyramid' of vendors is the evolutionary norm for all complicated systems. "I don't want to harvest vegetables, boil down some bone to make chicken stock, collect herbs & spices, throw it all in a pot and simmer for 4 hours - I'll just go to the store and buy some soup instead!"
This has always been the norm in software as well. From bits to file systems to grep and on and on, software is always built on building blocks that abstract away complexity. We're starting to see this at levels that ease the burden of maintaining entire systems or networks (Amazon, Google, etc.).
However, these abstractions do not ease the burden of creating new software, just as lacquer and wood suppliers do not make it any simpler to build a factory that produces pencils. You can have all the supplies in the world, but you won't succeed until you know how to put them together - and in general, putting them together in an efficient, useful manner is the hardest problem. There's no barrier to entry when it comes to providing a new web app - all the technology exists already, and is fairly well maintained! The barrier to entry is in combining the technology to create something new that is needed. Which brings me to my second point -
Regarding the article's conclusion: you had me until 'out of the crisis'. If I follow correctly, the argument is that we should charge a fee for utilization of the utility that is software, not for the bits themselves. What this means is that I'm going to pay a cent every time I use my web browser. But it also means my web browser's creators are gonna pay .5 cents each time the browser is used to whoever provided the C++ libraries (or whatever it's built in), who in turn is going to pay .2 cents for each use of... and so on. If the plan is to create a system that facilitates a broadening or evolution of the "roots" or building blocks of software, this will never work.
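To make the arithmetic of that fee cascade concrete, here's a minimal sketch. The function name and all rates are invented for illustration - they just show that a single per-use fee, repeatedly split down the vendor pyramid, dwindles quickly, so the deepest "roots" see a tiny fraction of it.

```python
# Hypothetical per-use fee cascading down a software "production pyramid".
# All numbers are made up for illustration: each layer keeps part of the
# fee it receives and passes the rest down to its own supplier.

def fee_cascade(per_use_fee, pass_down_fractions):
    """Return the amount each layer keeps, top of the pyramid first.

    pass_down_fractions[i] is the fraction layer i passes to layer i+1;
    the bottom layer keeps whatever remains.
    """
    kept = []
    remaining = per_use_fee
    for frac in pass_down_fractions:
        passed = remaining * frac
        kept.append(remaining - passed)  # this layer's cut
        remaining = passed               # handed to the layer below
    kept.append(remaining)               # bottom layer keeps the rest
    return kept

# One cent per browser use; the browser vendor passes half to its
# library vendor, who passes 40% of that down, and so on.
shares = fee_cascade(1.0, [0.5, 0.4, 0.4])
# shares -> [0.5, 0.3, 0.12, 0.08]: the fourth layer down is already
# fighting over less than a tenth of the original cent.
```

The point of the sketch is only that the shares shrink geometrically: whatever the exact rates, the incentive at the bottom of the pyramid is weakest, which is where the article wants new roots to grow.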
Here's why I'm not going to pay for the roots: we're already at a point in software where the 'roots' I really need are built, open-sourced, and free - or close to free. And for the components that aren't readily available, I'll simply build them myself.
It's not that there's no incentive to provide more roots, it's that I don't want to pay for their use each time my software runs! Screw that. I'll spend an extra week implementing it myself, bugs be damned, and I'll collect all the revenue!
At least, that's the way the thinking usually goes.
Forgive me if I'm leaving out some details, but this article seems a bit like, well, every other "Brooks was wrong" article.