This paper doesn't just describe the waterfall, it explains why it doesn't work and proposes a more iterative approach, albeit with only two iterations.
If by two iterations you mean the paper suggests only going through the various component processes at most twice, that's not accurate. The method proposed involves feedback loops (Analysis -> Design -> Analysis -> ...) that could repeat multiple times. And the single steps back are the ideal; that is, you don't want to have to go back to Analysis from Testing, but for some projects you probably will have to from time to time.
I got the impression that the paper was suggesting an early, disposable prototype during the preliminary program design phase, similar to what Fred Brooks suggested back in the day. (http://c2.com/cgi/wiki?PlanToThrowOneAway) Meaning that the PPD phase was in effect an initial iteration through the development process. The outputs from that were used to generate the program design documents in the primary development process.
I almost made a comment about that part. But that's the only part that's restricted (and even then, not strictly) to two "iterations", and I don't think that was what OP was referring to.
> This paper doesn't just describe the waterfall, it explains why it doesn't work and proposes a more iterative approach, albeit with only two iterations.
The iterative steps involved are not the "throw one away" bit, but the feedback loops between the different stages of development. And the paper does not restrict its model to two iterations.
The "throw one away" portion is the prototyping phase, which is the same development process in miniature. Like how, here, we might prototype our radio software without the hardware at all, just to make sure we understand the protocol and other aspects correctly, so that when we do the full development on the radio (often reusing code from the prototype, though) it's better informed. A rough sketch of that idea follows the footnote. [0]
[0] Though, true to form, the last prototype became a delivered product instead of the basis for the final product. Oops. The dilemma of making a prototype that's too good, but not engineered well enough to actually be a final product. Maintenance has been a beast.
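To make the hardware-free prototyping idea concrete, here's a minimal sketch in Python. Everything in it is made up for illustration (the frame format, FakeRadio, encode_frame/decode_frame are assumptions, not anything from our actual project); the point is only that a loopback stand-in for the radio lets you exercise the protocol logic end to end before the hardware exists:

    import struct

    # Hypothetical frame layout, assumed only for this sketch:
    # 1-byte message type, 2-byte big-endian length, payload, 1-byte XOR checksum.
    def encode_frame(msg_type: int, payload: bytes) -> bytes:
        header = struct.pack(">BH", msg_type, len(payload))
        checksum = 0
        for b in header + payload:
            checksum ^= b
        return header + payload + bytes([checksum])

    def decode_frame(frame: bytes) -> tuple[int, bytes]:
        msg_type, length = struct.unpack(">BH", frame[:3])
        payload = frame[3:3 + length]
        expected = 0
        for b in frame[:3 + length]:
            expected ^= b
        if frame[3 + length] != expected:
            raise ValueError("bad checksum")
        return msg_type, payload

    class FakeRadio:
        # Loopback transport standing in for the real hardware link.
        def __init__(self):
            self._frames = []
        def send(self, frame: bytes) -> None:
            self._frames.append(frame)
        def receive(self) -> bytes:
            return self._frames.pop(0)

    # Exercise the protocol logic end to end with no hardware attached.
    radio = FakeRadio()
    radio.send(encode_frame(0x01, b"PING"))
    msg_type, payload = decode_frame(radio.receive())
    assert (msg_type, payload) == (0x01, b"PING")

When the real hardware shows up, the loopback transport gets swapped for one that talks to it, and whatever protocol code survived the prototype comes along with a lot less guesswork.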
That blew my mind, too. He anticipates Spiral/Agile, redraws his diagrams for the complexity of real-world development, identifies the maintenance phase as the hardest, covers legal battles over blame, buy-in by stakeholders, mock-ups to catch requirements/design issues early, why that's necessary in the first place, and so on. Reading it makes me wonder what the (censored) people were reading when they described waterfall as a static, unrealistic process.
In reality, it's so forward-thinking and accurate I might have to rewrite some of my own essays that were tainted by slanderous depictions of his Waterfall model.
> If documentation does not yet exist there is as yet no design, only people thinking and talking about the design which is of some value, but not much.
I'm having a hard time remembering how we do things better in modern times without writing documentation.
I first came into contact with this article via https://www.youtube.com/watch?v=NP9AIUT9nos, which is a pretty decent analysis of how people misinterpreted the original concept.
That supports some of the guesses in my comment, and adds a lot of context. Especially the government contractor part. Had anyone asked, I was going to reference Mills as an evolutionary step as well. I'll read the rest of it later.
Thanks for the paper! Getting many mental gaps filled in on our history this week. :)
I can't remember if Larman includes this in the paper, but at a talk he said that he tracked down the guy who standardized waterfall for the DoD (in Boston IIRC) and when they met for lunch, the first thing he said to Larman was "I'm so sorry".
In retrospect it was inevitable. The patterns of thinking that we take for granted around iteration and prototyping were still too alien. You have to get it before you can do it, and the types of programmers who were inclined to get it weren't in decision-making positions.
Another great story that Larman told was that he tracked down some of the programmers on famous waterfall projects that had succeeded, and found out that what they had really done was write code first, secretly, and then write up the requirements and design docs based on what they had learned. In other words they did things in the 'wrong' order but published them in the 'right' order.
"The patterns of thinking that we take for granted around iteration and prototyping were still too alien. You have to get it before you can do it, and the types of programmers who were inclined to get it weren't in decision-making positions."
Maybe I was too harsh on them. It does seem likely. Further, it probably came directly out of the nature of programming on expensive, shared, and often batch-processed machines. Here's a great example of what programming looked like in the 60's (skip to 5:27):
It wouldn't have been much better shortly after 1970, when many of the same concerns and processes still existed. I still think one has to ignore most of Royce's paper to not pick up on the iteration paradigm. But I could easily see someone in that mentality sort of glossing over it, spotting a diagram of their current flow, giving it a name, and pushing it from there.
Finally read the paper you gave me. It was really neat to see that iterative development kept getting independently invented from the 60's onward. Its arrival in the mainstream was inevitable given its inherent advantages; the majority just couldn't contain it within limited groups forever.
"and found out that what they had really done was write code first, secretly, and then write up the requirements and design docs based on what they had learned. In other words they did things in the 'wrong' order but published them in the 'right' order."
It's funny you say that: history is repeating itself in the high-assurance field. Safety-critical design is like waterfall or spiral on steroids, with specs, constraints on implementation, rigorous testing, analysis... you name it. To outsiders, it seems they're using a waterfall-like refinement strategy to build these things. Insiders trying to get Agile methods in there have countered that with an unusual supporting argument: successful projects already use an iterative process combining top-down and bottom-up work that results in a linked, waterfall-like chain of deliverables. The actual work is often done differently, though, so why not allow that kind of development in the first place?
With your comment, I've heard that twice now. Another quick one was Mills' people not being able to run their own code in Cleanroom. Sometimes that restriction wasn't necessary, but it has many benefits. So, of course, they often ran their own code during the prototyping phase to get an idea of how the final submission should be done. We'll all be better off when management lets their processes reflect the realities of development. At least it's already happening in some firms and spreading. :)
How did I never read this paper before now!? People have been bashing waterfall for a long time. If this paper originated it, then the resulting waterfalls say more about the readers and IT culture than about the visionary who recommended a very adaptive process. A few points on the paper.
The author describes software development as a creative process. Most managers, and even many CompSci researchers, thought it was mechanical, with potential for automation and assembly-line-style guidance. He wisely contradicts that, in a way that I hope helped us all out by putting reality in management readers' heads.
I used to think one person came up with waterfall and that other models (e.g., Spiral) followed once people realized the initial work usually failed and had to be rewritten. Now I know it's the opposite: the waterfall author knew requirements or design would require rewrites. He even made new diagrams for it, diagrams most of us never saw while the idealized model was plastered everywhere. He underestimates how difficult the coding part can be, but his claims still proved out with methods like Cleanroom and Correct-by-Construction that kept coding structures simple: almost all defects happened outside of coding, and coding changes were straightforward.
The documentation chapter is pure gold. Managing scope, preventing excuses during failures, ensuring everyone is on the same page, rules to keep the docs consistent even if it means halting development, wisely noting that the maintenance/rework phase is horrible enough that docs are a necessity, and handing the system off to cheaper ops people. Those points in particular stood the test of time.
In one section, he recommends implementing something to get the process started even if you don't yet know what you're doing. That's to avoid paralysis by analysis and to give something tangible to start with. Ironically, "modern" and anti-waterfall methods recommend exactly that.
The simulation part is tripping some people up and is a weird read. People take it too literally. What I'm seeing is a call for prototypes that explore some of the user interface, key calculations, structure, I/O, and other trouble spots. The stakeholders each review a piece of this to spot requirements and design problems early. The next section mentions feedback loops that do the same thing, which collectively result in buy-in by those paying. It just shows he wisely considered a critical human factor that led to many project failures later on.
So, it was a short and delightful read whose advice should've led to many successful projects and hastened the arrival of more Agile methods. Instead, people cherry-picked his points and even slandered him in subsequent work. All kinds of disaster followed.
At least I know now that the real Waterfall was designed to prevent that and probably would have most of the time. So, props to Dr. Royce for being one of the giants whose shoulders we stand on trying to make IT great. Well, whose shoulders many should've stood on, anyway. ;)
The word 'waterfall' does not appear in this paper, not even to describe the model that Royce is criticizing (and it would be a completely inappropriate and misleading metaphor for the model he is advocating.) I suspect the use of 'waterfall' as a model of software development is a retronym coined (possibly by Royce, elsewhere) for what was once the only model of systematic software development, once its shortcomings and the need for an alternative became apparent. It has persisted because it is a convenient straw man: it is easy to make almost any new model look good when you compare it to the waterfall model. We should have burned this straw man long ago: 45 years have gone by since Royce explained its shortcomings and proposed better alternatives, so by now, we really should be able to justify our latest methods with something more than just "better than waterfall".