> This paper doesn't just describe the waterfall; it explains why it doesn't work and proposes a more iterative approach, albeit with only two iterations.
The iterative steps involved are not the "throw one away" bit, but the feedback loops between the different stages of development. And the paper does not restrict its model to two iterations.
The "throw one away" portion is the prototyping phase, which is the same development process in miniature. For example, here we might prototype our radio software without the hardware at all, just to make sure we understand the protocol and other aspects correctly, so that when we do the full development on the radio (often reusing code from the prototype, though) it's better informed.
 Though, true to form, the last prototype became a delivered product instead of the basis for the final product. Oops. That's the dilemma of a too-good prototype: convincing enough to ship, but not engineered well enough to actually be a final product. Maintenance has been a beast.
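To make the "prototype without the hardware" idea concrete, here's a minimal sketch of what such a protocol prototype might look like: the framing logic is exercised against an in-memory fake transport, so no radio is needed to validate our understanding. The frame format (a 0x7E start byte, a length byte, payload, XOR checksum) and the FakeRadio class are invented for illustration; they're not taken from any real radio protocol.

```python
START = 0x7E  # hypothetical start-of-frame marker

def encode_frame(payload: bytes) -> bytes:
    """Wrap a payload in a start byte, length byte, and XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def decode_frame(data: bytes) -> bytes:
    """Parse one frame, raising ValueError on any framing error."""
    if len(data) < 3 or data[0] != START:
        raise ValueError("bad start byte")
    length = data[1]
    if len(data) != length + 3:
        raise ValueError("bad length")
    payload, checksum = data[2:2 + length], data[2 + length]
    actual = 0
    for b in payload:
        actual ^= b
    if actual != checksum:
        raise ValueError("bad checksum")
    return payload

class FakeRadio:
    """Stands in for the hardware: just loops frames back."""
    def __init__(self):
        self.buffer = b""

    def send(self, frame: bytes):
        self.buffer += frame

    def receive(self) -> bytes:
        frame, self.buffer = self.buffer, b""
        return frame

# Round-trip a frame through the fake transport to check our
# understanding of the (invented) framing rules.
radio = FakeRadio()
radio.send(encode_frame(b"PING"))
assert decode_frame(radio.receive()) == b"PING"
```

The point of the sketch is that the framing and checksum code, once validated this way, could later be reused against the real hardware, which is exactly the kind of informed reuse the anecdote describes.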
In reality, it's so forward-thinking and accurate that I might have to rewrite some of my own essays that were tainted by slanderous depictions of his Waterfall model.
I'm having a hard time remembering how we do things better in modern times without writing documentation.
He's essentially mocking our whole field for reading the paper lazily :) "This paper was a description of what not to do."
An old HN comment about it:
Thanks for the paper! Getting many mental gaps filled in on our history this week. :)
Another great story that Larman told was that he tracked down some of the programmers on famous waterfall projects that had succeeded, and found out that what they had really done was write code first, secretly, and then write up the requirements and design docs based on what they had learned. In other words they did things in the 'wrong' order but published them in the 'right' order.
Maybe I was too harsh on them. It does seem likely. Further, it probably came directly out of the nature of programming on expensive, shared, and often batch-processed machines. Here's a great example of what programming looked like in the '60s (skip to 5:27):
It wouldn't have been much better shortly after 1970, when many of the same concerns and processes still existed. I still think one has to ignore most of Royce's paper not to pick up on the iteration paradigm. But I could easily see someone in that mentality glossing over it, spotting a diagram of their current flow, giving it a name, and pushing it from there.
Finally read the paper you gave me. It was really neat to see that iterative development kept getting independently invented from the '60s onward. Its getting into the mainstream was inevitable, given its inherent advantages. The majority just couldn't contain it within limited groups forever.
"and found out that what they had really done was write code first, secretly, and then write up the requirements and design docs based on what they had learned. In other words they did things in the 'wrong' order but published them in the 'right' order."
It's funny you say that: history is repeating itself in the high-assurance field. Safety-critical design is like waterfall or spiral on steroids, with specs, constraints on implementation, rigorous testing, analysis... you name it. To outsiders, it seems they're using a waterfall-like refinement strategy to build these things. Insiders trying to get Agile methods in there have countered with an unusual supporting argument: successful projects already use an iterative process, combining top-down and ground-up work, that results in a linked, waterfall-like chain of deliverables. The actual work is often done differently, though, so why not allow that kind of development in the first place?
With your comment, I've heard that twice. Another quick one was Mills' people not being allowed to run their own code in Cleanroom. Sometimes that wasn't necessary, but running it has many benefits. So, of course, they often ran their own code during the prototyping phase to get an idea of how the final submission should be done. We'll all be better off when management lets its processes reflect the realities of development. At least it's already happening in some firms and spreading. :)
The author describes software development as a creative process. Most managers, and even many CompSci researchers, thought it was mechanical, with potential for automation and assembly-line-type guidance. He wisely contradicts that in a way that I hope was meant to help us all out by putting reality in management readers' heads.
I used to think waterfall came first, and other models (e.g., Spiral) followed once people realized the initial work usually failed and had to be rewritten. Now I know it's the opposite: the waterfall author knew requirements or design would require rewrites. He even made new diagrams for it, diagrams most of us never saw while the idealized model was plastered everywhere. He underestimates how difficult the coding part can be, but his claims still proved out with methods like Cleanroom and Correct-by-Construction that kept coding structures simple. Almost all defects happened outside of coding, and coding changes were straightforward.
The documentation chapter is pure gold: managing scope, preventing excuses during failures, ensuring everyone is on the same page, rules to keep docs consistent (even by halting development), wisely noting that the maintenance/rework phase is horrible enough that docs are a necessity, and handing off the system to cheaper ops people. Those points in particular stood the test of time.
In one section, he recommends implementing something to get the process started, even if one doesn't know what one is doing. That's to avoid paralysis by analysis and to give something tangible to start with. Ironically, "modern" anti-waterfall methods recommend exactly that.
The simulation part trips some people up and is a weird read. People take it too literally. What I'm seeing is a call for prototypes that explore some of the user interface, key calculations, structure, I/O, and other trouble spots. The stakeholders each review a piece of this to spot requirements and design problems early. The next section mentions feedback loops that do the same thing, which collectively results in buy-in by those paying. It just shows he wisely considered a critical human factor, one that led to many project failures later on.
So, it was a short and delightful read whose advice should've led to many successful projects and hastened the arrival of more Agile methods. Instead, people cherry-picked his points and even slandered him in subsequent work. All kinds of disasters followed.
At least I know now that the real Waterfall was designed to prevent that and probably would have, most of the time. So, props to Dr. Royce for being one of the giants whose shoulders we stand on in trying to make IT great. Well, should've stood on, for many. ;)