> The purpose of this document is to provide guidance to DoD program executives and acquisition professionals on how to detect software projects that are really using agile development versus those that are simply waterfall or spiral development in agile clothing (“agile-scrum-fall”).
Actually doing 'waterfall' properly would probably be fine? Or at least not the bogeyman it's made out to be.
The real danger is that the project would just be managed badly, independent of professed approach. Just muddling through badly.
(I am not even sure if waterfall was ever actually a thing, I mostly ever only hear about it as a thing to avoid?)
Doing waterfall in the exact sense that is usually described will almost certainly never work well. People usually focus on the design phase, but I think the much more catastrophic part of 'waterfall' is doing testing only once development is complete.
Now, you absolutely can successfully run software projects that are design heavy, that more or less freeze requirements and designs early on, and that seek to execute on the design, instead of iterating. However, if you truly develop for 6 months before any kind of external QA, as described in most Agile talks about what Waterfall means, that is a recipe for disaster. If you do test things by feature, invest in component testing and practice that all along the way, you can succeed.
This essentially describes a 2-stage waterfall: design and execute, where the execute phase includes both development and testing all along the way. Is this 'doing waterfall properly'? Or is it a hybrid methodology? That's a matter of definitions in the end.
It's also important to note that the extent to which this works depends a lot on the project itself. If the requirements are volatile (e.g. when developing a new product with an uncertain market), or if the project is too large (e.g. if the design phase suggests it will take 5 years to finish given the current team), then the project would likely benefit a lot more from an Agile style of development, where you deliver smaller chunks of the project to its end users to gather early feedback on their actual usefulness.
In this brief and engaging paper you will find the diagram used by many agile enthusiasts to describe the "waterfall method" and will be shocked to discover that it is held up as an example of a process that never actually works in reality.
You will then read quotes like this, which could have come out of an agile book:
"For some reason what a software design is going to do is subject to wide interpretation even after
previous agreement. It is important to involve the customer in a formal way so that he has committed
himself at earlier points before final delivery. To give the contractor free rein between requirement
definition and operation is inviting trouble."
Incremental and iterative development is not an invention of Agile either. It was used before "Waterfall" existed.
For larger projects with adequate resources, agile can make sense, although it's typically modeled after quarterly focused business models and can miss long term opportunities. The core issue with agile is that its structure is ripe for abuse by everyone involved except the actual developer(s). Instead, developers have to reconcile all sorts of poor choices together in a fairly formal environment, leading to headache after headache.
I do agree with tsimionescu that leaving testing (and debugging) for last invites trouble.
Btw, that's an excellent paper.
* Unit testing immediately upon developing any component.
* System & Integration testing starting at some point during development - overlapping phases
* User/Client Acceptance testing also overlapping with end of development (but with planned fix cycles)
* Performance testing... unfortunately depends heavily on the project and its perceived performance but usually in the latter third. Well managed projects see it as "Performance Testing & Tuning", and plan for several iterative cycles of testing and improving performance. Poorly managed projects see it as "Performance checklist" and have a single cycle with no time for tuning.
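The first bullet, unit testing immediately upon developing a component, can be made concrete with a minimal sketch (in Python's stdlib `unittest`, with a hypothetical `parse_invoice` component invented for illustration): the test lives alongside the component from day one, rather than waiting for a final test phase.

```python
import unittest

def parse_invoice(line: str) -> dict:
    """Hypothetical component: parse an 'ID,amount' line into a record."""
    ident, amount = line.split(",")  # raises ValueError on malformed input
    return {"id": ident.strip(), "amount": float(amount)}

class TestParseInvoice(unittest.TestCase):
    """Written alongside the component, not after the whole system is 'done'."""

    def test_parses_basic_line(self):
        rec = parse_invoice("A-17, 19.99")
        self.assertEqual(rec["id"], "A-17")
        self.assertAlmostEqual(rec["amount"], 19.99)

    def test_rejects_malformed_line(self):
        with self.assertRaises(ValueError):
            parse_invoice("no-comma-here")

# Run with: python -m unittest <this module>
```

The point isn't the framework; it's that each component ships with its own safety net, so the later System/Integration and Acceptance phases start from components that already pass their own checks.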
This is as old-school a methodology as it gets, and yet it's been consistently refined and works well when managed well and in the right situation, with the right client. A lot of multi-year back-office ERP or infra work for large corporations or the public sector inherently needs to adopt a similar approach because it's just the nature of the client environment. At the highest of high levels it's about 5 Gantt chart phases with heavy, heavy overlapping and lots of cycles for fixing/improving.
A two-stage dev + test without even an arrow from test to back... I have no idea how that'd work in either theory or practice. Waterfall doesn't have to ignore realities of the world... it just is suited to different realities than Agile.
This is my own experience, but I was involved with a multi-year project for a federal agency that did exactly this. I think what many people don't understand is that the design phase is a byproduct of the internal processes necessary for even getting to the point of developing a system in the government. What I mean by this is, systems don't appear based on good ideas or pitch decks; they start with policies, and based on those policies, procedures.
Why is that necessary in government? Because typically, there are laws, executive orders, regulations, and various other policy devices that have to be implemented at the agency level, there is a review period that allows for input from affected external agencies, etc. In some cases, there is even involvement from congressional committees and their staff members. This is especially true if appropriations are necessary for funding a program.
I can honestly say it took a year and a half just to develop the policies and procedures, to coordinate them, and to begin work on the system. That was ridiculously quick, given the scope of the project. The good thing about doing it this way, is you have a very clear roadmap at the outset. The downside is, the process for making significant changes to the system's functional requirements can be a challenge (e.g., changes to policy/procedures, another review period, etc.).
That being said, once we actually began the development phase we took a much more agile approach. We would hold daily stand-ups, regular testing/feedback sessions with customers, established product owner(s), etc. I would say it worked really well, but it was not without difficulties. Those difficulties are far bigger than agile vs. waterfall though, it's just the way the bureaucracy functions (for better or worse).
In my experience, waterfall needs unlimited time and money. Waterfall fails hard when money or time are limited, in which case the likely outcome is that the head of waterfall will be massive and the tail will be rushed, so that one gets a design-heavy and rushed implementation with little or no testing and scant documentation etc.
A timeboxed waterfall with inflexible, sculpted-in-stone time schedules is a recipe for burning in the ensuing deathmarch. The planning either cannot or will not anticipate all unknowns, and/or the buffers dampening the impact of the unknowns shrink because of outside pressure. (What do you mean 8 months to make this thing, can't you do it in 2 weeks, haggle haggle, sold for 2 months; available time reduced by a factor of 4.)
In contrast, (a theoretically ideal) agile or some other iterative method works fine if time or money are limited. The iterative nature allows for a cut-off after a sprint. Pull the plug and have a result of state-of-the-art at that point; of course the result then might not quite reach the viable dimension of MVP nor even resemble a product.
At some level everything is waterfall. If anything is to get done at all, at some point the programmer has to make a plan, then put his head down, arse up and implement it.
At another level nothing is waterfall. If the plan is successful it was shipped to the consumer (which may be the programmer himself), and its very presence changes things for the consumer in ways that they didn't foresee. They then realise they need a new set of changes.
What we call waterfall model is really referring to the scale. If the plan is grand, the specifications for such a grand plan need to be detailed, the implementation long, and the time between putting the head down and evaluating results is large, and we call it waterfall.
But if the strategy is to explore the solution organically, the re-evaluations are frequent, the waterfall periods are short, and we call the strategy something else.
So in the end everything uses waterfall. The programmer would not get the long periods of intense focus he needs to be productive without it. But also nothing is waterfall, because no plan can foresee everything; it must be continually re-evaluated in the light of the unexpected changes it brings as it is implemented.
It can be done with waterfall.
1. How experienced were these people in the particular problem domain?
2. How long was a single development effort and how many people?
3. What do you consider Waterfall? Are we talking "pure" Waterfall where everything is truly done in set phases? Or do you have feedback loops in place, like testing integrated properly into the development phase?
4. What were the relationships like with customers? Was it one (or a small number of) consistent customers, or a diverse set of customers (closer to contract/bespoke software work)?
2) typical team was 2-3 devs, 1-2 QA, project manager, product manager. Typical dev time was 6-9 months.
3) Waterfall has a pretty amorphous definition; our implementation was not very pure, which is probably why it succeeded. Each component of a new release would start testing as soon as engineers had something testable. When all components passed QA we’d go into alpha, then beta.
4) It was consumer software, specifically targeted at graphics professionals on Mac/Windows. We had hundreds of thousands of customers, and delivered physically on floppies and CD-ROM.
Our main strength was actually our Product Management director. He was excellent at collecting and communicating highest priority customer requirements. He was always questioning and pushing, and helping my engineers come up with better approaches and implementations.
He was also excellent at building external relationships, so we had really good partnerships, and at training/leading team PMs so they were good team leaders. He was so damn good at it that he eventually moved on from our little company to run all mobile for a $100B+ company.
"Work": I don't mean that the projects fail but that they fail in some aspect. Like they deliver late, or don't deliver the full requirements, or they don't deliver the real requirements (because the requirements were written 5 years ago by a consultant, this might be a total failure in many cases).
"agile" (not Big-A Agile the faddish cult) resolves a lot of this by one simple thing: frequent feedback between developers and customers while developing smaller increments. Which, ironically, was actually Royce's point in the paper: Feedback loops, not necessarily with the customer, need to be incorporated in order to develop large scale systems.
One of the issues in discussing this is that it turns out that when most people say "Waterfall" they mean a modified version. When you dig into what they're doing it's either a small modification (we bring the customer in during testing, which is good but that's still 4 years into the project) or a major modification (V-Model which incorporates all of Royce's feedback loops and then some). Others have gone to doing incremental & iterative development or evolutionary but still call it Waterfall because they don't know any other term.
But yes, Waterfall exists, it is a nightmare, and I hope to never be involved in it again.
That said, sometimes, Waterfall is the best way to do some projects, but I'd say very, very few. It does work reasonably well for hardware production. Many hardware companies apply Waterfall to software, because it's the process they know.
TDD is also a technique that can encourage a "waterfallish" approach, as the design needs to be fairly complete, right at the beginning (to be fair, it is possible to do TDD iteratively, but that takes effort, and many shops like to reduce effort as much as possible). I tend to keep my designs as fluid as possible, refining in a JIT manner, and prefer using test harnesses to fixed unit tests.
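For contrast, here's what one red/green/refactor TDD step looks like, as a minimal sketch with a hypothetical `word_count` function invented for illustration: the test is written first, against code that doesn't exist yet, and then the simplest implementation that passes is filled in.

```python
# Step 1 (red): write the failing test first, before the implementation exists.
def test_word_count():
    assert word_count("the quick brown fox") == 4
    assert word_count("") == 0
    assert word_count("  spaced   out  ") == 2

# Step 2 (green): write the simplest implementation that passes.
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

# Step 3 (refactor): reshape the code while the test keeps you honest.
test_word_count()
```

Whether this "locks in" the design depends on how willing you are to rewrite the tests along with the code; the fixed assertions here are exactly the kind of thing a fluid test harness avoids committing to early.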
I personally have a beef with the way many shops handle the concept of MVP. I feel that a significant number of shops use it as an excuse to shove out a hastily-built lashup, favoring adding features over ensuring quality.
I have come to believe in the concept and purpose of MVP, but I am also one of those "grizzled, cranky oldtimers" that has seen many, many prototypes become the "heart" of applications, and even infrastructures. I won't mention some rather obvious examples.
I feel that it's important to ensure quality from the first line of code, and to accept the fact that the MVP will become the core of the system.
 https://en.wikipedia.org/wiki/Waterfall_model#History (Look at Royce's presentation)
Our industry really jumped the shark with the Agile and XP nonsense. I flip the bozobit for anyone who uses "kanban", "sprint", liar's poker err planning poker, and other Dilbertspeak non-ironically.
20+ years later, best I can tell, the anti-methodology methodology Agile cargo-cult bandwagon was for people who couldn't be bothered to sit thru a PMI seminar. The unholy synthesis of pop-biz fashionistas and corporate climbers bouldering along the bullshit-jobs facades. Artistes, pointy-haired bosses, "consultants", post-dotcom-era geek wannabes, and other assorted hangers-on and poseurs.
I've always struggled to articulate the core problem. Agile is for those proudly belligerent ignorant people who reject expertise, wisdom, or anything else that would expose their grift.
A cult, more or less.
Buy me a beer sometime and I'll tell y'all how I really feel.
Well... why do you think it's made out to be the "bogeyman"? The reality is that people tried (and continued to try) to run software projects that way: state up-front everything you're going to do, and then do it! What could be simpler?
Anybody who's ever tried to do that has run headlong into the reality that: they didn't know up-front all the things they were going to need to do. For a long time, people believed that this was just a matter of experience and perspective and, after a reasonable amount of practice, software developers would be able to not only recite each task ahead of time, not only predict how long each was going to take, but would be able to do so in orders of magnitude less time than the actual task would take.
This view of software development imagines programming as mostly just typing without much more thought involved than, say, laying bricks. That this model, if accurate, could be automated away seems to escape the attention of the project managers who insist on managing software projects this way. If it were possible to specify software in such a way that it could be predicted and planned out the way "waterfall" demands, it could be automated to take humans out of the equation completely. (And the project managers themselves could be replaced with a spreadsheet).
If you go back and read the original Agile manifesto, it was written by people who were trying to explain that software is inherently unpredictable or - more to the point - that the parts of software that you need humans to perform are the unpredictable parts. There's an old saying, though, that nobody ever went broke telling people what they wanted to hear, so a cottage industry of agile "consultants" who'd never tried to develop software themselves made fortunes telling upper management that software is, as they wished, completely predictable, and the only reason schedules slip is because they're not mistreating their programmers harshly enough.
I've never seen a project use "Waterfall". It's just not a thing. People use it as a hypothetical boogeyman to "Agile".
As with everything "Agile", all words lose meaning.
Edit: There DOES exist the CMMI Systems Engineering processes, that generally involve various design reviews (PDR, CDR), IOCs, FOCs. These are essential for large-scale procurement that perform mission-critical functions. Superficially similar to "Waterfall", it still isn't. For example, you don't want to be on an aircraft or spacecraft that was "Agile"-ed to completion.
All the "boogeyman" activities happen - scope creep goes crazy, testing is run short to hit deadlines set, etc. Like you I never really thought it was a thing until I saw it with my own eyes.
The distinction is this: You can't take the CMMI model and execute on it, it lacks sufficient detail. Superficially, if you read the model and try to execute on it you will end up with Waterfall. What you're supposed to do (and why they're making CMMI 2.0) is map your existing processes (or develop new ones) to the model. That is, if you look at the things required for verification it's not complex: you need test cases, test reports, and some other things. It doesn't have to be heavyweight: point out your test scripts and reporting system (CI/CD platforms all have these) and how you maintain them and train people to use them. Done. But if you're not careful, people will write your process per the CMMI model and it's absolutely junk (witnessed in my last job, one of the reasons I left).
CMMI 2.0's problem is that it's written as if Agile (Big-A) is the one true God. But it will almost certainly suffer the same implementation issue, it's not a process but people will try to make it one. As such, the resulting processes they do make will be process theater and hinder work, or at best have no effect but to waste some corporate resources. It does lay out a case, better than the previous CMMI model, for picking and choosing parts of the model to implement and get certified on. So that may help a little bit (less all-or-none attitude).
It's not fine if there is a need for the particular things that agile serves, which is why contracting requirements would specify agile and contractors would, to be responsive, claim to be agile, necessitating that the people reviewing the submission detect agile BS.
Government requires waterfallish process in contracts all the time, but a document on identifying Agile BS isn't addressing those cases.
I left that job when all the developers completed their tasks a few hours before the end of the sprint, giving me (QA) just a few hours to test, merge, and deploy their code, which because of our terribly clunky and manual deploy system, just wasn't possible. I was placed under an internal investigation for not being productive because I held up the sprint of the "most productive team in the company" and made us look bad to out of state executives.
They used an enterprise-grade source control system run by the IT department (Perforce). It was almost impossible to create a branch. In fact, I saw only one created during my brief one-year time there.
Since there were no branches, you had to "shelve" your changes and get an "in person" code review to merge. If you added something minor, like a new getter, but didn't have a unit test for it, you'd get flagged (even if it was used elsewhere in the code). It basically took forever to get anything done.
Oh yeah, they never shipped any of this stuff, either.
I could go on...
Functional teams and companies will make good products and set up their staff for success, with or without agile.
Dysfunctional teams will have all sorts of perverse incentives and set each other up for failure, with or without agile.
In my career I've seen "Agile" throw a wrench in the works for so many projects. Embedded systems; data center distributed systems that are air-gapped; aerospace safety systems; R&D work. Agile in these cases just isn't the thing to do, but unfortunately the culture these days is that everyone MUST be Agile, and so it creates another bureaucratic nightmare of dysfunction.
The funny thing is, people will always go to bat for Agile (MUCH more so on Reddit than HN)... and I don't understand why. Furthermore, I think the discourse around this has become so weird. For example, I asked someone kindly what "Not-Agile" is. There's no answer, other than "bad ways of making software". The discourse has become caveman-like: "Agile good. Not-Agile bad."
At the end of the day, Agile is probably great if you're making a mobile app or a web app with a small team for a client with a small-to-medium budget, which accounts for most work software developers do, which is why it's so popular. But it is inherently too short-sighted and unable to address technical challenges that go more than superficially deep.
In my experience, a lot of what people want from agile is to skip the writing and design work that has to happen to make deep technical projects succeed. There is a really strong desire to just start implementing and then refactor to working code. Of course there is always pressure to deliver faster, so the refactor only ever gets half done, and then there is an architectural mess.
TL;DR, if you believe Agile was created to help you do your job better, you might also believe that open-plan offices were really adopted to "foster collaboration" and not make you easier to spy on at work.
So do the users visit the datacenter to connect them, or what?
It's a practical set of traits to spot. But inevitably a question comes up "what to do next?" Re-educate, enforce, hire/fire, disband?
One needs to remember how the Agile processes were being "installed" back in the day in organisations/teams of various degree of dysfunction. Lots of those teams went through trials of "templates", including waterfall, just with the same outcomes.
Too often, the failure is not at the team but on org-level. The base tenet of Agile success is buy-in on all levels. Yet it's easier for the management to "buy-into" a structure and attributes, not into actually empowering and trusting the teams.
So, this detection approach may find all the right attributes, tools, lingo, roles... but not the actual practices. A well-beaten example is the morning stand-up that disguises the dreaded subordinate reports; the best indicator of such theater is the presence of a "clipboard" or note-taker person.
I'd think that for such a guide to be of better practical value, there should be a section outlining ways to detect the constraints and obstacles to adopting a process that would be effective in a given team's case. It does not have to be Agile-or-wrong.
Made up of various tech industry leaders (mainly CEOs, it seems). The purpose was to modernize the way defense software systems are developed and maintained, or at least present a path to modernization. Because it's presently a cluster fuck.
You can't force customers to "get" Agile. You can force them to understand the risks of not participating in the process.
Well - yes, this (variations) is what we do, but guess what the outcome is.... "I'm not convinced that we've got this right...", "I was never fully signed up to that...", "I think we have invested a lot of effort in a process that isn't generating business value..."
The problem with Agile is that it doesn't account for politics. If people play nice and are all signed up to get the best done with the tools and people available, it's brilliant. If you've got to deal with corporate politics, it leaves those with good intent exposed in hundreds of ways.
For example: "Some current, common tools in use by teams using agile development".
This is the kind of reason why a lot of people are required by management to use useless overkill stacks for their needs like docker or kubernetes.
Also the "questions to ask" are typical ridiculous agile corporate bullshit like "have you a product charter", or common forced process oriented questions.
My complaint would be that, "Tools listed/shown here are for illustration only: no endorsement implied," should have been inline instead of buried in a footnote.
Usually the second one has an answer along the lines of “not good but we’re working on it” which is fine. This place though tried pretty hard to convince me they were using git to manage their code right up until I asked the second question. The senior engineer sort of mumbled a few things ending with something along the lines of “we’re still figuring out exactly what the transition from svn will look like.”
I’m not sure why they didn’t decide to hire me but I feel like that interaction really upset someone and may have been a big part of it.
In my current team the only thing that I asked for when "going agile" was a biweekly retro to discuss what to improve. It seems to be going pretty well, even if most of our problems have solutions in the various described "agile" process templates.