If that is the case, surely we can find better ways to uncover the requirements, and better tooling will help solve the problem.
Experience tells me that people don’t know everything beforehand. Thus the key assumption is not valid.
Then the question we should be asking is: how do we most efficiently bring people to where they discover and understand the requirements?
Experience tells me people are much better at giving concrete, specific feedback to a running system than to an abstract requirements document.
Hence iterative development.
In essence requirements are not a document by a process.
No, people cannot know "everything in detail, in advance". That doesn't mean that they don't know anything. They know a lot. Nobody with any actual experience in requirements-gathering expects 100% perfection. So the underlying assumption about the underlying assumption is wrong.
After 20+ years in this industry, I'm long past believing the conventional wisdom that running systems are the best way to gather better requirements. It's not agile. Think about it. A key part of agile is to push everything to the left as much as possible - to catch problems as early as possible in the cycle. What's earlier than before you write the code at all? Writing code to find out what's wrong with it from a requirements perspective is really inefficient.
This isn't to say we shouldn't get working code out there as quickly as possible, or that feedback from working systems has no value. But this idea that it's the only way to get meaningful requirements, that's just BS.
Requirements aren't a document, or a process - they are a system.
Your pushback against waterfall development is driving me crazy - we already tried that for decades, and you can only get it to (kinda) work with a ridiculous investment that only makes sense for incredibly important systems, like launching a billion-dollar rocket. And even then, you need iteration, just a much more careful, sandboxed type of iteration.
But the quote agile unquote response is every bit as reactionary, and it does happen out in the real world... "You guys start writing code, I'll go get the requirements". Writing code is expensive, even in an agile process. Just because you're doing two-week iterations or continuous delivery doesn't mean you no longer waste time and effort on dead ends. You're just dying by a thousand cuts.
Turning to user reactions to working code as the only requirements-gathering mechanism is stupid. Stupid. It ignores a ton of requirements issues that are not only complex, but dangerous to screw up - financial behavior, SOX and HIPAA compliance and other regulatory issues, and more. A mistake in initial implementation can cost millions of dollars, company reputation, and worse.
And again, what the OP is proposing here is not agile. Just because you're tossing code over the wall in short sprints doesn't mean you're agile. Agile means catching potential problems as early as possible in the process. Catching problems with requirements is almost always going to be cheaper than catching them by writing code and finding out that the code is wrong.
Agile requirements gathering is a thing, yo.
I'd allow that this might be true within large software organizations, but it's definitely not the case where most software is written: in non-software organizations.
I work mostly in big enterprise companies. Whatever business they are in, they are "large software organizations", and they have decades of experience creating and evolving processes to suit the times and available tech. You don't need to be Google to be an IT company. Any insurance company, any big-box retailer is an IT company. They know how to do this stuff, believe it or not.
footnote: Don't judge big enterprise companies by what they were doing 20, 30 years ago. They were state of the art then, and they're often state of the art now.
footnote: just because they produce lots of software doesn't mean they've ever learned how to do it right. Ford is still a car company, Chase is still a financial company, Schlumberger is still an oilfield service company, despite all of them producing more software than some Software Companies.
Resource contention is a problem in pure software companies, too. I used to work for a small pure software company in rapid growth. What did we have? Legacy code nightmares that were as bad as or worse than anything I've seen in the Fortune 500 (like building the core product on antique Borland C++ where there were only 9 licenses in the company and new licenses were no longer for sale and hadn't been for years, while the UI was written in Java Swing with a table kit from an out-of-business vendor). And almost all growth money went to expanding sales staff... engineering got screwed. They sold (and sell) terrible quality software, and they make a fortune at it.
Meanwhile, I'm at a massive health care company, and they hired me because they're committed to radical improvement in how the already-okay software is built and deployed. We're working hard on a serious continuous integration pipeline, and I expect us to be as good as anyone in a year - our reference points for "Why can't we do this?" are companies like Netflix. We're after that level of smoothness in the process, and we'll get there, or at least get close.
Don't let conventional wisdom tell you who is and isn't good at software.
edit: I'm reminded of going to a meetup about selling to the enterprise in Silicon Valley some years ago, and the twenty-something Stanford crowd were convinced that because these big companies have big failures, that they must suck. I pointed out that if you worked at a startup with $50M revenue, they'd be pretty successful, right? I've worked on several projects with annual development budgets larger than that. It's expensive and risky because they're operating at scales that most of the HN crowd can't even comprehend.
I suspect a lot of the HN hostility to proper requirements analysis is coming from writing trivial systems.
Humans are really bad at designing and building large improvements from paper requirements. With small improvements, most of the requirements are already known and tested, and only small parts are uncertain.
The real problem is that testing requirements is really hard: you need to build the product to test the requirement. That's why most industries have a testable intermediate between requirements and product: this could be small-scale prototypes, but more and more it's a virtual model that can be tested through software algorithms.
If we want to make real progress in the software industry, we need to move beyond word documents with requirements that are by definition not testable, to testable software models that don't require a full implementation. Low-code, model driven development is an example where this is happening.
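A lightweight version of this already exists: stating a requirement as an executable check instead of a prose sentence, and running it against a minimal model long before the full system is built. As a sketch (the discount rule, function name, and numbers here are invented for illustration, not from the thread):

```python
# Hypothetical business rule, as it might appear in a requirements doc:
# "Orders of 100 units or more receive a 10% discount."

def order_total(unit_price: float, quantity: int) -> float:
    """Minimal model of the pricing rule -- a stand-in, not the real system."""
    subtotal = unit_price * quantity
    if quantity >= 100:
        subtotal *= 0.90  # 10% discount at the threshold
    return round(subtotal, 2)

# The same requirement, restated as executable checks that probe the
# boundary -- exactly the spot where prose requirements tend to be ambiguous:
assert order_total(2.00, 99) == 198.00    # below threshold: no discount
assert order_total(2.00, 100) == 180.00   # at threshold: discounted
assert order_total(2.00, 101) == 181.80   # above threshold: discounted
```

The point isn't the toy rule; it's that "at the threshold" had to be decided to make the checks pass, whereas a Word document would have let the ambiguity survive until implementation.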
1. accept the blame - beef up the requirements gathering process, attempt to gather ever more
2. reject the blame - move to an agile process where everything is learned on-the-hoof
3. reject the premise.
People tend to either land in 1) or 2) above, but I think 3) is the correct place. Gathering requirements is about figuring out how much we know, identifying what we don't know, and working the risks. On some projects the risk is that we don't know enough about what customers really need (= agile engagement required). On others, the risk is literally all about delivery.
Iterative development is great at addressing some risks. It doesn't address other risks at all; it's not well-suited in many instances where the information known up-front is substantial, or where it's difficult to engage users.
The key is to recognise what problems you need to manage, and choose a suitable methodology to do it.
The strategy I use is to scope out as much as you can up front: a list of high-level user stories. Give these a rough prioritization (MoSCoW works) and some rough estimates on each. Now estimate your velocity with a few possible team configurations. Also, assume your backlog will grow about 10% as you go as new stuff is uncovered.
Now if you need to schedule a launch or set a budget, set it deep into the non-mandatory features. If everything goes off the rails, you have cushion to avert failure. If everything goes ok, you will deliver a richer product. You'll also be able to track very accurately as you go how close you are to the plan week by week.
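The arithmetic behind that strategy can be sketched in a few lines. Everything concrete below is invented for illustration (the stories, point values, the 12-points-per-week velocity); only the mechanics (MoSCoW buckets, ~10% backlog growth, scheduling past the must-haves) come from the comment above:

```python
# Hypothetical backlog: (story, MoSCoW priority, rough estimate in points)
backlog = [
    ("User login",      "Must",   8),
    ("Order checkout",  "Must",  13),
    ("Search",          "Should", 8),
    ("Saved favorites", "Could",  5),
    ("Theming",         "Won't",  3),
]

velocity = 12   # assumed points/week for one team configuration
growth = 1.10   # assume the backlog grows ~10% as new work is uncovered

mandatory = sum(pts for _, pri, pts in backlog if pri == "Must")
in_scope = sum(pts for _, pri, pts in backlog if pri != "Won't")

weeks_mandatory = mandatory * growth / velocity
weeks_full = in_scope * growth / velocity

# Set the launch date "deep into the non-mandatory features": past the
# must-haves, so slippage eats Could-haves instead of the deadline.
print(f"Must-have work: ~{weeks_mandatory:.1f} weeks")
print(f"Full scope:     ~{weeks_full:.1f} weeks")
```

Tracking then becomes mechanical: each week, compare points actually completed against `velocity`, and watch where the launch line sits relative to the remaining Must/Should boundary.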
If a problem is less complex or can be released iteratively, then that's the lean way to do it, and you get good learning along the way too. But often, to solve the problem even just a bit, you already need a load of stuff to be taken care of.
Key to me is to stay in text or cheap click-dummies for long enough. Depending on the complexity I go through several stages:
- Always probe for details if you can imagine some already; you are trying to know as much as possible as soon as possible. File it all into a "to take care of later" list at least, or better yet, sort it in properly right away.
- write down everything (maybe others can remember everything, fine too :) )
- change and refine whenever something new has to be taken care of. It will always be easier to handle now than in the next step.
1. gather high level requirements with the stakeholders
2. sketch a rough workflow. I usually do a nested list.
3. write down a complete workflow
4. now you might know what you need, so define a rough UI, technology, interfaces
5. still in text: write your concept so someone else understands it
6. talk everyone involved through the concept (first stakeholders, then devs)
7. double-check whether you can simplify or leave anything out, at least for a first version
8. if necessary: do mockups, designs, schemas
9. only now start to program (for difficult stuff a prototype)
- On top it might be helpful to have a small checklist depending on your needs with entries like "reporting?, testing?, support?"
This is reasonable, it's human, but does anyone have a good approach for applying iterative development on fixed-price contracts?
I've been on many fixed price projects that are "agile" in name only, general issues I've observed:
- Iteration on requirements becomes confrontational (pay for each change request), making it difficult to build a good product as we all learn what does/doesn't work for users throughout development.
- Upfront estimate is inaccurate causing time pressure on development resulting in rushed work which negatively impacts code quality and team learning.
The traditional answer is to have the client commit to specific requirements and hold them to it.
But what I'd really like to figure out is a way to acknowledge evolution of requirements will happen so we can work _with_ clients to build great products.
I struggle because this seems incompatible with fixed price and large companies seem to only want to do fixed price.
Whether the contract was time & materials (preferred) or fixed bid, that 25% rule worked well as an early indicator that the project was likely to go over budget. It allowed us to have early conversations with the client about cutting scope or expanding the budget to cover the unknown complexity.
We'd also dramatically increase our rate to reduce our risk on fixed price projects.
 Barebones, ugly, but functional from end-to-end.
This is highly context dependent. In some domains the business basically just needs to throw random things at the wall and see what sticks because nobody can know what they really "need" until it's tested in front of a customer. In other domains they have an incredibly detailed view of the behavior that they need.
In some businesses they're in a weird situation of just not being very good at figuring out what they need and an improved process would save tons of time and money.
In others nobody even thinks about any of this because their requirements are so simple and obvious nobody needs to.
Iterative development is the right approach a lot of the time, but it's not a panacea, and it's not a replacement for fixing a broken requirements process when one is needed.
by -> but
Misguided downvote, imho. (shrug)
"not a document but (rather) a process"
In the former (as OP typed it), it's grammatically suspect but also seems to imply a missing "created" like I inserted. In that case it'd be ambiguous whether the OP feels requirements are not documents, or perhaps they are documents, just not ones "by a process".
In the latter, which I took to be the intended meaning, OP is saying "requirements are a process, not a document."
The "not X but Y" is grammatical and clear, equivalent to (boolean pseudocode) "Y && !X".