
The underlying assumption with requirements is often not stated explicitly: that people _can know_ everything in detail, in advance.

If that is the case, surely we can find better ways to uncover the requirements, and better tooling will help solve the problem.

Experience tells me that people don’t know everything beforehand. Thus the key assumption is not valid.

Then the question we should be asking is: how do we most efficiently bring people to where they discover and understand the requirements?

Experience tells me people are much better at giving concrete, specific feedback to a running system than to an abstract requirements document.

Hence iterative development.

In essence requirements are not a document by a process.




This is largely wrong.

No, people cannot know "everything in detail, in advance". That doesn't mean that they don't know anything. They know a lot. Nobody with any actual experience in requirements-gathering expects 100% perfection. So the underlying assumption about the underlying assumption is wrong.

After 20+ years in this industry, I'm long past believing the conventional wisdom that running systems are the best way to gather better requirements. It's not agile. Think about it. A key part of agile is to push everything to the left as much as possible - to catch problems as early as possible in the cycle. What's earlier than before you write the code at all? Writing code to find out what's wrong with it from a requirements perspective is really inefficient.

This isn't to say we shouldn't get working code out there as quickly as possible, or that feedback from working systems has no value. But this idea that it's the only way to get meaningful requirements, that's just BS.

Requirements aren't a document, or a process - they are a system.


The OP's statement isn't wrong, or even largely wrong; it is largely right. There was no suggestion of skipping all requirements gathering, only of skipping the idea that you can do one round of requirements gathering and have everything you need to develop the entire system.

Your pushback toward waterfall development is driving me crazy - we already tried that for decades, and you can only get it to (kinda) work with a ridiculous investment that only makes sense for incredibly important systems, like launching a billion-dollar rocket. And even then, you need iteration, just a more careful, sandboxed type of iteration.


So the OP was fighting a strawman. Like I said, nobody out in the real world believes in pure waterfall anymore. Everyone knows that, realistically, a completely up-front requirements process doesn't do enough.

But the quote agile unquote response is every bit as reactionary, and does happen out in the real world... "You guys start writing code, I'll go get the requirements". Writing code is expensive, even in an agile process. Just because you're doing two-week iterations or continuous delivery doesn't mean you no longer waste time and effort on dead ends. You're just dying by a thousand cuts.

Turning to user reactions to working code as the only requirements-gathering mechanism is stupid. Stupid. It ignores a ton of requirements issues that are not only complex, but dangerous to screw up - financial behavior, SOX and HIPAA compliance and other regulatory issues, and more. A mistake in initial implementation can cost millions of dollars, company reputation, and worse.

And again, what the OP is proposing here is not agile. Just because you're tossing code over the wall in short sprints doesn't mean you're agile. Agile means catching potential problems as early as possible in the process. Catching problems with requirements is almost always going to be cheaper than catching them by writing code and finding out that the code is wrong.

Agile requirements gathering is a thing, yo.


"nobody out in the real world believes in pure waterfall anymore"

I'd allow that this might be true within large software organizations, but it's definitely not the case where most software is written: in non-software organizations.


I'm reminded of something a certain high-end ops director (responsible for a DevOps push at a Fortune 50) would tell his CxOs... "No matter what business you think you're in, you're in IT now".

I work mostly in big enterprise companies. Whatever business they are in, they are "large software organizations", and they have decades of experience creating and evolving processes to suit the times and available tech. You don't need to be Google to be an IT company. Any insurance company, any big-box retailer is an IT company. They know how to do this stuff, believe it or not.

footnote: Don't judge big enterprise companies by what they were doing 20, 30 years ago. They were state of the art then, and they're often state of the art now.


It's a question of support, though. In a non-software-selling org, as a dev, you are a cost center, not a profit center, so getting the tools or other things you need is not a business priority; in fact, any additional costs in the cost centers are only losses on the balance sheet. In a company that sells software (primarily), you are the profit center, so anything that can be done to facilitate your work is supported, as it drives the bottom line.

footnote: just because they produce lots of software doesn't mean they've ever learned how to do it right. Ford is still a car company, Chase is still a financial company, Schlumberger is still an oilfield service company, despite all of them producing more software than some Software Companies.


Do you actually work in these environments, or are you making assumptions?

Resource contention is a problem in pure software companies, too. I used to work for a small pure software company in rapid growth. What did we have? Legacy code nightmares that were as bad as or worse than anything I've seen in the Fortune 500 (like building the core product on antique Borland C++ where there were only 9 licenses in the company and new licenses were no longer for sale and hadn't been for years, while the UI was written in Java Swing with a table kit from an out-of-business vendor). And almost all growth money went to expanding sales staff... engineering got screwed. They sold (and sell) terrible quality software, and they make a fortune at it.

Meanwhile, I'm at a massive health care company, and they hired me because they're committed to radical improvement in how the already-okay software is built and deployed. We're working hard on a serious continuous integration pipeline, and I expect us to be as good as anyone in a year - our reference points for "Why can't we do this?" are companies like Netflix. We're after that level of smoothness in the process, and we'll get there, or at least get close.

Don't let conventional wisdom tell you who is and isn't good at software.

edit: I'm reminded of going to a meetup about selling to the enterprise in Silicon Valley some years ago, and the twenty-something Stanford crowd were convinced that because these big companies have big failures, that they must suck. I pointed out that if you worked at a startup with $50M revenue, they'd be pretty successful, right? I've worked on several projects with annual development budgets larger than that. It's expensive and risky because they're operating at scales that most of the HN crowd can't even comprehend.


I've found that writing (pseudo-)code is absolutely necessary to find problems in the requirements. Often enough the requirements are self-contradictory or just contain too many unnecessary corner cases. I've seen requirements that sounded really simple in the requirements doc, but turned out to be extremely hard to test because they implicitly defined a state machine with dozens of transitions.
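To make that concrete, here's a minimal sketch; the order states, events, and the "can cancel before it ships" requirement are all invented for illustration, not taken from any real spec:

    # A one-line requirement like "an order can be cancelled before it ships"
    # quietly implies a transition table once you enumerate the states.
    # Every state and event below is hypothetical.

    ALLOWED_TRANSITIONS = {
        ("created", "cancel"): "cancelled",
        ("paid", "cancel"): "refund_pending",
        ("packed", "cancel"): "refund_pending",   # warehouse has to unpick it
        ("shipped", "cancel"): None,              # no longer allowed
        ("refund_pending", "refund_issued"): "cancelled",
    }

    def next_state(state, event):
        """Return the new state, or None if the transition is forbidden."""
        return ALLOWED_TRANSITIONS.get((state, event))

    # Even this toy table surfaces questions the prose requirement never answered:
    # what happens to already-packed stock, who issues the refund, and so on.
    assert next_state("paid", "cancel") == "refund_pending"
    assert next_state("shipped", "cancel") is None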


Yes, definitely. This applies a lot to infrastructure issues, too. But pseudocode or extremely simple test case code can do this a lot better than tossing something into production to find out if it sucks.

I suspect a lot of the HN hostility to proper requirements analysis is coming from writing trivial systems.


Especially because English is often a terrible language to express requirements.


Especially when it's written by people who aren't native speakers but work in a "we're a modern company now" environment.


Core to agile is small, incremental releases. Most technological innovation is done in an agile way: in small, releasable increments. For example, we've been releasing small improvements to cars and planes for over 100 years. Every year a new model, with small improvements.

Humans are really bad at designing and building large improvements from paper requirements. Small improvements mean that most of the requirements are already known and tested, and only small parts are uncertain.

The real problem is that testing requirements is really hard. You need to build the product to test the requirement. That's why most industries have an intermediate between requirements and product that is testable: this could be small scale prototypes, but more and more it's a virtual model that can be tested through software algorithms.

If we want to make real progress in the software industry, we need to move beyond word documents with requirements that are by definition not testable, to testable software models that don't require a full implementation. Low-code, model driven development is an example where this is happening.
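As a rough illustration of what a "testable model" can look like well short of a full implementation - the shipping rule and figures below are made up, not from any particular tool:

    # A business rule expressed as a tiny executable model instead of prose.
    # The rule and the numbers are invented purely for illustration.

    def shipping_cost(order_total, is_member):
        """Hypothetical rule: members ship free; everyone else ships free over 50."""
        if is_member or order_total >= 50:
            return 0
        return 5

    # Scenarios a stakeholder can review and a machine can check,
    # long before any real system exists.
    scenarios = [
        (49.99, False, 5),
        (50.00, False, 0),
        (10.00, True, 0),
    ]
    for total, member, expected in scenarios:
        assert shipping_cost(total, member) == expected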


The point of gathering requirements is not to "know everything". It's often taken that way because people like to blame the requirements: "We didn't build that because no-one gave us a requirement". You can have three reactions to that:

1. accept the blame - beef up the requirements gathering process, attempt to gather ever more

2. reject the blame - move to an agile process where everything is learned on-the-hoof

3. reject the premise.

People tend to either land in 1) or 2) above, but I think 3) is the correct place. Gathering requirements is about figuring out how much we know, identifying what we don't know, and working the risks. On some projects the risk is that we don't know enough about what customers really need (= agile engagement required). On others, the risk is literally all about delivery.

Iterative development is great at addressing some risks. It doesn't address other risks at all; it's not well-suited in many instances where the information known up-front is substantial, or where it's difficult to engage users.

The key is to recognise what problems you need to manage, and choose a suitable methodology to do it.


This is well-known and is the entire reason that agile exists. A lot of teams will write stories and run sprints and think they're doing it right. The actual definition is the ability to flex on scope and timing to meet changing requirements and priorities. Long-term estimation is never going to be accurate, so setting a fixed date and scope up front is automatically doomed.

The strategy I use is to scope out as much as you can up front: a list of high-level user stories. Give these a rough prioritization (MoSCoW works) and some rough estimates on each. Now estimate your velocity with a few possible team configurations. Also, assume your backlog will grow about 10% as you go when new stuff is uncovered.

Now if you need to schedule a launch or set a budget, set it deep into the non-mandatory features. If everything goes off the rails, you have cushion to avert failure. If everything goes ok, you will deliver a richer product. You'll also be able to track very accurately as you go how close you are to the plan week by week.
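A rough sketch of that math, with made-up numbers (the story points, velocity, and team split are just examples; only the 10% growth assumption comes from the comment above):

    # Invented planning numbers; only the shape of the calculation matters.
    must_have = 120      # story points for mandatory stories
    nice_to_have = 60    # story points for non-mandatory stories
    growth = 0.10        # backlog tends to grow ~10% as new work is uncovered
    velocity = 20        # points per sprint for a given team configuration

    total = (must_have + nice_to_have) * (1 + growth)
    mandatory = must_have * (1 + growth)

    print(f"Full scope: ~{total / velocity:.1f} sprints")
    print(f"Mandatory scope only: ~{mandatory / velocity:.1f} sprints")
    # Scheduling the launch against the full scope means overruns eat the
    # non-mandatory features first instead of blowing the date.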


It really depends on what you do. If you have a well-defined problem that is complex enough to require quite some work, then I would say: do as little as possible to be able to imagine as much of the workflow as possible, while also staying as flexible as possible.

If a problem is less complex or can be released iteratively, then that's the lean way to do it, and you get good learning along the way. But often, to solve even part of the problem, you already need a load of stuff to be taken care of.

The key for me is to stay in text or cheap click-dummies for long enough. Depending on the complexity, I go through several stages:

generally:

- Always probe for details if you can imagine some already; you are trying to know as much as possible as soon as possible. File it all into a "to take care of later" list at least; better yet, sort it in properly right away.

- write down everything (maybe others can remember everything, fine too :) )

- change and refine whenever something new has to be taken care of; it will always be easier to do now than in a later step.

1. gather high level requirements with the stakeholders

2. sketch a rough workflow. I usually do a nested list.

3. write down a complete workflow

4. now you might know what you need, so define a rough UI, technology, interfaces

5. still in text: write your concept so someone else understands it

6. talk everyone involved through the concept (first stakeholders, then devs)

7. double-check if you can't simplify or leave out anything, at least for a first version

8. if necessary: do mockups, designs, schemas

9. only now start to program (for difficult stuff a prototype)

- On top of that, it might be helpful to have a small checklist, depending on your needs, with entries like "reporting?, testing?, support?"


Agreed, I've been on many projects where a client only had vague requirements and useful clarification only came in response to seeing the app.

This is reasonable, it's human, but does anyone have a good approach to iterative development on fixed-price contracts?

I've been on many fixed-price projects that are "agile" in name only; general issues I've observed:

- Iteration on requirements becomes confrontational (pay for a change request), making it difficult to build a good product as we all learn what does/doesn't work for users throughout development.

- The upfront estimate is inaccurate, causing time pressure on development and resulting in rushed work, which negatively impacts code quality and team learning.

The traditional answer is to have the client commit to specific requirements and hold them to it.

But what I'd really like to figure out is a way to acknowledge evolution of requirements will happen so we can work _with_ clients to build great products.

I struggle because this seems incompatible with fixed pricing, and large companies seem to only want to do fixed price.


In a past life as a project manager for a custom software consultancy, we had a rule of thumb based on experience that a functional prototype[0] takes about 25% of a total project's budget.

Whether the contract was time & materials (preferred) or fixed bid, that 25% rule worked well as an early indicator that the project was likely to go over budget. It allowed us to have early conversations with the client about cutting scope or expanding the budget to cover the unknown complexity.
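For example (the budget and prototype figures here are invented), the check is just a back-of-the-envelope projection off that 25% rule:

    # Invented numbers; the point is the early-warning projection.
    budget = 200_000           # total project budget
    prototype_cost = 65_000    # spend to reach the end-to-end functional prototype

    projected_total = prototype_cost / 0.25   # prototype ~ 25% of a healthy project
    overrun = projected_total - budget

    if overrun > 0:
        print(f"Projected total ${projected_total:,.0f} is ${overrun:,.0f} over budget; "
              "time to talk scope cuts or more budget.")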

We'd also dramatically increase our rate to reduce our risk on fixed price projects.

[0] Barebones, ugly, but functional from end-to-end.


>If that is the case, surely we can find better ways to uncover the requirements, and better tooling will help solve the problem. Experience tells me that people don’t know everything beforehand.

This is highly context dependent. In some domains the business basically just needs to throw random things at the wall and see what sticks because nobody can know what they really "need" until it's tested in front of a customer. In other domains they have an incredibly detailed view of the behavior that they need.

Some businesses are in the weird situation of just not being very good at figuring out what they need, and an improved process would save tons of time and money.

In others nobody even thinks about any of this because their requirements are so simple and obvious nobody needs to.

Iterative development is useful a lot of the time, but it's not a panacea, and it's not a replacement for fixing a broken requirements process when that's what's needed.


Sounds like you don't like to be measured.


>"requirements are not a document by a process"

by -> but


FTR this wasn't a grammar nit; these two different words in this context have opposite meaning! So, as a former teacher of English as a Second Language, I offered the substitution in order to help make the meaning clear.

Misguided downvote, imho. (shrug)


How do they have opposite meanings? Can I use "by" to declare A and B as opposites in such a context? (Trying to learn - I've never heard "by" used like this.)


"not a (document [created] by a process)"

vs

"not a document but (rather) a process"

In the former (as OP typed it), it's grammatically suspect but also seems to imply a missing "created" like I inserted. In that case it'd be ambiguous whether the OP feels requirements are not documents, or perhaps they are documents, just not ones "by a process".

In the latter, which I took to be the intended meaning, OP is saying "requirements are a process, not a document."

The "not X but Y" is grammatical and clear, equivalent to (boolean pseudocode) "Y && !X".


Sad upvote... :(



