
Ask HN: Better tools for the software requirements / scoping phase? - castdoctor
Many software projects seem to fail or go over budget because of poorly defined or changing specifications. It seems we have excellent tools to manage the delivery of software, but less so for the design/scoping phase.
What are your thoughts on using better tools that leverage, say, Domain Driven Design or BDD (Behaviour Driven Development) to engage end-users early on?
======
mjul
The underlying assumption with requirements is often not stated explicitly:
that people _can know_ everything in detail, in advance.

If that is the case, surely we can find better ways to uncover the
requirements, and better tooling will help solve the problem.

Experience tells me that people don’t know everything beforehand. Thus the key
assumption is not valid.

Then the question we should be asking is: how do we most efficiently bring
people to where they discover and understand the requirements?

Experience tells me people are much better at giving concrete, specific
feedback to a running system than to an abstract requirements document.

Hence iterative development.

In essence, requirements are not a document but a process.

~~~
beat
This is largely wrong.

No, people cannot know "everything in detail, in advance". That doesn't mean
that they don't know _anything_. They know a lot. Nobody with any actual
experience in requirements-gathering expects 100% perfection. So the
underlying assumption about the underlying assumption is wrong.

After 20+ years in this industry, I'm long past believing the conventional
wisdom that running systems are the best way to gather better requirements.
It's _not agile_. Think about it. A key part of agile is to push everything to
the left as much as possible - to catch problems as early as possible in the
cycle. What's earlier than before you write the code at all? Writing code to
find out what's wrong with it from a requirements perspective is really
inefficient.

This isn't to say we shouldn't get working code out there as quickly as
possible, or that feedback from working systems has no value. But this idea
that it's the _only_ way to get meaningful requirements, that's just BS.

Requirements aren't a document, or a process - they are a _system_.

~~~
adrianN
I've found that writing (pseudo-)code is absolutely necessary to find problems
in the requirements. Often enough the requirements are self-contradictory or
just contain too many unnecessary corner cases. I've seen requirements that
sounded really simple in the requirement doc, but turned out to be extremely
hard to test because they implicitly defined a state machine with dozens of
transitions.
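
To make this concrete, here is a toy sketch (a hypothetical order-cancellation
rule, not from any real project) of how coding up a one-sentence requirement
surfaces the hidden state machine:

    # Hypothetical rule: "Users can cancel an order until it ships."
    from enum import Enum, auto

    class State(Enum):
        PLACED = auto()
        PAID = auto()
        PACKED = auto()
        SHIPPED = auto()
        CANCELLED = auto()

    # (state, event) -> next state. Every missing entry is a question
    # the requirements document never answered.
    TRANSITIONS = {
        (State.PLACED, "pay"): State.PAID,
        (State.PAID, "pack"): State.PACKED,
        (State.PACKED, "ship"): State.SHIPPED,
        (State.PLACED, "cancel"): State.CANCELLED,
        (State.PAID, "cancel"): State.CANCELLED,    # refund needed - by whom?
        (State.PACKED, "cancel"): State.CANCELLED,  # unpack? restocking fee?
        # (State.SHIPPED, "cancel"): ?              # "until it ships" - returns?
    }

    def step(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"spec is silent on {event!r} in {state.name}")

    print(step(State.PAID, "cancel"))     # State.CANCELLED
    print(step(State.SHIPPED, "cancel"))  # raises: the doc never said

One sentence, five states, and already three open questions in the comments.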

~~~
carlmr
Especially because English is often a terrible language to express
requirements.

~~~
adrianN
Especially when it's written by people who aren't native speakers but work in
a "we're a modern company now" environment.

------
hyperpallium
This was the inspiration for "Extreme Programming":

You make a minimal-cost mockup ASAP, for the client to try out, to see if it's
what they wanted. Clients can't appreciate requirements, so, like actual
(building) architects, you make a scale model first.

The alternative, of doing requirements as a separate phase, was ridiculed as
the "waterfall model". In reality, there's interactions between the so-called
phases of req, spec, design, code, test, maintain etc.

The truth is that understanding the problem is most of the work - not just for
the programming problem, but for the business problem. It's just difficult.
And when the world changes, you just have to change with it. If you try to
anticipate what's next, you'll invariably get it wrong.

Because the world is changing faster, software development has gone from
beautifully engineered software that just keeps working, to slapped together
solutions, and a fulltime team that runs alongside, slapping patches on
patches, continuously.

What's the point of spending the time and money on beautiful engineering, if
it's going to be scrapped tomorrow anyway (or even before it's finished)?

The only hope is for tools, libraries, frameworks and languages that address
lower levels of abstraction and change less frequently. This isn't all good,
but the JVM is one example.

~~~
mariopt
> What's the point of spending the time and money on beautiful engineering, if
> it's going to be scrapped tomorrow anyway (or even before it's finished)?

Good luck maintaining that code. And don't forget about the team: a messy
codebase and/or poor requirements will eventually break a team's morale,
until the day new devs come along and demand a refactor.

I'm starting to love the idea of a waterfall model for software. People can
still iterate on the problem with wireframes, even high-fidelity ones, and by
speaking with customers, doing market research, etc.

There is a lot of value, and a lot of time and money saved, when you have a
decent level of accuracy in the specification.

~~~
namdnay
It really depends on what you're building. There's a reason civil engineering
isn't "Agile" \- this also applies to major IT systems. A tiny difference in
requirements can have massive impacts.

Honestly, in these cases, there's nothing better than waterfall, partly to
save development time, but mostly for contractual protection: If a small
change can cost millions more (and from experience, they can), you need to
know who is responsible for paying those millions...

~~~
geezerjay
> There's a reason civil engineering isn't "Agile"

There are plenty of reasons why engineering isn't "agile" and why agile is
only usable in software development. One of the main reasons is that software
development projects manage a single resource: man-hours. Building the wrong
or inadequate solution has its cost, but rebuilding something from scratch
does not require additional resources to be allocated to the project: just
keep the same team working on it and results will pop up.

Engineering projects are very different from software development projects.
Materials and components are the driving cost of a project, and there is
plenty of stuff that must be done right from the very start. It's unthinkable
to scrap a machine or building or tunnel midway through, and it's
inconceivable that some disaster happens at all. Engineering either gets it
right at every single stage of a project or there are serious consequences to
deal with, which in some cases might even be criminal charges. If for some
reason a prototype crashes during development then the project might be forced
to shut down.

~~~
zolthrowaway
I 100% agree with you. Software is "soft". Making changes to a code base is
very cheap. Iteration just makes sense in software even for critical systems.
If you build a suboptimal bridge, you have to live with it. If you write
suboptimal software, you can test the actual product and fix it before you
even ship. You can't really test a bridge outside of simulations. GP's
comparison is apples to oranges in a lot of ways.

~~~
geezerjay
> You can't really test a bridge outside of simulations.

Just to pick a few nits, actually bridges are indeed tested during and after
construction. It used to be standard practice to do test runs with near-limit
loads to inaugurate bridges, consisting of getting a fleet of military
vehicles or water tankers to cross the bridge while surveyors monitored the
bridge's response.

Nowadays non-destructive testing techniques are favoured for a number of
reasons, including the fact that sensor rigs can also be used throughout the
structure's lifetime to help determine its fatigue life.

------
duncanawoods
You might be interested in [https://thorny.io](https://thorny.io) - an
interactive notebook for decision-making.

Its purpose is to capture design rationale and bring it to life, so that as
you refine your reasoning, the changes ripple through your decisions. It can
help communicate complicated design decisions that can be awkward to capture
in prose.

It's intended to be very low-friction, so more like Markdown than old
requirements-management or decision-support systems. My dream is that it can
help us tackle decisions so complicated we usually give up, e.g. when
multiple decisions impact each other.

It's currently in beta and I'd love to talk to anyone interested in the topic.
Please drop me a line at duncan at thorny.io.

~~~
SyneRyder
That looks intriguing, though I'd be more interested if it was a desktop app
rather than a website. It reminds me a bit of Soulver, though Soulver is
focused more on numbers & calculations:
[https://www.acqualia.com/soulver/](https://www.acqualia.com/soulver/)

~~~
duncanawoods
Soulver is very cool! We have lots of tools for calculations; it's funny we
have so few for reasoning.

If beta users find the web app useful then I can release native apps, but I
must resist until it's proven!

~~~
SyneRyder
Ahhh, excellent! I definitely agree with proving the demand for an app before
spending time expanding it. Great strategy!

------
mienski
As someone who spends most of their days in the design/scoping phase, then
watches the product go into development where it encounters constant
misunderstandings and gotchas that the customer never told us about or never
realised themselves, I completely agree that there is a huge disconnect
between the scoping and requirements phase and the build phase.

I almost feel guilty that my design and scoping work is effectively useless
to a developer: all my mockup layouts have to be built from the ground up,
and my requirements aren't actionable in any way unless someone feels like
reading them (I try to keep them as succinct as possible, but the nature of
working for clients also means I have to be somewhat specific so that people
know when they should actually pay us). I've looked into things like Cucumber
([https://cucumber.io/](https://cucumber.io/)) so that my requirements can
actually be compiled as tests, but adoption is slow and arduous, and all I'm
really doing is adding more work for a dev.

My latest line of thinking is that I need a way to show the user interface,
and then the data flow and logic all the way back through the system (usually
a back-end DB or a customer legacy system). It's vital that these are
presented together, hence my current process is interactive mock-ups built in
Sketch ([https://sketchapp.com/](https://sketchapp.com/)) and hosted on
Invision ([https://www.invisionapp.com/](https://www.invisionapp.com/)) which
allows the customer and developer to click around and see it on a mobile
screen so they really get a feel for it. Finally I couple that with a BPMN
diagram which has swim lanes not just for the traditional system swim lanes,
but also for a user (i.e. User taps Submit) and for a user interface (i.e.
shows the mockup screen that is displayed), and then the logic flows down
through the diagram. (e.g. User, User Interface, Mobile App Logic, Server
Logic, Server DB, etc.)

~~~
beaker52
Can I suggest you involve your entire team in the discovery phase, meaning
talking to the customer and interacting with the problem from the very start?

Shared understanding (passing on what you've learnt on behalf of the rest of
the team doesn't count) can help everyone (including the customer!) understand
the whys, whats and hows of their own problem space. Then as a whole team you
can come up with and vet the solution, qualified by the deeper understanding
everyone in the team has.

This deeper understanding will help you arrive at a better solution, wasting
less time building the wrong thing, and reduce the friction in "handing off"
to development (because there isn't a hand-off).

~~~
roel_v
This here. Having someone who doesn't have the social skills of a house plant
to talk to the customers, and then have the neckbeards in the back room code
it up, is not the way to go, and nowadays I refuse to work with partners who
work like this (or with subcontractors who want to force this workflow on me,
and not limited to software dev either - I no longer work with e.g. building
contractors who work like this either). And yes, I understand the point of
specialization and division of labor, and yes in the past I too would have
much preferred to just be the guy being handed perfect specs and never having
to talk to anyone from the outside, and then when things go wrong there is
always 'the spec' to blame. But it just doesn't work that way. In the past,
being a 'business programmer' was called being an 'analyst-programmer'. I
never really understood that, until I got to a point where I realized that the
actual 'programming' (i.e., 'coding') is the easy part; it's the 'analysis' of
the problem (well, and the formulation of a solution to the problem that comes
out of that analysis) that is the key to delivering value. But still, the
relationship between the problem understanding, the solution and the
implementation of that solution is so close that you just cannot completely
separate them.

I interviewed a bunch of firms for building a website last year; nothing
particularly fancy. Several of them (at least the big firms) sent in a guy who
would always start off explaining their 'process' (all fancy sounding), that
process essentially being 'you tell me what your problem is, then we will
together design a solution, and then I'll hand you off to our project manager
back at the office who will just have the programmers implement it; you'll
never even have to see these guys face to face!'. Uh sure, probably to 95% of
your customers, naive and gullible because of lack of experience, that sounds
great and like it's an advantage, but no way I'm going to get caught with my
pants down 6 months from now because there was some aspect we didn't cover in
the 'design' but the programmers coded it up like that anyway because hey, it
says so in the spec, right?

------
ian0
Not a software tool, but I was recently introduced to the design-sprint[1]
methodology from Google and found it helped a lot with the requirements
gathering and speccing phases. It was also light and easily implementable -
they have some resources there too.

[1]
[https://designsprintkit.withgoogle.com/](https://designsprintkit.withgoogle.com/)

~~~
subpixel
Also 'directed discovery' from Pluralsight:
[https://www.pluralsight.com/blog/career/product-development-directed-discovery](https://www.pluralsight.com/blog/career/product-development-directed-discovery)

------
andymoe
We do this thing we call Discovery & Framing at the kickoff of a new project.
It usually lasts 3-6 weeks and involves design, product, engineering and
someone with the power to make decisions on the spot. We call this role a
product owner.

It’s a good way to get stuff out of folks’ heads, validate the problem and
solution with users, and end up with some stuff for everyone to execute on by
the end of the process. You can google the term for more detailed
descriptions.

As for tools, we’ve been having success with RealtimeBoard and Pivotal
Tracker (maybe gives away where I work) and of course a ton of sticky notes.

~~~
andymoe
Since this got some upvotes, if you want a taste of a process like this
condensed into an hour, you can visit:

[https://pivotal.io/office-hours](https://pivotal.io/office-hours)

(Free, staffed by a balanced team over a lunch hour)

------
fbonawiede
I'm a co-founder of a Swedish startup, and we have built a tool targeting
product owners and product managers. It aims to eliminate endless email
threads and disconnected workflows that are common in product development. It
sort of replaces Word, Google Docs, and Confluence, and integrates with Jira
and Slack.

I would be happy to give you a quick demo!

[https://www.delibr.com/demo](https://www.delibr.com/demo)

~~~
dotancohen
I am seriously interested, however your website has almost no information and
I'll not book a 30-minute appointment at a later time to see how it works. The
only informative part of the whole website seemed to be this screenshot:
[https://www.delibr.com/img/design/features/section2/xstep1.p...](https://www.delibr.com/img/design/features/section2/xstep1.png.pagespeed.ic.SffT8WJQFM.png)

Add more screenshots and less "integrates with slack".

~~~
fbonawiede
Thanks for the feedback! We have avoided putting too many screenshots on the
webpage since we were making changes all the time initially. However, we are
now ready to put some up.

"I'll not book..."? I guess you meant "I'll book..."? =)

~~~
ken
No, nobody wants to spend 30 minutes to see if a vague description of some
software might work for them. There aren't enough hours in the day. There are
about 20 products mentioned here already. That's more than a whole day of
doing nothing but having someone try to sell me something which probably
doesn't do what I'm looking for.

~~~
fbonawiede
How about 10 minutes and you’ll get a Delibr T-shirt sent afterwards?

------
contingencies
Requirements are almost never perfectly specified. One of the most substantial
and hard-learned parts of good systems design is automatically pre-empting, as
far as reasonably possible, the probable range and depth of requirements scope
shifts over the project lifetime. (Critically, that includes the 90% of a
project's lifetime which is post-development maintenance.)

In short: if you have good people it shouldn't matter! Experienced people will
extend the assumed scope beyond the stated requirements without being asked,
and either do so efficiently within the resources available to the project or
bring an appropriate level of attention to the limitation before it becomes an
issue.

The somewhat less PC and far snarkier explanation, in the immortal words of
Twitter: _Don't pick the right tool for the job, pick the right framework
which secures you extra work for being a tool._ - @iamdevloper (via
[http://github.com/globalcitizen/taoup](http://github.com/globalcitizen/taoup))

------
mysterydip
I recently took a model-based systems engineering course that opened my eyes
to some helpful methods.

Use UML, SysML, etc. to diagram out your requirements (starting at a high
level, getting more granular as the system matures). Now build a high-level model
of your system design (software and hardware if required). Match parts of your
system to the requirements. This lets you see gaps in two directions:
requirements you haven't addressed (because they aren't connected as
"satisfied by" anywhere on your model), and questions to ask for more detail
on a given requirement that could drive the design.
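
Not SysML, but the same two-direction gap check can be sketched in a few lines
of Python (the requirement IDs and component names here are made up):

    # Requirements from the (hypothetical) spec.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}

    # Which design element claims to satisfy which requirements.
    satisfied_by = {
        "AuthService":  ["REQ-1"],
        "ReportModule": ["REQ-2", "REQ-4"],  # REQ-4 isn't in the spec
    }

    covered = {r for reqs in satisfied_by.values() for r in reqs}

    # Direction 1: requirements nothing in the model addresses yet.
    print("unaddressed:", sorted(requirements - covered))  # ['REQ-3']
    # Direction 2: model elements tracing to unknown requirements.
    print("dangling:", sorted(covered - requirements))     # ['REQ-4']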

As others have said, there will always be late requirements that change
things, so it won't resolve those. It can, however, show stakeholders how much
changing a requirement can cost in terms of rework/time by showing how the
model has to change to accommodate.

The caveat with this is: the model needs to reflect your actual design, and
you need to keep it up to date. The times I've used it so far it has been a
useful exercise.

~~~
Pamar
UML? Really? Most business users want to discuss the UI and (if you are lucky)
the data structures, two things that UML is not really very concerned with. In
my experience, UML has been a really bad choice for working on requirements
for corporate software.

~~~
mysterydip
Agreed, it's not what business users want to (or should) see. This is an aid
for what your team does internally, instead of just lists and documents.

------
BjoernKW
I agree that what you describe is a huge problem. I'm not so sure though if
that problem can be alleviated by additional tooling. More often than not the
cause of this is defective processes and assumptions rather than deficient
tools.

DDD, ubiquitous language and bounded contexts in particular, can be enormously
helpful with defining better requirements.

I'm not so sure about BDD in this context, though. While the notion of the
customer / product owner writing specifications in a format that developers
can then use to test their code sounds great at face value, I have yet to see
a project where this is done consistently and continuously.
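
For reference, the format in question is Gherkin-style given/when/then. The
same idea can be sketched as a plain pytest-style test (the discount rule
below is invented for illustration):

    # Hypothetical business rule: 10% off after 2+ years of loyalty.
    def apply_discount(total, loyalty_years):
        return total * 0.9 if loyalty_years >= 2 else total

    def test_loyal_customer_gets_ten_percent_off():
        # Given a customer who has been with us for 3 years
        loyalty_years = 3
        # When they check out a 100.00 cart
        total = apply_discount(100.00, loyalty_years)
        # Then they pay 90.00
        assert abs(total - 90.00) < 1e-9

The hard part isn't writing one of these; it's getting the product owner to
keep writing them, which is exactly the consistency problem above.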

Moreover, some types of requirements can be better explained by using diagrams
or UI mockups, which doesn't really fit the BDD paradigm.

~~~
mping
Agreed, it's not about the tools but the process. I've worked with different
agile coaches - most of them were crap, but one of them really struck a chord
in how he would find flaws in the organisation that were reflected on the
software side. He was really good at devising strategies to overcome this.

To address the specific question, I think most of the time there's a problem
because there is no one who takes care of making sure everyone has the same
vision and understanding of what needs to be done. And because sometimes words
fail us, I think mockups are a great way of uncovering and sharing
requirements.

------
beat
It's not free, and it's not simple, but Aha
([https://www.aha.io](https://www.aha.io)) can be very helpful in figuring out
product.

Engaging end users is a problem from a couple of directions. First, the very
idea of "end user" \- you need to engage _customers_ , not end users. (Take
Facebook, for example... you're not the customer, you're the product. If you
build something for Facebook, your customer is Facebook, not Facebook's
users.) This situation isn't exactly unusual. In this case, it's the
customer's responsibility to gather feedback from end users - and put you in
the feedback loop, as needed.

------
jdswain
There is a class of tools for this called Requirements Management. They tend
to be expensive and clunky tools in my experience, but I haven't used one for
quite a few years. Ones I have used are Rational RequisitePro (IBM) and
DOORS, which is now apparently an IBM Rational product too. I've worked on a
project that used DOORS for requirements management but also for requirements
traceability, a somewhat tedious task that let us trace from a requirement to
all the development documentation that implemented it, and eventually to the
code if required.

From a logical point of view I think it makes a lot of sense to start from
clearly documented requirements, then work forward to design, implementation,
and testing, with links back to each requirement. I don't know how testing
(at a higher level than unit testing) can really be effective unless the
testers have requirements documents to use as their starting point. Use Cases go some of
the way to providing this information but tend to be a bit less formal. Agile
methodologies tend to be less formal than older methodologies that emphasised
this kind of process more. A lot of Agile is good, but it does also tend to
forget lessons from the past.
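
A lightweight version of that traceability can live in the test code itself.
A hedged sketch, assuming pytest; the "req" marker and the SRS IDs are
invented for illustration:

    import pytest

    @pytest.mark.req("SRS-042")  # "the session shall expire after 15 minutes"
    def test_session_times_out():
        assert True  # the real check goes here

    @pytest.mark.req("SRS-043")  # "activity shall renew the session"
    def test_activity_renews_session():
        assert True

    # In conftest.py, a hook can dump a requirement -> tests matrix
    # (register the custom "req" marker in pytest.ini to avoid warnings):
    def pytest_collection_modifyitems(items):
        matrix = {}
        for item in items:
            for mark in item.iter_markers(name="req"):
                matrix.setdefault(mark.args[0], []).append(item.name)
        print(matrix)  # {'SRS-042': ['test_session_times_out'], ...}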

------
jimduk
People build tools for use. Software projects are an example of this. A lens I
find helpful is to view a software project as a "process (or game) of
transforming shared and individual understanding (or belief), into a tool or
artefact". The project falls short if the understanding is not valid or the
tool is poorly built or if the tool doesn't encapsulate the understanding or
if circumstances change and the tool becomes less useful.

Transferring valid understanding into the final artefact is a key constraint
in many projects (reading Goldratt and thinking of this transfer of
understanding as a constraint was helpful for me).

There are many ways to fail. Some of the traditional ways to succeed are:

i) rely on an individual who really understands the desired tool, and who has
the authority and skill to communicate and be the final arbiter (including
sometimes writing it all themselves). Sometimes this can also be done by a
small group.

ii) write a really clear, well-written, very hard to misinterpret document and
get professionals to develop and test this system (before the document goes
significantly out of date)

iii) Run an agile process where the business can describe what they want in
small pieces that can be delivered so quickly that little understanding is
lost

Obviously what works is massively contextual, depending on the domain,
funding, resource requirements, etc. (Glen Alleman is good on this.)

So I would argue 'good software tools for requirements' are critically
dependent on your approach for how you are going to turn 'understanding' into
a 'system', and you don't want to worry too much about them until you are
happy with your approach. At that point you can start building your 'meta-
tools'.

------
a13n
I run Canny ([https://canny.io](https://canny.io)), which is a tool that
software companies use to keep track of feature requests from their customers.

One awesome thing you can do is ping everyone interested in a feature, and ask
them how they want it to work. See it in action here:
[https://feedback.canny.io/feature-requests/p/tags](https://feedback.canny.io/feature-requests/p/tags).
(Scroll down to Sarah's comment on July 20.)

Whenever we're thinking about building a feature, we ping all the
stakeholders. This gives us solid context on how they want it to work, which
helps us define our MVP. If we need to follow up for more information, it's
easy to do that via email / Intercom.

Hope this doesn't come off as salesy – I really felt it was relevant.

------
ryanmarsh
The solution to a “complex” problem is only knowable after the fact, versus a
merely complicated problem, which is knowable before the fact. Any
sufficiently useful computer program is typically “complex”, though not always.

That said, it is still incredibly helpful for everyone to agree on what we’re
building in the present iteration/sprint. You mentioned BDD which can be very
helpful when paired with a team practice such as Example Mapping[0].
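
For the unfamiliar: an Example Mapping session produces one story card, a
blue card per rule, green cards with concrete examples under each rule, and
red cards for open questions. A rough sketch of one captured as plain data
(the story and numbers are invented):

    story = {
        "title": "Free delivery",
        "rules": {
            "Orders over 50 EUR ship free": [
                "49.99 EUR order -> pays delivery",
                "50.00 EUR order -> ships free",
            ],
            "Free delivery only within the EU": [
                "60 EUR order to France -> ships free",
                "60 EUR order to Norway -> pays delivery",
            ],
        },
        "questions": [
            "Is the 50 EUR threshold before or after discounts?",
        ],
    }

    # Too many red cards? The story isn't ready for the sprint.
    print("ready:", len(story["questions"]) == 0)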

Other practices such as DDD can help you bound the problem and define it.
There are also some helpful lessons in Feature Driven Development.

Source: I teach BDD workshops for a living

[0] [https://cucumber.io/blog/2015/12/08/example-mapping-introduction](https://cucumber.io/blog/2015/12/08/example-mapping-introduction)

------
Redsquare
You can engage all stakeholders via a series of eventstorming sessions
[http://ziobrando.blogspot.com/2013/11/introducing-event-storming.html?m=1](http://ziobrando.blogspot.com/2013/11/introducing-event-storming.html?m=1)

~~~
meh2frdf
Engaging isn’t sufficient; it’s a start. What is typically lacking is someone
with the intelligence to properly understand the problem and map out a
roadmap to get there. There is too often a naive belief that the stakeholders
will put you on the right path. They don’t; they just give you information
that feeds into the design process. Sure, your solution should be regularly
validated with the users, but in my experience they tend to lack the ability
to give a vision which isn’t just an incremental improvement on the current
solution.

~~~
Redsquare
Note the word stakeholders, not simply users. Stakeholders includes the team
actually delivering, not just the end client/users.

------
motohagiography
I've been doing a variation of this for security, as it's the main need that
requires abstraction-on-down definition, and it doesn't translate well into
agile environments where developers are designing solutions. (qtra.io, in
private beta, so no free experience yet)

The fit problem I'm having is getting the technical threat hunters who
populate the market to think about risk and security design, or
product/project managers to reason even at a high level about a domain they
delegate to technologists.

Still iterating for fit, but so glad there is a thread of people thinking
about architecture.

------
crdoconnor
I got frustrated with Cucumber and Cucumber-esque tools for doing BDD, so I
built my own, optimized for programmer usability (strict type system,
inheritance built in, sane syntax, etc.):

[http://hitchdev.com/hitchstory](http://hitchdev.com/hitchstory)

The time when it was most useful as a "BDD tool" was when I was working with
an extremely technical stakeholder who was proposing behavior for a
command-line tool he wanted.

I simply wrote 'tests' that described the command line tool's behavior and
showed the YAML to him. He corrected the mistakes I'd made by misinterpreting
his original requirements and then I built the tool to that spec and when it
passed I was confident I'd nailed his requirements.

QA picked up bugs afterwards but they were all either (quickly rectified)
mistakes he'd made in the spec or environment issues. I had zero
spec<->programmer communication issues even though (and here's the crazy part)
_the domain was very complex and I didn't understand it_. It had to do with a
highly customized software system I didn't understand which enacted some sort
of financial process which I also didn't understand.

Cucumber can do this in theory, but in practice the spec is not sufficiently
expressive and the stories end up being ridiculously, unusably vague. Unit
tests could also do this in theory I guess, but good fucking luck getting a
stakeholder to read them even if you do manage to write them "readably".

I'm taking this process a step further. Although these YAML specifications
were useful for me in the past to collaborate with stakeholders, they're still
not amazingly readable. For instance, the "YAML inheritance" makes it easy for
programmers to maintain but harder for non-technical stakeholders to
understand.

Instead of sacrificing maintainability for readability I created a process to
easily generate readable stakeholder documentation from the 'YAML tests'. I
used this in the libraries on the website above to generate always-in-sync-
and-still-highly-readable documentation.

I think this could be used to essentially write low level "unit" tests which
generate documentation which stakeholders can interpret (e.g. defining the
behavior of a complex pricing algorithm) and use that to get a quicker
turnaround on understanding, defining and modifying desired behavior in
complex domains.
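
A hedged sketch of that "one spec, two outputs" idea in plain Python (this is
not hitchstory's actual syntax, and the VAT rule is made up):

    # The same step list is executed as a test and rendered as Markdown.
    spec = {
        "story": "Price includes VAT",
        "steps": [
            ("add item", {"price": 100.00}),
            ("expect total", {"total": 120.00}),  # assuming 20% VAT
        ],
    }

    def run(spec):
        total = 0.0
        for step, args in spec["steps"]:
            if step == "add item":
                total += args["price"] * 1.20
            elif step == "expect total":
                assert abs(total - args["total"]) < 1e-9, f"got {total}"

    def to_markdown(spec):
        lines = [f"## {spec['story']}", ""]
        lines += [f"- {step}: {args}" for step, args in spec["steps"]]
        return "\n".join(lines)

    run(spec)                 # executable: fails when behaviour drifts
    print(to_markdown(spec))  # stakeholder docs, always in sync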

------
txime
If you don't mind a shameless plug, we'd be happy to invite you to Txime, a
collaborative webapp for conducting DDD and especially event storming sessions.
[http://www.txime.com](http://www.txime.com) We're in beta but opening soon.

------
shusson
In my experience, focusing on the tools usually ends up with failed projects
or blown budgets, e.g. "we don't have a product yet, but look at all these
shiny cucumber tests!". I think a good start is to have someone who is very
user-focused on the team.

------
nickjj
I just use a whiteboard or paper. Nothing beats it IMO because at this stage
you want to create a brain dump and chances are you'll be making a lot of
changes.

With most programs, you'll spend half your time battling its UI and trying to
get around limitations.

------
sailfast
Many people have said this but here goes anyway: The tooling will not help you
better define the specifications. The tooling will not help you manage
changing specifications. You can cover all the bases easily in a free GitHub
project (Edit: pick your web-based tool, basically) without too many issues.

I would argue that the reason things get called out as poorly defined or
change is because risks are not addressed early, and hypotheses are assumed to
be theorems.

Make sure your teams test the main assumptions early, with actual code if at
all possible. That will call out why your stories aren't clear enough.

Tools are useful, but they won't solve your problem.

------
bpizzi
These days I just sketch wireframes in a Google Slides presentation.

Bonus points for the interactivity: I can share the URL for remote work
sessions, and the stakeholders on the other end of the line can see me
create/adapt wireframes and notes in real time, while I'm explaining
everything on the phone (using anonymous access for those not logged into 'The
Google').

Google Docs has had version management since this year (I think), so I can
pinpoint the evolution of specifications over time. When I'm done I'll just
export a PDF and attach it to a "Spec done!" mail to everyone.

------
sigsergv
This is a very advanced topic and I think (almost) all “end-users” are
absolutely not ready to embrace it. You cannot be completely sure that you and
the “end-user” are thinking about the same subject.

------
agentultra
I think it depends on the problem domain. In high-risk or regulated industries
having requirements and specifications is a prerequisite to shipping. There
are plenty of good sources for finding the standard expected formats out
there. If you're an IEEE member you'll find the ISO requirements and
specifications templates in the library.

Doing requirements gathering isn't an inherently bad process to adopt even in
non-regulated settings. However, I believe many developers will have a negative
reaction due to prior experiences with, or having heard about, the waterfall
method. The key thing to remember is that requirements don't, and shouldn't, have to
come with an estimate. Great requirements demonstrate a thorough understanding
of the problem. They're written from the perspective of the end-user.

Specifications are the dual of requirements and are what drive the
implementation that solves the stated problem. Specifications are something I
think we need to get better at. We do some unit testing, some integration
tests, and occasionally end-to-end user tests... some few of us do property
tests; but rarely do we write formal, verifiable specifications in a
modelling language that can be checked with a model checker. Rarer still do
we write proofs as part of our specifications.

We often elide specifications or we do the minimum necessary to assure
correctness. This is to the detriment of your team on a larger project. How do
you know your designs don't contain flaws that could have been avoided? How
much time are you spending before you realize that the locking mechanism you
chose and shipped has a serious error in it that is causing your SLOs to
slip?

For requirements I just use plain old Markdown with pandoc. For specifications
I use a mixture of Markdown and TLA+. I use TLA+ for the hard parts of the
system that we're unsure about and require correctness for. For the rest of
the specifications, which aren't as interesting, I simply use prose. It's a
matter of taste, but it does require an intuition for abstraction... when to
do it and how to think about systems abstractly.

We could definitely use better tools for TLA+-style specifications, btw. Maybe
more libraries that can translate TLA+ state machines into the host language
so that we can drive them from our integration tests, etc. Better IDE tooling.
Better training.
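
To illustrate what a checked specification buys you: below is a deliberately
broken toy spec explored exhaustively in Python. This is not TLA+, just a
sketch of the breadth-first state exploration a model checker like TLC
performs, written so the same transition function could also drive
integration tests:

    from collections import deque

    def next_states(state):
        # Each process independently steps idle -> trying -> critical -> idle.
        # Deliberately missing any lock check, so mutual exclusion fails.
        step = {"idle": "trying", "trying": "critical", "critical": "idle"}
        for i in (0, 1):
            s = list(state)
            s[i] = step[s[i]]
            yield tuple(s)

    def explore(init):
        seen, queue = {init}, deque([init])
        while queue:
            state = queue.popleft()
            yield state
            for nxt in next_states(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)

    bad = [s for s in explore(("idle", "idle")) if s.count("critical") > 1]
    print(bad)  # [('critical', 'critical')] - the checker found the flaw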

------
lifeisstillgood
I have been mucking about with the idea of textual prose discussion documents,
which hyperlink, Markdown-style, to use cases, which in turn link to tickets
in Jira or somesuch.

Then, as the document is discussed and altered by the owner, the use cases
alter and flow.

It's just trying to keep a prose discussion in line with everything else in a
dev-friendly manner (it can be stored in the docs folder in git and generated
onwards, etc.).

Still playing.

------
flarg
Wireframes for UI design, alongside UML for enterprise data models and process
flows, and finally good old truth tables work best in my experience. They work
to bridge the gap between business and development. You'll never get 100%
coverage but you'll get a lot of the way there.

------
tnolet
I loved reading "User Story Mapping" and putting it to use. It really changed
how I worked. Highly recommended.
[http://shop.oreilly.com/product/0636920033851.do](http://shop.oreilly.com/product/0636920033851.do)

------
smartmic
Coming back to the question: One possible helpful tool is Doorstop
([https://github.com/jacebrowning/doorstop](https://github.com/jacebrowning/doorstop)).

I like the approach of embedding requirements in source code (management).

------
cosinetau
Aren't there testing frameworks that were supposed to help solve this problem?

Maybe you could engage customers qualitatively, and translate that information
into software requirements and acceptance tests? I'm not entirely clear how
you want to engage these kinds of users.

------
gbtw
We did a lot with jstd and used DOORS successfully:
[https://www.ibm.com/us-en/marketplace/rational-doors](https://www.ibm.com/us-en/marketplace/rational-doors)

------
tmaly
I experience issues with poor requirements every day. The single biggest
problem I see is that the people writing the requirements have no idea how to
write them. We do not need a tool as much as we need better training.

------
bartkappenburg
I always refer people to this analogy on the problem of estimation in software
projects; I think it's pretty spot on:
[http://qr.ae/TUGpbW](http://qr.ae/TUGpbW)

------
DanielBMarkham
Technical coach here. This is a passion of mine. I've seen far too many teams
waste far too much time with horrible, horrible backlogs and tooling systems
-- usually set up with the most sincere good intentions, by the way. I care so
much about it I wrote a book on managing project information. The key example,
repeated throughout the book, is a small team meeting folks and starting to
make stuff people want. (Obligatory link:
[https://leanpub.com/info-ops](https://leanpub.com/info-ops))

I believe that if you can get the small-team scenario working over and over
again, scaling will work itself out. So far I have no reason to believe this isn't
the case -- and I've applied the principles in the book both to functional
coding and program management.

mjul's comment is the key one: you can't know everything before you start.
That doesn't mean you can't know _anything_. It means that there is a
"progressive elaboration" that has to happen on an as-needed basis. Otherwise
you're stuck either not knowing enough to get going -- or having created a
monster or a tools/information system that ends up running the project instead
of the team.

There are some sanity checks for whatever backlog/requirements system you are
using. Instead of my continuing to pitch, I'll just list the things that your
system should do no matter what kind of system you have.

- Handle whatever detail is needed before actual development happens.

- Be able to "flip around" and start from scratch within an hour or so
("starting from scratch" means beginning with nothing and ending with the
team starting to code), while keeping all the detail from test 1.

- Be reusable with other teams doing similar work. A backlog/requirements
system can't make work fungible, but it can enable better conversations in
other teams without being a burden.

- Limit meetings around organization to under an hour or so. Yep, you gotta
have those "meta" meetings from time to time and talk about things like
release plans. But they shouldn't take over your afternoon.

- Help the larger organization (if there is one) learn and grow over time.
Orgs learn from the bottom up. Good tools should facilitate this learning.

- Drive directly to acceptance tests. Good backlogs are testable. Things
shouldn't be in there that don't drive tests.

- Have controls in place to prevent abuse. As soon as you create some tool
for the requirements/scoping phase, somebody is going to go all
architecture-astronaut and overuse it. There have to be controls to prevent
this from happening. The tool should _facilitate_ the work of understanding
and scoping, not _replace_ it. (I think even a lot of tools vendors get
confused on this one.)

I even wrote an analysis compiler that demonstrates all of this as part of
writing the book. So if you think all of this is impossible -- happy to do a
demo.

There are a few other testable criteria for whatever your requirements/backlog
system is, but that should be enough to get you started deciding whether some
system is better or worse than another system.

------
slaymaker1907
I really like TiddlyWiki since it makes it easy to link everything together
and keep it organized.

------
j45
You may like a product roadmapping tool like Aha.io

------
wasd884
Better colleagues with better brains.

