Ask HN: How do you keep track of software requirements and test them?
195 points by lovehatesoft on April 19, 2022 | 125 comments
I'm a junior dev who recently joined a small team that doesn't seem to have much in place for tracking requirements or how they're tested, and I was wondering if anybody has recommendations.

Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met. I don't know whether that would be a mix of unit and integration/regression tests. Honestly, though, if this is the wrong track to take entirely, I'd appreciate feedback on what we could be doing instead.

I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to mimic DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!




In a safety-critical industry, requirements tracking is very important. At my current employer, all of our software has to be developed and verified in accordance with DO-178 [0]. We have a dedicated systems engineering team who develop the system requirements from which we, the software development team, develop the software requirements; we have a dedicated software verification team (separate from the development team) who develop and execute the test suite for each project. We use Siemens's Polarion to track the links between requirements, code, and tests, and it's all done under the supervision of an in-house FAA Designated Engineering Representative. Boy is it all tedious, but there's a clear point to it and it catches all the bugs.

[0] https://en.wikipedia.org/wiki/DO-178C


Just wanted to ask, this pretty much ensures you're doing waterfall development, as opposed to agile, right?


Not sure how the parent concretely operates, but there's no reason you cannot do Agile this way.

Agile iteration is just as much about how you carve up work as how you decide what to do next. For example you could break up a task into cases it handles.

> WidgetX handles foobar in main case

> WidgetX handles foobar when exception case arises (More Foo, than Bar)

> WidgetX works like <expected> when zero WidgetY present

Those could be 3 separate iterations on the same software, fully tested and integrated individually, and accumulated over time. And the feedback loop could come internally, as in "How does it function amongst all the other requirements?" and "How is it contributing to problems in achieving that goal?"


For safety system software, most people I know would be very nervous (as in, I'm outta here) about testing software components and then not testing the end result as a whole. Too many possible side effects could come into play, including system-wide things that only reveal themselves when the entire program is complete and loaded/running.

What you describe already occurs to some extent in the process and machinery safety sector, where specialised PLC programming languages are used. There is a type of graphical coding called Function Block, where each block can be a reusable function encapsulated with connecting pins on the exterior - e.g. a two-out-of-three voting scheme with degraded voting and an MOS function available.

The blocks are tested, or sometimes provided as a type of firmware by the PLC vendor, and then deployed in the overall program with the expectation that the behaviour inside each block is known. But before shipping, the entire program is tested at FAT.

Depending on the type of safety system you are building, and the hazards it protects against, there is potentially the expectation from the standards that every possible combination of inputs is tested, along with all foreseeable (and sometimes unexpected) mis-use of the machine/process.

In reality that's not physically achievable in any realistic amount of time for some systems, so you have to make educated guesses about where the big/important problems might hide, fuzz, etc. But the point is you aren't going to test like that until you think your system development is 100% complete and no more changes are expected.

And if you test and need to make any significant changes due to testing outcomes or emergent requirements, then you are potentially doing every single one of those tests again - at the very least a relevant subset plus some random ones.

Background: I am registered TUV FS Eng and design/deliver safety systems.

It's a whole different game: across the multi-year span of a project you might, in some cases, literally average less than one line of code a day. 95%+ of the work is not writing code, just preparing to write it, and testing.


To reiterate the parent's endorsement of agile and the point you seem to be taking issue with: nothing in Agile says you can't run final acceptance tests or integration tests before shipping.

We have done this in quite a few companies where things like functional safety or other requirements had to be met. Agile sadly gets a bad rep (as does DevOps) for the way it is rolled out in its grotesque, perverted style in large orgs (wagile etc., which is nothing but a promise to useless middle/line managers in large orgs that they won't be fired, or "dev(sec)ops" being condensed into a job title - if that is you, shoot your managers!).

If you increase test automation and get better visibility into risks already during the requirements management phase (e.g. you're probably doing D/FMEA already?), then nothing stops you from kicking those lazy firmware/hardware engineers who are scared of using version control or Jenkins to up their game, and making your org truly "agile". Obviously it's not a technical problem but a people problem (to paraphrase Gerald M. Weinberg), and so every swinging dick will moan about Agile not being right for them or DevOps not solving their issues, while in reality we (as an industry) have been having the same discussion since the advent of eXtreme Programming. I'm so tired of it I want to punch every person who invites an Agile coach simply for not having the guts to say the things everyone already knows; it's infuriating to the point that I want to just succumb to hard drugs.


> To reiterate the parent's endorsement of agile and the point you seem to be taking issue with: nothing in Agile says you can't run final acceptance tests or integration tests before shipping.

This is exactly right. I work in a highly regulated space, and we have been working in an Agile framework for a while now. There are two iterations baked into every release cycle (at the end) for final regression testing. That cycle re-runs every test case generated during the program increment, plus additional test cases chosen based on areas of the application that were touched during development.

On top of final validation, we also have an acceptance validation team that runs full integration tests after final validation is complete.


I would very much like to understand how it might be possible to improve on the conventional workflow for the work I am involved in. But I am not quite clear what agile as you implement it means, in contrast to v-model and waterfall, and how it provides advantages (I am guessing accelerated schedule?) to the process.

Can you refer me to any available online case studies etc, or provide me some more detail?

In the sectors I work in, we integrate off-the-shelf hardware such as instruments, valves, etc.; we don't manufacture from components as such.


I recommend you learn more about SAFe agile: https://www.scaledagileframework.com/ . The various frameworks all have their merits, but I find this one scales up to complex organizations, and you can simply drop the things that are not worth it for smaller businesses.



Waterfall is a great methodology where warranted. It ensures you're doing things in a principled, predictable, repeatable manner. We see all this lamenting about, and effort toward, reproducibility in science and build systems, yet we seem to embrace chaos in certain types of engineering practices.

We largely used waterfall in GEOINT and I think it was a great match; our processes started to break down and fail when the government began to insist we embrace Agile methodologies to emulate commercial best practices. Software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, i.e. for UI components, by all means be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of the thruster system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.


Even with “hard” requirements in advance, things are always subject to change, or unforeseen requirements additions/modifications will be needed.

I don’t see why you can’t maintain the spirit of agile and develop iteratively while increasing fidelity, in order to learn out these things as early as possible.


> I don’t see why you can’t maintain the spirit of agile and develop iteratively

The question is not whether you can't. The question is whether it provides advantages. Agile comes with its own downsides compared to waterfall. Note that I've been working with agile methods for most of my career and I don't want to change that.


If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization. ~ Gerald Weinberg (Weinberg's Second Law)

https://www.mindprod.com/jgloss/unmain.html


> If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.

If builders built buildings the way programmers write programs, we’d have progressed from wattle-and-daub through wood and reinforced concrete to molecular nanotechnology construction in the first two generations of humans building occupied structures.

Bad analogy is bad because programs and buildings aren't remotely similar or comparable.


On that path a lot of people would have died due to building collapses and fires though.


Still I feel like your analogy is the better one, things are moving very fast. With declarative infra and reproducible builds you’re pumping out high quality, well tested buildings at record speeds.


Programmers don't build, they design. It's more akin to what building architects do in a CAD program. They go through many iterations and changing specs.


When programmers are designing it is more likely to be in the early stages when the program is still small. Often once the program gets bigger, the effort devolves to simply building. They might feel like the design is wrong, but the inertia by then is against the design evolving.

What we need is a practical way to keep the design and implementation synchronized and yet decoupled.


You don't have to, but it is very common to fall into that trap.

If working within a safety-critical industry and wanting to do Agile, typically you'll break down high-level requirements into sw requirements while you are developing, closing/formalizing the requirements just moments before freezing the code and technical file / design documentation.

It's a difficult thing to practice agile in such an industry, because it requires a lot of control over what the team is changing and working on, at all times, but it can be done with great benefits over waterfall as well.


Big waterfalls, yes.


And... is your team consistently hitting the estimated product delivery schedules? (honest question)


You can and will make changes on the way but every change is extremely expensive so it’s better to keep changes low.


Actually, most functional safety projects use the V-model (or similar; the exact shape can vary a little according to needs), which is waterfall laid out in a slightly different way to more clearly show how verification and validation close out all the way back to requirements, with high degrees of traceability.

I've always wanted to break that approach for something a little more nimble, probably by use of tools - but I can't see agile working in functional safety without some very specific tools to assist, which I have yet to see formulated and developed for anything at scale. Also, there are key milestones where you really need to have everything resolved before you start the next phase, so maybe sprints, dunno.

The thing about doing waterfall/V-model is that, if done correctly, there is little chance you get to the final Pre-Start Safety Review/FSA 3, or whatever you do before introducing the hazard consequences to humans, and discover a flaw that kicks you back 6 or 12 months in the design/validation/verification process - all while everyone else stands around and waits because they are ready and their bits are good to go, and now you are holding them all up. Not a happy day if that occurs.

FS relies on a high degree of traceability and on testing the software as it will be used (as best possible), in its entirety.

So I'm not sure how agile could work in this context, or at least past the initial hazard and risk/requirements definition life cycle phases.

FS is one of those things where the progress you can claim is really only as far as your last lagging item in the engineering sequence of events. The standard expects you to close out certain phases before moving on to subsequent ones. In practice it's a lot messier than that unless extreme discipline is maintained.

(To give an idea of how messy it can get in reality, and how you have to try and find ways to meet the traceability expectations, sometimes in retrospect: on the last FS project where I was responsible for the design, we were 2.5 years in and still waiting for the owner to issue us their safety requirements. We had to run on a guess and progress speculatively. Luckily we were 95%+ correct with our guesses when reconciled against what finally arrived as requirements.)

But normally, racing ahead on some items is a little pointless and likely counterproductive, unless you're just prototyping a proof-of-concept system/architecture or doing a similar activity. You just end up repeating work, and then you also have extra historical info floating around, and there's the possibility that something that was almost right but is no longer current gets sucked into play, etc. Doc control and revision control are always critical.

Background: I am a TUV certified FS Eng, I have designed/delivered multiple safety systems, mainly to IEC 61511 (process) or IEC 62061 (machinery).


What does functional safety mean in the context you are talking about? Like fighter jets? Or what?


LNG Plants, Burner Management Systems, Mine Winders, Conveyors - any process plant or machinery where there is potential for harm to come to humans and there is an electronic programmable device mitigating the risk, e.g. a Safety PLC running a Safety Instrumented System.

I am about to do some automotive FS, so that is potentially ISO 26262, but it might actually be more like IEC 61508, which is the parent standard for the safety group of standards.


He listed standards. They're for industrial processes and machines - think factories where the processes and machines have life-safety hazards.


Waterfall and Agile are tools. If you need to hang a photo, a hammer and a nail. Cut down a tree? Maybe not the hammer and the nail.


Could you use both to good effect? Waterfall to make a plan, schedule, and budget. Then basically disregard all that and execute using Agile and see how you fare. Of course there would be a reckoning as you would end up building the system they want rather than what was spec'd out.


You could. You might even say it's difficult to make any project estimate without your plan being waterfall. Planning and execution are deliberately two very different things, and convincing the customer - or the steering committee - of that is key to a good product.


These are all just heuristics that help people manage the fundamentally unmanageable: the unpredictable future. Everyone does a little bit of everything when working. A big company will waterfall year long strategies with the individual parts agile’d. Individuals will waterfall their daily tasks while working on an agile sprint.


Well… the 737MAX seems to suggest it doesn’t catch all the bugs.


AFAIK the bugs were caught, known about, and deliberately ignored. In fact even when the bug caused a fatal error that brought an instance crashing (to the ground, literally!), it was ignored both by Boeing and the US government.


Saying they 'ignored' it is quite generous, considering the former CEO essentially blamed the pilots (source: https://www.bloomberg.com/news/features/2021-11-16/are-boein...).

Here's an excerpt from the article...

--- “No, again, we provide all the information that’s needed to safely fly our airplanes,” he answered.

Bartiromo pressed: But was that information available to the pilots? “Yeah, that’s part of the training manual, it’s an existing procedure,” Muilenburg said.

“Oh, I see,” she said. But in fact, MCAS wasn’t in the manual, unless you counted the glossary, which defined the term but didn’t explain what the software did. ---

A safety critical feature that can down a plane if not disabled in time... tucked away in a glossary.

The documentary 'Downfall: The Case Against Boeing' goes into great detail about the whole ordeal.


Typical to blame it on PEBKAC. "The pilots should have turned the plane off and on again." - Muillenberg, probably.


To make matters worse, as far as I understand, the bug was declared "out of scope" by Boeing, claiming that dealing with a runaway stabilizer is part of standard 737 training/certification, so even if the MCAS goes bleh, it should be no problem.

Which sounds reckless; after all, if you make a system more complicated by introducing a "feature", at least try to make it fail gracefully, etc.

Then you learn that this glorious safety-critical software thing was fed by one single angle-of-attack measurement device (oh, and to make the system even more mystical, the planes had two of these digitized wind-detector flappy flaps, but only one was active, and it switched on reboots - so if one pilot noticed that the system was behaving badly, and then the second one noticed that it was great after all... the third one had no clue what to expect!)

:|


If you haven't seen it, there is a Netflix documentary about the 737 Max that is worth watching all the way through.


When it's technically feasible, I like every repo having alongside it tests for the requirements from an external business user's point of view. If it's an API, then the requirements/tests should be specified in terms of the API, for instance. If it's a UI, then the requirements should be specified in terms of the UI. You can either have documentation blocks next to tests that describe things in human terms, or use one of the DSLs that make the terms and the code the same thing, if you find that ergonomic for your team.

I like issue tracking that is central to code browsing/change request flows (e.g. Github Issues). These issues can then become code change requests to the requirements testing code, and then to the implementation code, then accepted and become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.

I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."
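To make the "documentation blocks next to tests" idea concrete, here is a minimal sketch of an API-level test written against a business-worded requirement. It assumes JUnit 5 and Java 11+, and the endpoint, requirement ID, and expected payload are invented for illustration, not taken from any real project:

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class OrderApiRequirementsTest {

        /**
         * Requirement (business wording): "A customer can fetch the status
         * of an order they placed, and the response uses plain-language states."
         */
        @Test
        @DisplayName("REQ-042: customer can read the status of their own order")
        void customerCanReadOwnOrderStatus() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/orders/1234/status"))
                    .header("Authorization", "Bearer test-customer-token")
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Assertions are phrased against the API, not the implementation.
            assertEquals(200, response.statusCode());
            assertEquals("{\"status\":\"SHIPPED\"}", response.body());
        }
    }

When product ownership later asks "can we remove this?", the requirement ID and the human-readable wording travel with the test, so the question can actually be answered.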


What we do:

- we track work (doesn't matter where), each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns to green'

- there's one pull request per story

- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed, for example the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green

- even side effects like outgoing emails are verified

- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged

- practically no manual testing as anything that a manual tester would do is likely covered with automated tests

- no QA team

And we have a system that provides us a full report of all the tests and links between tests and tickets.

We run all the tests for all the pull requests; that's currently something like 5000 end-to-end tests (which exercise the whole system) and many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.

All this might sound a bit tedious, but it enables practically CI/CD for a medical system. The test suite is the most complete and valid specification of the system.
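Their stack isn't stated, so purely as a hedged sketch of how one acceptance criterion can map one-to-one onto an end-to-end test, the "big red button" AC above might look roughly like this with Selenium WebDriver and JUnit (the URL, element IDs, and exact colour strings are placeholders):

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class BigButtonAcceptanceTest {

        // AC: "if a user logs in, there's a big red button in the middle of
        // the screen, and if the user clicks on it, then it turns to green"
        @Test
        void loggedInUserSeesRedButtonThatTurnsGreenWhenClicked() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://app.example.test/login");
                driver.findElement(By.id("email")).sendKeys("tester@example.test");
                driver.findElement(By.id("password")).sendKeys("not-a-real-password");
                driver.findElement(By.id("login")).click();

                WebElement button = driver.findElement(By.id("big-button"));
                assertEquals("rgba(255, 0, 0, 1)", button.getCssValue("background-color"));

                button.click();
                // A real test would likely wait for the UI to update here.
                assertEquals("rgba(0, 128, 0, 1)", button.getCssValue("background-color"));
            } finally {
                driver.quit();
            }
        }
    }

A reviewer can then check the PR simply by reading the ACs on the story and looking for one such test per AC.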

(we're hiring :) )


That sounds like a dream. We do medical systems as well, but depend heavily on manual testing. We use digital scans of a patient's mouth to design a restoration in our CAD application, so we have a 3D scene where the user interacts with/manipulates the objects in it.

Not knowing what kind of application you produce, but how do you automate user interactions?


A dream board for any project. Even one PR per PBI/US is already hard to get people to understand, as is the idea that we/they shouldn't start working on a PBI/US without acceptance criteria.

Beyond that, I am unsure about the whole "testing part", especially running all the tests for each PR on typical projects.


Let the product owner (PO) handle them.

The PO has to make the hard decision about what to work on and when. He/She must understand the product deeply and be able to make the hard decisions. Also the PO should be able to test the system to accept the changes.

Furthermore, you don't really need to have endless lists of requirements. The most important thing to know is the next thing that you have to work on.


This actually has a nugget of wisdom. I wish I was more open to soaking up wisdom - and less likely to argue a point - when I was a junior dev. Or still now, really.

Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem. Assuming the team is committed to some form of Agile and you have such a thing as a PO.

However, I also disagree with the main thrust of this comment. A PO should have responsibility, sure. But if that gets translated into an environment where junior devs on the team are expected to not know requirements, or be able to track them, then you no longer have a team. You have a group with overseers or minions.

There's a gray area between responsibility and democracy. Good luck navigating.


> Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem.

In some work environments, there may be unspoken requirements, or requirements that the people who want the work done don't know they have.

For example, in an online shopping business the head of marketing wants to be able to allocate a free gift to every customer's first order. That's a nice simple business requirement, clearly expressed and straight from the user's mouth.

But there are a bunch of other requirements:

* If the gift item is out of stock, it should not appear as a missing item on the shipping manifest

* If every other item is out of stock, we should not send a shipment with only the gift.

* If we miss the gift from their first order, we should include it in their second order.

* The weight of an order should not include the gift when calculating the shipping charge for the customer, but should include it when printing the shipping label.

* If the first order the customer places is for a backordered item, and the second order they place will arrive before their 'first' order, the gift should be removed from the 'first' order and added to the 'second' order, unless the development cost of that feature is greater than $3000 in which case never mind.

* The customer should not be charged for the gift.

* If the gift item is also available for paid purchase, orders with a mix of gift and paid items should behave sensibly with regard to all the features above.

* Everything above should hold true even if the gift scheme is ended between the customer checking out and their order being dispatched.

* The system should be secure, not allowing hackers to get multiple free gifts, or to get arbitrary items for free.

* The software involved in this should not add more than, say, half a second to the checkout process. Ideally a lot less than that.

Who is responsible for turning the head of marketing's broad requirement into that list of many more, much narrower requirements?

Depending on the organisation it could be a business analyst, a product owner, a project manager, an engineer as part of planning the work, an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.


> an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.

In my experience, engineer does it as part of implementation until they burn out and just YOLOs and leaves them for bug reports :p

But to be more serious, I don't think this is the PO's job or any one person's job. This is exactly why engineers should learn about their domain.


> there may be unspoken requirements, or requirements that the people who want the work done don't know they have

That is just restating the problem that the "PO can't define the goals."

It's a bigger problem in the industry. Somehow, the Agile marketing campaign succeeded, and now everyone is Agile, regardless of whether the team is following one of the myriad paradigms.

I can rattle off dozens of orgs that say they're doing Scrum, but maybe 1 or 2 that actually are. Maybe they're doing two weeks of work and calling it a sprint, then doing another two weeks of work... and so on. No defined roles. It's just a badge word on the company's culture page.

The companies that are really doing something Agile are the consultancies that are selling an Agile process.


That would be nice, and maybe I should have clarified why I asked the question. I was asked to add a new large feature, and some bugs popped up along the way. I thought better testing could have helped, and then I thought it would possibly help to list the requirements as well so I can determine which tests to write/perform. And really I thought I could have been writing those myself - PO tells me what is needed generally, I try to determine what's important from there.

Or maybe I just need to do better testing myself? There are no code reviews around here, nor much of an emphasis on writing issues, nor any emphasis on testing that I've noticed. So it's kind of tough figuring out what I can do.


This is good advice for multiple reasons.

One I haven't seen mentioned yet - When Product is accountable & responsible for testing the outputs, they will understand the effort required and can therefore prioritize investment in testable systems and associated test automation.

When those aspects are punted over to architects/developers/QA, you'll end up in a constant battle between technical testing investments and new features.


I don't disagree with you. In fact, I think it's just a restatement of the PO's job description.

But POs who are technical enough to understand the system, and thus what its requirements are, are empirically unicorns.


This is a LOT to put on a PO. I hope they have help.


This is why you need a QA


I'm a technical Product Owner. The QA team are my best friends. They save my butt all the time.


I review software for at least 3-5 companies per week as part of FDA submission packages. The FDA requires traceability between requirements and their validation. While many small companies just use Excel spreadsheets for traceability, the majority of large companies seem to use Jira tickets alongside Confluence. While those aren't the only methods, they seem to be 90% of the packages I review.


Health tech - we also use this combo. The Jira test management plugin XRay is pretty good if you need more traceability.


Xray and R4J plugins make it pretty nice in JIRA... as far as traceability goes it's MUCH more user friendly than DOORS.


Exactly the same process for us, also in healthcare and medical devices.


I would love to see how other companies do it. I understand the need for traceability but the implementation in my company is just terrible. We have super expensive systems that are very tedious to use. The processes are slow and clunky. There must be a better way.


We have been working on software for FDA submissions as well. We use Jama https://www.jamasoftware.com/ for requirements management and traceability to test cases.


I have also used Jama in a couple of companies. One for medical devices and one doing avionics. My experience is that it's quite similar to Jira in that if it's set up well it can work really well. If it's set up poorly it is a massive pain.


Hi, we're trying to build a validated software environment for an ELN tool. I would be interested in learning more about your experience with this software review process if you could spare a few minutes -- jason@uncountable.com


Zooming into "requirements management" (and out of "developing test cases"), there are a couple of open source projects that address specifically this important branch of software development. I like both approaches and I think they might be used in different situations. By the way, the creators of these two projects are having useful conversations about aspects of their solutions, so you might want to try both and see which comes out ahead from your point of view.

* https://github.com/doorstop-dev/doorstop

* https://github.com/strictdoc-project/strictdoc

Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.

How to build test cases is another story.


I was at Lockheed Martin for a few years, where Rational DOORS was used. Now I'm at a smaller startup (quite happy to never touch DOORS again).

I think the common answer is that you don't use a requirements management tool, unless it's a massive system with systems engineers whose whole job is to manage requirements.

Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.

Good tests are a bit closer to living requirements. They can serve to document the expected behavior, and check the system for that behavior.


GitLab. Just use Issues; you can do everything with the free tier. (It's called the "Issues workflow" - GitLab goes a little overboard though, but I'd look at pictures of people's issue lists to get examples.)

My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc... it's way too much for a small team that is not doing any type of management.

Just use some basic labels, like "bug" or "feature" and then use labels to denote where they are in the cycle such as "sprinted", "needs testing" etc. Can use the Boards feature if you want something nice to look at. Can even assign weights and estimates.

You can tie all the issues of a current sprint to a milestone, call the milestone a version or w/e and set a date. Now you have history of features/bugs worked on for a version.

In terms of testing, obviously automated tests are best and should just be a requirement built into every requirement. Sometimes, though, tests must be done manually, and in that case attach a Word doc or use the comments feature on an issue for the "test plan".


If possible, could I get your opinion on a specific example? In my current situation, I was asked to add a feature which required a few (java) classes. So -

* It seems like this would have been a milestone?

* So then maybe a few issues for the different classes or requirements?

* For each issue, after/during development I would note what tests are needed, maybe in the comments section of the issue? Maybe in the description?

* And then automated tests using junit?


I don't know your deployment schedule or rules. I represent milestones as groups of independent issues (bug fixes or new features) that would all go together as a release. I don't use milestones as a group of multiple issues that represent one requirement (that would be referred to as an epic). Epics are part of the paid version; however, there's no reason why you couldn't use milestones this way.

If you have a requirement (doesn't matter how big or small) I'd treat that as 1 issue (regardless of how many java classes or lines of codes need modifying). If the issue is complex then within the issue's description you can use markdown (like checkboxes or bullet points) to identify subset requirements. However, if you can break that large requirement into functional changes that could exist/be deployed separately then I'd probably do multiple independent issues with some type of common identifier in the issue's name (or use your interpretation of milestones and put all those issues into 1).

If you use gitlab as your git repository then tying an issue to a merge request is easy and it would then show you the diff (aka all the changes to source code) that the issue required for implementation.

In terms of tests, same kind of answer - I don't know your rules. Every issue should have a test plan; perhaps using markdown in the issue's description would convey that test plan most easily. If you automate the test using JUnit then the test plan may be nothing more than "make sure test xyz from JUnit passes"; if it's a manual test then the issue's description can have a list of steps using markdown.
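For the JUnit part, a minimal sketch of one convention for tying a test back to its issue: the issue number goes into the test's tag and display name, so the issue's "test plan" can simply say "DiscountTest passes in CI". The issue number, class, and rule below are made up, and nothing in GitLab or JUnit enforces the link - it's just a naming convention:

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountTest {

        // Stand-in for production code (normally in src/main/java); inlined
        // here only to keep the sketch self-contained.
        static long discountedPriceInCents(long priceInCents, double discount) {
            return Math.round(priceInCents * (1.0 - discount));
        }

        // Test plan item from (hypothetical) issue #42: "a 50% discount
        // halves the line item price, rounded to the nearest cent".
        @Test
        @Tag("issue-42")
        @DisplayName("#42: a 50% discount halves the price")
        void fiftyPercentDiscountHalvesThePrice() {
            assertEquals(500, discountedPriceInCents(1000, 0.5));
        }
    }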


Each issue can be anything from a one-line fix to a week- or even two-week piece of work. By that point, though, it's probably a meta-issue that corresponds to other issues, or an issue with inline subtasks (GitHub has checkboxes).

Features cut across code, so no 1-1 mapping with classes. Tests are generally self-documenting and land alongside the feature they are for. You can document them, but likely either a comment in the issue/PR if technically interesting, or in a separate ~wiki doc as part of a broader specification.

Ideally each commit is valid & passes tests (see "conventional commits") and each issue/PR has accompanying tests whether around a new feature or bugfix. Particular test frameworks change every year.


This is super interesting and incredibly difficult. In some regulated environments, like medical devices, you MUST keep track of requirements in your product's technical documentation. I work on a Software Medical Device product and have seen tons of workflows at similar companies. There are many different approaches to this and none that I have seen work really well. In my view this field is ripe for disruption and would benefit from standardization and better tooling.

Here are some options that I've seen in practice.

A: put everything in your repository in a structured way:

pros:

- consistent

- actually used in practice by the engineers

cons:

- hard to work with for non-developers

- too much detail for audits

- hard to combine with documents / e-signatures

B: keep separate word documents

pros:

- high-level, readable documentation overview

- works with auditor workflows

- PMs can work with these documents as well

cons:

- grows to be inconsistent with your actual detailed requirements

- hard to put in a CI/CD pipeline

A whole different story is the level of details that you want to put in the requirements. Too much detail and developers feel powerless, too little detail and the QA people feel powerless.


For option A, how do you put the requirements in the repo? Another user mentioned the possibility of having a "req" folder at the same level as e.g. "src" and "test". Maybe the file structure would match that of the other directories? And what do you use - Excel files, Word docs, .md files, something else?


There are some tools like https://doorstop.readthedocs.io/en/latest/ that streamline it.


I think it's important to keep requirements in Git along with the source code. That way when you implement a new feature you can update the requirements and commit it along with the code changes. When the PR is merged, code and requirements both get merged (no chance to forget to update e.g. a Confluence document). Each branch you check out is going to have the requirements that the code in that branch is supposed to implement.

For simple microservice-type projects I've found a .md file, or even mentioning the requirements in the main README.md to be sufficient.

I think it's important to track requirements over the lifetime of the project. Otherwise you'll find devs flip-flopping between different solutions. E.g. in a recent project we were using an open-source messaging system but it wasn't working for us, so we moved to a cloud solution. I noted in the requirements that we wanted a reliable system, and that cost and cloud-independence weren't important requirements. Otherwise, in two years if I'm gone and a new dev comes on board, they might ask "why are we using proprietary tools for this, why don't we use open source" and spend time refactoring it. Then two years later when they're gone a new dev comes along: "this isn't working well, why aren't we using cloud native tools here"....

Also important to add things that aren't requirements, so that you can understand the tradeoffs made in the software. (In the above case, for example, cost wasn't a big factor, which will help future devs understand "why didn't they go for a cheaper solution?")
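As a made-up illustration of the kind of entry being described (not from any real project), a section of such a requirements .md might read:

    ## Messaging

    - Requirement: message delivery between services must be reliable
      (no silent loss). This is why we moved off the original open-source
      broker to the cloud provider's managed service.
    - Not a requirement: cost. It was not a significant factor in this decision.
    - Not a requirement: cloud independence. Do not refactor towards a
      "portable" solution without revisiting this section first.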

Also, if there's a bug, is it even a bug? How do you know if you don't know what the system is supposed to do in the first place?

Jira tickets describe individual changes to the system. That's fine for a brand new system. But after the system is 10 years old, you don't want to have to go through all the tickets to work out what the current desired state is.


I really like this idea.

However, what would be missing from this is discussions for each requirement specified. Or would you want to include that as well?

It would be nice to have a dedicated directory each for requirements, src, infra, tests and docs. That would make things easier to track over a long period of time, I think.


What happens when your requirements span multiple projects and repositories and microservices including frontend and backend work etc.? That doesn't all fit into a git PR.


In the world I live in (big org using Jira), that sounds like an epic, with lots of stories underneath. Some stories are defined early on, more can be added as the business comes up with new reqs or devs find things that need more work.


As a junior dev, this isn't your job.

Your job is to do what is being asked of you and not screw it up too much.

If they wanted to track requirements, they'd already track them.

People have very fragile egos - if you come in as a junior dev and start suggesting shit - they will not like that.

If you come in as a senior dev and start suggesting shit, they'll not like it, unless your suggestion is 'how about I do your work for you on top of my work, while you get most or all of the credit'.

That is the only suggestion most other people are interested in.

Source: been working for a while.


Sensing some sarcasm, but I agree there is some wisdom in "keeping your place." Not very popular to say that these days, but boy I wish I'd taken that advice more as a junior dev. I still need to do it more.

However there is a spectrum, and if it turns from "listen rather than speak" in a respectful, learning sort of mentality to "shut up and do as I say, no questions", then requirements tools are not going to address the real problems.

In my experience, having requirements and processes and tools being used in a mindful way can be wonderful, but all that pales in comparison with the effectiveness of a well-working team. But that's the human factor and the difficult part.

Source: also been working a while. Seen good teams that were very democratic and also good teams that were very top-heavy militaristic (happy people all around in both scenarios).


Well so the reason I asked this questions is that I did screw up a bit, and I think it could have been caught had I done sufficient testing - but I didn't because it doesn't seem to be part of the culture here, and neither are peer reviews.

So I _was_ trying to do only what was asked of me, just writing the code, but I guess I thought what I did at my previous job could have helped - which is keeping track of what was needed and then how I planned to accomplish and test it.

But yeah, you've got me thinking about how or whether I should broach this topic; I think my lead is great, seems open to ideas, wants things to work well, so maybe I'll just ask what they think about how to avoid these kinds of mistakes.


As a junior dev you shouldn't be able to screw up big time; if you do, that's on the team/company, not on you. As a senior it is trickier, but usually no one should be able to screw up monumentally, and if they do it's a lack of internal process, not on the individual (exceptions being malicious intent).

Changing internal processes without being a decision maker inside the company (e.g. an influential manager/lead, the owner, a VP, etc.) is hard, even if there are clear benefits. If there are things that make no sense, no improvements on the horizon, and you are not learning from your seniors, consider whether it makes sense to move on. Trying to change internal processes at reluctant employers is a common cause of immense frustration (and burnout); don't let yourself get caught in that.


> so maybe I'll just ask what they think about how to avoid these kinds of mistakes

This, 100%.

Don't tell anyone at work you asked on HackerNews and got feedback - they don't want to debate the merits of various approaches. They want it done their way, because it is obviously the right way, or else they would've modified it, right? :)

Most jobs are repetitive, so you eliminate mistakes just by doing it for a while. Hence nothing extra needs to be done, which is exactly how most people like it and why your company has no peer review or much of anything - because it just works, with the least amount of effort, somehow, someway :)


"give it a quick test and ship it out, our customers are better at finding bugs than we are" - lecture from the CEO of a company I used to work for who didn't want me to waste any time testing and didn't want to pay me to do testing. I left soon after that to find a place with a different culture, trying to change it was way too hard


We use an issue tracking system like Jira, Trello, Asana, etc and each "ticket" is a unique identifier followed by a brief description. You can add all other sorts of labels, descriptions, etc to better map to the requirements you get. Next, all git branches are named the exact same way as the corresponding ticket. Unit tests are created under the same branch. After getting PR'd in, the code and unit tests can always be matched up to the ticket and therefore the requirement. For us, this system is good enough to replace the usual plethora of documentation the military requires. It does require strict following that can take extra time sometimes, but all devs on my team prefer it to writing more robust documentation.

Another useful tool, used in conjunction with the above, is running code coverage on each branch to ensure you don't have new code coming in that is not covered by unit tests.


With a small team, I'm using an open source tool called Reqflow (https://goeb.github.io/reqflow/) to trace requirements to source code with a Doxygen keyword, //! \ref RQ-xxxx. It generates a traceability matrix and is quite simple to use (perfect for a small team). In my case, I'm using grep on the source code to create the traceability matrix.
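For anyone unfamiliar with that style of annotation, the in-source marker might look something like this (Java shown here, but Reqflow just pattern-matches the comment text, so the language barely matters; the requirement ID and function are invented):

    public final class EmergencyStopHandler {

        //! \ref RQ-0042
        //! On an emergency stop signal, the commanded output must be forced
        //! to zero before any other processing takes place.
        public static int outputAfterEmergencyStop(int currentOutput) {
            return 0;
        }
    }

A tool like Reqflow, or even a plain grep for "RQ-", can then report which requirement IDs appear in the code and which don't, which is essentially the traceability matrix mentioned above.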

For tracking requirements to tests, I'm using TestLink (testlink.org), where you can enter your requirements from existing documents and link them to test cases. The documentation is not perfect; better to start here:

https://www.guru99.com/testlink-tutorial-complete-guide.html

You can go to Bitnami to get a docker image.


Depends on the industry. In most web services, applications, and desktop software shops, you don't. You track them informally through various tests your team may or may not maintain (ugh), and you'll hardly ever encounter any documentation or specification, formal or informal, of any kind, ever.

I wish this wasn't the case but it's been the reality in my experience and I've been developing software for 20+ years. I'm the rare developer that will ask questions and write things down. And if it seems necessary I will even model it formally and write proofs.

In some industries it is required to some degree. I've worked in regulated industries where it was required to maintain Standard Operating Procedure documents in order to remain compliant with regulators. These documents will often outline how requirements are gathered and how they are documented, and include forms for signing off that the software version released implements them, etc. There are generally pretty stiff penalties for failing to follow procedure (though in some industries I don't think those penalties are high enough to deter businesses from trying to cut corners).

In those companies that had to track requirements we used a git repository to manage the documentation and a documentation system generated using pandoc to do things like generate issue-tracker id's into the documentation consistently, etc.

A few enterprising teams at Microsoft and Amazon are stepping up and building tooling that automates the process of checking a software implementation against a formal specification. For them, mistakes that lead to security vulnerabilities or missed service level objectives can mean millions of dollars in losses. As far as I'm aware it's still novel and not a lot of folks are talking about it yet.

I consider myself an advocate for formal methods but I wouldn't say that it's a common practice. The opinions of the wider industry about formal methods are not great (and that might have something to do with the legacy of advocates past over-promising and under-delivering). If anything at least ask questions and write things down. The name of the game is to never be fooled. The challenge is that you're the easiest person to fool. Writing things down, specifications and what not, is one way to be objective with yourself and overcome this challenge.


Gitlab does have requirements management integrated but it’s not part of the free tier.

[1] https://docs.gitlab.com/ee/user/project/requirements/


I just tried that feature. I added a requirement. I could only add a title and description, which wasn't great. The requirement appeared in the Issues list, which was a bit odd, and when I closed the issue the requirement disappeared from the requirements list.

Whatever that feature is meant to be, it definitely isn't requirements management. Requirements don't stop being requirements after you've written the code.


GitLab employee here - we list Requirements Management as at "minimal" maturity. I'm sure the team would love to hear more about why the feature didn't work for you - you can learn more about the direction and the team here: https://about.gitlab.com/direction/plan/certify/#requirement...


It depends where you are in your career and what the industry at the time offers.

For requirements, use any kind of issue tracker and connect your commits with issues. Jira - people here hate it for various reasons, but it gets the job done. Otherwise GitHub Issues would work (there are problems with GitHub Issues, e.g. cross-repo issue tracking in a single place, but that's another story).

For QA, you want your QA to be part of the progress tracking and have it reflected in Jira/GitHub commits.

One thing I think is of equal importance, if not more, is how the code you delivered is used in the wild. Some sort of analytics.

Zooming out a bit: requirements are what you THINK the user wants. QA is about whether your code CAN do what you think the user wants, plus some safeguards. Analytics is how the user actually behaves in the real world.

A bit off topic here, but QA and analytics are really two sides of the same coin, yet people treat them as two different domains with two sets of tools. On one hand, requirements are verified manually through hand-crafted test cases. On the other hand, production behavioural insight is not transformed into future dev/test cases effectively; it is still done manually, if at all.

Think about how many times a user wanders into an untested, undefined interaction that escalates into a support ticket. I'm building a single tool to bridge the gap between product (requirements and the production phase) and quality (testing).


My first suggestion, wait it out for an initial period and see how much the “requirements” align with the results. Based on my experience, about 3/4 of the time those stating the requirements have no idea what they actually want. I can usually increase the odds of the result matching the actual requirements by interviewing users / requirement generators.

Anyway, there's no point in tracking low-quality requirements that end up being redefined as you build the airplane in flight.


Underrated parent comment.

Requirements are living entities and subject to Darwinian rules. Only the fittest survive.


Having had a similar discussion at work recently, I've written in favour of using Gherkin Features to gather high-level requirements (and sometimes a bit of specification), mostly stored in Jira Epics to clarify what's being asked.

See the post at https://jiby.tech/post/gherkin-features-user-requirements/

I made this into a series of posts about Gherkin, where I introduce people to Cucumber tooling and BDD ideals, and show a low-tech alternative to Cucumber using test comments.

As for actually doing the tracking of feature->test, aside from pure Cucumber tooling, I recommend people have a look at sphinxcontrib-needs:

https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...

Define in your docs a "requirement" block with freeform text (though I put Gherkin in it), then define further "specifications", "tests", etc. with links to each other, and the tool builds the graph!

Combined with the very alpha sphinx-collections, it allows jinja templates from arbitrary data:

Write Gherkin in the features/ folder, and have the template generate, for each file under that folder, a sphinxcontrib-needs entry quoting the Gherkin source!

https://sphinx-collections.readthedocs.io/en/latest/


I have never met a dev who ever enjoyed Cucumber/Gherkin stuff. There's a lot of decorative overhead to make code look friendly to non-coders. Non-coders who eventually never look at the "pretty" code.

Spec-like BDD tests (RSpec, Jest, Spock, et al. - most languages except Python seem to have a good framework) have all the advantages of forcing behavioral thinking without having to maintain a layer of regex redirects.


We're an FDA regulated medical device startup, with a pretty low budget for the moment. Our current setup is two pronged, in-house, and automated.

The first piece is the specification documents, which are simple word docs with a predictable format. These cover how the software SHOULD be implemented. From these documents, we automatically generate the mission critical code, which ensures it matches what we say it does in the document. The generator is very picky about the format, so you know right away if you've made a mistake in the spec document. These documents are checked into a repo, so we can tag version releases and get (mostly) reproducible builds.

The second piece is the verification test spreadsheet. We start this by stating all assumptions we make about how the code should work, and invariants that must hold. These then are translated into high level requirements. Requirements are checked using functional tests, which consist of one or many verification tests.

Each functional test defines a sequence of verification tests. Each verification test is a single row in a spreadsheet which contains all the inputs for the test, and the expected outputs. The spreadsheet is then parsed and used to generate what essentially amounts to serialized objects, which the actual test code will use to perform and check the test. Functional test code is handwritten, but is expected to handle many tests of different parameters from the spreadsheet. In this way, we write N test harnesses, but get ~N*M total tests, M being average number of verification tests per functional test.
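(The setup described here is Google Sheets plus a few hundred lines of Python, so the following is not that system - but as a rough analogue of the "one handwritten harness, many spreadsheet rows" idea, JUnit 5's parameterized tests work on the same principle. The clamp function, column layout, and values below are invented.)

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DoseLimitVerificationTest {

        // Stand-in for the code under verification; in a real setup this
        // would be the generated, mission-critical implementation.
        static double clampDose(double requested, double max) {
            return Math.min(requested, max);
        }

        // Each row is one verification test: inputs plus expected output.
        // Rows exported from a spreadsheet could instead be loaded with
        // @CsvFileSource(resources = "/verification-tests.csv").
        @ParameterizedTest(name = "VT-{index}: requested={0}, max={1} -> expected={2}")
        @CsvSource({
                "1.0, 5.0, 1.0",
                "7.5, 5.0, 5.0",
                "5.0, 5.0, 5.0"
        })
        void doseNeverExceedsConfiguredMaximum(double requested, double max, double expected) {
            assertEquals(expected, clampDose(requested, max), 1e-9);
        }
    }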

All test outputs are logged, including result, inputs, expected outputs, actual outputs, etc. These form just a part of future submission packages, along with traceability reports we can also generate from the spreadsheet.

All of this is handled with just one Google Doc spreadsheet and a few hundred lines of Python, and saves us oodles while catching tons of bugs. We've gotten to the point where any changes in the spec documents immediately triggers test failures, so we know that what we ship is what we actually designed. Additionally, all the reports generated by the tests are great V&V documentation for regulatory submissions.

In the future, the plan is to move from word docs + spreadsheets to a more complete QMS (JAMA + Jira come to mind), but at the stage we are at, this setup works very well for not that much cost.


Thanks for sharing. I just realized your use case could fit quite neatly into Inflex (my app, not open for signups, but has a “try” sandbox), a use case I never considered.

I have an unexposed WIP type of cell that is a rich text document. The rich document you edit has corresponding code, which you can also edit in the other direction. (screenshot https://mobile.twitter.com/InflexHQ/status/14923564133263360...) The neat part is that I added the ability to embed a “portal” to display another cell, such as a number or a table, which can be edited inside the rich text doc or at the cell’s origin. For tutorials, I figured it would also be nice to display the source code of a given cell in a rich editor. E.g.

The formula [plasma * 3.23224 + alpha] yields [7.289221].

You could edit the text (The Formula ... yields ...), or the formula and see the result.

Finally, the rich editor is already in a format ready to be printed as a PDF or Word doc.

Also, the source code being the source of truth means it’s very easy to version the whole system down to SHA512 hashes.

This could unify a use case like yours, where you have the Google Doc and the Google Sheet bridged by Python; it would cut down that iteration feedback loop.

Being content addressable means that only tests that need to run would be run (see Unison lang), rather than running all tests every time, further cutting down on the feedback loop.

One question that comes to mind is: what if you could export the code in the spreadsheet to a general-purpose language like C or Python? Or even Lua? Also, would an on-prem/desktop version of the product be valuable for this use case?

Thanks for the food for thought. This is a niche I never considered! And I’ve worked on a medical device for Amgen before! I forgot all about this.


The data in the spreadsheet is really the most valuable part! I like the idea of using a small DSL to manage this, we actually already have some (very simple) DSL-like things in the spreadsheet to make it easier on editors of the spreadsheet.

Of course, with systems like JAMA, that automatically ingest Word docs and extract requirements, even to the point of generating verification tests, in turn kicking off dependency updates, the development loop becomes quite tight.

We definitely have to improve on the process that exists right now, as it's quite cumbersome to get all set up, and won't really generalize to new devices, but it's been a great learning experience, and I think it's really improved our development process overall!

Definitely a very complex problem to automate in the general case; it makes sense that the JAMAs and Greenlight Gurus of the world can charge ungodly sums per seat.


I'd go for integration or end-to-end tests, depending on your application. Name each test after a requirement and make sure the test ensures the entirety of that requirement is fulfilled as intended (but avoid testing the implementation).

As an example, you could have a test that calls some public API and checks that you get the expected response, assuming your requirement cares about the public API or the functionality it provides.
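
A minimal sketch of that kind of test, assuming pytest and the requests library (the endpoint, payload, and response fields are invented):

    # Hypothetical endpoint, payload, and response fields; adjust to your API.
    import requests

    BASE_URL = "https://staging.example.com/api"

    def test_create_order_returns_confirmation():
        # Requirement: creating an order returns an ID and an "accepted" status.
        resp = requests.post(f"{BASE_URL}/orders",
                             json={"sku": "WIDGET-1", "qty": 2}, timeout=10)
        assert resp.status_code == 201
        body = resp.json()
        assert body["status"] == "accepted"
        assert "order_id" in body  # an ID must exist; its format is an implementation detail

Note that the test checks the observable contract, not how the order is stored.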

I've tried to be as detailed as I can without knowing much about your application: assumptions were made, apply salt as needed.

Personally, I like having a test-suite be the documentation for what requirements exist. Removing or significantly modifying a test should always be a business decision. Your local Jira guru will probably disagree.


It's very useful to keep track of changes and to be able to have text to describe and explain, so for me the simplest tool would not be a spreadsheet but a git repo with one file per requirement, grouped into categories through simple folders. You can still have a spreadsheet at the top level to summarise, as long as you remember to keep it up to date.
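
As a sketch of the shape this can take (the layout and field names are just an example), with a small script so the top-level summary can be regenerated rather than maintained by hand:

    # Hypothetical layout: reqs/auth/REQ-001.md, reqs/billing/REQ-014.md, ...
    # Each file starts with a one-line title; this rebuilds the top-level
    # summary as a CSV so it never drifts from the requirement files.
    import csv
    from pathlib import Path

    def build_summary(root="reqs", out="summary.csv"):
        rows = []
        for path in sorted(Path(root).rglob("*.md")):
            lines = path.read_text(encoding="utf-8").splitlines()
            title = lines[0].lstrip("# ").strip() if lines else ""
            rows.append({"id": path.stem, "category": path.parent.name, "title": title})
        with open(out, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "category", "title"])
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        build_summary()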

Top-level requirements are system requirements and each of them should be tested through system tests. This usually then drips through the implementation layers from system tests to integration tests, to unit tests.

Regression testing really is just running your test suite every time something changes in order to check that everything still works fine.


Agreed, monstrous and expensive project-management software isn't necessary.

git to manage the graph, grep to search the graph, and run a Python http server in the directory if you want to share.


> I would like to track what the requirements/specifications are, and how we'll test to make sure they're met

Why? Why would you like that? Why you?

If it's not happening, the business doesn't care. Your company is clearly not in a tightly regulated industry. What does the business care about? Better to focus on that instead of struggling to become a QA engineer when the company didn't hire you for that.

Generally, if the team wants to start caring about that, agree to:

1. noting whatever needs to be tested in your tracker

2. writing tests for those things alongside the code changes

3. having code reviews include checking that the right tests were added, too

4. bonus points for making sure code coverage never drops (so no new untested code was introduced)


1. Start with a product/project brief that explains the who, why, and what of the project at a high level to ensure the business is aligned.

2. Architecture and design docs explain the “how” to engineering.

3. The work gets broken down to stories and sub-tasks and added to a Scrum/Kanban board. I like Jira, but have also used Asana and Trello.

Testing is just another sub-task, and part of the general definition of done for a story. For larger projects, a project-specific test suite may be useful. Write failing tests. Once they all pass, you have an indication that the project is nearly done.

You can skip to #3 if everyone is aligned on the goals and how you’ll achieve them.


At my work we’ve needed a QMS and requirements traceability. We first implemented it in google docs via AODocs. Now we’ve moved to Jira + Zephyr for test management + Enzyme. I can’t say I recommend it.


Given the large, monolithic legacy nature of our backend, we use a combination of JIRA for feature tracking, and each story gets a corresponding functional test implemented in CucumberJS, with the expectation that once a ticket is closed as complete, it is already part of ‘the test suite’ we run during releases. Occasionally the tests flake (it’s all just WebDriver under the hood), so they require maintenance, but covering the entire codebase with manual tests, even if well documented, would take days, so this is by far our preferred option.


As a bonus, we run the suite throughout the day as a sort of canary for things breaking upstream, which we’ve found to be almost as useful as our other monitoring as far as signalling failures.


"vi reqs.txt" is ideal default baseline

Then only bump up to more complexity (or superficiality) as the benefits exceed the cost/pain: for example, a spreadsheet, or perhaps a Google Doc.

If you're lucky enough to have any reqs/specs which have a natural machine-friendly form, like an assertion that X shall be <= 100ms, then go ahead and express that in a structured way and write test code which confirms it, as part of a suite of assertions covering all the reqs which can be test-automated like this.
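
A minimal sketch of what that can look like, assuming pytest; lookup() and the requirement ID are placeholders:

    # Sketch: "lookup shall complete in <= 100 ms" expressed as an automated check.
    # Ideally the 100 ms limit would be read from the same structured reqs file
    # the rest of the suite uses.
    import time

    REQ_PERF_001_MAX_SECONDS = 0.100

    def lookup(key):
        # Placeholder for the real operation under test.
        return {"key": key}

    def test_REQ_PERF_001_lookup_latency():
        start = time.perf_counter()
        lookup("example")
        elapsed = time.perf_counter() - start
        assert elapsed <= REQ_PERF_001_MAX_SECONDS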


In a small team, I have found that a simple spreadsheet of tests can go a long way. Give it a fancy name like "Subcomponent X Functional Test Specification" and have one row per requirement. Give them IDs (e.g. FNTEST0001).

What sort of tests you want depends a lot on your system. If you're working on some data processing system where you can easily generate many examples of test input then you'll probably get lots of ROI from setting up lots of regression tests that cover loads of behaviour. If it's a complex system involving hardware or lots of clicking in the UI then it can be very good to invest in that setup but it can be expensive in time and cost. In that case, focus on edge or corner cases.

Then in terms of how you use it, you have a few options depending on the types of test:

- you can run through the tests manually every time you do a release (i.e. manual QA) - just make a copy of the spreadsheet and record the results as you go and BAM you have a test report

- if you have some automated tests like pytest going on, then you could use the mark decorator and tag your tests with the functional test ID(s) that they correspond to, and even generate an HTML report at the end with a pass/fail/skip for your requirements
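
Rough sketch of the marker idea (the marker name and IDs are made up):

    # Register the custom marker once in pytest.ini so it doesn't warn:
    #   [pytest]
    #   markers =
    #       fntest(id): links a test to a row in the Functional Test Specification
    import pytest

    def login(user, password):
        # Placeholder for the real thing.
        return password == "correct"

    @pytest.mark.fntest("FNTEST0001")
    def test_login_rejects_wrong_password():
        assert login("alice", "wrong") is False

    @pytest.mark.fntest("FNTEST0002")
    def test_login_accepts_correct_password():
        assert login("alice", "correct") is True

With the pytest-html plugin, pytest --html=report.html --self-contained-html then gives a shareable pass/fail report; pulling the FNTEST IDs into the report table takes a small conftest.py hook, which I'll skip here.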


This is where Cucumber is great.

I know it doesn't get much love on here, but a feature per requirement is a good level to start at. I'd recommend using `Examples` tables for testing each combination.

Having your features run on every PR is worth its weight in gold, and being able to deal with variations in branches relieves most of the headaches from having your requirements outside of the repo.


Been working as a consultant and engineer on FDA regulated software for about 8 years now. I have seen strategies from startups to huge companies.

I have seen requirements captured in markdown files, spreadsheets, ticket management systems like Redmine, Pivotal, Jira, GitLab, Azure Devops, GitHub Issues, and home grown systems.

If I had to start a new medical device from scratch today, I would use Notion + https://github.com/innolitics/rdm to capture user needs, requirements, risks, and test cases. Let me know if there is interest and I can make some Notion templates public. I think the ability to easily edit relations without having to use IDs is nice. And the API makes it possible to dump it all to yaml, version control and generate documentation for e-signature when you need it. Add on top of that an easy place to author documentation, non-software engineer interoperability, discoverable SOPs, granular permissions, and I think you have a winning combination.

yshrestha@innolitics.com


We use a system called Cockpit. It’s terrible to say the least.

I have never seen a requirements tracking software that worked well for large systems with lots of parts. Tracing tests to requirements and monitoring requirements coverage is hard. For projects of the size I work on I think more and more that writing a few scripts that work on some JSON files may be less effort and more useful than customizing commercial systems.


We have Word documents for requirements and (manual) test cases, plus a self-written audit tool that checks the links between them and converts them into hyperlinked and searchable HTML. It’s part of the daily build. We are mostly happy with it. It is nice to know that we can at any time switch to a better tool (after all, our requirements have an “API”), but we still have not found a better one.


Since you mention you're a junior dev, wanted to suggest taking the long road and (1) listening to what others say (you're already doing that by asking here, but don't overlook coworkers much closer to you) and (2) reading up on the subject. Might I suggest Eric Evans's "Domain-Driven Design" as a starting point, and don't stop there? Reading is not a quick easy path, but you will benefit from those that have gone before you.

Of course, don't make the mistake I am guilty of sometimes making, and think you now know better than everyone else just because you've read some things others have not. Gain knowledge, but stay focused on loving the people around you. ("Loving" meaning in the Christian sense of respect, not being selfish, etc; sorry if that is obvious)


You may be aware of this, but this is as much a social/cultural discussion as it is a technical discussion.

Regarding requirements - they are always a live discussion, not just a list of things to do. Do not be surprised when they change, instead plan to manage how they change.

Regarding testing - think of testing as headlights on a car; they show potential problems ahead. Effectively all automated testing is regression testing. Unit tests are great for future developers working on that codebase, but no amount of unit tests will show that a SYSTEM works. You also need integration and exploratory testing. This isn't a matter of doing it right or wrong, it's a matter of team and technical maturity.

A bug is anything that is unexpected to a user. I'm sure this will be controversial, and I'm fine with that.


For smaller teams/projects I like to have as much of the requirements tracking as possible expressed as code, because of how hard it is to keep anything written down in natural language up to date and to have a useful history of it.

I really like end-to-end tests for this, because they test the system from a user perspective, which is how many requirements actually come in, not how they are implemented internally. I also like to write tests even for things that can't actually break indirectly: it means that someone who changes e.g. some function and thus breaks the test realizes that this is an explicit prior specification they are about to invalidate, and might want to double check with someone.


One framework that is appealing but requires organizational discipline is Acceptance Testing with Gherkin.

The product owner writes User Stories in a specific human-and-machine readable format (Given/when/then). The engineers build the features specified. Then the test author converts the “gherkin” spec into runnable test cases. Usually you have these “three amigos” meet before the product spec is finalized to agree that the spec is both implementable and testable.
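
As a rough sketch of how the spec becomes runnable tests, here is what the step definitions might look like with pytest-bdd (a recent version, since target_fixture on When steps is newer); the feature file and steps are invented, and behave or Cucumber proper work much the same way:

    # pricing.feature, written by the product owner (invented example):
    #   Feature: Discounts
    #     Scenario: Loyal customer gets 10% off
    #       Given a customer with loyalty status
    #       When they buy a widget priced at 100
    #       Then they are charged 90

    from pytest_bdd import scenario, given, when, then, parsers

    @scenario("pricing.feature", "Loyal customer gets 10% off")
    def test_loyalty_discount():
        pass

    @given("a customer with loyalty status", target_fixture="customer")
    def loyal_customer():
        return {"loyal": True}

    @when(parsers.parse("they buy a widget priced at {price:d}"), target_fixture="charge")
    def buy_widget(customer, price):
        return price * (0.9 if customer["loyal"] else 1.0)

    @then(parsers.parse("they are charged {expected:d}"))
    def check_charge(charge, expected):
        assert charge == expected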

You can have a dedicated “test automation” role or just have an engineer build the Acceptance Tests (I like to make it someone other than the person building the feature so you get two takes on interpreting the spec). You keep the tester “black-box”, without knowing the implementation details. At the end you deliver both tests and code, and if the tests pass you can feel pretty confident that the happy path works as intended.

The advantage with this system is product owners can view the library of Gherkin specs to see how the product works as the system evolves. Rather than having to read old spec documents, which could be out of date since they don’t actually get validated against the real system.

A good book for this is “Growing Object-Oriented Software, Guided by Tests” [1], which is one of my top recommendations for junior engineers as it also gives a really good example of OOP philosophy.

The main failure mode I have seen here is not getting buy-in from Product, so the specs get written by Engineering and never viewed by anyone else. It takes more effort to get the same quality of testing with Gherkin, and this is only worthwhile if you are reaping the benefit of non-technical legibility.

All that said, if you do manual release testing, a spreadsheet with all the features and how they are supposed to work, plus a link to where they are automatically tested, could be a good first step if you have high quality requirements. It will be expensive to maintain though.

1: https://smile.amazon.com/Growing-Object-Oriented-Software-Ad...


We used to use MKS and switched to Siemens Polarion a few years ago. I like Polarion. It has a very slick document editor with a decent process for working on links between risks, specifications, and tests. Bonus points for its ability to refresh your login and not lose data if you forget to save and leave a tab open for a long time.

For a small team you can probably build a workable process in Microsoft Access. I use access to track my own requirements during the drafting stage.


We use JIRA along with the Zephyr test plugin, which allows you to associate one or more test cases (aka lists of steps) with your JIRA ticket and tracks progress for each test case. Devs create the tickets and our QA creates the test cases. Docs and requirements come from all different departments and in all kinds of different formats, so we just include those as links or attachments in the JIRA tickets.


Depending on how rigorous you want to be, for upwards of $20k a year you can use Medini, but it's pretty hardcore.

https://www.ansys.com/products/safety-analysis/ansys-medini-...


Writing stories/tasks in such a way that each acceptance criteria is something that is testable, then having a matching acceptance test for each criteria. Using something like Cucumber helps match the test to the criteria since you can describe steps in a readable format.


My experience is mixed, non safety-critical industries (but some regulated):

1. Requirements first written in Excel, later imported into Jama, and then into HP QC/ALM for manual tests

Pros: Test reports in HP QC helped protect against an IT solution which was not on par with what was needed and requested

Cons: Tests were not helping the delivery - only used as a "defence"; requirements got stale; overall cumbersome to keep two IT systems (Jama, HP QC) up to date

---

2. Jira for implementation stories, with some manual regression tests in TestRail and automated regression tests with no links besides the Jira issue ID in the commit. Polarion was used by hardware and firmware teams but not software teams.

Pros: Having a structured test suite in TestRail aided my work on doing release testing, more lightweight than #1

Cons: Lots of old tests never got removed/updated, no links to requirements in Jira/Polarion for all tests (thereby losing traceability)

---

3. Jira with Zephyr test management plugin for manual tests, automated tests with no links besides Jira issue ID in commit

Pros: Relatively lightweight process, since a plugin to Jira was used

Cons: Test cases in Zephyr were not updated enough by previous team members

---

4. Enterprise Tester for requirements/test plans, Katalon for e2e tests by a separate QA team, with automated tests inside the team carrying the Jira issue ID in the commit (no links to Enterprise Tester)

Pros: Again, rather lightweight when it comes to automated regression tests inside team

Cons: Process not optimal; Enterprise Tester only used for documentation, not for actual testing

---

Today, there are good practices which help build quality in - DevOps, GitOps, automated tests (on several levels), static code analysis, metrics from production... Try to leverage those to help guide what tests need to be written.

Many times requirements/user stories are incomplete, no longer valid or simply wrong. Or a PO may lack some written communication skills.

Overall, I want to focus on delivering value (mainly through working software) rather than documenting too much, so I prefer a lightweight process - issue ID on the commit with the automated tests. Bonus points if you use e.g. markers/tags/whatever in a test framework like JUnit/pytest to group tests and link them to e.g. a Jira issue ID.


Interesting reading replies here. Way more formalities than I've ever engaged in.

Requirements are codified into test cases. Once signoff of the spec/design/test plan is complete, there's no going back and checking.


disclosure: I'm involved in the product mentioned - https://reqview.com

Based on our experience with some heavyweight requirements management tools, we tried to develop quite the opposite: a simple requirements management tool. It is not open source, but at least it has an open JSON format (good for git/svn), integration with Jira, ReqIF export/import, quick definition of requirements, attributes, links, and various views. See https://reqview.com


If you are interested in a formal approach, Sparx Enterprise Architect is relatively inexpensive, and can model requirements, and provide traceability to test cases, or anything else you want to trace.


Tests with the Jira issue id in them. Simple, easy, scriptable.

Bonus point: you can run the code coverage with all the tests for a certain feature and see which code is responsible for supporting this feature.
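
Something like this, with pytest plus pytest-cov (the issue ID, test, and module name are all placeholders):

    # Convention: put the Jira issue ID in the test name (everything here is invented).
    def request_password_reset(email):
        # Placeholder for the real feature code.
        return "@" in email

    def test_PROJ_1234_user_can_reset_password():
        assert request_password_reset("alice@example.com") is True

    # Coverage for just that feature's tests (assumes pytest-cov is installed):
    #   pytest -k PROJ_1234 --cov=myapp --cov-report=term-missing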


I write Gherkin use cases. It works well as it is plain English. This makes it easy to have in a wiki while also being part of a repo.


Even in the strictest settings, documentation has a shelf life. I don't trust anything that's not a test.


How is Rational (IBM) Requisite Pro these days?

Used that 15-20 years ago and loved it. Any present day insight on this?


There's no good answer here to a question with so little context. What you should be doing, in a company we don't know anything about, could vary wildly.

I've been writing software professionally for over 20 years, in all kinds of different industries. I've had to handle thousands of lines of specs, with entire teams of manual testers trying to check them. I've worked at places where all requirements were executable, leading to automated test suites that were easily 10 times bigger than the production code. Other places just hope that the existing code was added for a reason, and at best keep old working tickets. And in other places, we've had no tracking whatsoever, and no tests. I can't say that anyone was wrong.

Ultimately all practices are there to make sure that you produce code that fits the purpose. If your code is an API with hundreds of thousands of implementers, which run billions of dollars a month through it, and you have thousands of developers messing with said API, the controls you'll need to make sure the code fits its purpose are going to be completely different from what you'll need if, say, you are working on an indie video game with 5 people.

Not having long-term requirements tracking can be very valuable too! A big part of documentation, executable or not, is that it has to be kept up to date and be valuable: it's a pretty bad feeling to have to deal with tens of thousands of lines of code to support a feature nobody is actually using, or to read documentation that is so out of date that you end up with the completely wrong idea, and lose more time than if you had spent the same time reading a newspaper. Every control, every process, has its costs along with its advantages, and the right tradeoff for you could have absolutely nothing to do with the right tradeoff somewhere else. I've seen plenty of problems over the years precisely because someone with responsibility changes organizations to a place that is very different, and attempts to follow the procedures that made a lot of sense in the other organization, but are just not a good fit for their new destination.

So really, if your new small team is using completely different practices than your previous place, which was Enterprise enough to use any IBM Rational product, I would spend quite a bit of time trying to figure out why your team is doing what they do, make sure that other people agree that the problems you think you are having are the same ones other people in the team are seeing, and only then start trying to solve them. Because really, even in a small team, the procedures that might make sense for someone offering a public API vs. someone making a mobile application trying to gain traction in a market would be completely different.


JIRA? Confluence? *ducks*


You should probably first assess whether or not your organization is open to that kind of structure. Smaller companies sometimes opt toward looser development practices since it's easier to know who did what, and the flexibility of looser systems is nice.

TLDR adding structure isn’t always the answer. Your team/org needs to be open to that.



