
Ask HN: Software Testing Philosophy? - doitLP
What is your philosophy for deciding if your software is well-tested?

Beyond the amorphous "do test reports inspire developer confidence?" I realize I don't know how to think about different testing thresholds.

I am taking over engineering at a young startup. Their approach to testing has been some automated smoke tests, and "90%" code coverage for unit tests. 90% seems too high for the speed tradeoff we have to make, but where to draw the line? Why 90% and not 84%?

Should we just try to find something that works and let production bugs be our guide? I would also like to explore other methods of testing I've never used, e.g. crowd testing, but I'm unsure how to fit that into my mental model of knowing well-tested software when I see it.

Any insight or articles are welcome. Thanks!
======
tcbasche
I've found 80% to be a good number for unit tests. BUT things like integration
testing and end-to-end user story testing can really make all the difference,
and complement a solid unit test suite. Also, focusing on the numbers can lead
you down the wrong path: you end up shoehorning in as much testing as possible
for the sake of a higher number, without any tangible benefit.

As it's a young startup I would focus on writing tests for the business-
critical use cases, which should give the team the confidence to "move fast and
break things" knowing the day-to-day functionality is still working as usual.

Ultimately, I would say most code should be covered by unit tests, with
functionality covered by automated integration/end-to-end testing. To me that
would be 'well-tested'.

(and no flakiness!)
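
For instance, a business-critical rule might get a unit test along these lines - a minimal sketch, with a made-up pricing function just to show the shape:

```typescript
// pricing.test.ts - Jest unit test for a hypothetical business-critical rule.
// applyDiscount is an assumed function: it caps discounts at 50% of the subtotal.

function applyDiscount(subtotal: number, discount: number): number {
  const capped = Math.min(discount, subtotal * 0.5);
  return Math.round((subtotal - capped) * 100) / 100;
}

describe('applyDiscount', () => {
  it('subtracts the discount from the subtotal', () => {
    expect(applyDiscount(100, 10)).toBe(90);
  });

  it('never discounts more than half the subtotal', () => {
    expect(applyDiscount(100, 80)).toBe(50);
  });
});
```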

~~~
ajeet_dhaliwal
I broadly agree with this approach. Automated functional end-to-end tests may
not pinpoint which line of code has an issue, but they will tell you there’s a
problem somewhere, which is better than not knowing. Your product code and
tests can be set up to help isolate where the source of the issue may be. Have
developers continue to write unit tests, but if you’re big enough to have one
developer focused on writing automated end-to-end functional tests, it will
give everyone a sense of assurance at each release that nothing is
fundamentally broken. If you’re not big enough, like the parent comment said,
try to cover the major flows, e.g. the sign-up flow and core functionality.
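
For the sign-up example, a smoke-level end-to-end test might look roughly like this (Cypress is just one option; the route, selectors and welcome text are assumptions about your app):

```typescript
// signup.cy.ts - a Cypress smoke test for the sign-up flow.
// The route, the data-testid selectors and the "Welcome" text are all
// assumptions about the app; the point is the coarse end-to-end check.

describe('sign-up flow', () => {
  it('lets a new user create an account and reach the dashboard', () => {
    const email = `smoke+${Date.now()}@example.com`;

    cy.visit('/signup');
    cy.get('[data-testid="email"]').type(email);
    cy.get('[data-testid="password"]').type('a-long-test-password');
    cy.get('[data-testid="submit"]').click();

    // A coarse assertion is enough: we only want to know the core flow
    // is not fundamentally broken, not pinpoint the line of code at fault.
    cy.url().should('include', '/dashboard');
    cy.contains('Welcome').should('be.visible');
  });
});
```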

Someone else made the point about using data effectively and I agree with that
too. I’ve been involved with automated testing and tooling for a few years now,
and solid test reporting and test case management are something I advocate. I’m
the lead dev and founder at Tesults
([http://www.tesults.com](http://www.tesults.com)) - try it, push your results
data to it, and house your manual test cases there. There’s a free tier and if
you contact me I’m happy to expand it. Basically, if you set up your CI system
to run your tests regularly, or even if you just schedule them to run regularly
(e.g. nightly/daily), then having something like this boosts your ROI because
you can track history easily, link bugs, and get indications about which tests
may be flaky.

We don’t know exactly how small your team is; it may be that you have to rely
on manual testing if there’s no one to spare, but I’d try to move some of your
end-to-end testing to automation sooner rather than later if you end up
growing. Otherwise you’ll make testing a bottleneck to releasing; full
regression testing of even a moderately complex application can end up taking
days.

~~~
tcbasche
Why would I want to link tests to bugs? Or track the history of tests? Or
assign someone to a failing test? I've found they either fail or they pass,
and you find why they failed and you fix the failure.

Just genuinely curious about the purpose of this software, as it's quite
expensive, and most CI tools provide enough information for history /
scheduling etc.

~~~
ajeet_dhaliwal
If you don’t see it as something you need, you probably don’t ... at least not
yet.

I’ve been an Engineering Manager and Head of Quality Engineering, and when you
end up with hundreds or thousands of test cases, either in one product or
service or across many, managing the output of testing becomes a challenge.
There may be low-priority regressions that you want to fix later; you create
bugs for them and link them to the failing tests so you know you’re waiting on
those fixes before you expect the tests to pass.

If there is a problem with a test itself then assigning it to a team member to
fix is useful. Tracking history helps identify flaky test cases - they’re
automatically highlighted as flaky. Consolidating tests from several sources
helps you understand the overall health of a project.

If you’re maintaining a product or service that goes through a lot of change
and there are, say, 10 or more people on a team, managing all of this becomes
increasingly difficult. With respect to pricing, it’s free to try and $25/month
per target, so not particularly expensive.

------
muzani
What I do is classify parts as either prototype or release.

Prototype is code that is built for the sake of proving something - it might
be to gauge user behavior, or convince investors/customers you can do this
cool stuff. It might be because it's hard to plan something and easier to hack
it together.

Prototype code is built to be disposable. You can throw it away and replace it
with an optimized version later, and it's decoupled enough to be thrown out
cleanly. For pre-PMF startups, nearly 100% of code is prototype.

For prototypes, I write tests, but it's a manual checklist. It's also usually
focused on user behavior, such as "user tries to upload 15 images from a third-
party photo gallery and gets an error limiting them to 5 images at a time".

There's some point where it's more efficient to write tests than keep pressing
buttons.

Once you decide a feature is concrete (probably because someone is paying for
it), then you can refactor it and do some TDD on it.
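
At that point the checklist item above can turn into an automated test. A minimal sketch, assuming a hypothetical validateUpload helper that enforces the 5-image limit:

```typescript
// uploadLimit.test.ts - the manual checklist item, encoded as a Jest test
// once the feature is considered concrete. validateUpload is hypothetical.

const MAX_IMAGES_PER_UPLOAD = 5;

function validateUpload(imageCount: number): { ok: boolean; error?: string } {
  if (imageCount > MAX_IMAGES_PER_UPLOAD) {
    return {
      ok: false,
      error: `You can upload at most ${MAX_IMAGES_PER_UPLOAD} images at a time`,
    };
  }
  return { ok: true };
}

describe('image upload limit', () => {
  it('rejects 15 images with a limit error', () => {
    const result = validateUpload(15);
    expect(result.ok).toBe(false);
    expect(result.error).toContain('at most 5 images');
  });

  it('accepts 5 images', () => {
    expect(validateUpload(5).ok).toBe(true);
  });
});
```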

------
2rsf
Disclaimer: context is king, we don't know your business so you'll get an
"average" answer.

Google and read about the Modern Testing Principles [1]. It's a set of
principles and a community around them (join the Slack channel through the same
link) that tries to bring change to the "test reports inspire developer
confidence" state of mind. There aren't a lot of tangibles around it yet but
it's a good start.

As for coverage percentage, there are multiple papers showing that (usually)
anything above 80%-85% does not contribute to the quality of the product. Also
remember that unit tests are not great at finding "good" regression bugs,
especially those subtle bugs related to timing, concurrency or external
resources (e.g. your DB).

I can throw out some ideas but you'll need to be more specific for them to be
useful:

\- Use data: from CI tests through just-before-big-release tests to production
bugs, having more data and better ways to utilize it will help you find
problems earlier and solve them faster.

\- Don't be afraid of some level of manual testing; sometimes it's the most
cost-effective solution, especially for startups. Mobile is the best example:
it's very costly and difficult to build a stable test environment, especially
if you want good coverage of phone models.

\- Cover bottom up (aka "The Test Pyramid"): have more unit tests, somewhat
fewer integration tests (contract tests, subsets of modules where the rest is
mocked - see the sketch after this list), fewer still but good API tests, and
finally only a handful of E2E tests and even fewer UI tests.

\- None of the above is set in stone, though, if something else works for you;
for example, if your UI is simple and stable enough, go ahead and use it as
your main test interface.
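
To make the "rest is mocked" layer concrete, here is a minimal sketch of an integration-style test where one module is exercised against a stubbed dependency (all the names are hypothetical):

```typescript
// orderService.test.ts - integration-style Jest test of one module with its
// dependency stubbed out. OrderService and the repository are hypothetical.

interface OrderRepository {
  save(order: { items: string[] }): Promise<string>; // returns an order id
}

class OrderService {
  constructor(private repo: OrderRepository) {}

  async placeOrder(items: string[]): Promise<string> {
    if (items.length === 0) throw new Error('order must contain at least one item');
    return this.repo.save({ items });
  }
}

describe('OrderService with a stubbed repository', () => {
  it('persists a valid order and returns its id', async () => {
    const save = jest.fn().mockResolvedValue('order-123');
    const service = new OrderService({ save });

    await expect(service.placeOrder(['sku-1'])).resolves.toBe('order-123');
    expect(save).toHaveBeenCalledWith({ items: ['sku-1'] });
  });

  it('rejects an empty order without touching the repository', async () => {
    const save = jest.fn();
    const service = new OrderService({ save });

    await expect(service.placeOrder([])).rejects.toThrow('at least one item');
    expect(save).not.toHaveBeenCalled();
  });
});
```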

Finally, remember that testing is hard, and knowing that your testing is giving
you good results is even harder than knowing the same about your actual
product, so iterate and learn as you go.

[1] [https://www.angryweasel.com/ABTesting/modern-testing-principles/](https://www.angryweasel.com/ABTesting/modern-testing-principles/)

------
Raed667
On the frontend side of things, I tend to test user behavior rather than
functions (unless critical).

Coverage tends to hover around ~75%, but that's not a number I actively think
about. Instead I think about it in terms of interactions, possible errors, etc.

~~~
doitLP
Meaning you focus more on integration/e2e testing than on unit tests?

That jibes with my own inclinations. I’m ok with much lower unit test coverage
levels on the UI, where I can use something like Cypress or Robot rather than
just testing functions.

~~~
Raed667
Exactly, pairing testing-library and jest makes these kinds of tests super
easy. No need to spin up Cypress.
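
Roughly like this - the component is made up, but the render/interact/assert shape is the point:

```tsx
// Counter.test.tsx - Jest + React Testing Library, asserting on what the
// user sees rather than on implementation details. The Counter component
// is a made-up example; run with Jest's jsdom test environment.

import React, { useState } from 'react';
import { render, screen, fireEvent } from '@testing-library/react';

function Counter() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

test('clicking the button increments the visible count', () => {
  render(<Counter />);

  fireEvent.click(screen.getByRole('button', { name: 'Increment' }));

  // getByText throws if the text is missing, so this doubles as the assertion.
  expect(screen.getByText('Count: 1')).toBeTruthy();
});
```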

------
AnimalMuppet
I find this question useful: What is your objective evidence that the software
does what it's supposed to?

Note that there's no "percent coverage" whatsoever in that question. Instead,
it's a question of what behaviors of your code you need to test.

