
Unit testing, Lean Startup, and everything in-between - itamarst
https://codewithoutrules.com/2017/03/12/software-testing-big-picture/
======
kkapelon
I can't believe that in 2017 we still need to discuss if the benefits of tests
outweigh their costs.

Anyway, the article seems like a brainstorming session from the author, so I
am not sure that I need to answer all the points. Here are those that stick out:

1) As other people said already, you always write a unit test against a spec.
If you have no spec, then unit tests are not really helpful. When you
prototype something you don't need unit tests. Most startups also would not
need a vast test suite in their early stages.

2) The company that had only UI tests forgot to make them resilient against
UI changes. I don't know the tool they were using, but most GUI testing tools
support the Page Object pattern. If the UI was changing so much that even that
was not enough, then they did not have a spec (see point 1).

3) A/B testing is not unit testing, because by definition you don't have a
spec (if you are still trying to find out whether A or B is OK).

4) Code review is not unit testing, because you will catch only obvious
stuff. Code review is more about the structure of the code than its
correctness. It is far easier to judge correctness by automation than by
visual inspection.

Basically I would suggest removing the word "unit" from your blog post
title.

Unit testing has a very clear definition, and you are confusing it with code
reviews, A/B tests and other unrelated stuff.

Also google "the testing pyramid" for a sound model of unit testing. The tax
preparation company would do well to follow it.

~~~
yomly
So you've got a working prototype, which is 80% of the way towards your spec
(which changes by the day, mind you). Where and when do the tests then get
written? Do you start from scratch when you go from prototype to product? Is
that feasible in a tiny lean startup with few developers and hard
deadlines?

~~~
itamarst
Going from prototype to tested code is possible.

Here's how I'm doing it, for a command-line tool that provides a local
development environment for a remote Kubernetes cluster
([https://datawire.github.io/telepresence/](https://datawire.github.io/telepresence/)):

1. Build prototype, test manually.

2. Have coworkers try it out.

3. Fix UX based on feedback, test manually.

4. Once the initial UX is nailed down, write end-to-end tests.
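Step 4 can stay lightweight: an end-to-end test just drives the installed executable and checks the observable result. A minimal sketch (here `echo` stands in for the real binary, which is an assumption to keep the example self-contained; it is not Telepresence's actual interface):

```python
import subprocess

def run_cli(*args):
    """Run the CLI under test and capture its output.

    `echo` is a stand-in for the real executable so this sketch runs
    anywhere; swap in the actual binary name in practice.
    """
    return subprocess.run(
        ["echo", *args],
        capture_output=True, text=True, check=True,
    )

def test_cli_starts():
    # End-to-end: drive the executable as a user would, not internal
    # functions, so the test survives internal rewrites.
    result = run_cli("--help")
    assert result.returncode == 0
```

Because the test only touches the command-line surface, a later TDD rewrite of the internals doesn't invalidate it.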

Beyond adding features, next I will rewrite the core using test-driven
development (TDD), so that the internals are maintainable, and the end-to-end
tests will ensure the basic functionality continues to work.

The UI is fairly simple, and at this point I have confidence that it won't
change multiple times in a day, so it's worth the effort to stop relying
solely on manual testing and write some automated UI tests.

------
Avalaxy
Unit tests are not really meant to test whether the code does what it's
supposed to do initially, but to make sure that it _keeps_ working when
you're changing a bunch of stuff. For a pre-traction startup/MVP, the latter
doesn't really matter. It will be the first version of the product, so there
won't be a lot of changes. If the first version of the product gets traction,
then it becomes interesting to write more tests.

~~~
itamarst
Exactly: unit tests provide Stable Functionality... and if you're rewriting
everything once a week that's not useful.

If you have library code that won't get rewritten it should have unit tests,
of course, since you want that to be stable.

------
hamilyon2
As right as it may sound, this makes unbacked statements.

Waste of time? I'd argue that the right amount of unit and other automated
tests actually decreases time to market, as it allows you to make changes to
the software much more quickly and push them to production with more
confidence. Yes, even at early stages, when it is not clear whether anyone
needs the software.

Sure, a badly written test suite only slows down development. Tests should be
written so that they indicate breakage early and address the unstable,
complex, but highly used parts of the program.

~~~
itamarst
Some automated test suites are inherently badly written: automated UI tests,
for example, when the UI is rapidly changing. You say tests should indicate
breakage early and address the unstable, complex, but highly used parts of
the program. Yes: if your UI is changing every two days, your automated UI
tests break every two days, with no knowledge gained!

For library code I'd use TDD, though.

UIs are more suited to human testing (though cost pushes towards automated
testing once the UI is stable enough), APIs are more suited to automated
testing. And if you're doing prototyping or usability testing you might not
need either (you might not even need to write code, sometimes).

~~~
kkapelon
Effective UI tests use the page pattern and should be resilient against minor
UI changes.

[https://martinfowler.com/bliki/PageObject.html](https://martinfowler.com/bliki/PageObject.html)

There are several testing tools that support this pattern.
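The gist of the pattern is that tests talk to user intents while selectors live in one place. A minimal Python sketch (the `FakeDriver` is a stand-in for a real Selenium WebDriver, and the selectors are hypothetical, just to keep this runnable):

```python
class LoginPage:
    """Page object: locators live in one place, so a UI tweak means
    one edit here, not a hunt through every test."""
    USER_FIELD = "#username"   # hypothetical selectors
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)
        return self  # in a real suite, return the next page object

class FakeDriver:
    """Stand-in for a real WebDriver; records actions for the sketch."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

# The test reads as an intent; relabeling the button only touches LoginPage.
driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
```

If the button label or selector changes, only `LoginPage` changes; every test that calls `log_in` keeps passing as written.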

~~~
itamarst
In initial stages of product development the UI can change much more
drastically than just minor tweaks.

~~~
kkapelon
So the tax preparation company was still in the initial stages?

It isn't clear in your article.

~~~
itamarst
I'll clarify in future revisions, yes.

------
eikenberry
Tests don't always add time/overhead; I find quite the reverse true in
practice: by using a simplified version of TDD I write code faster than I
used to, writing new features/functionality in isolation with tests
alongside to exercise them. Experience is probably also at play here, as I've
learned this habit after 20 years of programming, but when forced to write
code in other ways (e.g. working on legacy spaghetti code that resists this
method) I find it takes significantly longer.
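One way that simplified red/green loop might look, with an illustrative function (the name and spec are made up for the sketch):

```python
import re

# Red: write the test first, against the behavior you want.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Green: write just enough code to make it pass.
def slugify(title):
    """Lowercase, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # Refactor freely afterwards; the test keeps you honest.
```

The isolation is the point: the feature is exercised through its test, not by clicking through the rest of the application.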

~~~
itamarst
For library code, yeah, I just TDD by default and I doubt it slows me down.
Might speed things up. End-to-end tests... those are expensive, slow and
brittle. Often necessary, though.

------
ekidd
> The web-based user interface The UI was changing constantly, which suggests
> Stable Functionality wasn't a goal. Correct Functionality was still a goal,
> so the UI should have been tested manually by humans (e.g. the programmers
> as they wrote the code.)

This sounds plausible right up until you realize that 1 out of every 3
production deploys breaks _some_ corner of your web UI, and you're always
needing to scramble to fix the deductions table every time you change your
basic popup widget. Or until you realize that you need a full week of time
from two very skilled testers to make sure your UI actually works.

JavaScript UIs bitrot at incredible speed. Sure, if you're using TypeScript
and Angular, you can maybe get away with a bit less testing. But in most cases
it's a terrible tradeoff.

There's a huge comfort in knowing that _every single time you deploy your
application_, your basic signup flow and core features still work. You can
accomplish this with very high-level tools like Cucumber in most cases.
"Stable Functionality" is always desirable, and it allows you to develop
_faster_ because you don't have to spend all your time terrified of breaking
things and endlessly retesting them by hand.
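Cucumber aside, even a plain high-level smoke test run against every deploy gives that comfort. A minimal sketch (the endpoint and the `FakeClient` are hypothetical; in practice `client` would be an HTTP session or a browser driver against the deployed app):

```python
def smoke_test_signup(client):
    """Run against every deploy: exercise the core flow end to end."""
    resp = client.post("/signup", {"email": "a@example.com", "password": "x"})
    assert resp["status"] == 200, "signup flow broke in this deploy"

class FakeClient:
    """Trivial stand-in for whatever drives the deployed app,
    just to keep the sketch self-contained."""
    def post(self, path, data):
        return {"status": 200, "path": path}

smoke_test_signup(FakeClient())
```

The test stays at the level of "can a user sign up", so it keeps passing across internal refactors while still catching the broken-corner-of-production deploys.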

Or you can keep breaking weird corners of production and getting upset because
somebody made a mistake and annoyed an important customer again.

~~~
itamarst
If your UI is relatively stable, yeah. If you're redoing the UX every two
weeks this isn't feasible.

There is no single right answer: it's all about tradeoffs. And in order to
make those tradeoffs you need to understand strengths and weaknesses of each
form of testing.

------
ser0
I have seen this opinion many times, in real life and online. I would say that
if you have no users then yes, tests are optional. _However_, once you have
users that have certain expectations about how the system behaves, then you
need automated tests to at least do your regression tests for you.

In terms of unit tests, the audience stops being the users and starts being
other developers. Good unit tests should do more than just check that a
function/method works as expected; they should also provide a list of
examples of how to use a method and note the return
types/exceptions/etc. for other developers.
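For instance, tests can double as usage documentation (the function and its spec here are illustrative, not from the article):

```python
def parse_port(value):
    """Return `value` as a TCP port number (an int in 1..65535)."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# These tests show accepted inputs, the return type, and which
# exception a caller must handle -- a worked example for the next developer.
def test_accepts_numeric_strings():
    assert parse_port("8080") == 8080

def test_rejects_out_of_range():
    try:
        parse_port("70000")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_accepts_numeric_strings()
test_rejects_out_of_range()
```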

Take the author's example that even after writing Selenium tests the
application was still buggy. There are several issues in just this one
example:

1. The application may be buggy, but did the business know about the bugs?

2. If the business knew about the bugs, did users work around the problems or
did they avoid certain features altogether?

3. When the UI changes, yes, your Selenium tests may be expected to break. If
a user expects to click "Next" and you changed that label to be ">", the test
breaking is a sign you need to communicate something to the user, either
through change-logs or training.

At the end of the day, tests exercise and lock down behaviour of code. Whether
behaviour and functionality is desirable, correct, or worth locking down is
not a technical problem. Furthermore, your tests themselves are code,
therefore some care must be taken to ensure your tests are valid too.

~~~
itamarst
You say "At the end of the day, tests exercise and lock down behaviour of
code." That's exactly why tests can be expensive! If your UI is changing
rapidly, the tests try to lock things down when they shouldn't be locked
down.

Sometimes you want stability. Testing for stability is worth doing then.
Sometimes you _can't_ have stability, and automated tests are just an
expense.

It all depends on the situation and goals.

------
lewq
This post makes an excellent point. I've been bitten both by not enough
automated testing when a product has traction, and by spending too much time
writing automated tests for a product without sufficient traction. Carefully
treading this line is the essence of building a successful technology startup.

An interesting addendum might be: How to manage expectations when
transitioning from the pre-traction to traction stages? How do you decide
between adding new features vs. adding new automated tests to a system when
things seem to be going well? "Technical debt management" might be another way
of phrasing this conundrum.

~~~
beager
Treading that line is the art of the science, to be sure, and I've found that
the right amount of testing is the amount that will allow you to continue
moving fast, even after hitting traction. Technical debt will slow you down,
and just because you've hit traction doesn't mean you no longer have to stick
and move to keep traction and be successful.

What's worse, failing fast in spite of your tests, or squandering your
traction because your technical debt slowed you down?

------
YZF
This is progress from more dogmatic ways of thinking about testing.

One thing that would be interesting to explore is how things can change over
time. Something that you thought would be stable may end up not being as
stable as you thought. I feel this should factor in somehow.

Quality is a multi-pronged effort. It starts with the architecture, design,
and coding. As the article says, the connection between testing and quality
is vague. Testing can give you more confidence that the quality is there, or
it can show you the system under test is of poor quality, but it won't change
one into the other. If you have something of poor quality and you do a lot of
testing, file a lot of bugs, and fix them, most likely it's still of poor
quality. Testing definitely has a role in building high-quality systems but
on its own is insufficient.

This discussion reminds me a little of the efficient frontier in portfolio
management. There is some optimal mix of all your development activities,
just like there is a mix of different assets in your portfolio. The assets
you pick and their mixture affect the probability distribution of the
outcomes. Just like in portfolio management, part of the problem is that the
historic performance of these assets doesn't necessarily indicate future
performance.

------
aussieguy1234
Unit test in cases when it's the fastest way to test your code.

I find manual point-and-click testing is often cumbersome and slower,
particularly if you need to go through a few screens to test your
feature/bugfix.

And always unit test at least your core functionality.

------
henkelibonk
Excellent article. A pragmatic approach to testing is usually needed to get a
good cost/benefit ratio, imo. This may include testing different parts of the
system in different ways and also testing differently over time. For example,
it can be worth skipping automated tests altogether from the start and then
revisiting a month later to add tests to the parts that have now "proven"
themselves to be useful and central.

------
itamarst
BTW, I appreciate all the feedback, even people who disagreed with what I
wrote: this will make my talk better. Thank you!

------
euske
I found the concept of unit testing useful _even when you don't actually
write test code_. Just thinking of making your code testable will already
make it better, because you start caring about clarifying specs, separating
logic, and so on.

------
mannykannot
I thought that perhaps this article would get to the inspection of automated
test suites.

------
valuearb
Excellent

