
Ask HN: For a startup, when should I begin writing tests? - hazz99
Hello!

We're building our MVP and have got commitments from our first few customers. I'm still young and haven't managed production software before, only personal projects.

I'm hesitant to sacrifice developer velocity for tests. I believe product agility is a huge asset early on, and I think testing cuts into this.

I'm also afraid of digging an inescapable hole of untestable spaghetti code and tech debt.

I'd love to hear the thoughts of more experienced engineers and founders on this issue! How did you balance it, and what would you have done differently? When should I start testing?

My current idea is to skip testing, and instead put that effort into product development and a robust deployment pipeline with easy roll-backs. Ideally this would allow us to move quickly and revert any big mistakes, without the burden of a complex test suite.

If it's important, we're React+Go, but I'm not looking for stack-specific advice.
======
SkyPuncher
We wrote tests from day 1, but I have mixed opinions about the value of it. It
certainly has helped at times, but it also adds a huge tax to everything.

Pros

* Established the proper culture from day one. If you end up being successful, this is extremely valuable. Culture is very, very hard to change.

* Eliminated a lot of bugs and regressions.

* Gave us confidence working in a regulated space.

Cons

* Adds a lot of time to initial development

* Product complexity doesn't really require testing in the early stages.

* Slows down ability to pivot

-----

If I were to do it again, I'd write fewer tests, but establish milestones for
requiring changes/additions to be tested.

* My rule of thumb would be: If you can't easily test it in the UI, write a test. Or, if it's business critical, write a test.

* Verify authorization and authentication. It's very important.

* Test anything that makes/loses money (for example, Stripe, Recurly, etc.).

* Jest snapshots are quick and easy. Use them.

* Write tests for utility functions

Don't test:

* Database migrations

* UI interactions (unless critical). They take a lot of time to write and UI changes frequently.

* Vendor/3rd party API integrations beyond surface level. In my experience, quick implementations always involve a bunch of mocking/stubbing - making these types of tests less useful.
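The "test anything that makes/loses money" rule costs almost nothing to follow. As a minimal sketch (the `prorate` helper and its numbers are hypothetical, not from any commenter's codebase), a few direct checks on a billing calculation take minutes to write:

```go
package main

import "fmt"

// prorate returns the amount owed in cents when a customer uses
// daysUsed out of daysInPeriod days of a plan costing cents.
// Integer math truncates toward zero; real billing code should
// pick a rounding policy explicitly.
func prorate(cents, daysUsed, daysInPeriod int) int {
	if daysInPeriod == 0 {
		return 0 // guard against division by zero
	}
	return cents * daysUsed / daysInPeriod
}

func main() {
	// Cheap, high-value checks on money-handling code.
	cases := []struct {
		cents, used, period, want int
	}{
		{3000, 15, 30, 1500}, // half the period -> half the price
		{3000, 30, 30, 3000}, // full period -> full price
		{3000, 0, 30, 0},     // unused -> nothing owed
		{3000, 10, 0, 0},     // degenerate period -> charge nothing
	}
	for _, c := range cases {
		if got := prorate(c.cents, c.used, c.period); got != c.want {
			panic(fmt.Sprintf("prorate(%d,%d,%d) = %d, want %d",
				c.cents, c.used, c.period, got, c.want))
		}
	}
	fmt.Println("all proration checks passed")
}
```

In a real project these would live in a `_test.go` file and run under `go test`, but the shape is the same: a table of inputs and expected outputs.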

------
ysavir
Start writing tests the moment you have something to lose. Earlier if you
like, but no later than that moment.

Tests are about system integrity. If you risk losing something due to lack of
integrity, tests are worth your while.

Think of it like health insurance: When are you comfortable forgoing health
insurance, and when are you not?

------
nightsd01
I would highly, highly recommend tests. The problem with not writing tests is
that you _think_ it will save you time, but instead, it will end up slowing
your velocity greatly as your codebase grows and it becomes harder to discover
what specific change broke feature X.

You don’t have to write thousands of unit tests. But you should absolutely
have a basic test suite with integration tests that give you confidence in a
new release. And it will make it easier to hire developers in the future.

A good test suite doesn’t slow you down at all, in fact, it speeds you up by
helping you identify problems rapidly so you can fix code before it becomes a
real problem.

------
ArturT
If you have no tests at all, I would start with simple E2E tests to ensure the
critical parts of your app, like the sign-up page and billing, are working.
You can later add other important sections of your app to ensure the happy
paths for your features work fine.

When I started my own product for Ruby & JS developers
[https://knapsackpro.com](https://knapsackpro.com), I did mostly unit
testing and later relied on E2E tests of the user dashboard to ensure
happy paths are covered.

It's very easy to introduce weird bugs and it's much faster to run CI build to
test your app than manually verify if it works correctly in your browser when
you make some changes.
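In Go, a happy-path check like the sign-up example above can be exercised over real HTTP with the standard library's `httptest` package. This is a hedged sketch (the handler, route, and validation rule are invented for illustration, not from any actual product):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// signupHandler is a stand-in for a real sign-up endpoint:
// it rejects non-POST requests and empty emails.
func signupHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	if r.FormValue("email") == "" {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

// signupStatus spins up a throwaway server, posts a sign-up
// form to it, and returns the HTTP status code.
func signupStatus(email string) int {
	srv := httptest.NewServer(http.HandlerFunc(signupHandler))
	defer srv.Close()
	resp, err := http.Post(srv.URL+"/signup",
		"application/x-www-form-urlencoded",
		strings.NewReader("email="+email))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(signupStatus("a@example.com")) // happy path -> 201
	fmt.Println(signupStatus(""))              // missing email -> 400
}
```

A CI job running a handful of checks like this catches a broken sign-up page long before a user does.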

------
gorgoiler
Working inwards from the edges of your product is a quick strategy that could
provide immediate business value (compared to working from the bottom up
watching metrics like coverage) and that means _end-to-end testing_.

Testing the product with end-to-end tests will stop you from shipping a
completely broken product to your customer, where the most damage can occur.
Add the test script at the end of the build and alert to a dashboard. “Alert”
can mean email, and “dashboard” can mean “PC showing inbox for
testalerts@mycompany”. You can get that running immediately. Alerts won’t tell
you what broke of course, though you can use version control and bisecting to
isolate breakages to commits, which is almost as good.

When you find bugs, then each bug is an opportunity to put some scaffolding
around smaller subsystems as smaller tests to show the actual bug, as well as
the fact that the bug has been fixed. Avoid committing the test without
committing the fix. It’s a bad habit. You can show how the test _used to fail_
in your commit message.

You’ll get an idea as to what are the most problematic pieces of your app. If
one part is much less reliable than the others, then you can select that
subsystem for verification with bottom up unit tests. Don’t spend too much
time on each one though — it’s highly likely that in exercising your APIs
through unit testing, you’re going to see ways in which they need to be
refactored to make more sense and, importantly, become more reliable.

~~~
2rsf
End-to-end tests also tend to be more fragile and require a more complex test
environment; this means you might end up spending too much time fixing broken
stuff (tests, environment) and getting a lot of false failures.

On the other hand if you start early you can take testability into the design
and make at least some of the above problems go away, for example by having a
modular architecture that allows easy mocking.

I do recommend minimizing the use of UI tests (Selenium for web, for example)
as they tend to be even more fragile. Focus on the most obvious sunny-day
scenarios, and never ignore a broken test, as it rots over time.

------
sixhobbits
> I'm also afraid of digging an inescapable hole of untestable spaghetti code
> and tech debt.

How many engineers do you have and how fast are they likely to write code? It
depends more on the product that you're building rather than the stack - if
the tech _is_ the product, spend more time on making sure it works as
advertised. If the tech _enables_ the product, spend less. Any competent
engineer can pick up thousands of lines of technical debt/spaghetti code and
spend a few afternoons swearing at it and figure out how to modify it.
Technical debt is a problem, but more of a problem when you hit millions of
lines, hundreds of engineers, and you need to be sure you can change stuff
without breaking it.

Debt is useful - as long as you can take it on in a calculated way, don't
stress about it too much. You might need to pivot and rewrite everything
anyway, or at least have your fundamental assumptions changed in such a way
that requires a partial rewrite. Tests won't help you much in those scenarios.

------
amirathi
I was in the same boat as you a year ago (solo developer building
reviewnb.com). I wrote API tests for the backend (assert that API x with
input y returns z) and skipped unit tests entirely. This has given me 80% of
the benefit with 20% of the effort.

I wrote these tests after launching the product publicly and just before
support requests / bug fixing period started with the simple aim of avoiding
regressions as I work through the improvements.

If I were to give general suggestions on the topic,

- In most cases, having high-level sanity tests is a low-effort, high-value
decision.

- To write tests or not shouldn't be a binary decision. E.g., if you think a
specific aspect is going to need refinement, then tests for those modules
can help you immensely.

- Going for high % coverage very early on for the sake of mental satisfaction
or bragging rights is usually not a good time investment.

My only regret is not having any automated tests for frontend. Investing in
some basic selenium tests after launch would have been worth it.

------
inertiatic
I'd write high level tests, integration ones for sure, from the get go.

These don't take too much time to write and speed up your development
significantly because you can make significant changes and be sure that
certain flows still work without doing all the manual testing over and over.

------
caseymarquis
Try to use tests to save time? A good compromise might be that if something
doesn't work the first time you try it, you can write a test to make debugging
faster. If it turns out you made a bad design decision down the road, you can
always delete irrelevant tests and reapply the rule about testing to save
time.

Unfortunately, inexperience is going to make it take longer to write tests.
That probably won't be what kills your startup though. Most companies die when
they try to scale due to a variety of organizational issues, and having a few
tests kicking around when you start bringing new developers in will probably
help you a bit in this respect.

~~~
jfe1234
I'm curious - what kind of organisational issues relating to hiring new
developers would be alleviated by having some tests? Are you referring to
incoming developers being put off by the perception of a legacy codebase which
is horrible to work with? Just curious!

~~~
Jtsummers
Not perception, here's what happens in projects I've been on with poor test
quality (low quality tests, too few tests, sometimes essentially no tests,
etc.):

Experienced developers (the ones who made it) are generally fast, quick to
identify the source of an issue or where to make a change. They have better
than even odds of not mucking it up (discovered by users later because testing
in-house is too poor to catch it).

New developers are either paralyzed by uncertainty (will this really do what I
want, does it have an unintended effect elsewhere that I can't detect?) or
make changes and have better than even odds ( _much_ better) of mucking it up
and creating a mess. Hopefully it's discovered before release, but most likely
it's discovered after.

One of the critical things for new developers to a project is the opportunity
to make changes and see what happens, with good feedback. If you permit them
to make the changes, but lack the ability to provide good feedback on the
effect of those changes, then they cannot learn the system _from the
perspective of a developer_. Along with exploratory testing, it should be
possible for developers to make compilable, but arbitrary, changes and see
tests fail. This lets them know the impact of a change and understand how that
section of code relates to the system as a whole. That holistic view is
impossible on larger projects without good testing, or more experience than
you want to wait for.

The importance of this will vary based on the importance of the system and its
size and scope. Smaller size and scope, then they can probably learn the
system well enough (say, less than 100k lines of code, depending on language).
Beyond that size, the system will become a rotten mess and your customers will
leave you if they have any choice.

------
snorberhuis
Once you get experience with a TDD approach, my view is that you don't lose
time on testing. Writing tests is a way to formulate your thoughts and
assumptions. You implement these in your code and selectively check them by
running the tests. You reduce the size of the code you have to think about to
small building blocks, which makes you much faster at writing code. The
amount of rework needed to get components working together is reduced,
allowing you to implement features faster.

Only if your code is complex, will you need a complex test suite. If your code
is complex you won't be fast implementing features anyways. TDD will help you
keep your code simple.

~~~
collyw
Of course you lose time testing. Maybe not writing the tests themselves, but
setting up data for tests is usually the most time-consuming part.

------
ajeet_dhaliwal
Rather than skipping it, as you are currently considering, I'd add tests as
soon as possible for at least some end-to-end coverage of important features
and authentication. If you have two or more developers, I believe regressions
can appear shockingly quickly, and it's kind of sad to be made aware of a
major problem by a user; I know first hand.

One thing that has been touched upon is that you are setting the culture, if
you say you’re hesitant to sacrifice velocity for tests already that will
probably become the long term culture and one day there will be no tests and
everyone will be afraid to make any changes. Tests do take up front investment
but they run multiple times a day and thousands or even tens of thousands of
times over the lifetime of a product and pay it back.

I think striking a balance where you're not overly concerned about coverage
but have key end-to-end flows covered can make things more efficient while
saving your skin and helping you avoid unhappy situations. I'd say write
limited UI front-end tests (these are the greatest time sink) of key flows,
but then as much coverage of APIs as you can manage.

Also I’m founder at Tesults
([https://www.tesults.com](https://www.tesults.com)). Check it out. There’s a
free plan but I’ll go beyond that and give everyone who posted in this thread
one full target free if you email me. So you can report your test results
nicely too and keep track of failures without cost. Any direct feedback is
appreciated.

------
mooreds
I can speak to this. I wrote the vast majority of code for two years for a
startup. The stack, if it matters, was ruby on rails with jQuery.

I would write tests from the get-go. I would have to look at the git history
to be sure, but I believe I started adding to the test suite after about a
month of writing code (I did use an existing open source project as the base
of the product, and it had a lot of tests written). Your fears about spaghetti
code are valid, and tests will help with that. At the least you can refactor
with less fear, though with Go that will be less of an issue than it was
with Ruby. In addition, they will help prevent regressions. When I (or, far
more likely, our users) found a bug in the system, I would write a test before
fixing it. As a result, we had very few bugs return.
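The "write a test before fixing the bug" workflow is simple to picture. As a hedged sketch (the `normalizeEmail` helper and its historical bug are invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeEmail is a hypothetical helper that once had a real-world
// style bug: it lowercased addresses but kept surrounding whitespace,
// so " User@Example.com" could create a duplicate account.
func normalizeEmail(s string) string {
	return strings.ToLower(strings.TrimSpace(s)) // TrimSpace was the fix
}

func main() {
	// This regression check was written first, failed against the
	// buggy version, and now permanently pins down the behavior.
	if got := normalizeEmail("  User@Example.com "); got != "user@example.com" {
		panic("regression: " + got)
	}
	fmt.Println("regression check passed")
}
```

Because the failing test is committed alongside the fix, the bug cannot silently return in a later refactor.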

However, a deployment pipeline is critical too, because otherwise your tests
won't get run often enough, especially if they are slow. (Due to the way I
wrote the tests, and the number of them, it took approximately 20 minutes to
run the full suite in our CI environment.)

Re: your fears of slowing things down. In my experience a test suite, even a
bad one, will speed things up once you are in production, because it prevents
regressions and cuts down on manual testing, both of which chew up additional
time. I saw this at other companies too.

------
fwsgonzo
At the very minimum you should write a test each time you find a bug, so that
it doesn't happen again. If you already know what the problem is, writing a
test case for it should be simple. The only time writing a test suite would be
hard is if you are doing something special like writing a VM guest or embedded
software, where automating tests takes much more work. Even then, just knowing
that the basics work all the time is very beneficial, even if the test suite
is small.

------
sethammons
I think others have made the point well: tests give you the confidence to
refactor and change, and to know if you broke something a customer expects to
work. One caveat is that they should be good tests, not overly coupled ones.
Don't lean on mocks; they are brittle. How so? Mocks usually assert more than
the contracted behavior. If you have a method under test, it doesn't matter
what its guts are doing, and you shouldn't assert its internal state. What
matters is that its observable actions and errors behave as expected. Don't
assert that internal functions of that method are called so many times, in
such and such order, with the right arguments. Just test at the exposed edges.

You will want unit tests this way and to verify error handling and logic flow.
You will want integration tests or acceptance tests to verify things work
together and can perform the actions your users will take. These pay dividends
and prevent the manual need to verify you did not break anything on a given
change.

You also don't need to exhaustively test everything. Start with critical parts.
And, like others said, add to your test suite any bugs to prevent regressions.
A paying customer who reports a bug, sees it fixed, and sees it come back is
not a happy camper.

------
deepaksurti
- Have only the most important use cases covered with automated integration
tests.

- Don't spend time writing unit tests until you have paying customers and a
steady revenue.

------
EliRivers
_I believe product agility is a huge asset early on, and I think testing cuts
into this._

I disagree that testing cuts into agility.

I believe that in this case "agility" means that you can change your product
in significant ways quickly. Presumably, when you make these changes, you need
it to still work. Having tests already in existence will inform you when
you've broken something you didn't mean to change. If you don't have tests
that can run frequently on your swiftly changing, rapidly pivoting codebase,
you will break things and you won't even know until it takes off someone's
foot.

If you don't have these tests, you can't rapidly change your product with
confidence that it still works, and I suspect that you would value "working"
above "not working" very very highly.

If you want agility, you need tests. Lots of automated tests. Testing doesn't
cut into your agility; testing _gives_ you agility.

------
jakobegger
We have automated tests for some parts of our app, depending on how much value
tests add.

For example, code that parses data generally has lots of tests. It's usually
easy to write unit tests for parsers, and it's hard to write parsing code
correctly. These tests are invaluable and catch lots of errors, especially
when you add features to the parser, refactor the code, or optimize it. I
would not touch any parsing code that doesn't have tests, because no matter
how smart you think you are, you are going to break something every time you
make a change.
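Parsers are where Go's table-driven testing style pays off most: each malformed input found in the wild becomes one more line of data. A minimal sketch (the `parseVersion` function is a made-up stand-in for whatever parsing code a product has):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion parses "major.minor" strings, e.g. "1.2".
func parseVersion(s string) (major, minor int, err error) {
	parts := strings.Split(s, ".")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("bad version %q", s)
	}
	if major, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if minor, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

func main() {
	// One row per input, including the malformed ones that
	// historically break parsers.
	cases := []struct {
		in      string
		ma, mi  int
		wantErr bool
	}{
		{"1.2", 1, 2, false},
		{"10.0", 10, 0, false},
		{"1", 0, 0, true},     // missing minor
		{"1.2.3", 0, 0, true}, // too many parts
		{"a.b", 0, 0, true},   // non-numeric
	}
	for _, c := range cases {
		ma, mi, err := parseVersion(c.in)
		if (err != nil) != c.wantErr || ma != c.ma || mi != c.mi {
			panic("parseVersion failed on " + c.in)
		}
	}
	fmt.Println("parser checks passed")
}
```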

We also have tests for code that interfaces with 3rd party components. When
those components change behavior, we can detect problems early.

We don't test most UI code, since tests are often difficult to write, need to
be updated every time we make changes to the UI, and they catch few bugs.

------
new_here
This is a good question. Everyone aspires to have a good test suite but if you
are building an MVP with limited time and resources you can get by using a
service like [https://sentry.io/](https://sentry.io/)

It will email you a stack trace whenever an error occurs and also keeps a log
of them which you can order by frequency and so on. It's an extremely useful
service.

When you get bigger and have regular users and need stability then tests start
to become important so that you don't break existing things when rolling out
new features.

Not having tests comes with a risk though. For example, I once broke our
registration flow and didn't know about it for a week. Flipside is we hardly
had any users registering at that point. So it's probably a good idea to cover
a few of your core user journeys. Hope that helps!

------
pbecotte
If the accepted wisdom were true, that you can trade quality for speed, then
you shouldn't write tests.

I do not believe that to be the case. I find the biggest value of tests is
being able to change the code more quickly. Running tests is MUCH faster than
a regression script.

There is a scale where you'll never have to change the code enough for the
tests to pay off. Personally, I think that scale is quite small, maybe a week
of development work.

However, it takes some experience to get good enough at tests that you feel
that benefit. If you're not there, the answer is probably different. Still,
if your startup's scale is non-trivial, I'd bet that the payoff of learning
that skill and building some testing framework from the get-go would be
worth it.

------
2rsf
Adding a bit to the great answers here. Are you doing Exploratory Testing
[0]? If not, you probably should.

You will probably find that you need a bunch of tools to help you explore
the product, so why not spend some time turning those tools into crude test
automation?

Remember that the best test is the test that actually runs, an unwritten test
doesn't really help you.

[0]
[https://martinfowler.com/bliki/ExploratoryTesting.html](https://martinfowler.com/bliki/ExploratoryTesting.html)

------
leommoore
I think it depends on the nature of your application. In my experience, in a
startup situation the product design can iterate quickly, so both the tests
and the code itself can be out of date very quickly. Personally, I would wait
until I was close to product/market fit before writing tests.

Obviously, if you have a clear understanding of the finished product like in
many enterprise applications then testing makes sense.

I would however, strongly recommend it if you are doing nuclear reactor
software ;-)

------
streetcat1
I see software as composed of: data (what) -> control (when) -> alg (how). And
software should be designed in this order.

Hence you should write tests only when you want to freeze your design in a
specific area. I.e. make sure that you got the domain model right, and freeze
it via tests, etc.

I found this book very valuable:

[https://www.manning.com/books/unit-testing](https://www.manning.com/books/unit-testing)

------
orbifold
I would try to make adding tests as easy as possible. That is, for every
piece of code you write, ask yourself: how would I test this? If in many
cases the code only makes sense in the context of the whole application
running, that may be a danger sign. The fact that you suspect testing might
slow you down suggests this may be the case. For some problems, it is far
easier to write tests than to solve them correctly.

------
udayrddy
I stood up the product in 23 days and launched; it will be three months old
in a couple of days. We have tests only for our public Python library, just
one, nothing more than that. We purposely asked an outsider, a freelancer, to
write test cases for us by looking at the API documentation available on the
website. This way we validated our documentation as well as the Python
library.

------
jlengrand
Start writing tests as soon as you start losing time to regression bugs.
But don't ignore the first signals :).

------
lm28469
If you don't do it from the beginning, chances are you'll never do it.
Testing isn't an afterthought. I mean, it ends up being one for most
companies, and it shows...

------
subhajeet2107
There are various levels of testing. At an early stage you can write only
integration and functional tests; later on, when you get time, you can write
unit tests as well.

~~~
AndrewSChapman
If you don't unit test from day 1, then there's a strong probability that you
won't be writing your code in a way that you _can_ unit test later, e.g. with
mockable dependencies. I'm inclined to say go with TDD right from the start.
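"Mockable dependencies" in Go usually means depending on an interface rather than a concrete client, so tests can substitute a fake. A minimal hedged sketch (the `Mailer` interface and `WelcomeUser` function are invented for illustration):

```go
package main

import "fmt"

// Mailer is the seam: production code depends on this interface,
// not on a concrete SMTP client, so tests can swap in a fake.
type Mailer interface {
	Send(to, body string) error
}

// WelcomeUser is the code under test; it knows nothing about
// which Mailer implementation it is given.
func WelcomeUser(m Mailer, email string) error {
	return m.Send(email, "Welcome aboard!")
}

// fakeMailer records recipients instead of talking to a mail server.
type fakeMailer struct{ sent []string }

func (f *fakeMailer) Send(to, body string) error {
	f.sent = append(f.sent, to)
	return nil
}

func main() {
	fake := &fakeMailer{}
	if err := WelcomeUser(fake, "a@example.com"); err != nil {
		panic(err)
	}
	// The observable action (an email was sent to the right address)
	// is the contract being verified.
	if len(fake.sent) != 1 || fake.sent[0] != "a@example.com" {
		panic("welcome mail not sent")
	}
	fmt.Println("welcome flow verified without a mail server")
}
```

If `WelcomeUser` had instead constructed an SMTP client internally, this test would be impossible without a live mail server, which is exactly the untestable shape the comment warns about.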

------
paulcole
If you have hardly any experience, how can you be confident in your belief
that "product agility is a huge asset early on"?

------
davidjnelson
If you're changing something that's working, but a bug could corrupt your
data, you should probably test that :-)

------
ooooak
> I'm also afraid of digging an inescapable hole of untestable spaghetti code
> and tech debt.

if you are worried about tech debt you should watch "The art of destroying
software". It will be hard to understand at first.

Here is the TL;DR: write services small enough that rewriting one takes only
a week of work. That way you can burn a service down and recreate it once you
truly understand your domain.

Recently I worked on a REST API using Go. I think the Go package system has
good support for writing small and independent packages. Just make sure your
microservices are micro.

You can do the same for your React Codebase.

------
kffcc
I'd like to share my experience: writing tests from day 1 is essential. Think
of tests as the road for your business logic, which is the car. It might seem
expensive to write tests, but bad-quality code slows you down within weeks
rather than months.

Unit tests are important and very useful for engineers. As others mentioned,
thinking about testability keeps code simple, which is a must when you want
to move fast. Testability (have the TDD mindset; if you don't, apply TDD
until you naturally think about tests while writing your code) usually
translates to readable and maintainable code. When you write good unit tests,
most assumptions are documented in an automated way and checked at every
execution of the test suite. When a unit test fails, the error message and
the test name should tell you precisely which assumption is broken; that
ultimately makes you really fast at maintaining and (with a good overall
design) changing code.

When you want to be really fast, have fewer or no integration tests. That is
because integration tests usually span several parts of your software, so a
change in any one part will probably lead to several changes in the
corresponding integration tests. That makes it more expensive to change
things fast. On top of that, many integration tests are more complex to
write because of the work involved in mocking adjacent parts of the software.

System/black-box tests are very important for evaluating business value. You
test the common happy paths your customers take and validate that business
value/feature X still works. The process of deciding and prioritizing these
tests leads to clearer business priorities and, very importantly, a better
shared vision between the developers and the business. The software serves
the purpose of being used by a customer; the system tests are ultimately a
distilled version of what the business is selling, and an automated process
to ensure that this value is delivered.

Even the best software developers make mistakes and forget things. I have not
yet seen a team that profits from not writing tests. It takes some time to
write good software; testing and approaches like TDD are some of the ways to
start climbing that hill.

About your questions:

A robust deployment pipeline is the right way to go and can be leveraged even
better with tests. Example: you can connect your system tests to the
roll-back functionality and automatically re-deploy the last working state.

You are right to worry about spaghetti code and technical debt; in my
experience they are an issue from early on. But also keep in mind that no one
is perfect, so even if it's only you writing code, you will find code written
by you that gives you goosebumps. A nice approach to code quality is in-place
refactoring: you don't do refactoring as a separate task. Each time you touch
code whose structure or design is unclear or not ideal, you refactor as part
of that task, piece by piece. A rule of thumb: when a 1-day task adds 3 days
of refactoring (only fixing the touched code), you know you are sliding down
the slope of lower productivity and need to act; going back up the slope
usually takes a lot of discipline in the team. (That's an extreme example;
personally I act much earlier.)

Your current idea has some risks associated with it. I'd like to expand with
an example. Usually a code base changes and grows with new features; rarely
does it get smaller. When your code base grows and your team grows,
productivity declines (people: communication overhead; technology: an overall
more complex/bigger system; probably many more factors). In the first weeks
you might deliver very fast without tests, but in the future that is not a
given. Let's assume that you and your team are unicorns and will produce
perfect code all the time. Your productivity still declines naturally from
having a bigger system and more people, and in a period of declining
productivity it is very unusual to add workload like writing tests. It might
be easier in your project/startup, but usually it's hard to add more
workload, and the future business will probably be under constant pressure to
deliver fast and follow the market. Now back to the real world: you will
probably face a more pronounced slowdown than in the unicorn case. Several
risks follow from your decision; if you think you can manage them, then try,
keep your eyes peeled, and learn from every mistake along the way (btw,
that's always good advice for every part of your journey ;) ).

Tip: use a tool like SonarQube
([https://www.sonarqube.org/](https://www.sonarqube.org/)) to assess and
measure obvious code quality.

Another point I'd like to share: keep in mind that good tests don't stop all
bugs; there will be enough bugs even with good tests. Tests help
developers/engineers understand code and assumptions, help keep bugs from
being introduced, help you make the right design decisions, and much more.
You can even write highly unreadable code that gets a very good rating from
SonarQube. Maintainability, readability, and changeability of your code are
mostly human problems and only slightly technology problems.

Software quality often relates to tests; if you replace "tests" with
"software quality" in your question, this article has some good talking
points:
[https://martinfowler.com/articles/is-quality-worth-cost.html](https://martinfowler.com/articles/is-quality-worth-cost.html)

