

Ask HN: How well tested should an MVP be? - wmwong

A Minimum Viable Product is used to validate whether your idea has legs. It should be released early enough that you're embarrassed by it. So how well tested should it be? By this I mean unit/integration tests, not a manual click-through test.

A safe estimate for including tests is to double your development time. Theoretically, you could pump out two untested MVPs versus one fully tested MVP. However, if your MVP has legs, you now have an untested product, which slows you down in the long run.

Three scenarios off the top of my head (not exhaustive):

1. Untested. If it works out, add tests after. You can pump out MVPs quickly, but will be slow afterwards.
2. Fully tested. If it works out, you can move quickly afterwards, but it will take longer to reach validation.
3. Half-tested. Best of both worlds? However, modifying existing tests feels just as hard as (or harder than) starting from scratch. Could be the worst of both worlds?

Should you go for speed or robustness? What has worked for you?
======
brk
I think the answer somewhat depends on what purpose the app fulfills for the
user.

Are you launching hot-or-not_2.0? Well, there's not a lot that can go really
wrong if it totally burns.

Are you launching mint-for-the-enterprise? Well, you might not get a 2nd
chance if you accidentally transfer all of a user's funds to Nigeria as part
of your automated email handling code.

------
ncash
It definitely depends on what your product is. Some customers will not be very
forgiving, while others will be relentless champions for you.

Our approach has been somewhat unusual. Our MVP is mostly a throw-away version
we got out into the world as fast as possible to test the idea. As such, we
didn't do much testing. We excluded most of the back-end stuff we'd like to
have but that wasn't absolutely essential for basic operations (mainly
automation of some tasks). This worked because we can do most of the required
tasks for a small number of customers by hand in the meantime. This has been
very successful and helpful on a number of levels -- we know our core
customers quite well, they get personalized service and seem to be very loyal,
and we were able to launch much earlier. The idea was to avoid spending a lot
of time on development/testing in case things didn't go well.

Now that we know there is a place in the world for our product we can build a
2.0 version with everything we want/need in it. Plus, given our experience in
the field and with the customers, we have a much better idea of what exactly
our software will need to handle. Rather than modify our product to fit the
requirements, we'll just build a new one and migrate the existing data. In the
meantime we have some grassroots growth and customers who give us constant
feedback.

I would note that if you are seriously looking at the possibility of growing
very quickly then you probably don't want to use a throw-away MVP. If we got
hit hard right now we would be in a tough spot for a few weeks (maybe more).
Since our niche is rather small we felt a bit more comfortable taking on this
risk.

------
airfoil
I think it depends on what you're trying to learn from your MVP. If you can
get the data and feedback you need without building tests then I don't see
anything terribly wrong with that.

------
damoncali
As little as possible while still being able to live with the risk and
consequences of the thing blowing up.

------
erik_landerholm
Having a regression test of some kind is always good. You don't have to test
everything, but at least have something you can run each time you make a
change to make sure the basics work.
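The "something you can run each time" can be as small as one script. A minimal sketch in Python, where `create_user` and `charge` are hypothetical stand-ins for an MVP's core operations:

```python
# Minimal smoke-test sketch for an MVP: exercise only the critical
# paths, run it on every change, and let it fail loudly.
# `create_user` and `charge` are hypothetical stand-ins for your
# app's real signup and billing code.

def create_user(email):
    # Stand-in for the real signup path.
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email, "active": True}

def charge(user, cents):
    # Stand-in for the real billing path.
    if not user["active"]:
        raise RuntimeError("inactive user")
    return {"user": user["email"], "amount": cents, "status": "ok"}

def smoke_test():
    """Run the basics end to end; raise if anything is broken."""
    user = create_user("a@example.com")
    assert user["active"]
    receipt = charge(user, 500)
    assert receipt["status"] == "ok"
    # Known-bad input should fail loudly, not pass silently.
    try:
        create_user("not-an-email")
    except ValueError:
        pass
    else:
        raise AssertionError("bad email was accepted")

if __name__ == "__main__":
    smoke_test()
    print("basics OK")
```

The point is coverage breadth over depth: one pass through each critical path catches the "site is completely broken" class of regression without the cost of a full test suite.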

