
It took 12 weeks to ship an MVP I thought would take 3 - davnicwil
https://boxci.dev/blog/why-it-took-12-weeks-to-ship-an-mvp-I-thought-would-take-3
======
pgt
How did you manage to constrain the time overrun to only 4x?

As a mercenary engineer, I've had to answer the estimation question many
times. Whenever I've given estimates, it has only ever shot me in the foot.
Spolsky might have figured it out, but I haven't. So I try to avoid giving
estimates as far as possible and instead focus on demonstrating velocity of a
working system they can choose to stop funding at any time.

Some clients, especially from the construction industry, expect me to give
them Gantt-chart style estimates down to the hour. Here is how I convey the
software estimation problem to them:

If I write the same code twice, I done fucked up. So any code I write is new
code. If I can reuse code, I will, but that's not the code you're asking
about. In your industry, you have built the same house many times before with
the same team, on the same foundation. But you are employing me to design,
architect and construct a new house that has never been built before. If it
existed already, you would just buy it off the shelf instead of employing me.

So think of me as more of an architect than a labourer, and together with you
as the client, we are embarking on a design that you're not quite sure you
want yet, that won't collapse on a potentially unstable foundation.

~~~
endymi0n
Having done hundreds of small to mid-sized projects, I take comfort in the fact
that estimating is a very distinct skill that even the best of developers learn
last.

I've got three tricks in my pocket that at least help me with this:

The first is that I go by the formula mentioned in "The Mythical Man-Month" —
which is that the effort in larger projects is distributed as follows:

1/3 planning
1/6 coding
1/4 component test and early system test
1/4 system test, all components in hand

Developers tend to mainly estimate their _coding_ time, which leads exactly to
the effects mentioned. It also explains all the annoying holdups once the
project is "almost done": that remaining stretch is simply the system test,
and it will add another 25%.

The second is that I simply add 5-10% on top for each person involved in the
project (dev, stakeholder, anything) for communication overhead. It usually
holds true.
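The first two heuristics can be sketched together in a few lines. Everything below (the 7.5% overhead midpoint, the function names, the sample numbers) is my own illustration, not from the comment:

```python
# Brooks' effort distribution from "The Mythical Man-Month", plus a
# per-person communication surcharge. The 7.5% midpoint of the 5-10%
# range and all sample numbers are invented for illustration.

BROOKS_SPLIT = {
    "planning": 1 / 3,
    "coding": 1 / 6,
    "component test": 1 / 4,
    "system test": 1 / 4,
}

def full_estimate(coding_days, people, overhead_per_person=0.075):
    """Scale a coding-only estimate to a whole-project estimate."""
    # Coding is only 1/6 of the total, so the base project is 6x coding.
    base = coding_days / BROOKS_SPLIT["coding"]
    # Add 5-10% (here: 7.5%) per person involved for communication.
    return base * (1 + overhead_per_person * people)

# A 10-day coding estimate with 4 people involved:
# 10 / (1/6) = 60 days base, times 1.3 = 78 days total.
print(round(full_estimate(10, 4), 1))  # 78.0
```

The point the sketch makes concrete: a coding-only estimate undershoots the Brooks total by a factor of six before communication overhead is even counted.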

The third trick is meticulous focus on lean delivery, which involves scoping
down, early testing on as much as possible, hidden and pre-deadlines and
setting tractable sub-milestones.

Unfortunately, none of these techniques accelerates the project; they just
help you be more realistic. If you find something that accelerates complex
projects, let me know ;-)

~~~
ido
> If you find something that accelerates complex projects, let me know

Developer experience. I've been working as a developer for a living for 18
years & I thought I was pretty hot-shit 10 years ago already, but I can
probably do more today in 4 days than I used to in 4 weeks (or 4 months a few
years before _that_ ).

~~~
WrtCdEvrydy
This holds very true up to the architect level.

~~~
mannykannot
As learning curves generally flatten out, that is to be expected.

~~~
ido
I've switched sub-fields of development a few times during my career
(scientific computing, back-end web, front-end web, desktop applications,
embedded, games [various platforms - native desktop/mobile/consoles, html5,
flash, etc]) and I found that helped me keep the learning curve steep.

------
commandlinefan
I can’t accurately estimate non-trivial software. I’ve never met anybody who
can accurately estimate non-trivial software. I’ve never seen anybody claim to
be able to accurately estimate non-trivial software. Yet for some reason
people still insist that estimating non-trivial software is not only possible,
but trivial.

~~~
danpalmer
The trick is to break it down until it’s all trivial. That process takes time
itself, but you can estimate that much more easily. I’ve recently found I end
up giving an estimate of, say, 2 days to get a solid estimate, and then come
away from that with 2-5 weeks of tasks that are no more than a day each.
Estimates of a day are pretty accurate (for me).

Another approach I had some success with was estimating the 80% likelihood
case, i.e. I'm 80% sure I can finish this in the time I'm estimating. Less
accurate, but much more predictably on schedule, which is more useful for many
stakeholders.
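One simple way to operationalise the 80% idea is to scale your gut estimate by the 80th percentile of your own historical actual-vs-estimated ratios. A sketch (the helper, the function names and the ratio history are all hypothetical, not the commenter's actual method):

```python
# Sketch: derive an "80% likely" estimate from your own track record.
# The history of actual/estimated ratios below is made up.

def percentile(values, p):
    """Nearest-rank percentile (p in [0, 100]) of a list of numbers."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Ratios of actual time / estimated time from past tasks (hypothetical).
history = [0.9, 1.1, 1.2, 1.3, 1.5, 1.6, 2.0, 2.2, 2.5, 3.0]

def eighty_percent_estimate(gut_days):
    # 80% of past tasks finished within this multiple of their estimate.
    return gut_days * percentile(history, 80)

print(round(eighty_percent_estimate(5), 1))  # 11.0
```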

A last one is giving rough estimates and being clear about how rough they are
and how much time it will take to get more confidence. I was handed a 50 page
PDF of API docs for a service and asked what the estimate was for integration.
After 15 minutes I said “1-4 weeks; if you want more accuracy, I’ll need a few
days”. The answer was “no problem, we wouldn’t consider it unless it was
< 1 week”.

These examples are all on the weeks to low number of months scale, but the
same applies further up. Having a good understanding of the codebase and
domain is very useful.

Estimating isn’t easy but, like any skill, it can be improved. It requires
some flexibility from product managers, but open communication goes a really
long way and results in better decisions overall and less wasted effort.

~~~
bradknowles
Breaking it down until it's all trivial is the most non-trivial part of the
exercise.

If everyone could do that trivially, then there would never be any problems.

~~~
TeMPOraL
Breaking tasks down is like writing another program. Plenty of tasks branch
off based on their result; you end up building a task DAG and not a task list.
You could try to estimate that with Monte Carlo, but in reality, you won't get
the correct shape of the DAG up front anyway.
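A toy Monte Carlo over a few tasks, with a probabilistic rework branch as a tiny stand-in for the DAG shape the comment describes, might look like this; every number is invented:

```python
# Monte Carlo estimation over tasks with three-point guesses and one
# probabilistic "rework discovered" branch. All figures are invented.
import random

random.seed(42)

tasks = [  # (optimistic, most likely, pessimistic) guesses in days
    (1, 2, 4),
    (2, 3, 8),
    (1, 1, 3),
]

def simulate_once():
    # random.triangular takes (low, high, mode).
    total = sum(random.triangular(o, p, m) for o, m, p in tasks)
    if random.random() < 0.3:  # 30% chance a task spawns extra rework
        total += random.triangular(1, 5, 2)
    return total

runs = sorted(simulate_once() for _ in range(10_000))
print("median days:", round(runs[5_000], 1))
print("90th percentile days:", round(runs[9_000], 1))
```

Even this toy version shows the commenter's caveat: the answer is only as good as the assumed shape of the branching, which you rarely know up front.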

------
kstenerud
The thing about estimating is that you can't factor in the things you don't
know:

- The tool you planned to use has a bug/defect that blocks you.

- The people who said they could give you some information you need, can't.

- You misunderstood or were misled as to the capabilities of a tool you need.

- The parts you need don't actually fit together as planned.

- The documentation you were relying on is wrong.

- A delivery you are relying on won't actually be ready on time.

- A process that's worked every time has an unhandled edge case that you're
going to trigger in this project.

- A regression in a software package will block you because you can't use the
previous version for other reasons.

The more moving parts there are, the more likely it is that one of the
interface points between them will block or delay you in one of these ways,
potentially for weeks, maybe even months.

So for any nontrivial project, you should expect a 2x-8x delivery inflation
over your "safely padded" estimate. You have a 75% chance that you'll deliver
within 2x of your padded estimate if you've made a good estimation, sliding to
worse as the number of moving parts increases.

~~~
phito
You can add personal problems that affect focus to the list.

------
andrewstuart
Developer to self: "It'll take me about a week"

Developer to development lead: "It'll take about two weeks"

Development lead to project manager: "It'll take about 4 weeks"

Project manager to self: "It'll take double that plus 2 weeks"

Project manager to management: "It'll take 12 weeks"

Management to client: "It'll take 8 weeks"

Actual time taken: 16 weeks.

~~~
riantogo
Client: Wow, this is the first project that took only 2x time. We usually
anticipate 4x.

~~~
andrewstuart
>> We usually anticipate 4x.

All software estimation boils down to the long-established scientific
methodology:

(2 X what the last person said) optionally plus 2 weeks

~~~
alttag
Years ago I worked with a developer who had a different methodology: 2x, then
bump the unit of measure. Thus, 1 day -> 2 weeks; 2 weeks -> 4 months. It's
been remarkably accurate over the past couple of decades.

~~~
eberkund
8 weeks => 16 months, 2 months => 4 years

???

Doesn't seem consistent

~~~
WrtCdEvrydy
> 2 months => 4 years

It's consistently correct.

------
msteffen
I once had a manager who insisted on time estimates, but used them in a way
that I actually found valuable. You'd write out all the tasks involved in a
project, add them all up, and then say "this will take T".

Then he'd say, "okay, what would it take to do what you just said in T/2?"
Then you'd cut stuff until you got to T/2. The project then, very often,
would take me the original T.

This seemed to work a lot better than just doubling the initial estimate to
2T.

~~~
techsin101
In UI design you do this exercise where you put everything in 4 categories:
must have, should have, could have and won't have. Take your idea and jam it
into these four while being 100% honest. It does a similar thing to what you
mentioned. But I truly like the idea.

~~~
mettamage
MoSCoW. I learned this studying CS.

------
eyegor
_> Sometimes it takes a conversation with someone else unfamiliar with your
work to really question your fundamental principles_

This is one of my favorite tools for development. I always sketch out an idea
(no coding) and then try to describe it to someone with zero CS background
before diving into an MVP. The times when I didn't follow this process - well,
just as OP described while trying to document the CLI, I basically wrote the
damn thing 2-3x over. So instead of a couple of hours chatting with other
people, I had to spend days/weeks running in circles around my own work.
Please, if you have juniors or if you are a junior, try to drive this lesson
home.

~~~
davnicwil
Whilst working on Box CI, I've had a few conversations that have totally
changed my trajectory and way of thinking about the product. It's really the
best thing you can do to advance things. Sitting writing code is only ever
linear - you make progress one hour at a time, very slowly. A 15 minute
conversation however, especially with someone you haven't talked to about it
yet, can save you days of unnecessary work on something useless or, better
yet, get you to an idea that might have taken weeks to come to on your own.

------
davnicwil
OP here - this is a sidebar but the HN hug was making the page load really
slowly (~10s for me) even though the blog part of the site is cached. Anyway,
because I'm using kubernetes all it took to fix it was bumping the nodes in my
cluster, 4x the replicas, _kubectl apply_ and it's snappy again! All done in
about a minute. What an awesome tool kubernetes is.

~~~
skrebbel
Please take this comment in good faith because I'm genuinely curious: why do
you need a kube-managed cluster of N nodes (and now 4x N) to host a static
blog? I wonder what work goes on under the hood that needs such scale.

~~~
davnicwil
The blog is just attached to the product site
[https://boxci.dev](https://boxci.dev), that's the reason for using kube. It
should be separately hosted, but this was just simpler.

The static blog pages though are all cached via nginx so I was surprised I
needed to increase the nodes & replicas. Short answer is I don't know, and will
need to investigate, but I suspect it's because the kubernetes node instances
themselves are fairly low powered, there's a low cpu limit on nginx, and
perhaps nginx is doing a lot of work serving the js bundle to so many
simultaneous users, which really should be hosted on a CDN (probably also the
blog).

~~~
marcosdumay
> Should be separately hosted but this was just simpler.

Probably shouldn't. If you are already managing an automated environment,
creating another environment that is easier to manage has nearly no upside.

Unless that phrase was meant to use the CDN you talk about later, but again,
do you need a CDN to solve the easiest part of your infrastructure? (Or
rather, do you have enough traffic so that it isn't the easiest part?)

~~~
davnicwil
Yeah my phrasing wasn't that clear there but I essentially meant hosted as in
cached on CDN servers, just to reduce load on the nginx instances. Though my
best guess at the moment for what was causing things to be slow was just
sending the JS bundle to so many simultaneous users. Putting that on a CDN may
be enough.

------
maschinenz
I would consider myself a fairly experienced developer (~10 years), yet I
have had the same experience on countless occasions. People always think of
you as "the expert" who knows everything, so I try to explain my situation
like this:

"Think of me more like being a journalist. You want a good story and have a
rough idea, so I will find the right people and interview them (domain
experts, business people), try to figure out how to distill their knowledge
into something that is both accurate, yet not too detailed and most
importantly has to fulfill the needs of my readers/users (easy to read, yet
super insightful, with some pretty images etc.). Whether the article is about
nuclear physics or siamese koalas is secondary, the process is more or less
the same, yet I am neither a koala expert nor a nuclear scientist. You are the
expert."

I also try to explain why I am unsure about estimating even relatively small
tasks like this:

"You know restaurant (or another famous location) Y, right? We both know how
to walk, I mean we have had like 30 years of experience in doing that,
correct? So how many minutes does it take you to walk from here to restaurant
Y?". The more people in the group, the more interesting it might get.
Estimates usually differ by a factor of 2-4. People usually cannot even
correctly estimate a trivial thing like taking a walk.

~~~
davnicwil
The walking example is really good. I also regularly underestimate how long it
will take me to arrive when meeting a friend, whether walking, by public
transport, whatever.

The reason it's an underestimate even when I know the distances and usual
times is because I think about the ideal set of conditions (not missing train
connections, clear streets for fast walking, being able to find the place
we're meeting instantly on arrival rather than looking around for the entrance
for 5 mins) and go with that. I never account for the possibility of missing a
train connection by 15 seconds and then having to delay the journey by 30
mins. Even though I know, of course, there is a non trivial chance of that
happening.

------
mgbmtl
A colleague from a previous job once told me: "when a developer gives you an
estimate, always double and add one of that unit" (estimate = 2n+1). He told
me this after a while, because he would often nag me to clarify if I meant 7h
or 1 day.

i.e. if you estimate 7h, then calculate 2x7+1 = 15h.

If you estimate 1d, then calculate 2x1+1 = 3d = 21h.

It's a silly joke, but it also gives insight about how we perceive time and
the elasticity of estimates.

And of course, instead of multiplying by 2, you multiply by a time factor that
depends on the risk factors involved (which are also guesstimates, but for me
'x2' would mean a rather low risk factor).

~~~
rhizome
That sounds like a relative of a rule of thumb I read once upon a time:
projects slip by the units they're estimated in. N weeks slips by weeks, days
by days, months by months, etc.

------
amirathi
It's astonishing how many developers resign themselves to the idea that
"estimating software projects is impossible". It's impossible when there are:

- Technical uncertainties (e.g. self-driving cars)

- Human uncertainties (multiple different teams building a single large
software system)

- Scope uncertainties (we don't know what we are actually building until we
get into the weeds)

Outside of these we should be able to make _reasonably_ accurate (+/- 30%)
estimates. I wrote a blog [1] about it if someone is interested, but here are
the main takeaways:

- Don't just estimate writing code; also include time required for testing,
documentation, communication, and setting up infra/deployment

- If a certain part is hazy ("is there a reliable Python library for speech to
text?") then research it enough to know the path ahead

- Break down the system into smaller units until you feel confident estimating
each piece

[1] [https://blog.amirathi.com/2018/02/05/science-of-software-est...](https://blog.amirathi.com/2018/02/05/science-of-software-estimation/)

~~~
marcosdumay
Scope uncertainty is always larger than the certain part. It's often nearly
all of the scope.

Technical uncertainty is also more common than not; unless you are using a
dying platform, the technology will change between estimation and
implementation.

Human uncertainty is avoidable if you work alone. If you are in a team, it is
certain.

Or, to put it shortly: yes, it's perfectly possible, if you are doing a
university project alone on an old platform without an active community.

------
ibudiallo
It's hard enough to estimate accurately when you are building your own project
with no one breathing down your neck. But be a contractor, where your estimate
becomes your salary, and you'll find that you tend to give longer estimates.

I once gave a 3-day estimate for a client project: 2 days to make sure I built
it right and a third for testing. It turned into a 7-week project:
[https://idiallo.com/blog/18000-dollars-static-web-page](https://idiallo.com/blog/18000-dollars-static-web-page)

------
jrumbut
I really wish the estimation process would get turned around. Let stakeholders
decide when they need it, and engineers are responsible for delivering the
best version they can on that date. As the project unfolds, stay in close
communication and make timely decisions about what tradeoffs are acceptable.

My estimates get accurate once I know where we are on the spectrum from quick
and dirty to pushing the boundaries of what's possible. There's a 3 month
version and a 3 year version of a lot of ideas.

~~~
cpeterso
That's the idea driving agile product development: defining and estimating big
projects is hard, so let's do many small projects (sprints). If the
stakeholders have a hard deadline, you can ship whatever you have then because
your product is supposed to be in a shippable state, though possibly with
reduced functionality, at the end of each sprint.

------
edoceo
At my place we don't estimate time at all. It's nearly impossible. But the
value of the fix or feature is loads easier to estimate and measure. Bug $X is
worth $10000, dedicate $5000 to fix it. Check progress. Dev says almost done?
One more cycle. Still seems far? Punt.

I'm always confounded when managers want a time estimate but cannot provide a
value estimate.

The goal is the same: fix important/urgent bugs, develop the product and keep
good velocity.

------
themmes
Thanks for sharing, I just quit my sideproject after a year of coding and find
posts like this helpful to learn from that experience.

> Then I stopped, took a step back and looked at the product from the
> perspective of someone who would want to pay for it.

Though I think this is dangerous advice. You do not know what this perspective
is like (unless maybe your product is a “scratching your own itch”-type
product) and therefore making design changes makes no sense. You will never
know if users maybe even preferred your first “design”, or whether there is a
different blocker (landing page, installation instructions or anything really)
preventing them from starting to use it.

~~~
davnicwil
Thanks for saying that as it's basically the reason I wrote the post. Really
good to share these lessons with fellow bootstrappers as so many of them are
general and applicable between projects, even whole businesses.

Totally agree with what you're saying here and let's see, I might write
another post in a few months essentially saying this was a mistake and a waste
of time, but yeah if you're interested in seeing a comparison, check out the
twitter account for the blog @zero_startup and you can see the old screenshots
and the newer ones.

------
mikekchar
As my colleague says: The time where you know the least about a project is
when you start. This is also the time when your estimates are likely to be the
worst.

When can you say with fairly good certainty how much longer it's likely to
take? In my experience it's about 1/3 the way through the project. I'll give
you my reasoning behind this number.

There is a model of software defect discovery called Littlewood's model. It
basically says that the rate of software defect discovery is a random
variable. As you discover more defects in the code, the number of defects left
to discover diminishes. Assuming you put in a constant effort to discover
defects, your rate of discovery decreases. You can estimate the number of
defects left in the software by looking at the decrease in discovery rates
over time. There are lots of scholarly articles on Littlewood's model, so I
won't go into more detail than that. There are newer models too, but I always
found Littlewood's "good enough" for my purposes.

It occurred to me that requirement discovery might follow a similar curve. As
time goes on and you understand more and more about what you need, the number
of things to discover decreases. If you work on your project in a constant
manner, the rate at which you discover new requirements will decrease over
time. I took data for a number of projects and the discovery rate curves were
very similar to defect discovery rate curves.

The key is to look at the rate of change of discovery. Once it is tailing off,
you can estimate the speed of the drop and get a good idea of how much work
you have left to do. Each curve is different depending on a number of factors,
but by the time you are about 1/3 the way through the curve, you have enough
information to estimate the parameters.

Note: Littlewood's model doesn't actually hit zero, but you can set a
threshold that is "close enough to zero". Where you place it will change the
1/3 figure, but I hope what I'm saying is understandable.
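A rough illustration of the curve-fitting idea: if weekly requirement discoveries decay roughly exponentially, a log-linear fit on the counts projects how much is left. This is a simplified exponential-decay stand-in, not Littlewood's actual model, and the weekly counts are invented:

```python
# Fit ln(count) = a + b * week by least squares, then sum the fitted
# curve over future weeks to project remaining discoveries.
import math

weekly_discoveries = [21, 16, 13, 10, 8, 6]  # new requirements per week

xs = range(len(weekly_discoveries))
ys = [math.log(c) for c in weekly_discoveries]
n = len(weekly_discoveries)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

# Remaining discoveries = fitted curve summed over the next 52 weeks
# (a practical "close enough to zero" cutoff, per the note above).
remaining = sum(math.exp(a + b * w) for w in range(n, n + 52))
done = sum(weekly_discoveries)
print(f"discovered so far: {done}, projected remaining: {remaining:.0f}")
```

The tail-off in the fitted slope `b` is what tells you the discovery curve is flattening, which is the signal the comment says you need before estimates become trustworthy.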

------
robsun
In my career I have estimated a lot of projects. I tried different techniques
and I found
[https://en.wikipedia.org/wiki/Program_evaluation_and_review_...](https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique)
to be the most accurate.

If I was asked to give some hints, I would say:

- in optimistic and pessimistic cases, don't fear using extreme values

- don't estimate on your own; take a reasonable number of developers with a
variety of experience levels

- don't give an exact value to the client; give probabilities ("it is very
likely that we will deliver this in between X and Y man-hours/days")

All estimates I made using PERT were quite accurate (<5% difference between
estimated and real execution time).

~~~
mitchtbaum
Yes. I was about to link this too.

From my experiences, I've found that 3x my initial "estimate / goal / ego trip
timeline" gives the most realistic picture. Sometimes I can hit my best; most
often things come up. It took a little while to accept, mostly because I
wanted to do my best always and be able to promise and fulfill it. But Reality
hit back, and I had to adjust.

It's hard to see the network effects of each micro-interaction in long range
projecting, but once you really break down how long each thing could take, how
that would affect each other related part, and who/what else is involved at
each step/layer, it clears up the fog.

[https://en.wikipedia.org/wiki/Fog_of_war#Military](https://en.wikipedia.org/wiki/Fog_of_war#Military)

------
brigandish
Being asked "how long will it take?" is actually my greatest fear in tech
work. Give me long enough and I'm (fairly) confident I can overcome any other
problem, but estimating time, even for seemingly trivial things…

<gulp>

I'll probably give everyone an upvote here as a) it's nice to see I'm not
alone, and b) there are some really good tips.

Not sure why every tech interview introduces a whiteboard with an algorithm
problem when all they have to do is ask me "how long will this simple command
line client take to build" to really see the difference between the way I
attack a difficult problem!

------
18monthsin
I'm 18 months into a project I thought would take 6. Why? My co-founder bailed
and I took some shortcuts early on to try to secure some clients that fell
through; those shortcuts led to a substantial rewrite.

------
rkangel
I agreed with everything right up until the "extra time for design" section. I
spent last year building a startup and have thought a lot about MVPs. Coming
from a background of building big expensive systems, it took a lot to shake
the mindset of "do it right from the ground up". My cofounder did a great job
at pushing me towards the absolute minimal solution to learn what we wanted to
learn - I don't need to spin up a backend and write CSS, I can use Wix to mock
it up and see how that goes, THEN build it properly.

The idea is to learn as much as possible (read The Lean Startup if you
haven't). If you build too much in one go, you're going to have tied yourself
to one vision of how your users will use the product and be more closed to
learning from them what they actually need. Note that you were persuaded by a
_designer_ to make the design nicer, not a user, potential user or even
product manager.

What was the problem with launching, and then deciding that design was the
next priority and working on it while your product was running?

[disclaimer: I definitely haven't got this figured out as we shutdown the
company and I'm back building Big Expensive Systems]

~~~
davnicwil
> What was the problem with launching, and then deciding that design was the
> next priority and working on it while your product was running?

Great question, and one I asked myself when I was about 2 weeks into the
redesign :-)

On reflection, basically no, you're right, there was no reason I couldn't have
shipped and then improved the design in the meantime. The thing about it not
being good enough for people to pay for was just my opinion - not tested with
real customers.

------
raveenb
My personal experience is that traditional software estimation techniques
don't work in an MVP situation. Yes, it's frustrating as hell to not know how
long before your money, patience and interest run out. In a non-MVP situation
some estimation models work, maybe 40% of the time. But managers make it look
like they work all the time. Cutting scope to hit the deadline doesn't count.

~~~
raveenb
Also, the Viable part of MVP is subjective. As you build the MVP, your idea of
what is viable changes. This needs to be accounted for. The minimum product
has to be viable for a paying customer. If the MVP is not a paid product then
you can get away with less polish; not so with paid products.

~~~
davnicwil
Exactly what happened with Box CI :-)

The _viable_ I had in mind was 'it works' - later realised that actually
viable was 'I think there is a chance I can ask people to pay for this' -
turned out that second part around doubled the already overrun build time!

------
giancarlostoro
Yep. My boss and I estimated rewriting what looked like a really simple
Silverlight application in HTML5 at about 3 months. He's far better at
estimating than I am, and a year later we finally did it. Problem was we chose
.NET Core because it was more modern, and who knows if Microsoft will dump
.NET Framework at some point (and honestly, after .NET Core 3.0 I could
finally see this being possible). So we upgraded to .NET Core.

Estimating requires you already understand the very thing you're estimating.
This is why plumbers, and electricians give you reasonable estimates. Heck
when we hired movers after buying our first home they estimated two hours, and
were done just a few minutes before the 2 hour mark on the spot.

The issue with programming is, your tools change, your approach changes with
new iterations (usually for the better!) and you notice new problems, or
sometimes you do one thing in 5 minutes, and your second run-through using a
similar approach is broken because it hits an edge case you didn't anticipate
code-wise.

------
notjustanymike
I've gotten pretty close with two 4-month projects this year (2 weeks off in
both cases). It took two weeks of planning, research, and understanding
roadblocks to get to that point.

It's doable, really, and it feels an awful lot like waterfall when you're in
the moment. The most important steps were:

1. Write it down

2. Don't keep it to yourself

Everyone else is smarter than me and provided immeasurably good feedback.

~~~
davnicwil
2 is really helpful, 100% agree.

I shared my deadlines and progress on this build via the blog and on twitter
under @zero_startup and I would say doing that probably pushed me to ship the
MVP at least 50% faster.

------
jordiee
Man, you're telling me. I planned on spending 4 months on
[https://appdoctor.io](https://appdoctor.io) and ended up launching it after 1
year and 2 months.

~~~
davnicwil
Have heard so many stories like this. It's unbelievable how much longer these
things can take compared to what you think at the idea stage. How's it going
now?

------
mikepalmer
"What's the idea for Box CI? A CI service that does everything for you, except
for running the builds."

Marketing advice: define "CI" near the top of the first page. The word
"Integration" is not on the page either. I am familiar with Continuous
Integration but not the abbreviation CI.

Adding to the confusion, the font on (most of) the home page is sans serif, so
CI cee-eye looks like Cl cee-ell.

Moreover, you mention CLI on the page... though I did know what that is!

Good luck with it.

~~~
davnicwil
Thanks for this. Yeah you're right - it's that thing of when you are so
focused on something for ages, lots of things about it seem obvious to you
that aren't necessarily at all to other people coming in cold. This is useful
feedback.

------
Cederfjard
Is there any well-regarded book, study, methodology or even individual who
claims that the development time of non-trivial software can be accurately
estimated?

------
mtlynch
Interesting writeup! Thanks for sharing.

>Seems easy right - it's a CLI - just document every option and you're done in
half a day. That's what I thought too.

>But then you realise that each individual option is its own thing that
requires thought to explain, and crucially, interacts with all other options.
It's not enough to explain it in isolation. You need to tie everything
together. You need examples. You realise from doing all this that there's a
simpler way to name or combine options, you go back and make that change and
have to update all the examples, explanations, etc.

You question whether your MVP at that stage was viable, but when I read this,
I wondered whether it was actually _minimum_. Is it possible that you added
too many options and your MVP would have been easier to ship if it had been
more limited in scope?

>The product did not look like something anyone would want to pay for

One pitfall to look out for is that you probably can't accurately predict what
customers pay for until you talk to a real customer. Maybe things like
aesthetics that _seem_ important wouldn't actually be important to a customer
if your product solves their problem. A designer gave you feedback, and they
no doubt have valid expertise, but they're also not your customer.

I had the experience of shipping too late last year:

[https://mtlynch.io/shipping-too-late/](https://mtlynch.io/shipping-too-late/)

tl;dr of my takeaways:

* Even if you've been warned about startup pitfalls a million times, sometimes you just have to make the same mistake to learn it.

* The goal of the MVP should be to get something in the customer's hands ASAP so you can hear their feedback. If you focus on polish and design, you might be polishing something that nobody wants.

* Question whether your "must haves" are actually just ways of delaying scary conversations with customers.

~~~
gspetr
Yep, the good old "build it and they'll come" trap for the naive engineer that
I've stepped into myself. Thankfully, having read Rob Walling's book now I
know that marketing comes before code:
[https://news.ycombinator.com/user?id=rwalling](https://news.ycombinator.com/user?id=rwalling)

It's a bit dated now, but most of the advice is still relevant to solo
engineers looking to become solo founders.

~~~
mtlynch
I like that book too! I wish I had read it before shipping that first MVP.

My notes: [https://mtlynch.io/book-reports/start-small-stay-
small/](https://mtlynch.io/book-reports/start-small-stay-small/)

~~~
davnicwil
Follow up to my other comment, read your post and it's extremely entertaining
- found myself laughing at (you can probably guess) the estimates part and a
lot of the other stuff in there I know so well I felt almost as though you
were writing my own thoughts better than I could :-)

Very cool to hear from other bootstrappers / solo founders like yourself
talking about things they've done, successes yes but also failures and
lessons, and that was also my aim with this post. I feel like it's a pretty
small niche within the much larger overall 'startup' world.

~~~
mtlynch
Oh, thanks! I'm glad you liked it.

------
redroot
Hofstadter's Law: It always takes longer than you expect, even when you take
into account Hofstadter's Law.

------
celticninja
At my place we point stories using the Fibonacci series. It works well to help
identify that anything over an 8 usually needs to be broken down further. 5
and 8 are pairing stories; 1 and 2 are for when you have a half day available
(depending on the seniority of the dev) or for a Friday afternoon if you
finish another ticket.
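The pointing conventions described above (Fibonacci scale, break down anything over an 8, pair on 5s and 8s) could be sketched roughly like this. The scale and thresholds come from the comment; the function name and labels are purely illustrative:

```python
# Fibonacci-style story-point scale, as described in the comment above.
POINT_SCALE = (1, 2, 3, 5, 8, 13, 21)
BREAKDOWN_THRESHOLD = 8  # anything pointed above 8 should be split up

def triage(story: str, points: int) -> str:
    """Return a rough triage label for a pointed story (illustrative only)."""
    if points not in POINT_SCALE:
        raise ValueError(f"{points} is not on the point scale {POINT_SCALE}")
    if points > BREAKDOWN_THRESHOLD:
        return f"break down '{story}' into smaller stories"
    if points in (5, 8):
        return f"pair on '{story}'"
    if points in (1, 2):
        return f"'{story}' fits a half day or a Friday afternoon"
    return f"'{story}' is a normal solo story"

print(triage("migrate billing", 13))  # -> break down 'migrate billing' into smaller stories
```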

------
m12k
I think most programmers tend to do their estimates in this wonderful parallel
dimension where the code base they will be working on is pristine, with well-
designed, documented and thoroughly tested code that they get to work on
without interruptions or meetings of any kind.

~~~
davnicwil
The thing is in this case I'm building this thing from scratch on my own, so
literally without interruptions or meetings of any kind, and all the code,
architecture, devops, everything is exactly how I want it, and the estimate is
still wildly wrong :-)

------
m0zg
That's why, when people ask me to estimate work, I tell them that if they need
a hard deadline things will (counterintuitively) take much longer, and try to
convince them that hard deadlines are bullshit. If you want a hard deadline, I
have no choice but to allocate a huge amount of buffer time, at least twice
the "aggressive" estimate so that if shit doesn't go as planned (which is what
usually happens) I don't miss the deadline. And I have to plan for the worst
possible kind of excrement hitting the fan, which is not realistic. And then,
in spite of that, people are still under more stress, because procrastination
being in human nature, hard shit gets done last, and there's more in it that
can go wrong.
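The padding logic described above is simple arithmetic: when a hard deadline is demanded, commit only to the aggressive estimate multiplied by a buffer factor of at least 2. A minimal sketch, where the 2x factor comes from the comment and everything else (names, defaults) is made up:

```python
def committed_estimate(aggressive_days: float, hard_deadline: bool,
                       buffer_factor: float = 2.0) -> float:
    """Pad an aggressive estimate when a hard deadline is required.

    buffer_factor=2.0 matches the "at least twice the aggressive
    estimate" rule of thumb above; it is a heuristic, not a law.
    """
    if hard_deadline:
        # Plan for the worst case: only commit to the padded figure.
        return aggressive_days * buffer_factor
    # No hard deadline: ship incrementally against the raw estimate.
    return aggressive_days

print(committed_estimate(10, hard_deadline=True))   # 20.0
print(committed_estimate(10, hard_deadline=False))  # 10.0
```

This also shows why hard deadlines are counterintuitively slower: the committed figure is double the time the work would likely take without one.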

Consider instead shipping stuff incrementally, feature by feature, when
features are done, without any hard deadlines. Both the speed of delivery and
quality will be improved compared to hard deadline driven development.

Plans are worthless. Planning is indispensable.

------
qaq
I think it depends. I work on a large enterprise product and we are generally
within 20% of the estimate. If some prominent team members were not so
aggressive in planning, that figure would be smaller. The average age on the
team is over 40, with around 20 years of average experience.

~~~
mathgladiator
Experience makes a huge difference in planning, simply due to knowing where
all the bodies are buried and where new ones are likely to emerge.

------
thrownaway954
so basically all the common reasons why every deadline is missed. nothing new
here.

------
gaoshan
I started multiplying estimates by 3 and found it to be, in general, much
closer to reality once all was said and done.

------
MrStonedOne
What is an MVP?

~~~
grzm
Minimum viable product.

[https://en.wikipedia.org/wiki/Minimum_viable_product](https://en.wikipedia.org/wiki/Minimum_viable_product)

------
_pmf_
You're pretty fast.

------
jillesvangurp
Viability is a KPI and not a set of requirements for some product.

You can't plan for a product to be viable. You first build a product and then
you verify whether it is viable. At best you can formulate a hypothesis about
what set of features you think is good enough in terms of KPIs and then plan
to build only that. However, you have to factor in the likelihood that your
hypothesis is wrong and also that the process of building something should
result in refining that hypothesis over time. If that doesn't happen, you are
not learning and you are probably not really building a viable thing.

The fastest way to get to viability is to take baby steps: short
sprints/iterations, ship often, re-assess where you are every step of the way.
Do the most valuable/risky/uncertain things as early as you can so you can
adjust course if your assumptions about their value turn out wrong. Most
startups get this wrong and fail for this reason because by the time they
figure out they are on the wrong track they've already wasted most of their
seed funding on building pointless things.

The lean movement tends to focus too much on the M part, which carries a
built-in risk of products being unexciting and ultimately non-viable. It's
great if you are copying somebody else's business model or building some kind
of marketplace. It's not so great if you are trying to do something new.

Lean has a tendency to postpone value creation until you've built a lot of
low-value commonalities like a login system, user management, or crap like
that, which every startup seems to spend ages on without ever getting really
right. If you hear the words MVP and Android, iOS, and web in one sentence,
that translates to "we're building a lot of common functionality three times,
and all our experiments cost 3x." The chances of that being utterly
unremarkable and non-viable are huge.

The minimal thing would be to postpone that stuff until you have something
worth logging into and worth having multiple implementations of on multiple
platforms. Building a good mobile experience is a huge investment. Don't even
think about it until you have something viable.

You are not proving the viability of a login system with your MVP nor are you
proving the viability of a slick iOS experience. Your thing is viable if
despite obvious UX and feature issues you still get positive KPIs. Once you
get there, you can justify the expense of making it better. So spend as little
time as you can on stuff like that instead of making it the top priority in
your first iterations.

As long as you are doing minimal things you are not actually creating a lot of
value. You are actually postponing value creation and viability. It's the
hardest things that create the most value and that are the hardest to plan and
the easiest to postpone when planning. Therefore, I believe Scrum is the wrong
process for building something that has a high risk/reward balance and the
right process for building something that has low risk/reward. All the value
is in the stories that everybody struggles to estimate. Scrum results in most
resources getting sucked up by the least valuable stuff.

------
trpc
Nice way to market. You could have posted a Show HN a thousand times and
nobody would care, but this is a creative way to do a successful Show HN.

------
transitivebs
This is one of the core reasons why we've built
[https://saasify.sh](https://saasify.sh), an extremely simple solution to
launch FaaS-based businesses and start validating product / market fit in
hours instead of months.

I agree with the author in terms of estimates and their assessment of the
process, but the point I'd like to make is that the vast majority of those 12
weeks it took them to launch the MVP weren't spent on core, differentiating
functionality, but rather on boilerplate that's common to every SaaS product.

Important but non-core features like billing, accounts, documentation, and a
marketing site end up causing many projects like this to fail before they even
get launched.

If you can launch MVPs like this significantly quicker albeit in a more
constrained yet platform-agnostic way (e.g., using serverless functions as an
abstraction), it becomes really powerful for makers like the author of this
article to focus on core value and ship quickly instead of wasting time on all
of this other stuff.

