
Ask HN: How bad should the code be in a startup? - andy_ppp
Hey Hacker News! I was recently involved in a startup where the CEO had made a crazy complex app with Prisma. It did loads of things, but it was a mad balancing act of insecurity, bugs, badly mangled code, and database design that left a lot to be desired. I think my problem is they were just copying something that already exists rather than making something new that needs extreme user testing to become a thing. Obviously on such a codebase the CEO could get things done pretty fast, but I couldn't help feeling it was completely hopeless for anyone else trying to make the project work correctly. Of course, even with all this brittle code, there were no tests.

My questions:

a) Has Hacker News/YC ever seen a startup fail because the codebase is so bad?

b) What is the best calculation to make when trading off code quality vs. features?

c) Do most YC startups write tests and try to write cleanish code in V1, or does none of this matter?

Should we just be chucking shit at the wall and seeing what sticks? Do most startups bin v1 and jump straight to v2 once they have traction?
======
tikhonj
In my experience, "code quality" vs "features" is simply not a real tradeoff.
Writing clean code with tests, function documentation, a good level of
modularity, automated deployments... etc will save you time _in the short
term_. It's pretty simple:

1. Writing quality code is not substantially slower in the first place,
especially when you factor in debugging time. You just have to have the right
habits from the get-go. People avoid writing quality code because they don't
have and don't want to build these habits, not because it's inherently harder.

2. After the initial push, code quality makes it much easier to make broad
changes, try new things and add quick features. This is _exactly_ what you
need when iterating on a product! Without it, you'll be _wasting_ time dealing
with production issues and bugs.

The only reason people say startups don't fail because of code quality is that
code quality is never the _proximate_ cause—you run out of funding because you
couldn't find product-market fit. But would you have found product-market fit
if you had been able to iterate faster, try more ideas out and didn't spend
50% of your time fighting fires? Almost definitely.

Pulling all-nighters dealing with production issues, spending weeks quashing
bugs in a new feature and duct-taping hacks with more hacks is not heroic,
it's self-sabotaging. Writing good code makes your own life easier, even on
startup timeframes. (Hell, it makes your life easier even on _hackathon_
timeframes!)

~~~
adenta
As a technical cofounder who just finished the YC W20 batch
(https://terusama.com), I can agree with some of what you are saying.

At its core, an early stage startup's only goal is to create business value as
ruthlessly as possible. Let's talk about how I apply this principle to my
testing strategy.

Do automated test suites help create business value? Absolutely: I no longer
have to test everything by hand after making a change. Your application is
going to be tested either way, either by you or by your users.

Does having a well-defined layout of UI, service, and integration tests, à la
Martin Fowler, add business value? I would argue it does not. I write mostly
integration tests, because you get more 'bang for your buck': more tested code
per minute spent writing tests.
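The strategy above can be sketched in a few lines. This is a hypothetical illustration (the function and field names are invented, not from Terusama): a single integration-style test drives the whole flow, from validation through persistence, instead of one unit test per layer.

```python
# Hypothetical sketch of a "mostly integration tests" strategy:
# one test exercises handler + business logic + storage together.
# All names are invented for illustration.

class InMemoryDB:
    """Stand-in for the real data store."""
    def __init__(self):
        self.users = {}

def signup(db, email, password):
    """Validation, business rule, and persistence in one path."""
    if "@" not in email:
        raise ValueError("invalid email")
    if email in db.users:
        raise ValueError("duplicate email")
    db.users[email] = {"password_hash": hash(password)}
    return {"status": "created", "email": email}

def test_signup_flow():
    db = InMemoryDB()
    # One integration-style test covers validation, logic,
    # and persistence in a single pass.
    resp = signup(db, "a@example.com", "hunter2")
    assert resp["status"] == "created"
    assert "a@example.com" in db.users
    try:
        signup(db, "a@example.com", "hunter2")
        assert False, "expected duplicate rejection"
    except ValueError:
        pass

test_signup_flow()
```

The trade-off is exactly the one described: a failure here tells you *something* in the flow broke, not *which* layer, but you get coverage of all three layers from one test.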

Does this testing strategy create tech debt? Absolutely. I view this as a good
thing. I am causing problems for my future self in exchange for expediency in
the present. Either my company grows to be successful enough to care about
these problems, or we go out of business. If we become successful enough to
care about rampant tech debt, hooray: we are successful. If we fail, it does
not matter that we leveraged tech debt; we still failed.

Writing good code is an art. There are people out there who are incredibly
talented at writing good code that will be battle-tested, and infinitely
scalable. These are often not skills that an early-stage startup needs while
trying to find product-market fit.

~~~
creyes
I think I disagree with this. The short-term harm of this kind of tech debt is
more substantial than you're letting on. "Causing myself problems for the
future" might be true, but that future could be a week away, when you need to
pivot because of user testing, a shift in the market, product-market fit, etc.

I think the mistake you're making is conflating "getting code written now"
with expediency. Adding/removing features and shifting when necessary are
"expediency." That's the value of a thorough test suite.

~~~
lumost
It's not just the test suite that is the subject of tradeoffs. One may write
good code that fundamentally doesn't scale beyond a small number of customers
e.g. doing everything with postgres and no batching because it's easy. Or
building a solution for a demo to an individual customer.
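As a hypothetical illustration of the "no batching" shortcut (names invented; a real fix would use the database driver's bulk-insert helpers), the later remedy is often just grouping writes so each round-trip carries many rows instead of one:

```python
# Illustrative sketch, not from the thread: chunking writes into
# batches so each database round-trip carries many rows.

def chunked(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

writes = []  # stand-in for a database connection

def insert_rows(rows):
    # One round-trip per batch instead of one per row.
    writes.append(list(rows))

events = [{"id": n} for n in range(10)]
for batch in chunked(events, 4):
    insert_rows(batch)

# 10 events became 3 round-trips instead of 10.
assert [len(b) for b in writes] == [4, 4, 2]
```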

These solutions will break, and if monitoring is skipped, they will break at
2 AM just as customers really start using the product.

These situations can be avoided with better product research and a stronger
emphasis on design, but those are the approaches of large, established
companies that can't afford to lose customer trust and will gladly build a
product on a two-year time horizon.

As a startup you need to weigh the risk of failure, the need for direct
customer engagement, and limited resources against the risk of losing customer
trust. If you're a startup making a new DB, then your product's lifespan is
approximately equal to the time until your first high-profile customer failure
or poor Jepsen test result. A new consumer startup may simply be able to patch
scaling issues as they emerge rather than investing in billion-user
infrastructure from the get-go.

------
PragmaticPulp
Never forget that your startup customers aren't buying your code. They're
paying for whatever the product does for them.

They don't care if the code is good or bad, as long as the app does what they
need it to do and does it well.

So to answer your question: The code should be bad enough that it allows you
to ship as fast as possible, but not so bad that the app doesn't work
properly.

This can be a shock if you've been raised on a steady diet of HN posts and
comments, Medium articles from opinionated and often highly critical
programmers, and open-source projects that only accept the best quality code.
No one likes to brag about writing proof-of-concept grade code, so you won't
be hearing about it online or in public.

a) Yes, startups have failed because their product doesn't work properly or
the product is full of bugs. However, startups don't fail because the codebase
is ugly, or convoluted, or not following best practices. You might be
surprised at how hacky many early startup codebases are.

b) Regarding the calculation of code quality vs. feature velocity: When in
doubt, consult with the senior devs and your manager. Knowing when, where, and
how to strike this trade off is one of the defining features of being a senior
developer, in my opinion. In most cases, it comes down to estimating the
negative impacts on future development. A core component that touches every
part of the app should be more carefully designed than a single-use feature
only 1% of your customers might ever use.

c) Regarding tests and clean code for V1: In short, the only thing that
matters is getting traction in the early stages. Every day you spend writing
tests or refactoring code to feel cleaner reduces your chances at getting that
next funding round. In the early days, it's all about a proof of concept and
getting customers so you can grow the company. You can't grow the company if
you don't have investors and/or customers, so that perfect code may be doing
more harm than good in the early days.

~~~
redleggedfrog
"However, startups don't fail because the codebase is ugly, or convoluted, or
not following best practices."

Yes, they do.

The obvious one is a senior developer who writes a bunch of trash code to get
stuff done in a hurry. Later he's asked to maintain it and add features. But
it's no fun, cause it's a pile of poo. New shiny attracts his attention and he
moves on (cause, you know, he delivered at his current job!). New developers,
including a new hire, try to pick it up and work with it. Warnings about
runway loom. Support is swamped, and many of the tickets get kicked up to
developers because support can't answer them; they're obscure bugs. Most of
the developers' time is spent trying to fix the worst bugs, but things just
get worse because each bug fix introduces new bugs, cause the codebase is
well-nigh incomprehensible. Some developers see the writing on the wall and
flee, leaving even more work for the remaining developers. No money for new
hires. 3 months later, layoffs. 1 month later, closed. One poor guy is laid
off 4 months after being hired.

Lather rinse repeat.

The upshot is crap code makes a crap product, just like crap engineering makes
a crap car. Customers _do_ care about that. They'll get tired of the bugs and
the infrequent updates and the poor support, and eventually they'll move on.

~~~
nickv
Can you name a company that failed because of a bad codebase as the number one
reason?

I feel like people walk through hypotheticals like that, but I've not heard
people say "Company X failed because of that scenario."

~~~
sukilot
Friendster imploded during its growth stage due to either bad code or not
enough spending on servers.

------
gregdoesit
Let me chime in with my (personal) opinions, having worked at one of the
fastest-growing startups at one point: Uber. These are details I gathered
about the early times. While today Uber is big on code quality, engineering
best practices, reliability, and much else, as much as we engineers want to
take credit for the success of the business via code quality, the two are
pretty unrelated.

Few people know that when Uber started and the first $1M was raised, the apps
were built by contractors. The app was bad, the code terrible; but even with
a bad app, customers used it over taxis, which didn't even have a bad app. The
business took off, the next round of funding came, as did the first few full-
time engineers.

The first thing the full-timers did was throw away the mess of code and
rewrite the app. However, moving fast was still more important than quality.
Launching in a new city needed to be done in a few weeks: if the ops team
could mobilize a whole city in that time, engineering was expected to move
fast as well. So while generally forward-looking decisions were made, many,
many shortcuts were still taken, most notably "The Ping": the backend sending
all state data to the client in a massive JSON object, every 10 seconds. This
was to speed up development by not having to make backward-compatible state
changes all the time. It's something I'd cringe over today, but it did help
us move fast, at the expense of loose contracts and lots of bandwidth usage
that could have been avoided.
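A minimal sketch of what a "Ping"-style protocol might look like; the field names here are invented for illustration, not Uber's actual payload:

```python
# Hypothetical sketch of a "Ping"-style protocol: rather than
# designing per-field, backward-compatible updates, the server
# serializes its entire state every interval. Field names invented.
import json

state = {
    "trip": {"status": "en_route", "eta_min": 7},
    "driver": {"name": "A.", "lat": 37.77, "lng": -122.42},
    "pricing": {"surge": 1.0},
}

def ping():
    # The whole state blob goes over the wire each time: trivial to
    # evolve on the backend (just add keys), wasteful on bandwidth,
    # and the client must tolerate fields it doesn't know about.
    return json.dumps(state)

payload = ping()
assert json.loads(payload)["trip"]["eta_min"] == 7
```

The "loose contracts" cost is visible here: nothing constrains what `state` contains, so the backend can change shape freely while every client quietly depends on whatever happens to be in the blob.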

As the business proved to be successful, in year 3 or 4, reliability and
quality started to be more of a focus: things like tests, linting,
architecture, rollout best practices, and so on. A big push happened when, in
year 4 or 5 (I can't remember exactly), a sloppy change almost took down all
of Uber's core systems at rush hour. But for the first few years, quality took
a relative back seat. Was it worth it? I'd definitely say so. As another
commenter noted, the customers of a startup do not buy code quality: they buy
something that meets their needs and is good enough.

When a startup becomes wildly successful, you'll have the funds to pay off
tech debt. Until then just make sure it doesn't suffocate you - otherwise pile
it on, and move fast.

~~~
rdgthree
I think it's worth noting that Uber has rarely gone down (I can't even
remember one example, though I'm sure it's happened). While I'm absolutely
sure some parts were downright horrifying at times (we've all been there),
someone clearly had a good idea of how to make tradeoffs for development speed
without compromising the core bits so much that they couldn't keep up with the
rapidly increasing usage.

Huge difference between something like "The Ping" and deciding _not_ to
rewrite that original contractor code.

------
coddle-hark
Code quality is all about risk management. You're balancing the risks of:

1) Bugs/outages that affect your customers

2) Hard to grok code that slows down onboarding of new staff

3) Features taking longer to develop

How you weigh these risks differs from business to business. For a fintech
startup, a bug in the code could end up bankrupting the company. For a
VC-backed social network, being able to quickly onboard new hires is really
important. For an app that supports, say, BLM protestors, time-to-market is
everything.

In the great scheme of things, having a crappy codebase that makes money is a
good problem to have.

~~~
geofft
Special case of 1: security bugs. If your app's audience is BLM protestors and
it leaks their personal information somewhere, it would be better (both for
the world in a moral sense and for your business in a pecuniary sense) not to
release it at all until it doesn't.

~~~
thekyle
Ideally an app targeting protesters wouldn't collect personal information to
begin with.

~~~
geofft
There's a lot of ways it could do so unintentionally - for instance, if it
captures photos, it needs to scrub metadata like geolocation and it needs to
allow you to black out portions of the photos before uploading them. But yes.

------
brey
> what is the best calculation to make when trading off code quality vs
> features?

> do most YC startups write tests and try to write cleanish code in V1 or does
> none of this matter?

It only matters when bad code hurts your overall business velocity - what that
means, only you can answer.

Nobody's writing tests for their purist aesthetics; they're there to let you
go faster. But there's an up-front cost you have to pay for them. Sometimes
that's worth paying; sometimes the land grab is more important.

There's no single answer to this question.

~~~
_ix
Tend to agree. Leadership needs to send strong, clear signals about quality
and acknowledge the existence of technical debt well before the team starts
feeling crushed by it.

------
carapace
Here's the thing: I know of at least one company that made it big starting
with a steaming pile of technical garbage. They used their users as a QA
department. The business logic was all in stored procedures in the DB, which
was always on fire as a result, and the front-end was _bad_ PHP with so much
indentation that it was 25% blank space _after_ rendering the page!

Yet their users loved them, the numbers went up and up, investors lined up to
take the founders to dinner and vie for the chance to pound millions of
dollars up their asses. It was crazy. They built a half-pipe in the office,
you know, for _skateboards._ They became a household name and IPO'd a few
years ago.

The point is this all happened _despite_ their garbage architecture and crappy
code. Yet it would have all been much easier and cheaper to do it right the
first time.

(Word to the wise, the founder/CEO wound up crying at his desk as the
investors wrested the company from him. "Be careful what you wish for: you
might get it.")

~~~
Toine
Thanks for sharing, fascinating to read

------
jonnycat
One critically important question here I think - has the startup achieved
product/market fit, or gotten a strong market signal that you were building
the right thing?

If the answer is "no", then the tolerance for bad code goes way up.

Either way, in the early stages of a startup, a great deal of the code will
end up being throwaway, and the trick is sometimes knowing which things are
important to get right upfront, and which things can be punted on.

Well-defined service boundaries help a lot. This doesn't mean going to
microservices, but it does mean keeping things well-isolated and independent
even in the same codebase. In effect, you can have "well-architected bad code"
which will help you stay flexible even as you move quickly.
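A tiny sketch of "well-architected bad code" (all names invented): the internals are throwaway-quality, but callers only ever see a narrow boundary, so the module can be rewritten later without rippling through the codebase.

```python
# Illustrative sketch: a hacky module hidden behind a narrow
# boundary. Callers only use `quote_price`, so the messy internals
# can be replaced without touching the rest of the codebase.
# All names are invented.

def quote_price(distance_km: float) -> float:
    """The only public entry point of the pricing 'service'."""
    return _hacky_internal_quote(distance_km)

def _hacky_internal_quote(distance_km):
    # Throwaway-quality logic, fine for now: magic numbers,
    # no currency handling, no rounding rules.
    return 2.5 + 1.8 * distance_km

assert quote_price(0) == 2.5
```

The point is the boundary, not the implementation: swapping `_hacky_internal_quote` for a real pricing engine changes one function, not every call site.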

------
anonu
My experience as a technical founder:

a) The codebase will always be "bad". There will always be things that need
improving, testing, fixing, enhancing, revisiting.

b) The optimization function that determines code quality vs. features has
"user adoption" as one of its main inputs. If you are going to spend time and
money working on code quality, but have no users, then what's the point? Are
users interested but leaving your product because of bad quality? Then patch
the code to bring it just above the threshold of usability and maintenance,
but no more. That may seem controversial, but you need to optimize your
resources (time + money) in a startup.

c) I don't know anything about YC startups. But I can tell you writing tests
and bad code are not mutually exclusive. Having said that, you should think
about testing, always.

Having been at this startup for 3+ years now - I can tell you we've gone
through 3-4 iterations of the same thing already - each time designing for
more scale, more developers and more users. So I think that may be just par
for the course.

------
leonardteo
I feel it depends on what you mean by "bad code".

We went through a very tough journey ourselves. When I started the company, I
wanted us to just use out-of-the-box Rails, but some senior devs disagreed,
and we had huge arguments about it. We ended up spending months building a
complex SOA, only to find 3 years later that it wasn't a great implementation,
and rewriting it (now it's even more complex). Meanwhile, Shopify and others
seem to be happily still using mostly stock Rails. And we're in a tough spot
where finding developers who can work and be productive with our NIH stack is
quite challenging.

I agree with what the others are saying here. Customers aren't buying our
code, they are buying our product/service. Code should not be "bad" (i.e.
there should be tests, etc.) but as a startup, I think velocity is more
important and we just have to weigh that. We can hack stuff temporarily to
ship or do experiments, but we'd have to deal with the debt if we keep that
around.

If I had the opportunity to start all over again, I would:

* Stick to well-known frameworks. Use "boring" tech.

* Outsource as much as possible first and don't reinvent the wheel: don't write your own subscriptions/billing, just use Stripe/Braintree/Recurly/Chargebee; use Algolia for search (don't write your own Elastic); etc. Move fast until you've figured out product/market fit, then optimize for costs.

* Stand your ground on rejecting NIH. Devs will complain because they want space to learn, try new tech, do NIH things (I want to hack stuff too!). IMO it's those NIH things that are often called "bad code". They're not "bad"; they were just written in a short amount of time to solve immediate problems, and they often don't account for all the strange edge cases, etc.

------
zjs
Disclaimer: I've never worked at a startup. However...

Tech debt is like any other kind of debt: a way to increase leverage.

Some tech debt is like a mortgage. You get significant value, immediately, and
can keep the payments manageable.

Some tech debt is like a payday loan. You get ahead by days, but behind by
weeks.

Some tech debt is like margin trading. You make an educated bet about the
future and if you're right, you've multiplied your success, but if you're
wrong you've multiplied your failure.

There's a time and a place for each kind of debt, but taking on debt in a
haphazard fashion can get you into a situation where you have to choose
between putting an inordinate amount of effort into paying off the "interest",
declaring bankruptcy, or risking the "repo agent" coming calling when you
least expect it.

(And note that even "tech bankruptcy" isn't necessarily a bad thing, if you
can do so in a way that limits the blast radius.)

~~~
dpenguin
Great answer, along the same lines as how I look at tech debt myself.

Another important thing to keep in mind is that while you can leverage tech
debt to move the business forward all you want, be extremely aware of your
tech debt and reduce it before you go bust. It’s very easy to develop a belief
of “this has worked for 4 years so it’s solid and doesn’t need to be looked at
anymore” when in fact, you could be teetering on a total collapse of the
system within 3 months because some aspect of the system/business started
gaining traction non-linearly.

PS: I have worked at very large, medium and small companies that grew big.
Haven’t worked at a failed startup so far - so a bit of selection bias in my
opinion.

------
vinay_ys
Unless your startup is building some new tech product (like a new database
technology, or a crypto/blockchain system) that is going to be sold to
customers, software code quality doesn't matter much, at least initially for
sure, and maybe not ever.

Most likely you are a startup building a software application or service that
is augmenting, automating, or orchestrating some real-world interaction (like
e-commerce shopping or supply chain systems); in that case, you care most
about getting your product-market fit figured out.

What this means is testing your understanding of the potential customer's
needs, selling your product value to those customers (switching them from
their existing way of life to your way of life), figuring out the business
model (what costs are you optimizing, how much it costs you to run it your
way, who will pay for it, can you cross-subsidize something, how does your
business scale, at what scale your business becomes viable, at what point do
you make profits etc).

This usually requires a lot of experimentation and product iteration. For
this, you need very high feature-developer productivity with very low costs
for getting experiments wrong. For the past half decade, this has been
achieved by not building any IaaS/PaaS stuff in-house and instead using
offerings from public cloud platforms.

Today, a new movement is happening: the #lesscode or #nocode movement. You use
frameworks and rapid application development tools that let you write very
little or no code to create your applications and iterate quickly with very
little software engineering skill. This allows a startup to go very far with
very little burn while hunting for product-market fit.

Once you know you have a good product that is on the cusp of scaling, you can
revisit your choices and figure out how to optimise costs through in-house
software development. The bar for what makes sense to build in-house rises
every year.

~~~
nicoburns
It depends how bad it is. I inherited a codebase that had wildly inconsistent
data in its (schemaless) database, because there was no (or very little)
input validation. When I joined, the dev team was spending 50% of their time
fighting fires and dealing with bugs reported by customers. This was all
justified by "move fast and break things", but the reality was that the code
quality issues were massively slowing down feature development.
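As an illustration (invented fields, not the commenter's actual system): even over a schemaless store, a small validation function at the write boundary is usually enough to keep the data consistent.

```python
# Sketch: validating documents at the write boundary of a
# schemaless store. Field names are invented for illustration.

def validate_user(doc):
    """Return a list of validation errors; empty means valid."""
    errors = []
    if not isinstance(doc.get("email"), str) or "@" not in doc.get("email", ""):
        errors.append("email must be a valid address")
    if not isinstance(doc.get("age"), int) or doc["age"] < 0:
        errors.append("age must be a non-negative integer")
    return errors

# Reject inconsistent writes before they reach the database.
assert validate_user({"email": "a@example.com", "age": 30}) == []
assert validate_user({"email": "nope", "age": -1}) != []
```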

------
jlengrand
Heyo,

I've worked in very small and very large companies, though never owned a
startup myself.

A few things I have seen and experienced, personally or from close friends:

* It's OK not to be scalable from day 1 as long as you're not certain who your customer is, because you are likely to have to shift a lot left, right, and center, and scalability work might slow you down. But do keep in mind that it will become an objective at some point.

* Your code should be reliable and high quality enough that you can refactor it fast and without headaches. I have lived situations where a change in one part of the application was creating bugs somewhere completely different. I've also been in places where tests were forbidden (bugs never come twice at the same spot, RIGHT?!). Not having tests will f*** you hard, because soon you won't be able to move without breaking stuff, but also because you won't be able to easily expand your team.

* Tangential to 1 and 2: do try to keep abstraction layers in place. That will make your life easier.

* You shouldn't be afraid to let new employees into the code, or to deploy. Otherwise it's a liability.

* Security is a tough one. It'll never be good enough, and it's usually a cost more than a source of revenue... Make sure that all your customers' data is safe, though; that should be the hard limit. Because if you're successful and get hacked, you might never recover from it.

I have seen a brilliant company that had a nice business model go down not
because the code was not high quality, but because lack of tests and lack of
design abstractions made every step of the way 100 times harder a few years
down the line.

You seem to have a pretty good idea where you're going already :).

All in all, you wanna move as fast as possible while making sure that you're
not creating the shit of tomorrow. So if you write crap for whatever reason,
make sure it's contained :).

~~~
karatestomp
> * Your code should be reliable and high quality enough that you can refactor
> it fast and without headaches. I have lived situations where a change in one
> part of the application was creating bugs somewhere completely different.
> I've also been in places where tests were forbidden (bugs never come twice
> at the same spot, RIGHT?!). Not having tests will f*** you hard because soon
> you won't be able to move without breaking stuff, but also because you won't
> be able to easily expand your team.

A cost of sloppy code and move-fast practices and attitudes that's not well
accounted for in most places, I think, is that it makes it harder to add
people to the project and get them contributing effectively. New hires,
contractors, agencies: all will be less effective, for longer. This factor
gets _much_ worse the longer you operate in that mode, and the more sloppy
code goes to prod.

> I have seen a brilliant company that had a nice business model go down not
> because the code was not high quality, but because lack of tests and lack of
> design abstractions made every step of the way 100 times harder a few years
> down the line.

I suspect the "tech choices don't kill companies" wisdom is actually BS and it
does happen often enough to worry about, it just doesn't often _look_ like
that's what killed them.

~~~
jlengrand
> A cost of sloppy code and move-fast practices & attitudes that's not well-
> accounted for most places, I think, is that it makes it harder to add people
> to the project and get them contributing effectively. New hires,
> contractors, agencies. All will be less effective, longer. This factor gets
> much worse the longer you operate in that mode, and the more sloppy code
> goes to prod.

Yes definitely! I have seen VERY FEW startups that feel at ease with getting
new people onboard the codebase. But as soon as your business model is
validated, that's what will most likely happen so you better be ready for it.

> I suspect the "tech choices don't kill companies" wisdom is actually BS and
> it does happen often enough to worry about, it just doesn't often look like
> that's what killed them.

I don't know about that. Not that I disagree; I really just don't know. In
that specific case, though, it seems related (if in the exact opposite way).
They had essentially rebuilt everything: their own SOAP layer, their own XML
parser, UI framework... And that was OK when the company was created, because
there were no alternatives. But they never made the move to mainstream
solutions when those appeared. Wait a few years, and what takes you a day of
work takes 30 minutes with the current state of OSS in other startups.

------
throwaway_churn
Managing technical debt is always a trade off. The company I work at is
failing at it. We:

* Bootstrapped a startup, left ourselves tons of tech debt

* Glommed as many features onto the core product as possible to meet enterprise needs

* Got a ton of MRR and are the leader in our corner of the industry

* Never pivoted to being a mature company, never paid off the debt. Now the bugs are pretty unmanageable and the software is too complex. It’s hard enough keeping the service afloat, let alone adding new features.

* About 50% of our customers try the software and churn out within six months. Our client industry is only so big, and we’re actively pissing off a huge chunk of it.

* Now we have a PR problem. Industry people leave us bad Google reviews, which our company owners can usually get deleted. They also warn people in industry Facebook groups not to try our product.

If you don’t pay off tech debt eventually, it will catch up with you in lost
growth.

~~~
ensiferum
> About 50% of our customers try the software and churn out within six months.
> Our client industry is only so big, and we’re actively pissing off a huge
> chunk of it.

What that means is that you haven't yet found your product-market fit. Better
not scale up, or you will burn.

Check "Sell More Faster" by Amos Schwartzfarb.

------
koonsolo
There is an easy rule: good enough is always good enough.

41-year-old developer here who has worked on various projects, from solo up
to around 50-person teams.

If you want to move fast, you have to hack stuff together. That is exactly
what your CEO did.

In the end it all depends on your project. If you make a game, let your users
find the bugs. If you make life-critical software, you'd better have some
rigorous tests in place. A 1-person project can be really messy, but a
5-person project can't.

Don't put effort into code that might be thrown away.

Most things are an investment, so always question how fast you get the ROI.
It's always a balancing act.

But in the end, it always comes down to the same question: is it good enough?
If yes, continue. If no, do the investment and move to the next level.

------
hodgesrm
At the risk of sounding like an HN pedant, your questions are backwards. To
make sound engineering trade-offs you need to understand the problem. I would
start with questions like the following:

1.) What market are you targeting and what are the overall user expectations
for features, quality, reliability, etc.?

2.) What is the minimum viable feature set (i.e., product) to get into that
market?

3.) Is it more important to be fast to market or the best to market?

Products are built iteratively. Even if it's OK to deliver on the
fast-and-crappy model, you still need a path to fix things incrementally.
This applies to just about every product I've ever seen.

------
hliyan
a) Almost every startup I know of that failed, failed because of business
reasons, not tech. Even when it was tech, the reasons were delays in feature
delivery and production issues, not code maintainability or tech debt. Some
companies paid dearly later on to fix tech debt, but if they hadn't moved fast
in the first place, they wouldn't have had customers to lose.

b) This really depends on having a combo of a product manager who appreciates
technology and an engineering manager or CTO who appreciates business. You
have to weigh the benefit of shipping feature X now vs. later, in favor of
tech debt T. Both sides need to be honest about the consequences of delaying X
or T.

c) Not a YC startup, but always _try_ to write tests and good code. Never
abandon it. But in the early days when you're trying to gain traction, don't
feel bad about having to compromise on them during crunch times (which is most
of the time).

~~~
Conan_Kudo
> _a) Almost every startup I know of that failed, failed because of business
> reasons, not tech. Even when it was tech, the reasons were delays in feature
> delivery and production issues, not code maintainability or tech debt. Some
> companies paid dearly later on to fix tech debt, but if they hadn't moved
> fast in the first place, they wouldn't have had customers to lose._

Delays in feature delivery and production issues are usually symptoms of poor
code maintainability and high tech debt. It's a really difficult balance to
strike, but it's worth tackling low-hanging fruit as you work on the code, and
introduce good practices for new features as you keep going if it doesn't
impair development too much.

~~~
hliyan
I know I might be in the minority here, but tech problems that are already
affecting users (even indirectly, in the form of missing features) I tend to
consider more than tech debt. The payment has already come due. If there is
real-world impact beyond standards and best-practice compliance, we have to
fix it right away.

------
bcrosby95
My advice is to not worry too much, but to follow the boy scout rule: leave
code in slightly better shape than you found it. Code quality matters to no
one - not even other developers - if the code in question is never revisited.
The boy scout rule helps ensure that code that doesn't need to be good doesn't
have time wasted on making it better, and code that needs to be higher quality
naturally becomes higher quality.
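A tiny sketch of what the boy scout rule can look like in practice (the function and names here are invented for illustration): while revisiting code to add a small feature, you also fix the unclear names and magic numbers you found there, without scheduling a rewrite.

```python
# Before: code revisited while adding a "new customer" discount
def calc(p, c):
    if c == "GOLD":
        return p * 0.9
    return p

# After: the feature change plus two small cleanups done in passing --
# named constants and descriptive identifiers, nothing more ambitious.
GOLD_DISCOUNT = 0.10
NEW_CUSTOMER_DISCOUNT = 0.05  # the new feature

def discounted_price(price, customer_tier):
    """Apply tier-based discounts to a price."""
    if customer_tier == "GOLD":
        return price * (1 - GOLD_DISCOUNT)
    if customer_tier == "NEW":
        return price * (1 - NEW_CUSTOMER_DISCOUNT)
    return price
```

The point is that the cleanup rides along with work you were doing anyway, so only code that actually gets revisited pays the improvement cost.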

~~~
mandelbrotwurst
This is fine advice, but it's really advice for an individual more than an
org, and if you are in leadership you need to consider whether your teams and
processes are set up in a way that both leaves room for your engineers to do
this and also ideally even actively encourages their doing so.

To phrase it another way: Each engineer might have the best of intentions but
if success is measured by new feature velocity, adherence to this rule becomes
less likely.

------
thescribbblr
I once worked as a PHP developer for a small e-commerce company in India. The
code written by earlier engineers was so poor that it took me 3 weeks to
understand how the whole site worked.

The variable names were random Bollywood movie names, there were no classes
or functions, everything was hand-coded in core PHP, and it was too complex
to add new code.

------
steve-s
It depends on the definitions of the terms used.

Strictly following MISRA-like guidelines while developing a web SaaS? Spending
a day orchestrating mocks of this and that service so you can test some
trivial class and tick the 100% code coverage box?

I think the code quality of individual methods matters less than the quality
of the overall architecture, and that requires some design planning and
regular refactoring, which I imagine can hold you back from delivering
something in a tight time frame. It does pay off in the longer term, I'm not
debating that, but before it pays off it may be too late.

------
qppo
To me, the single biggest distinction between working in a startup versus a
traditional organization is that all your work has immediate impact. There are
points in the lifetime of a startup where code quality and test coverage have
immediate impact, and other points where they don't. In a startup you have to
learn how to budget your time and effort to create the _most_ immediate impact
to further the mission, so if code quality/testing aligns with that then it
makes sense to spend time and effort to do it.

I've worked at a few startups and I'll give you some examples where it matters
and where it doesn't.

I was hired as employee #4 at a stealth startup that would turn into a zombie
and I was the last non-founder to leave when our runway ran out. At no point
in my time there did code quality or test coverage matter even a little bit -
our biggest problem was convincing people to pay us, which was particularly
difficult because our value to the people we _wanted_ to pay us was intangible
(at least to them). This was why the startup failed: we tried to sell to the
wrong people for too long (i.e., misaligned our values with what our target
market actually valued).

Code quality didn't matter because we essentially strung up demo after demo in
different contexts, the core technology was basically finished within a few
months of founding, and the rest of us worked to put it into different
contexts to show people what they could do with it. Those demos would never
reach production, and most of them had a single developer. Who cares if there
were no tests or it was all spaghetti? We were just trying to show off.

I'm currently an early employee at another startup and spent a lot of time
over the last six months developing ci/cd infrastructure and we're going to
make a major push for testing/benchmarking coverage in the next month or so.
The reason is that it has a tangible and immediate impact on our business
because it directly affects our value proposition.

So to answer your question, the answer is it depends. It all matters when it
affects the bottom line, because code quality/testing doesn't make you money;
it just costs you less money in the future. There is a very definite stage in
the life of a startup where that matters, and as a developer in the org you
have to budget your time to commit to it when it matters.

------
staysaasy
Great qs!

a) A really bad codebase like the one you're describing hasn't been the root
cause of any failure that I've seen, and I have seen several companies recover
from it and become very successful. Unfortunately what you're describing may
be a symptom of a different root cause (poor judgment around what matters most
to the business), and that can def kill you.

b) These things don't trade off against one another directly. Code quality
helps feature velocity. In the early days the only thing that matters is
getting product/market fit as that's an event horizon beyond which the future
is unknowable; the way to get there is to iterate fast, which does require
things like CI/CD and a coherent/non-spaghetti structure (even if the code
itself is ugly).

c) I've seen both modes. My main view: what matters most pre-product/market
fit is rapid iteration (see above). Once p/m fit has been achieved you need to
be able to add features rapidly, which requires a different level of code
quality (comprehensive tests, etc). There's no hard and fast rule here, but
most products ultimately throw away most of their pre-product/market fit code
within 1-2 years of scaling.

I actually recently wrote a blog post that touches on a lot of this here:
[https://staysaasy.com/engineering/2020/05/25/engineering-
at-...](https://staysaasy.com/engineering/2020/05/25/engineering-at-a-
startup.html)

Good luck with whatever you're up to, whether at this company or elsewhere!

------
rdgthree
_Startups tend either to do many things right, or many things wrong. If doing
x right were a coin flip, there would be a bell curve with the peak at doing
half right. But it is not a coin flip._ [0]

Code can be decently sloppy, but there's likely a strong correlation between
good code and good startups. Not because the code made them a good startup,
but because the good startups are good at most things.

Many startups will do fine with a rough codebase, and obviously you should
value the code accordingly (if it's an API as a service, highly, if it's a
physical product with no software component, not so much). But be wary of any
startup that's close to _so bad you worry it might fail_. Good founders will
rarely let it tip so far to that side of the scale.

Obviously there are loads of exceptions to this rule. But I think if you want
to be a founder of a software driven startup or you want to find a great place
to work as a software engineer, aim your expectations higher than feels
reasonable and you'll probably land at a decent medium.

[0][https://twitter.com/paulg/status/1240308316808626176](https://twitter.com/paulg/status/1240308316808626176)

------
princevegeta89
My 2c is that it's not only your customers that you need to keep happy, but
also your own employees.

Engineers look for projects that don't give them headaches while working on
them, and those that help them learn the right things. At the same time, they
do look for maintainable and extensible codebases that they can enjoy working
on.

The problem with bad code that is a clustermess is that things leak
everywhere, and fixing one bug will lead to another. You won't have a product
that is stable for your users either. At some point your engineers will make
a case for rewriting things from scratch, but management may stop them. That
will ultimately push them to quit, so you'll also have to deal with the loss
of people.

On the other hand, using your users for QA is terrible. They do not report
bugs at all, they get frustrated and spread the bad word. If they're paying
users they will start looking for alternatives at one point.

This is all a part of your business.

------
JanisL
I define good code in terms of economics, the whole point of writing code is
to generate some sort of benefit. The nature of the utility that is created by
code is therefore heavily context dependent. So from this perspective I'd
argue that code should never _aim_ to be "bad", if such a situation is coming
up it strongly hints that a discussion about the goals of the code and why it
exists is badly needed. Also organizations that aim for "bad" in certain
departments have a nasty tendency to generate cultural and political issues
that become toxic for the organizations as time goes on.

As for some of these questions:

a) Yes I've seen a few companies go under because their code wasn't able to
generate profits. A couple of times it's been so bad that customers didn't get
what they needed immediately as a result. But usually the sorts of company
failure modes from bad code are less dramatic. Sometimes this is like bad debt
in that it looks good initially but comes at an existential cost later. Other
times it's been more boring, like lower velocity making the company
uncompetitive or too expensive to run.

b) If you are thinking of trading features vs code quality you've already lost
because this isn't something that can be traded.

c) Writing some tests tends to be a pareto-optimal choice, in the sense that
lower defect counts tend to allow you to create more economic value from the
limited software development staff you have in a given time frame. Frequently
you'll find that having some tests allows you to deliver things like features
more efficiently than you would without them. High defect counts tend to
result in not meeting requirements or unnecessary rework. There's a sweet spot
here about tests and test coverage, there's definitely diminishing returns and
getting to 100% coverage is very expensive because of the last few percent
being disproportionately hard to get while not being worth the cost of getting
it in many cases.

~~~
Aeolun
> b) If you are thinking of trading features vs code quality you've already
> lost because this isn't something that can be traded.

My enterprise would like to have a word with you. This is a trade they make
daily.

~~~
JanisL
What I'm trying to say is that it's not just some simple linear trade you can
make where "less quality" implies "more features" or "more quality" implies
"less features". Usually when I encounter this line of thinking, especially
when it's simplistic, it does a lot of damage. The biggest damage tends to be
when people who are less familiar with the fundamentals of software
construction use this line of thinking when allocating resources or making
planning decisions.

------
dep_b
If you feel there is a lot of technical debt and the work you are doing never
seems to also diminish it then you have a problem. If you still have some
parts that could be improved but you are tackling technical debt constantly
while revisiting features you're OK.

Sometimes you didn't understand how the feature should be built until it was
done. Sometimes you need to live with a suboptimal architecture until it
clicks in your mind. Sometimes I read my own code and realize "this is
bullshit". It might need some time to rest.

But refactoring is easiest when you just worked on a feature and everything is
still completely in your mind, "striking when the iron is hot" as I call it.
What you can refactor in minutes after you checked off all of the requirements
of a feature can cost hours if you don't have a complete mental model anymore
if you revisit months later.

~~~
andy_ppp
The whole thing has no boundaries and is extremely difficult to add new
features, but the CEO is extremely fast!

------
zaptheimpaler
Some obvious points - each startup is at different stages. Unacceptably bad
for a mid-size startup can be good enough for an early startup that barely has
revenue or a path to profitability. All decisions are made from a business
POV, where bad code is a form of debt - it can be used well or not.

I have some maybe non-obvious thoughts though - some useful questions to ask

1\. "how difficult is this bad code going to be to clean up later?"

For the vast majority of issues, it's usually not very difficult to clean up
later. Only a very few things, e.g. an API that many customers use or the
way core data is modeled/accessed, are difficult to change later.

2\. "how well encapsulated is the badness in this code?"

A shitty function, or a janky microservice with a well thought out API is much
better than a sprawling mess. The more you can split your architecture into
independent pieces, the less bad code in any one piece matters, and the easier
it is to reason about. Horrible code has no clear separation into layers and
everything feels like one giant tangle - that genuinely slows down dev speed
and makes building stuff feel risky.

Good engineers often write code that's bad but also encapsulated well
enough to change easily.
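A minimal sketch of "encapsulated badness" (the function and format here are made up for illustration): the body is deliberately quick-and-dirty, but callers only see one small, well-defined function, so it can be rewritten later without touching the rest of the codebase.

```python
def parse_price_cents(raw):
    """Parse a price string like '$1,234.50' into integer cents.

    The implementation is janky string-hacking, but the narrow
    interface is what keeps the mess contained: swapping in a proper
    parser later changes nothing for callers.
    """
    s = raw.strip().replace("$", "").replace(",", "")
    if "." in s:
        dollars, cents = s.split(".", 1)
        # Pad or truncate the fractional part to exactly two digits.
        return int(dollars) * 100 + int(cents[:2].ljust(2, "0"))
    return int(s) * 100
```

Contrast this with the "sprawling mess" case, where the same hack would be copy-pasted inline wherever prices appear.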

3\. "what are the business consequences if this code fails?"

Code quality on a feature not used by many people matters far less than on a
core feature. Database code should be more stable than web tier code. Code touching
the core of a web server should be reviewed more carefully because it may
cause downtime. A bug on a peripheral feature can often be fixed later without
much impact to customers.

4\. "how quickly and confidently can the people responsible for this code
change it?"

Super spaghetti code is hard to change for everyone. In contrast, some code
has some historical design baggage or intricate business logic which may be
simple enough for experienced devs to change, even if it is hard for newcomers
to understand.

------
nojvek
As others have echoed, customers don’t buy code. They buy a tool that is
reliable and solves their problem.

You could have 100s of tests but the server could still easily fall over. So
it's always a trade-off.

One thing I can say is if you sow the seeds early, it's easier to add a test
with a new feature than to add 100 tests to a 2-year-old feature that no one
understands and keeps falling over.

Some companies take this to extreme on both ends. Either no tests at all or
everything needs 100% coverage delaying time to get things in the hands of
customers.

Most pragmatic places I have worked at invest in test infra once they have
good product market fit. Make it easy to write tests and fast to run and debug
them. If it’s easy to do the right thing, why not do it ?
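A small sketch of what "invest in test infra" can mean in practice (all names here are invented): one shared factory with sensible defaults, so adding a test alongside a new feature is a three-line affair instead of a wall of setup.

```python
def make_user(**overrides):
    """Build a valid user dict with sensible defaults; each test
    overrides only the fields it actually cares about."""
    user = {"id": 1, "name": "Test User", "plan": "free", "active": True}
    user.update(overrides)
    return user

def can_export(user):
    # Example feature under test: only active, paid users may export.
    return user["active"] and user["plan"] == "paid"

# A new feature's test states only the relevant detail:
assert can_export(make_user(plan="paid"))
assert not can_export(make_user())
assert not can_export(make_user(plan="paid", active=False))
```

When the fixture work is done once, "write the test with the feature" stops competing with shipping the feature.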

------
muzani
> a) has Hacker News/YC ever seen a startup fail because the codebase is so
> bad.

Yes, but it's more that the _programmers_ are bad instead of the code. Bad
code can be patched fast by a good programmer, but becomes rapidly
unmaintainable by a bad programmer. A lot of techniques and style guides out
there are designed to manage bad programmers.

> b) what is the best calculation to make when trading off code quality vs
> features?

I have two modes: prototype and production. Prototypes are disposable, and
value speed/results above all else. They should be thrown away after. Treat
them as a demo to get budget for a feature or a hack to solve a problem _right
now_. Design it to be completely destroyed and replaced, instead of replaced
gradually, although you can probably reuse interfaces/contracts in between
these modules.

Production code is kept clean and as maintainable as possible, but keep the
engineering to a minimum. If you have to ask whether something is
overengineering, it probably is.

> c) do most YC startups write tests and try to write cleanish code in V1 or
> does none of this matter?

I'm not sure about YC but I don't write automated tests. I have a text file
with all the manual tests I need to run. Features are usually scrapped hard in
a startup. IMO it's better to release a broken thing to 1000 people who
complain it's broken than to release a well built thing to 100 people who
think it's nice but won't pay for it.

> Should we just be chucking shit at the wall and seeing what sticks? Do most
> startups bin v1 and jump straight to v2 once they have traction?

Rule of thumb is you need dozens, if not hundreds of prototypes, so optimize
for speed and experimentation quality. You're like a prospector, looking for
ore. You don't want to build an entire mine where there is none, and you
don't want to commit too hard until you know there's enough of it.

But things are different for "ramen profitable" startups, and you should start
looking into how to maintain better and add features faster.

------
caseymarquis
A rule I use for testing: If a feature doesn't run correctly the first time
it's manually tested, then an automated test should be created which checks if
the feature is working. This rule typically means that tests save time, and
that tests are created for the code that's likely to break.
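A hedged illustration of the rule above (the function and the bug are invented for the example): a feature that failed its first manual check gets a permanent automated test, so the time spent reproducing the bug is banked as a regression test.

```python
def split_full_name(full_name):
    """Split a full name into (first, rest).

    The first version crashed on single-word names during manual
    testing, so per the rule it now carries an automated test.
    """
    parts = full_name.strip().split()
    if len(parts) == 1:          # the case that broke in manual testing
        return parts[0], ""
    return parts[0], " ".join(parts[1:])

# Regression tests written immediately after the manual failure:
assert split_full_name("Ada Lovelace") == ("Ada", "Lovelace")
assert split_full_name("Prince") == ("Prince", "")  # the original failure
```

The test suite ends up concentrated exactly on the code that has already demonstrated it can break.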

I bypass this rule when I think it's obvious I'm going to want automated
testing. For example, I needed a customer facing DSL for importing data with a
lexer/parser/interpreter; manual testing was bypassed from the start.

------
gkaemmer
In my experience it’s not about “bad” vs. “good” code, and not about a
tradeoff between speed and quality.

It’s more about how much abstraction is built into the system. A mature
codebase has a clear purpose and therefore can contain durable, high level,
even beautiful abstractions. On the other hand, a founder doesn’t always (nor
should they) know what their code will need to do in 6 months time, so they
typically avoid writing abstractions.

You can still write good code as a founder—it’s just that good founder code
looks different than good BigCo code.

------
m0llusk
This is totally context dependent. If the code is for controlling a nuclear
power plant or security for thousands of customers then the core may need to
be robust or the enterprise will be doomed. If the code is handling some basic
business processes just a bit more reliably and efficiently than some existing
but rotted code base then there may be enormously wide bounds for code
quality.

And this isn't just startups or side issues. I don't know anyone who has
looked seriously at OpenSSL without being completely horrified.

------
avilesj
I am part of a startup for the first time and, coming from a project that was
a bit messy on its own but had some structural integrity, I feel a bit torn
about the current code quality.

For starters, the project _must_ be done in a completely serverless manner
(AWS was the chosen provider) and _nobody_ in the team had experience making a
complete product just using this kind of architecture.

Since performance is the main concern, at the beginning we did very shallow
research into our options for languages and other items relevant to a
lambda's performance. One of those was cold startup time, which bundle size
influences. This led us to split our custom dependencies as much as we
could, making development and testing more painful.

With both previous points presented, I can say our code quality is not good.
As for velocity and delivering on time, we have had some issues because of
planning mistakes and unforeseen inconveniences while using AWS SAM and AWS
CF. Nonetheless, we're "on time".

We have identified some pains that we would like to fix post-launch, but that
moment never seems to come. I have a feeling we won't have time to do
maintenance on the product and we'll just be bombarded with either bugs or
new features.

As others have said before, customers will only look at the app's
functionality and UX. And in our case the application looks amazing. The
backend, not so much.

------
mharroun
I have been in the startup world for like 13 years, and have been everything
from an IC up to CTO. This is IMHO:

> a) has Hacker News/YC ever seen a startup fail because the codebase is so
> bad.

No, but I have seen the massive velocity hits from short-term decisions
living on over the years. Tech debt is real and can eat into 20-60% of a
team's output because of bugs/issues/lack of documentation & context. These
places are miserable to work at.

> b) what is the best calculation to make when trading off code quality vs
> features?

Unfortunately this may not be a popular opinion, but here is what has worked
best for me. You need a sound ARCHITECTURAL base from inception. To do this,
the person who makes the decisions or is in charge needs to use
tools/languages/etc. that they are experienced with to develop a clean base
to work from. It's not hard to set up CI/CD, unit testing, proper devops, and
code decisions like inversion of control and proper service segregation from
the outset IF you use technologies you are strong in. This lets you move
quickly if need be, but the "bad" code is limited to services/systems. It's
easy to fix a single poorly coded, rushed class/function/file. It's a
nightmare if the entire basis you build off of is crap.

Startups tend to be limited on time... and sadly startups often hire
inexperienced people who can't do the above, or experienced people who focus
more on shiny new technologies than on using things that work and can be
quickly executed.

> c) do most YC startups write tests and try to write cleanish code in V1 or
> does none of this matter?

Never been part of a YC startup, but I would say my general experience is that
while you're still figuring out what your product/market fit is, things like
scale/code quality/architecture shouldn't matter... however, two things need
to be kept in mind. The first is having an "escape hatch": this code is crap,
we all know it, but it's the code we need right now; is there a way we could
pivot/transition to a new system/architecture in a few weeks when we finally
get funded or "grow"/"scale"? The second is identifying that pivot point and
investing the time to create the first-generation foundation (if you go full
unicorn/scale you may need to deal with this yet again).

In conclusion, you need to do what gives you the most velocity for your
effort. This means when you are super small and still figuring out the
basics, a costly foundation isn't worth much. Then if you survive and shift
into growth mode, you need to expend some effort/resources on a good base to
keep that velocity alive.

------
padseeker
Personal confession - I've been building something for more than a year. I
hope to finally release it before the end of the year, although FYI I pushed
back the release date multiple times.

When I started coding I was disciplined and organized, writing tests, etc.
As time has passed I've had to sacrifice those guiding principles. At some
point, changes to UX and logic to provide a better user experience took
higher priority than well-tested code. I've changed and modified things so
frequently that the tests I wrote would break. There are tests I wrote for
code that is no longer in my codebase. It felt like a complete waste.

If you have a clear vision of your MVP, or you have a designer giving you
requirements and wireframes, or you know exactly what you want when you start
out early on (waterfall?) maybe you can stay true to all these well
established and proven software development principles.

But if you are flying by the seat of your pants and figuring it out as you
write code, I'm not so sure doing all the right things should be your first
priority. I feel that building your MDP - minimum DELIGHTFUL product - may be
more important than building the MVP. And that might produce substandard code.

It could also be that I am a terrible developer and product manager and
designer and entrepreneur.

If you are at all curious what the hell I'm doing, you can see my landing page
- [https://www.keenforms.com](https://www.keenforms.com) - it's a form
builder with rules

------
odomojuli
a) Yes. All the time. But it has more to do with management than the
programmers. If your code is approaching catastrophe, it's time to seriously
reassess what it is you're trying to do and if it's feasible for the
programmers to understand, not for you to build.

b) The best metric is what is most boring and most comfortable. Boring tech
is good. Boring code is good. Languages are defined more by their failures
than by their successes. You want to be defined by what doesn't happen in
your code because you made cogent decisions.

c) Do most YC write good code? Yeah but that's not what defines their success.
Clean code is presentable. Clean code sets a tone. Tests are sometimes snake
oil, sometimes valuable. It's hard to assess how valuable a metric is once you
become invested in increasing it. No, writing tests won't save you. But decent
DevOps will hopefully reduce cognitive load in managing features. Writing unit
tests is in my opinion, a nice reprieve in between coding sessions. I look at
it as paid downtime.

d) As someone pointed out, there is survivorship bias to consider. It's pretty
common for v1 to be a complete disaster where nobody knows what they are
doing. Most fail and do not attempt v2. Eventually the to-dos and somedays just pile
up and you lose to a competitor.

e) Another perspective, almost everyone's code will be some kind of dumpster
fire. You'll realize perfect pipelines will always be desirable, as in nobody
has one. The only code that is 'bad' is the code you fail to take
accountability for.

------
estebarb
I think the issue is not bad code: having to deliver ASAP, sometimes writing
shit happens everywhere, from university homework to big Fortune 500
companies and everything in between.

The biggest issue is not knowing your problems. If you are aware of your
technical debt, it means you probably have a plan, or at least an idea of
where to look when shit hits the fan. Otherwise people run like crazy, deny
the problems, miss deadlines and customer expectations, and ultimately fail.

------
CraigJPerry
There are two ways to approach writing greenfield code, in my mind.

The decision between them is simply “once this is shipped, would you accept
having to completely re-write it from scratch to add even the smallest
feature?”

If the answer is no, decent tests will make you go faster. Your commit volume
by SLOC will be roughly:

1\. Refactoring (~50%)

2\. Tests (~35%)

3\. Actual impl code for features (~15%)

That is, you’ll transact more than three times as many lines of code with your
VCS repo just re-writing impl code smaller and cleaner and better organised
than you will actually writing code to build the functionality.

You’ll spend more than double the adds/deletes/changes to lines of test code
than adding features to the product.

You’ll implement new features at roughly the same speed today as tomorrow as
next year. You can drip feed more devs into the team every 4 months or so to
build out velocity further.

If you’re willing to throw it away after the first release, you’d be silly not
to ditch the tests, forget the architecture and just crank out something that
works - best done by a solo dev, deploy each dev in a solo fiefdom _from the
beginning_ if you want to throw more devs at the problem.

In practice, almost all code is written as a mix between these two views and
is slower and more expensive than either approach above because of it.

------
axegon_
Never worked in a proper start up, but I've given a hand several times.
Usually it starts with textbook code. And that lasts around a month, month and
a half. After that deadlines start knocking on the door, as well as patches
over patches to cover up things that were either not required in the beginning
or extreme edge cases and it's a race to the bottom from that point on, as far
as code quality is concerned.

------
DanielBMarkham
You always test before you write any code. That's the only way to make sure
the code does what it is supposed to do.

And that's exactly what good startups do, they do business tests before they
write any code. Don't write code at all unless it's providing value to the
customers or helping you learn something you need to know to provide value to
the customers.

Now that this is taken care of, we come to the problem of the code itself.
Each bit of structure you add, whether it's a line of code or a database field
on a table, is a bit of infrastructure you may have to maintain, possibly
forever.

Some folks want to take their eye off the business tests and move directly to
system tests, testing and then coding to make sure everybody can easily
understand and maintain any code that's written.

Most startups fail because they never ever got the business tests working
right. They either never got around to creating them and making them pass or
they came up with something that worked but were unable to flywheel it or lost
the plot somewhere. Some startups have almost-perfect code that nobody wants;
that's actually one of the most common ways of failing.

So the natural state of affairs is to always be experiencing some kind of
stress between value discovery and code quality. Personally I believe you
solve a lot of this by changing the way you code and the way you look at
coding, but there's too much to go into here. The key thing for most
programmers to remember is that if you're dying of thirst in a desert, you're
not going to care very much if the guy selling glasses of water has glasses
that leak or water that's muddy. The value proposition always comes before
anything else.

~~~
TehShrike
The most stable codebase I've worked on, at a successful startup that does a
great job delivering business value to its customers, didn't have a single
automated test during the 7 years I was there.

~~~
DanielBMarkham
I think with the current style of coding, that doesn't surprise me. Most all
of the testing we're doing in code is because we're writing code in far too
complex a manner for the value (if any) it is providing.

But that's a tough thing to explain to a person who doesn't know any better.
We're teaching coding as if it were a stand-alone thing instead of simply a
tool to get us other things we want.

------
furstenheim
Writing a service thinking that you'll throw everything away is a waste of
time. And so is trying to get everything perfect, because you don't yet
understand the business correctly.

In my experience you should not treat all parts alike, the more foundational
the more time you should dedicate.

It's important to think through the db schema properly; anything else will
cripple your development, and the longer it runs the harder it will be to
fix. You don't want to be sanitizing wrong data two years into the business.

If there's a library, it's better to spend time thinking through the proper
API; the code can be improved later.

It's ok to have garbage as long as it can be isolated and you can keep
going. For example, we had configuration files that had to be synced with
the db. That could have been automated, but it was ok to hardcode them in
config files. It was not ok to hardcode them across the whole codebase. The
first could be cleaned up in the future; the second would've been a mess.

Invest in tests, especially in setting up the process. At the beginning they
can be just smoke tests (this API returns success); as the startup grows
you'll have more options to add proper tests.
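A minimal smoke test in the spirit described above (the handler here is a hypothetical stand-in for a real route): it asserts nothing about the payload's details, only that the endpoint runs and reports success, which is often enough early on.

```python
def health_handler():
    # Stand-in for a real HTTP route handler returning (body, status).
    return {"status": "ok"}, 200

def test_smoke_health():
    """Smoke test: the API answers and says it's fine; nothing more."""
    body, code = health_handler()
    assert code == 200
    assert body["status"] == "ok"

test_smoke_health()
```

The value is in the process: once one test like this runs in CI, adding proper tests later is an incremental step rather than a project.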

------
mariopt
a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.

Yes, I've seen this happen on some projects that I joined. The most hilarious
story I have is when a funded startup spent 12 months with a team of 5 and the
app would crash with a single user and little usage. I managed to rewrite a
functional prototype/v3 in 3 months that worked much better. Other projects
were so costly to refactor that they got shut down.

More often than not, it is the specification and company culture that create
this chaotic outcome.

b) what is the best calculation to make when trading off code quality vs
features?

For this one, personally, I like to put the responsibility on the dev team. Not
having an exact spec is far from ideal but the dev team should work with the
business team to create a good enough first version. If the code is garbage,
you've to question the development team, period. If you take nano refactors
(around 20 minutes) every day before you push your code and follow the
community guidelines for the stack you're using, technical debt won't become a
problem in the first stage.

When you're asking this question you need to ask: why has the dev team written
code that led to this situation? Do we have PR reviews? Coding conventions?

c) do most YC startups write tests and try to write cleanish code in V1 or
does none of this matter?

I don't know about YC startups, but I can tell you that I have yet to see a
company with even 50% code coverage. Any time I mentioned writing tests, the
other side looked at it as an unnecessary expense. Personally, I believe it is
up to the dev team to identify key code components and write the tests. If you
have a function that keeps breaking all the time, that is a great candidate
for unit testing.

It is possible to write clean code in V1; this is what I do today. I've faced
so many situations where I didn't, and it always ended up costing me more time
and too many working hours. I would rather delay the release of v1 and have
something stable than try to please the business team at all costs.

Believe it or not, the business team doesn't give a f___ about your codebase.
Many times I reported security vulnerabilities and they thought I was creating
problems, lol. I've seen devs not report bugs because of the company culture.

As a developer, do your best and always keep learning and growing. If you do
this, you'll produce better codebases naturally.

Negotiating with the business team is also key to a successful release.

~~~
dgb23
Those last paragraphs hit the nail on the head.

Business often doesn’t seem to treat developers (or any workers) as a first
class value. This is true in the small just as much in the large. Hence the
grotesque term “Human Resources”.

Writing “clean” aka pragmatic, well abstracted, robust, performant and
readable code becomes naturally less “expensive” if practiced regularly (who
would have thought).

So it is also an investment in developers: their skill, communication,
happiness and engagement.

Disregard of that is short sighted and cynical. Just paying someone (well)
instead of investing in them and growing with them leads to unhappy, stressed,
uncreative workers, erodes trust and limits engagement.

------
ashtonkem
a) Knight Capital traded themselves to nothing due to a bad deployment. That’s
close.

There’s gonna be a big survivor bias here. You won’t hear about most of the
startups that collapsed because the product just didn’t work.

b) Keep the bugs non fatal, make sure the features are worth it.

c) I’m not in YC, but yes. There’s a really good reason why tons of startups
duct tape shit together with node and ruby, only to rewrite it later in
something else.

------
nartz
There is a spectrum:

Quality+Speed+Efficiency: cowboy coding |0-----1------2-------3------4----5|
perfect iPhone

I would never expect a startup to be operating above 4 or 4.5; it might mean
you are spending too much time future-proofing.

The best teams operate around 3 or above, but they can do so because they are
experienced, disciplined, trust each other, have a set of tools they know very
well, and can move at a quick pace because they automated a lot, have code
patterns they follow and are not "re-inventing the wheel" or trying new
frameworks for fun.

A LOT of startups are being started by inexperienced developers, where they
jump onto some new language or framework, and end up doing a lot of non-core
work due to inexperience and due to choosing some nascent framework. This
immediately puts them at less than 3, probably between 1-2.

If you are at a 2, I would say you are doing okay; any less than that, and I
would say you are probably suffering from inexperience, a bad choice of
frameworks, no tests, etc.

------
timwaagh
I think it should be pretty bad, tbh. a) Not part of YC. I've never seen
anything fail completely, but I have seen projects suffer major delays over
wanting to ensure quality. b) I'm currently leaning towards features = 100,
quality = 0. This changes when you can't implement something, or it takes too
much time because of the state of the codebase, at which point you refactor.
This is a bit of a judgement call, but features should have the clear
priority. c) Of the startups I have been at, one had the cleanest code ever
and the other some of the most incomprehensible code. The difference is due
to who wrote it and made the framework choices. Some devs write clean code
naturally; some platforms make that easier. Unit testing wasn't a thing at
either.

------
nailer
> a) has Hacker News/YC ever seen a startup fail because the codebase is so
> bad.

Yes. Velocity slows, features don't get out, new versions don't get released,
investors don't see product progress, funding runs out.

> b) what is the best calculation to make when trading off code quality vs
> features?

Wrong question. Avoid code. Avoid implementing things at all, use other
people's APIs, fake features with manual scripts that you eventually automate.

> c) do most YC startups write tests and try to write cleanish code in V1 or
> does none of this matter?

Yes. expect(result).toEqual("hello world"). You don't have to do TDD if you
don't want to, but it's not fucking hard to record the output once, save it,
make a test and then know what you broke later. Don't be lazy.

------
gpsx
The difference between a startup and a big company is not just dollars. Making
a product in a startup is sort of a process of discovery. In a big company
they will generally have a pretty well defined picture of what they want. When
a startup says they want to do "X", I don't think that is where the big
tradeoff between code quality and timeline comes in. The problem comes when
you decide you want to do "Y", but your codebase does, or is working towards,
"X". In my
experience that is where there are a lot of decisions to make about how soon
you get something finished. And in a startup there are a lot of these changes
in direction.

------
waheoo
a) Likely a symptom more than the cause.

b) The system's sunset date is the tiebreaker: it's hard to justify shit code
for a space probe, and it's hard to justify perfect code for an email
collector.

c) Automated tests are a development tool. They're not there to make sure your
code works; they're used to ensure your code is sufficiently decoupled,
modular, maintainable, and easily scalable in the future. They're also
frequently used to spike problems that are otherwise hard to solve.

The level of importance you place on tests in your situation is super
dependent on your devs. Some types of project I wouldn't write tests for;
others I do. It depends on scope and experience.

d) yes, maybe

------
xupybd
I have seen businesses thrive with bad code. But it's painful to work on that
code. Near soul destroying.

I've also seen a good lead come in and rescue the direction of the code. That
requires expertise in the language, a good understanding of how to rescue
legacy code and political power within the organisation.

If you have to work with bad code make sure you find ways to enjoy work. Also
don't allow yourself to think you're a bad Dev because you can't work fast.
It's the code not you. If no one will allow you to get tests in place and fix
it, it's not your fault.

------
andy_ppp
These comments are really excellent, I would caution this isn’t an unknown
industry and the solutions are well known... I think they have had two people
now who weren’t fast enough for them at producing features inside the
codebase. And their junior is also quite unproductive.

I guess in the end I would still go as fast and make as many mistakes, but I
tried to encourage them to have clear boundaries around the components so
that really bad stuff can be rewritten. I guess they’ll probably be a huge
success and that’s the only thing that matters really!

------
winrid
Here's the thing - you can spend forever making the code pretty. It would
never end.

I would say writing the code "with care" simply depends on the initial team.
There are plenty of startups that take the extra 20% or so time to build it
right and with care that are successful.

I worked at one company with a wonderful code base for seven years. They're
about to hit 100m ARR. We wrote tests, mostly used Java, and cared about
building reusable components and a platform. I would say hitting 100m ARR in
that timespan is good.

------
mcnamaratw
Due to intense survivorship bias (say 0.01% of code goes viral) I think it is
extremely difficult to get the real answer to that question by talking to all
of us out here on a message board.

------
mister_hn
As long as the startup is in an early stage (e.g. no paying customers, no
MVP), the code can even be ugly and not performant.

But as soon as the startup is earning money and winning customers, a rewrite
with better code quality standards must be planned. Unfortunately, maintaining
high quality standards also means investing tons of time in setting up the
tools and the development environment, and sometimes that is pretty hard
(especially when dealing with IDEs like IntelliJ when you want to use your own
Checkstyle).

------
bravura
In my last startup, my cofounder pushed for us to write no tests until we had
paying customers. Ultimately I came to respect the level of discipline
that this imposed.

------
thih9
> what is the best calculation to make when trading off code quality vs
> features?

In my opinion it's:

code quality = "How long are we going to need this feature" * "How much money
are we getting from people who use this" * "Cognitive load added by the
feature to the whole project"

V1 projects don't bring big profits, can be shut down any time and their
codebase is relatively simple. I'd keep the code quality low until some of
these factors begin to change.

------
bobbydreamer
A little learning from Google: Singhal rewrote the search algorithm which
Larry Page and Sergey Brin wrote initially. Your app should be able to do the
very basic things it's advertised to do; once you get clients and funding, in
the initial years you should try to add features and at the same time remove
all the inefficient code.

Great code with no users is code that's never going to run.

------
issa
Startup code needs to eventually scale and it needs to be flexible. I have no
problem with "crap" code as long as it makes sense in the context and works.
In a lot of cases, it would be extremely counterproductive to write "perfect"
code that then needed to be thrown away a few months later when your product
changes.

------
l0b0
There's a big problem with even asking this: it's asked as if there are
objective answers, but no single person can possibly have worked in enough
startups to have statistically significant knowledge in this area. Unless
someone has done a scientific study in this area it's just a bunch of
anecdotes and disagreements.

------
DethNinja
a) None that I know.

b) Just take into account these: will the feature introduce major bugs, like
database corruption, or will it just cause minor bugs, like UI bugs? Also
consider if this feature is really necessary for MVP and will have a
considerable financial return or not. In my startup I definitely don’t
deliberately write bad code, but there is a limited time/financial funds, so
it is OK for some of the code to be hacky (though never horrendous); I just put
a TODO there to remind myself to fix after product release.

c) I favour an agile approach: implement the feature first without unit tests
and see how it works with the overall architecture. I only unit test code that
can cause major bugs, or code that involves heavy math.

------
robjan
a. No, but it has slowed down time to market.

b. It's all about extracting the max value out of your dev time. Will
refactoring / improving code quality mean that future features get delivered
quicker?

c. Most POCs don't have tests, in my experience. They are usually added later.

------
an12345
I think there's a non-zero chance I know the exact company/codebase you're
talking about as I was involved with the same start-up with the same
concerns... South of England? NextJS, Apollo, Prisma, Postgres, Heroku stack?

------
karmakaze
The way I've heard it is: if you're not ashamed of the code quality of your
MVP, then you spent too long on it. Until you see traction with a clear
willingness to pay, the MVP is practically a throwaway.

------
m463
Two things off the top of my head:

- The most expressive languages might not be the most readable. This is
because a language that can match YOUR way of thinking and MY way of thinking
might not lead to you being able to read MY code.

One example is Perl, where you can say:

    if ($foo) { bar(); }

    bar() if $foo;

    bar() unless !$foo;

    # etc.

The takeaway here is: the most efficient way to get an idea out of my head and
into code might be person-specific and hard to maintain.

- Working code can lead to survival. Only survival can lead to the time to do
it "right".

------
azhu
The codebase affects the overall outcome of a business venture built on it in
the same manner that the car affects the overall outcome of a drive.

The more specific an outcome you're looking for the more factors you'll have
to consider. You can loosely think of the relationship between code and
companies like the code is "the matrix" and the tangible business world is
"the real world". It might help to think of the code as a child being raised
in the matrix.

The first product market fit stage is the hardest. Here it befits the code to
be maximally extensible such that you can most effectively steer it around the
market landscape and most effectively capitalize on any discoveries made. But
you also need it to work decently enough to have traction. This stage is like
parenting a baby that needs to decide its life mission and begin it during the
first few years of its life. Its main purpose is self-discovery, but also it
needs to be set up to become whatever it discovers it wants to be. Here, luck
is the name of the game.

The next stage is growth (farming the land you've staked, becoming the thing
you've decided your codebase baby's life is about). Here you need less
extensibility and more fidelity. You're clear on what your code needs to do,
and you just need to make sure you do it well enough to last long term. But
also things get more complicated at the org level. Now a team has to be built
out. The codebase must now mature, and that means that it must gain a firmer
grasp on its purpose (high fidelity architecture and infrastructure) and learn
to interface with the world (be geared towards long-term maintainability).

After you exit that stage, you exit the startup stage entirely. Generally, if
you're a businessperson and it's available to you, having good engineers
(human communication skills above technical skills, understand the holistic
function of engineering within the context of the rest of the company) is the
best solution to this problem. They will have the vision to assess the field and
the communication skills to inform you about it.

You will feel the urge to carve the unpredictability of the outcome down with
measurements, metrics, and calculations but this is mostly a fool's errand. If
you're doing something brand new there is no defined path and it is about
pathfinding, not measuring your performance along a path. There are a ton of
resources that all give opinions on the best way through this first stretch
of woods, but the reality is that, at the end of the day, getting through
woods that no one has ever gotten through is something that can only be mapped
in hindsight.

------
snarfy
If your startup is moderately successful, is it likely it would be acquired?
For some startups, that's the goal. Is it yours?

I've seen acquisitions fail over code quality.

------
parentheses
To answer this question, simply look at tech debt, the analogy: you take on
debt to get faster access to something.

In keeping with the analogy, every business has a different appetite for debt.
The debt-to-equity ratio of your current position should leave you able to
take on debt when you need to. The debt should never get so great that it
cannot be paid down. And being without debt is holding a position that doesn't
leverage your ability to take on debt.

------
sub7
Until product/market fit only care about your code quality enough to not make
your best engineers quit.

Post product/market fit, care about it deeply and enforce strictly.

------
zepto
Friendster was one of the first social networks, way ahead of Facebook.

My understanding is that a major reason they failed was poor code and an
inability to maintain performance.

------
slifin
The rewrites will continue until your team culture improves

------
sukilot
Successful startup code quality is on par with HR quality and Legal quality.
Far below Marketing quality.

------
UK-Al05
Code quality enables fast iteration.

They're not at odds.

------
brentm
Does anyone have any best practices for onboarding new engineers into a
situation like the one described?

~~~
quantified
Culturally, the engineers and management who created the mess should not be
demonized. The situation is the enemy, not co-workers current and past. (Your
private opinions can be unvarnished.)

Always steer towards how things should be looked at going forward: “we’re
making this better for everyone as we add features/remove bugs”. “It probably
looked like a good idea at the time” is a phrase I use a lot.

Newbies will see all the crap sooner or later. Knowing that they have more-
senior allies in a shared battle, and having some ability to do stuff beyond
fighting the crap, will help keep them on-board and engaged.

------
ebg13
Be bad enough to ship.

Be good enough to not fail an ethics test re customer data.

Everything else is sales.

------
gaogao
a. Sort of. Poor code quality really hampered Netscape, forcing them to pay
down technical debt when they should have been focusing on fighting Microsoft
on core features.

------
MarcoSanto
As bad as possible, but not any worse...

------
foobarbazetc
1. Launch product, make money.

2. Fix up your code.

------
awinter-py
if you're doing your job as a startup coder you shouldn't have to balance
between insecurity, bugs + spaghetti

that's a false choice. in reality you can have all three

'get things done pretty fast' is the only red flag in your story -- if you
want your life to be truly worthwhile you must make this codebase unproductive
as well

------
akatechis
As bad as it needs to be.

------
SMFloris
CTO of a small-ish startup here. Here is my take on it:

1) Code doesn't really matter as long as it solves the issue you are trying to
solve. Don't expect your code to be beautiful from day 1. Be responsible and
train your devs to be responsible as well, because in a startup you code, fix
and deploy your own stuff. What does matter, though, is code complexity.
Manage your complexity, don't overcomplicate things if you don't need to. No
need to design a Ferrari when all you need is a horse and carriage.

2) Process matters. From day 1, make code reviews/pull requests the default.
If you are the most senior dev, or a technical founder/CTO in a small startup
be prepared to spend about 50% of your time reviewing code and helping others.
You won't get to code as much, but you'll sleep better at night knowing at
least you've tried to catch some bugs before they reach production. In an
early stage startup, you will not have the time nor the resources to test
everything, but this will give you peace of mind.

3) Tests matter. That being said, in the beginning only test mission critical
stuff. If you find a critical bug, fix it and then write a test for it. If a
new feature breaks something that already works it is a big no-no and might
lose you customers. Testing will change for you as you progress with your
startup. Start by making the process easy for the devs to run the tests
locally. Then, progress in having CI. Then, maybe have CD as well.

4) Worst case scenario: full rewrite. If a 6- to 12-month-old startup decides
on a full rewrite, I'll give them the benefit of the doubt; maybe their whole
use case has changed, maybe they DO need a rewrite. That's fine. But if you
are a SaaS that is older than that, your dev team is around 10 devs, and they
are all busy solving critical bugs and putting out fires, a rewrite might
mean your death.

5) Architecture matters. This matters more than code, in my opinion. Say you
have a horrible piece of mission critical code, it is SLOW and begins
affecting your business. That piece of code will need a rewrite, for sure. But
what would you rather do: spend 30 days to fix it and lose customers, or just
spin up another machine/add CPU/add RAM? That is what good architecture buys
you: it allows you time to think things through, allows your code to run well
and, perhaps most importantly, allows your developers to actually code.

Bad architecture is the leading cause for rewrites. Is that beautiful
microservice architecture giving your small team headaches? Did you
overcomplicate things, perhaps? You see, bad architecture is very hard to fix.
People seem to underestimate how much a simple API + DB can scale and try to
mitigate the risks by copying whatever FAANG does. Start small, scale later
once you have the resources to do so.

TL;DR: Code quality doesn't matter if you solve your issue. What matters more
is mitigating the risks that come with writing code in general. See above for
some ideas from my own personal experience.

