Ask HN: How bad should the code be in a startup?
192 points by andy_ppp 38 days ago | 176 comments
Hey Hacker News! I was recently involved in a startup where the CEO had built a crazy complex app with Prisma - it did loads of things, but it was a mad balancing act of insecurity, bugs, badly mangled code and database design that left a lot to be desired. I think my problem is they were just copying something that already exists, rather than making something new that needs extensive user testing to become a thing. Obviously on such a codebase the CEO could get things done pretty fast, but I couldn't help feeling it was completely hopeless for anyone else trying to make the project work correctly. And of course, even with all this brittle code, there were no tests.

My questions are:

a) Has Hacker News/YC ever seen a startup fail because the codebase was so bad?

b) What is the best calculation to make when trading off code quality vs features?

c) Do most YC startups write tests and try to write cleanish code in V1, or does none of this matter?

Should we just be chucking shit at the wall and seeing what sticks? Do most startups bin v1 and jump straight to v2 once they have traction?




In my experience, "code quality" vs "features" is simply not a real tradeoff. Writing clean code with tests, function documentation, a good level of modularity, automated deployments, etc. will save you time even in the short term. It's pretty simple:

1. Writing quality code is not substantially slower in the first place, especially when you factor in debugging time. You just have to have the right habits from the get-go. People avoid writing quality code because they don't have and don't want to build these habits, not because it's inherently harder.

2. After the initial push, code quality makes it much easier to make broad changes, try new things and add quick features. This is exactly what you need when iterating on a product! Without it, you'll be wasting time dealing with production issues and bugs.

The only reason people say startups don't fail because of code quality is that code quality is never the proximate cause—you run out of funding because you couldn't find product-market fit. But would you have found product-market fit if you had been able to iterate faster, try more ideas out and didn't spend 50% of your time fighting fires? Almost definitely.

Pulling all-nighters dealing with production issues, spending weeks quashing bugs in a new feature and duct-taping hacks with more hacks is not heroic, it's self-sabotaging. Writing good code makes your own life easier, even on startup timeframes. (Hell, it makes your life easier even on hackathon timeframes!)


As a technical cofounder who just finished the YC W20 batch (https://terusama.com), I can agree with some of what you are saying.

At its core, an early stage startup's only goal is to create business value as ruthlessly as possible. Let's talk about how I apply this principle to my testing strategy.

Do automated test suites help create business value? Absolutely: I no longer have to manually test everything after making a change. Your application is going to be tested either way - either by you, or by your users.

Does having a well-defined layout of UI, service, and integration tests, à la Martin Fowler, add business value? I would argue it does not. I write mostly integration tests, because you get more 'bang for your buck' - more tested code per minute spent writing tests.
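
For a concrete flavor of that - a sketch with made-up endpoint names, using supertest against a real app instance rather than mocks:

    // signup.test.ts - hypothetical signup flow; one test like this
    // exercises routing, validation, the handler and the real DB at once.
    import request from "supertest";
    import { app } from "./app"; // the actual express app, not a mock

    test("signup creates a user and rejects duplicates", async () => {
      await request(app)
        .post("/api/signup")
        .send({ email: "a@example.com", password: "hunter22" })
        .expect(201);

      // The duplicate case hits the real unique constraint in the DB,
      // something a mocked-out unit test would never catch.
      await request(app)
        .post("/api/signup")
        .send({ email: "a@example.com", password: "hunter22" })
        .expect(409);
    });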

Does this testing strategy create tech debt? Absolutely. I view this as a good thing. I am causing problems for my future self in exchange for expediency in the present. Either my company grows to be successful enough to care about these problems, or we go out of business. If we become successful enough to care about rampant tech debt - hooray, we are successful! If we fail, it doesn't matter that we leveraged tech debt; we still failed.

Writing good code is an art. There are people out there who are incredibly talented at writing good code that will be battle-tested and infinitely scalable. These are often not skills that an early-stage startup needs when trying to find product-market fit.


I think I disagree with this. The short-term harm of this kind of tech debt is more substantial than you're letting on. "Causing myself problems in the future" might be true, but that future could be a week from now, when you need to pivot because of user testing, a shift in the market, product-market fit, etc.

I think the mistake you're making is conflating "getting code written now" with expediency. Adding/removing features and shifting direction when necessary - that's expediency. And that's the value of a thorough test suite.


It's not just the test suite that is subject to tradeoffs. One may write good code that fundamentally doesn't scale beyond a small number of customers, e.g. doing everything with Postgres and no batching because it's easy, or building a solution for a demo to an individual customer.

These solutions will break, and if monitoring is skipped, they will break at 2 AM when customers really start using the product.

These situations can be avoided with better product research and a stronger emphasis on design, but those are also the approaches taken by large established companies that can't afford to lose customer trust and will gladly build a product on a 2-year time horizon.

As a startup you need to weigh the risk of failure, the need for direct customer engagement, and limited resources against the risk of losing customer trust. If you're a startup making a new DB, then your product's lifespan is approximately equal to the time until your first high-profile customer failure or poor Jepsen test result. A new consumer startup may simply be able to patch scaling issues as they emerge rather than investing in billion-user infra from the get-go.


I don't understand how pivoting is an example of the value of testing. Wouldn't it instead show how investing in tests didn't pay off, because the codebase got scrapped for another? Your tests pay for themselves each time you adjust code that is tested. But there are many cases where you never end up adjusting the code, such as when that whole service is scrapped, or when it was simple enough that it never got touched again.

The value of tests amplifies with the number of people adjusting the code, and with the time range over which this happens. Both of these are minimized at early-stage startups.

Now, of course, the caveat is that you need to know how to strike the right balances and tradeoffs as an engineer to get the right results for the right amount of effort. But that's what startup engineering is about.


When I write an MVP I usually don't write unit tests. What I do is write testable code, with dependency injection and whatnot in mind, so that when the product is mostly finalized, I can write unit tests with little or no modification to the original code.
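
A minimal sketch of what I mean (TypeScript, hypothetical names): depend on a narrow interface instead of a concrete client, and the test seam comes for free later:

    // The business logic depends on an interface you own...
    interface UserRepo {
      findByEmail(email: string): Promise<{ id: string } | null>;
      insert(email: string): Promise<{ id: string }>;
    }

    async function signUp(repo: UserRepo, email: string) {
      if (await repo.findByEmail(email)) throw new Error("already registered");
      return repo.insert(email);
    }

    // ...so production passes the real DB-backed repo now, and once the
    // product settles, tests can pass an in-memory fake with zero changes
    // to signUp itself.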


Unit tests are a lot easier to write, though, and they run faster. I think the trick is to assess risk and not chase 100% coverage for the sake of it.


I think that depends heavily on what sort of tooling you're using. E.g. writing unit tests for Django is nearly impossible, while integration tests are much easier. The ORM invariably has its tendrils throughout the entire codebase, and mocking it out is a project unto its own.


ORMs are an exception in my book - good point. But DB integration tests are possible; slower than unit tests, but still faster than end-to-end. I created some recently for a side project to make assertions about full-text search behaviour, so I could swap out Postgres for Elasticsearch (for my sins) and still have coverage.
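
Roughly what that looked like (a sketch, names invented): the search behaviour lives behind a tiny interface, so the same assertions run against whichever backend is plugged in:

    interface SearchIndex {
      index(id: string, text: string): Promise<void>;
      search(query: string): Promise<string[]>;
    }

    // The same behavioural test runs against the Postgres-backed
    // implementation today and an Elasticsearch-backed one tomorrow.
    async function assertFullTextBehaviour(idx: SearchIndex) {
      await idx.index("1", "the quick brown fox");
      await idx.index("2", "lazy dogs sleep");
      const hits = await idx.search("fox");
      if (!hits.includes("1") || hits.includes("2")) {
        throw new Error("full-text search behaviour regressed");
      }
    }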


While what you said is true, the problems usually don't manifest in the way you described. Most engineers I know understand all these principles and adhere to them. But the codebase still ends up a huge mess.

Code evolves. The technical decision you made when you were serving 30 customers doesn't make sense anymore now that you're serving 30k. The corner case you never thought would happen turns out to be very common. Suddenly your boss decides you should sell an on-prem solution when you've been building a cloud offering.

You can make the best decision every time these new requirements come along and still end up in a disaster, because a hill-climbing algorithm can get you stuck in a local maximum.

Oh, you say, you should've refactored your code, or rewritten from scratch! OK, now you need to choose between spending time refactoring your code and delivering features. That's the trade-off!

Hmm, OK, you say, you should've refactored along the way, so you don't need one giant refactor! Great, now you've basically asked me to predict the future.

So, shipping new features vs. code quality is definitely a trade-off, and it's our job as software engineers to make that call appropriately :)


This is the usual response from engineers who love their work :) In my experience there is always a point to NOT doing things "well" the first time round. Read the old article on "programming is terrible": write code that is easy to delete, not easy to extend: https://programmingisterrible.com/post/139222674273/write-co...


This is extremely well said. When I started coding I didn't have these habits and I was convinced it simply wasn't possible to write clean code as fast as I was writing sloppy code. Then, I met the person who is now my Head of Engineering, and he was able to code significantly faster than I was while also writing immaculate, readable, and largely bug-free code.

After spending some time working with him, I was convinced I was simply being lazy and started forcing myself to do all of the "clean code" steps that I had been skipping for the sake of speed. I slowed down for a month, maybe two, but then I was back up to speed and writing code that I could actually feel good about.

I haven't quite caught up to his pace, but I've got the second best thing in having him on my team now.

It's shocking how sure I was that I couldn't write clean code as fast as I do now. I'm lucky to have met someone that was able to teach me by example - I'm not sure I would have ever corrected my habits had I not worked next to someone who had.


Any particular approaches / design philosophies you found useful, either to discard or embrace? E.g. testing, "SOLID" principles, etc.


Sure! I have a ton of thoughts on this, so I'll just touch on a handful of higher level things that I think matter:

- Avoid rules and tools

Until you understand why someone came up with them originally. My biggest blind spots came from reliance on things like React Dev Tools. Every time I had to sort out a bug, I would start poking through React Dev Tools immediately, without thinking, find the issue, and then patch it. The closest thing I can equate this to is using a GPS to navigate: you'll get to your destination, but you won't learn the roads you drive on every day. Throw out the GPS, and you find pretty quickly that you know the roads, which is a faster and easier way to navigate. You can still use the GPS once you know the roads if you need to get somewhere unusual, but you shouldn't usually need it. Same thing with debugging tools. Big time saver.

Rules are in a similar camp. Strict global rules are always bad. "Don't repeat yourself" is horrible as a strict law in a codebase. I repeat myself all the time, intentionally, to avoid prematurely abstracting things. That being said, there are times when it's absolutely mission-critical to abstract something or risk massive, difficult-to-unfuck technical debt moving forward. Knowing the "spirit" of these rules (which is difficult to garner via anything other than experience) is incredibly important to using them effectively, which in turn also saves a ton of time.
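
A toy example of repeating myself on purpose (made-up domain):

    // Nearly identical today - deliberately kept separate. Invoices and
    // receipts will almost certainly diverge, and un-merging a premature
    // abstraction hurts far more than merging two copies later.
    function formatInvoiceLine(desc: string, cents: number): string {
      return `${desc.padEnd(40)} $${(cents / 100).toFixed(2)}`;
    }

    function formatReceiptLine(desc: string, cents: number): string {
      return `${desc.padEnd(40)} $${(cents / 100).toFixed(2)}`;
    }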

- Use a tool from your toolbox

As much as you can, don't make/use new tools. The law of the instrument[0] can actually work to your benefit in engineering if you use it correctly. The fewer tools in your coding toolbox, the more proficient you'll be at using those tools. The bugs that arise from that limited set of tools are more predictable, and become easier to avoid and easier to diagnose. Your code ends up looking naturally more consistent when you're trying to treat every problem as if it's from the same set of problems (and in my experience, very few problems fall outside a very small set of problem types). The fewer unique problems you're solving, the less time you're spending learning how to use new tools. This effect compounds as time goes on, and ends up being incredibly powerful.

- Be even more explicit than you think you need to be

Implicit functionality is the beginning of the end of any codebase. If you think comments are necessary, the code is too implicit. If you have to ask the author what the code does, the code is too implicit. Code should make intuitive sense the way a great UI makes intuitive sense. Non-engineers with an understanding of your product should be able to traverse your folder structure and find something if they want to (most members of our team are able to locate and modify copy in email/notification/interface files without much trouble). But it's not for them, it's for you. Trying to remember what you did on a feature you built 6 months ago is borderline impossible, so don't try. Write it so that you don't have to, and that'll guarantee other engineers working on it at any time can dive in without talking to you about it first. A lot of time is saved in not having to bring yourself or anyone else up to speed on anything.
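
A trivial before/after of the kind of explicitness I mean (invented example):

    // Implicit: you have to read the body (or ask the author) to know
    // what "check" checks or what the magic numbers mean.
    function check(u: any): boolean {
      return u.s === 2 && Date.now() - u.t < 864e5;
    }

    // Explicit: reads like the sentence you'd use to describe it.
    enum UserStatus { Inactive = 1, Active = 2 }
    const ONE_DAY_MS = 24 * 60 * 60 * 1000;

    function isActiveUserSeenWithinLastDay(user: { status: UserStatus; lastSeenAt: number }): boolean {
      return user.status === UserStatus.Active && Date.now() - user.lastSeenAt < ONE_DAY_MS;
    }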

- "Think slow"[1] about everything

Your brain is going to want to "think fast" all the time. Brains weren't made to code, so they're really bad at knowing when to rely on instinct and when to consider something more thoroughly. It likes to think fast more than it likes to think slow, so you'll end up coding instinctually if you're not intentional about it. That will result in code that looks like the most significant project you worked on before this one, and that code probably doesn't make any sense in whatever you're coding right now. So you have to force yourself to think slow about literally every single little detail of the code you're writing at first. Every styling detail, every filename, every semicolon, literally everything. Consider it, make sure you understand exactly why you're doing it, and make sure it makes sense to you in this specific scenario. Be able to explain why you do everything the way you do.

This is immensely tiring up front; it feels impossibly unsustainable. And it would be, if you had to do it every time you wrote code - but you don't. Once you understand exactly why you're doing each thing you're doing, it's incredibly quick to check whether it still makes sense in each following scenario. Once you've gathered most of the reasons for why you write code the way you do, the process starts to fade into the background - you can start to "think fast" again. And even better, you've trained your brain to stop and "think slow" when you don't have an explicit reason for doing something a particular way, thus preventing any "relapse". It's locked in. Engineers who have this figured out can avoid writing unpredictable/difficult-to-debug code with clinical precision and save themselves days of pain per month.

I think this last step is actually the core of "having the right habits from the get-go". I think every great engineer can explain every tiny little nuance of why their code is the way it is. I don't think that comes from being a great engineer, I actually think that's how you become one in the first place.

My pet theory is that this list is part of how one becomes a so-called "10X engineer". You don't have to code 10X faster, you just have to use clever compounding tricks to spend 10X less time on the noise in between.

[0] https://en.wikipedia.org/wiki/Law_of_the_instrument

[1] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow


Thank you so much for getting back to me. Awesome reply.

I agree on the pragmatic vs dogmatic approach to "Don't repeat yourself". I have been playing with this a lot at my company right now. It's hard not to reach for an abstraction right away, but if the abstraction muddles the code / implementation, I find myself questioning its value.

I also love your comment "Code should make intuitive sense like a great UI makes intuitive sense". This forces the engineer to avoid being too fancy, and allows future devs to work on it without needing the original author (who might be on vacation or have moved on).

Kahneman's work is awesome, and I'd never really thought of it in the context of coding. We have been struggling at work with our Redux setup lately, because the developers haven't stopped to question why we were structuring things in a certain way, or how we were dispatching actions and handling async flow. We just kept building things and trying to move quickly. It can be tough when product is asking for things to get released and you are trying not to bikeshed / overcomplicate things. But a healthy dose of stepping back and rethinking why you are doing things is helpful.


I strongly agree, having been involved in companies where "move fast and break things" was taken too far and developer-years of effort were burned due to poor engineering investment. There is a time and a place for quickly hacking stuff together, but it has to be done with consideration: it is a short-term gain with a long-term cost. If you are constantly breaking things, then you never get to MOVE ON. Good code is an investment. Invest in it, and let it create value - and build on that value - while you move on to the next thing.


I agree with nearly everything you said except modularity - I don't find it's a marker of code quality. At the early stage I fail to see how to properly modularize; most of the time, features and customer requirements evolve so much between customers that I usually find it's more pain than gain for the rare times the initial design holds. In my experience (B2B/high-touch sales) I sometimes write a plain and simple hardcoded « if customer == "XYZ" » until the use case is refined and the market for the feature proven with at least a few paying customers.
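
In code it really is as dumb as this sketch (hypothetical numbers); the point is that the special case is greppable and trivially deletable once the feature is generalized or dropped:

    function invoiceDueDays(customer: { name: string }): number {
      // Deliberate hack: XYZ negotiated special payment terms. If a second
      // customer ever asks for this, promote it to a real setting; until
      // then one greppable `if` beats a premature configuration system.
      if (customer.name === "XYZ") return 60;
      return 30;
    }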


I think that you're fast at what you practice - if you usually write clean code, you get faster at writing clean code. And if you complain that writing clean code takes a long time and don't usually do it, then yes - it will take time, because you never practice doing it.


I agree - it isn't a real trade-off. I always hear people saying this and am puzzled by it. Indeed, writing "good" code - modular, clean, tested - is a matter of having good habits and acting on them. Sure, it technically takes "longer", but on the order of several hours more over the course of weeks. It isn't something I would even bring up with management - whether to do it or not is a non-question because the cost is so low.

What I do understand is that if you hire someone who doesn't have these habits already, or if you yourself don't have them, then it could take much longer than doing what you're used to. But that's on you.


> But would you have found product-market fit if you had been able to iterate faster, try more ideas out and didn't spend 50% of your time fighting fires? Almost definitely.

I wouldn't say definitely, but your chances would be way better.


Haha, yes, that's exactly what I meant to say, but messed up the wording. There are no guarantees, but you can improve your chances by making it easier to iterate.


This isn't a real tradeoff if you are/have good engineering. "You just have to have the right habits from the get-go" is a pretty big given :)


Can you suggest a way to understand and develop the right habits? Or any good method to improve code quality at a personal and team level? It would be really useful in my case.


I imagine you'll get comments here like "have good code hygiene" and "aim for good test coverage", which are not wrong. However, for me, what really stuck was learning directly from senior developers.

Anecdotally, having at least one senior developer on a team dramatically changes the long-term prospects of a project, even if they are not the one actually leading the project. I would be curious to hear others' experiences, to see if that generalizes.

(with the caveat that not all senior devs had good habits)


Unfortunately, being senior is not a good indicator of someone's ability to design and write good code. Yes, it should be. No, it isn't always, as my experience shows.

Now, if you are senior yourself, you can probably see that. But when you're a junior developer, it's very easy to mislead yourself by looking at the senior staff without questioning anything.

So I'd add this to your advice: yes, learn from senior developers by observing what they do, but then read more about the topic to see if they are doing the right thing. Also, try to find out what the other approaches and opinions are. Even if you don't agree with them, it's good to diversify your knowledge.


> aim for good test coverage

From the beginning? No. That's completely counterproductive.

The first thing you have to do is be sure that you are testing the correct thing. Only after that do you write your tests.

Specs come before tests, and for most problems you will need to write a lot of code before you get the specs down.


Is the correct thing that doesn't work correctly under some conditions still the correct thing? Users are polite and will not tell you about the annoying whack-a-mole bugs that keep cropping up...


For me, Clean Code was a bit of an eye-opener. I generally found following SOLID principles to be valuable.

It's important to understand, however, that principles are not unbreakable laws of the universe, but rather things that should guide you while also being subject to questioning.


They are also principles for the second or third pass. Get it working (rough draft, brainstorm) and then refactor.


I think this is the best answer.


What is good code? I prefer DDD.

But there's no substitute for a simple monolith - one project, one person - when it comes to quick development and iteration, if you know what you are doing, of course.

You'll get the disadvantages of this method as soon as someone joins the team.


Never forget that your startup customers aren't buying your code. They're paying for whatever the product does for them.

They don't care if the code is good or bad, as long as the app does what they need it to do and does it well.

So to answer your question: The code should be bad enough that it allows you to ship as fast as possible, but not so bad that the app doesn't work properly.

This can be a shock if you've been raised on a steady diet of HN posts and comments, Medium articles from opinionated and often highly critical programmers, and open-source projects that only accept the best quality code. No one likes to brag about writing proof-of-concept grade code, so you won't be hearing about it online or in public.

a) Yes, startups have failed because their product doesn't work properly or the product is full of bugs. However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices. You might be surprised at how hacky many early startup codebases are.

b) Regarding the calculation of code quality vs. feature velocity: when in doubt, consult the senior devs and your manager. Knowing when, where, and how to strike this trade-off is one of the defining features of being a senior developer, in my opinion. In most cases, it comes down to estimating the negative impact on future development. A core component that touches every part of the app should be more carefully designed than a single-use feature only 1% of your customers might ever use.

c) Regarding tests and clean code for V1: in short, the only thing that matters in the early stages is getting traction. Every day you spend writing tests or refactoring code to feel cleaner reduces your chances of getting that next funding round. In the early days, it's all about proving the concept and getting customers so you can grow the company. You can't grow the company without investors and/or customers, so that perfect code may be doing more harm than good in the early days.


"However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices."

Yes, they do.

The obvious one is one senior developer who writes a bunch of trash code to get stuff done in a hurry. Later is asked to maintain it and add features. But it's no fun, cause it's a pile of poo. New shiny attracts his attention and he moves on (cause, you know, he delivered at his current job!). New developers, including a new hire, try to pick it up and work with it. Warnings about runway loom. Support is swamped, and many of the tickets get kicked up to developers, because support can't answer them - they're obscure bugs. Most of the developers' time is spent trying to fix the worst bugs, but things just get worse, because each bug fix introduces new bugs - the codebase is well-nigh incomprehensible. Some developers see the writing on the wall and flee, leaving even more work for the remaining developers. No money for new hires. 3 months later, layoffs. 1 month later, closed. One poor guy is laid off 4 months after being hired.

Lather, rinse, repeat.

The upshot is crap code makes a crap product. Just like crap engineering makes a crap car. Customers do care about that. They'll get tired of the bugs and the infrequent updates and the poor support, and eventually they'll move on.


Having worked on the support side of this dynamic, mostly in problem and incident management, I have honestly seen 12 months of product roadmap completely blown up by stability issue after stability issue that our support staff could not possibly work around. It's really corrosive to the morale of support staff to be unable to do anything to help.

In my case this was a relatively mature company on a version 2 reimplementation of an existing product, so it wasn't a complete death sentence. It did lead to a complete housecleaning of the CTO and product management in the end, though. But the company sure didn't fail!


> In my case this was a relatively mature company on a version 2 reimplementation of an existing product, so it wasn't a complete death sentence.

OP specifically referred to startups, though, not mature companies. Most startups only have 12-24 months' runway to begin with, and lacking clean code/testing/best practices/etc won't (usually) kill them in that timeframe.

Lack of product-market fit, on the other hand, is one thing guaranteed to kill a startup in that first year or two (lack of financing is the other, but that is usually fixable if you've addressed product-market fit). It's why startups spend every single developer hour on rapid prototyping/iteration/features rather than refactoring, testing and stability.

It's not black-and-white, though. Some code is so bad that it costs time even in the very short term, because no one can figure out how it works. Some functionality is so central to the product that tests are needed to confirm it's actually functioning as intended.

There's a vast spectrum between "fully tested, clean, reusable code" and "held together with sticky tape and segfaults every other minute".

I'd say that the one key skill for a startup CTO is deciding where the team should be on that spectrum on any given day.


> The obvious one is one senior developer who writes a bunch of trash code to get stuff done in a hurry. Later is asked to maintain it...

There's some survivorship bias at play in this; it disregards all the startups that never reached the "later" point because they were too busy polishing the code.


Someone should publish a series on such startups. Intuitively it seems there might be some, but I haven't seen any articles naming names or describing details.


1) One SF Wi-Fi management software startup had a 50,000 LOC product with 250,000 lines of test code.

They seemed content, but obviously a lot of resources went into tests.

2) Many late-stage startups in SF spent a lot of time and effort perfecting CI/CD software (multiple years), or struggling with k8s in the early days (one year to finish one service).

3) Often, post-founder programmers these days have a lot of process to overcome before shipping. I know one startup that hired dozens of programmers, but the founder (alone) still writes most of the code.


We can debate whether struggling with k8s and CI/CD fits in the category. These are tooling, not product, but they're all part of how engineering resources get used. See the "choose boring technology" posts/articles.


> The upshot is crap code makes a crap product.

You took the sentence out of context. Obviously the product must work, must not have bugs, and must be straightforward enough to be maintainable.

If your code is so bad that the product doesn't work, then obviously no one is going to say you're doing the right thing.

I was speaking to walking the line between good-enough code and perfect-code, not advocating that people write code so bad that the product doesn't even work.


It seems similar to how people usually interpret MVP, which all too often is executed as an MP that lacks Viability.


Can you name a company that failed because of a bad codebase as the number one reason?

I feel like people walk through hypotheticals like that, but I've not heard people say "Company X failed because of that scenario."


Friendster imploded during its growth stage due to either bad code or not enough spending on servers.


+1


The software is not really doing what the customer wants if it does not work reliably. The company dies because the product they sell does not do what their customers want. If you buy a car, you care about the quality of the parts indirectly: no one wants a car that works 50% of the time. The same goes for software. So yes, poor code can kill the company, because the product does not work.

"However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices." Yes and no are both right, I suppose, depending on the viewpoint.


I've never worked on a project so bad I couldn't add features or fix bugs in it.

And I've worked on a project that literally saved the HTML of the page into a varchar, had 15,000 lines of JavaScript, and kept all the business logic in awful stored procedures.

Whatever speed/quality trade-offs a good engineer makes will be worth it while a startup is still in the "I don't know if this company will exist in 6 months" phase.


Well, when you can't deliver a working product to clients because your code is so bad, it really does matter. And the thing is, any half-decent developer is not going to code that way, even on hackathon timescales.


Jeff Bezos seems to disagree with you. Ask people about the first 10 years of Amazon. Mark Zuckerberg also disagrees.


Last I heard, Amazon does a 25-way join on their customer tables to print a shipping label. Now that's quite an albatross for developers and DBAs to carry around!

Mark was a perfectionist about releasing a working product. That doesn't mean it was good from a programming standpoint, but it was tested and always worked well.

eBay is the poster child for shitty software. At one point their CTO was requesting one Sun E10k per month ($2 million each fully populated) because of how slow and leaky their Windows code was. The board said no and they had to fix it.


Wouldn't this just be a classic case of survivorship bias? For every Amazon and Facebook there could be a number of companies that did similar things and fell apart.


Good code quality doesn’t guarantee success. However, those two data points indicate it may be necessary if not sufficient. There are only 4-5 tech companies operating at that scale with proven business models. Google and Microsoft being the primary others.


"The code should be bad enough that it allows you to ship as fast as possible" - I agree with the sentiment here, but not the expression. Speed and "validation" is obviously the outcome, and the ends should justify the means. But to say that fast must necessarily equal "bad" I'm not really sold on.

Coupling is an obvious example. If you want to retain agility in your product, the various components inside should be pretty loosely coupled and have restricted domains of responsibility. When you find out the product needs to change - massively - you already have a lot of the bricks you need to build the new product.

On the other hand, it's clearly wrong (in my experience) to go full-bore microservices out of the gate, because you end up spending a lot of time on wiring and infrastructure.

So for me, the question isn't "what's the minimum level of quality?", because at every stage the quality should in practice be pretty good. I know people say "be embarrassed by the first version you put out" - but people also say to be obsessed with the product/problem. Product teams with high standards (in general, but including code) tend to be more successful. If quality work starts taking up substantial effort, that's a pointer to other decisions (choices of technology, for example) being wrong. Again, all IME, YMMV - I don't think there's a clear rule of thumb that applies to all startups.


> But I'm not really sold on the idea that fast must necessarily equal "bad".

In a perfect world, you'd make all of your decisions along the Pareto frontier of speed-vs-quality tradeoffs.

In the real world, you don't have perfect engineers, and the Pareto frontier isn't immediately obvious.

Generally speaking, the more experienced engineers are better at getting closer to that Pareto frontier of speed-quality tradeoffs from the first iteration. It's the less experienced engineers who end up somewhere less efficient on the 2D spectrum of speed/quality. This is where experience pays off the most.


I'm not sure about that either, tbh. Once you're managing a successful product and have a well-defined value prop, sure - Pareto efficiency is interesting. Before that point, I don't think it is - at least, not in any meaningful way.

When you're building out a product and figuring out where it fits and what it should do, you want to retain a lot of optionality. If you don't have the ability to change the product quickly, in an agile fashion, your development process may well end up being more Pareto-efficient overall, but your potential customer base will be much smaller. Better to be less efficient and grow a much bigger base.


I think the Pareto curve still covers that idea. You want to be farther along the curve towards speed.

Also, you may be thinking in terms of a different optimality criterion for quality, which as the above poster noted may be unclear - e.g. a lot of effort in one direction that seems like the best decision in the short term but actually hinders needed flexibility in the long term. That just means the short-term improvement wasn't really towards the optimum after all.


In the beginning, architecture tends to be a form of procrastination for many people. IMO, when you have clear goals for improvement you should aim for somewhat worse than acceptable quality - not because that's the most efficient option, but because it forces you to keep solving customer problems.

That itch to improve things is better focused when you're revising the worst parts of the entire project, rather than having various pieces be decent while some horrific bits remain in the codebase - because those horrific bits end up eventually infecting everything else.

This changes as a project grows to the point where some code is old enough that you don't remember all the details. At that point you need to use best practices simply to make reasonable progress.


If you only adopt good practices at the end, it is too late - you have the anchor of the old, unmaintained and untested code around your neck. Additionally, engineers who know they are doing crap work are generally less motivated.

I'd argue that the extra time spent writing tests is balanced by less time debugging. And the moment you want to re-use the code, you know it already works in all the contexts the tests use it in - in fact, it was already designed for re-use, because both the tests and the application use it - so you avoid major refactoring and instead just assemble your Lego blocks into a different shape.


It's never too late; in the absolute worst case you bin some or all of the existing code and start over with very clear goals and an understanding of any needed external APIs etc. Even better, you can almost always extract plenty of useful bits here and there. On the other hand, if you start every project thinking "I need test cases, separation of concerns, continuous integration, scaling, etc.", you're working without a clear understanding of the end goal or how you're getting there. Find out some tool you're depending on is insufficient, and you may need to toss most of what you've been working on down the drain with nothing to show for it.

Jumping right into whatever is the most difficult problem also means your model of the best architecture is likely to evolve as you're working on the problem.

Granted this should all be on a sliding scale depending on how novel the project is.


I often feel like I should write a "things you can skimp on, and things you can't" guide for startups. There are some things that, done right, will immediately start paying dividends. For example, I worked at a startup that didn't take the small amount of time needed to invest in automated deployment. There was one guy who knew how to manually deploy things, with the only copy of the correct prod configs on his machine. And since there were no standards for deployment, the 5 different sub-apps/services were each deployed differently. They could have literally spent less than a day building a proper CI/CD setup, which would have immediately made all their deployments painless and immediate. Instead, when I got there, it took many months to finally get CI/CD in place, because (a) everything was always on fire, so it was difficult to carve out the time to do it, and (b) retrofitting all the different systems to use one way of deployment meant retrofitting 5 systems.

More importantly, my time at that startup was some of the most stressful of my life - and for what, someone else's shitty code? I refuse to be a "code janitor" anymore. Of course all systems build up tech debt over time, but it's not that hard to see which systems were built with a modicum of forethought and which were just slapped together. Imagine if people building a house got a group together, gave everyone some tools and some lumber and said "go!" instead of actually creating a design, laying a foundation, etc. That's what a lot (not all!) of startup code is like, and I for one refuse to touch it from now on.


> However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices.

Startups do fail due to a failure to execute. Best practices aren't always about aesthetics. Ignore them too much and you end up with write-only code. You can compensate in various ways, such as designing the code so that it's possible to throw away the ugly parts later.

You are correct that good senior engineers are adept at striking this balance. I do not agree that the typical senior engineer is competent at this.


Right, which is why my sentence right before the one you quoted emphasized that the product must work properly and not be full of bugs. I considered combining them into one run-on sentence so people wouldn't be tempted to take it out of context.

Obviously, the product must work, must not be buggy, and must be maintainable enough to move forward.

However, there's nothing wrong with what you call write-only code in certain circumstances, especially at a startup. I'd much rather have some hacked-together, single-use code that lets us prove demand before investing resources in a proper rewrite. The project goals will likely change after first contact with the customer anyway.

One of the biggest time wasters I’ve seen at startups is engineers who lose track of the goal and instead start writing reusable frameworks, or over-generalized code, or separating architecture into modules so they can open-source part of it on GitHub.

Startups are trying to prove a business model as fast as possible. Once you prove demand, you scale it with the proper code backing it up.


Curious - when you've worked with write-only code, what was the language? I've been on around 30 different projects over the last 10 years and never encountered write-only C#. But I have come pretty close to encountering write-only JavaScript.

It seemed like static typing, "find all references" and the limited use of reflection-style coding made the difference between bad code being annoying and it being completely unworkable.


I second this rather strongly. The harder it gets to "stop" and clean up shit, the quicker it gets to just lose it - especially in environments where the authors of the write-only codebase are not around anymore. Somehow it is hard to blame/hate someone you go out with for lunch or beers.

What really worked great for my teams in the past is to game the transition so that the team as a whole sees it as an internalized mission, usually much stronger than that of the company. Then killing X lines of code from the old mess and beautifying the codebase becomes a shared and extremely satisfying exercise


I've seen write-only Java where the design was fragmented over so many classes that nobody could really make a meaningful design change anymore. Classes with dozens of methods, gross violations of SOLID principles, etc. It seems entirely possible to do the same in C#.

That being said, languages in that space seem generally less prone to write-only code, so I'd agree that language choice has a big impact. Especially when an entirely novel language or framework enters the picture.


Regarding b): bad code quality tends to lead to bad feature velocity fast, and it gets worse over time. Maintainability and extensibility are major factors to consider when making design and architecture decisions.


Something I've been thinking about a lot lately. Bad code has a real-world business cost, and that cost is velocity.

Suddenly you will see you can't just scale the team up, because it takes new team members a long time to grok the codebase and work within it.

Similarly, a disconnect develops between business and engineering, where building features that appear similar to existing features still takes a long time, because the existing implementations were never built to be extendable or reusable.

While all this is happening, the team is busy working around the codebase, not with it, to the point where they can't apply solid refinement to the end product. Adequate time for bug fixing/testing is not allotted, since the team is fighting the codebase just to get features out. Your product ends up lacking quality.

Developer quality of life goes down, which, believe it or not, impacts velocity. Suddenly it's not pleasant for the team to casually peruse the codebase. Suddenly those time estimates are all in the mid-to-high range; the team can't get a breather from the cognitive overload.

Finally, those little tweaks business wants to make become the stuff of nightmares. Stand-up becomes a call for mercy - "Making that change is not straightforward" - a constant compromise - "Can we just settle for this alternative instead?".

A shitty codebase has an outsized cost for sure.


B-b-b-ut this is diametrically opposed to other axioms I follow, like "fail fast", and "pivot hard", and "agility"

Are there more colors on the greyscale than 000 and fff?


> However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices.

Could they fail because they miss deadlines due to convoluted or buggy code, or because they have to invest more money and time to fix the issues and bugs it introduced?


Yes, totally.

They could also fail because the code/architecture is overengineered, because of not-invented-here syndrome, or because of premature optimization.

In the beginning, being able to adapt and/or pivot quickly might be much more important than having 100% test coverage, keeping up with the latest JS framework or being able to scale to 100M users.


Unit tests are invaluable for writing difficult features more quickly. You can make sure that existing behaviour is not broken when you fix a bug or add more functionality. Every day you spend fixing bugs because you didn't want to write proper unit tests reduces your chances of getting that next funding round. If you write trivial code, the tests are obviously superfluous - but then you should ask yourself what the added value of your product is, if you only write trivial code.


It's interesting to work for mature companies and see something resembling the Big Bang, where you can peer into deep, distant code that is of a very different nature to recent code. That distant code, hacked together in startup mode, does what the heck it wants and is ugly as f. It might even be in VB!


Let me chime in with my (personal) opinions, having worked at what was at one point one of the fastest-growing startups - Uber - along with the details I gathered from the early times. While today Uber is big on code quality, engineering best practices, reliability and many other things, as much as we engineers want to take credit for the success of the business via code quality: they are pretty unrelated.

Few people know that when Uber started and the first $1M was raised, the apps were built by contractors. The app was bad, the code terrible - but even with a bad app, customers used it over taxis, which didn't even have a bad app. The business took off, the next round of funding came, as did the first few full-time engineers.

The first thing the full-timers did was throw away the mess of a codebase and rewrite the app. However, moving fast was still more important than quality. Launching in a new city needed to be done in a few weeks - if the ops team could mobilize a whole city in that time, engineering was expected to move just as fast. So while generally forward-looking decisions were made, many, many shortcuts were still taken, most notably "The Ping": the backend sending all state data to the client in a massive JSON object, every 10 seconds. This was to speed up development, by not having to make backward-compatible state changes all the time. It's something I'd cringe at today, but it did help us move fast, at the expense of loose contracts and lots of avoidable bandwidth usage.
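
For illustration, the shape of that kind of shortcut (a sketch, not Uber's actual schema): one fat snapshot pushed on a timer, instead of granular, versioned deltas:

    // The "ping" approach: serialize everything the client might need and
    // push it every N seconds. Wasteful and loosely contracted, but the
    // backend never has to ship a backward-compatible delta.
    interface StatePing {
      rider: unknown;           // whole rider profile
      trip: unknown;            // whole current-trip state
      nearbyDrivers: unknown[]; // everything, every time
      pricing: unknown;
    }

    declare function buildFullSnapshot(): Promise<StatePing>; // hypothetical
    declare function pushToClient(json: string): void;        // hypothetical

    setInterval(async () => {
      pushToClient(JSON.stringify(await buildFullSnapshot()));
    }, 10_000);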

As the business proved to be successful, in year 3 or 4, reliability and quality started to be more of a focus: things like tests, linting, architecture, rollout best practices, and so on. A big push happened when, in year 4 or 5 (I can't remember exactly), a sloppy change almost took down all of Uber's core systems at rush hour. But for the first few years, quality took a relative back seat. Was it worth it? I'd definitely say so. As another commenter noted, the customers of a startup do not buy code quality: they buy something that meets their needs and is good enough.

When a startup becomes wildly successful, you'll have the funds to pay off tech debt. Until then just make sure it doesn't suffocate you - otherwise pile it on, and move fast.


I think it's worth noting that Uber has rarely gone down (I can't even remember one example, though I'm sure it's happened). While I'm absolutely sure some parts were downright horrifying at times (we've all been there), someone clearly had a good idea of how to make tradeoffs for development speed without compromising the core bits so much that they couldn't keep up with the rapidly increasing usage.

Huge difference between something like "The Ping" and deciding not to rewrite that original contractor code.


What a brilliant real world example with great advice (not sarcastic) - thanks for sharing. I’ve often wondered what codebases are really like at hot startups so this was fascinating to read.


> When a startup becomes wildly successful, you'll have the funds to pay off tech debt.

I'm against holding this view. Sure, if (and that's a big "if") your startup becomes wildly successful, then you'll be able to fix everything or rebuild your app from scratch with an army of senior developers backed by millions of dollars of venture capital. However, the path that Uber and the other tech giants took is the exception; 99% of startups won't experience that.

With that said, I believe a lot of medium/large-sized, economically viable startups can benefit from adopting a more balanced approach regarding code quality, instead of cargo-culting tech giants like Uber and friends.

Nice story though, thanks for sharing.


Code quality is all about risk management. You're balancing the risks of:

1) Bugs/outages that affect your customers

2) Hard to grok code that slows down onboarding of new staff

3) Features taking longer to develop

How you weigh these risks is different from business to business. For a fintech startup, a bug in the code could end up bankrupting the company. For a VC-backed social network, being able to quickly onboard new hires is really important. For an app that supports, say, BLM protestors, time-to-market is everything.

In the great scheme of things, having a crappy codebase that makes money is a good problem to have.


Special case of 1: security bugs. If your app's audience is BLM protestors and it leaks their personal information somewhere, it would be better (both for the world in a moral sense and for your business in a pecuniary sense) not to release it at all until it doesn't.


Ideally an app targeting protesters wouldn't collect personal information to begin with.


There are a lot of ways it could do so unintentionally - for instance, if it captures photos, it needs to scrub metadata like geolocation, and it needs to let you black out portions of the photos before uploading them. But yes.
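
For the metadata part, one browser-side sketch (assuming a web client): re-encoding through a canvas yields a brand-new file with no EXIF block, so GPS tags are gone; blacking out regions would just be extra drawing calls on the same canvas.

    async function scrubPhoto(file: File): Promise<Blob> {
      // Decode, redraw, re-encode: the output JPEG carries none of the
      // original's EXIF metadata (including geolocation).
      const bitmap = await createImageBitmap(file);
      const canvas = document.createElement("canvas");
      canvas.width = bitmap.width;
      canvas.height = bitmap.height;
      canvas.getContext("2d")!.drawImage(bitmap, 0, 0);
      return new Promise((resolve, reject) =>
        canvas.toBlob((b) => (b ? resolve(b) : reject(new Error("encode failed"))), "image/jpeg", 0.9)
      );
    }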


> What is the best calculation to make when trading off code quality vs features?

> Do most YC startups write tests and try to write cleanish code in V1, or does none of this matter?

It only matters when bad code hurts your overall business velocity - what that means, only you can answer.

Nobody writes tests for purist aesthetics; they're there to let you go faster - but there's an up-front cost you have to pay for them. Sometimes that's worth paying, sometimes the land grab is more important.

There's no single answer to this question.


Tend to agree. Leadership needs to send strong, clear signals about quality and acknowledge the existence of potential technical debt well before the team starts feeling crushed by it.


Here's the thing: I know of at least one company that made it big starting with a steaming pile of technical garbage. They used their users as a QA department. The business logic was all in stored procedures in the DB, which was always on fire as a result, and the front-end was bad PHP with so much indentation that the rendered page was 25% blank space!

Yet their users loved them, the numbers went up and up, investors lined up to take the founders to dinner and vie for the chance to pound millions of dollars up their asses. It was crazy. They built a half-pipe in the office, you know, for skateboards. They became a household name and IPO'd a few years ago.

The point is this all happened despite their garbage architecture and crappy code. Yet it would have all been much easier and cheaper to do it right the first time.

(Word to the wise, the founder/CEO wound up crying at his desk as the investors wrested the company from him. "Be careful what you wish for: you might get it.")


Thanks for sharing, fascinating to read


One critically important question here, I think: has the startup achieved product/market fit, or gotten a strong market signal that it was building the right thing?

If the answer is "no", then the tolerance for bad code goes way up.

Either way, in the early stages of a startup, a great deal of the code will end up being throwaway, and the trick is sometimes knowing which things are important to get right upfront, and which things can be punted on.

Well-defined service boundaries help a lot. This doesn't mean going to microservices, but it does mean keeping things well-isolated and independent even in the same codebase. In effect, you can have "well-architected bad code" which will help you stay flexible even as you move quickly.
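
Concretely (a sketch, not a prescription): keep the boundary as an interface in the same repo, and let the implementation behind it be as quick-and-dirty as you like:

    // billing.ts - the only module the rest of the app may import from.
    export interface Billing {
      charge(userId: string, cents: number): Promise<void>;
    }

    // The implementation behind the boundary can be ugly, hardcoded and
    // untested; as long as callers only see the interface, replacing it
    // later won't ripple through the app.
    export const billing: Billing = {
      async charge(userId, cents) {
        // TODO: real payment provider; for now, log and trust the demo gods.
        console.log(`pretend-charging ${userId} ${cents} cents`);
      },
    };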


My experience as a technical founder:

a) The codebase will always be "bad". There will always be things that need improving, testing, fixing, enhancing, revisiting.

b) The optimization function for code quality vs features has "user adoption" as one of its main inputs. If you are going to spend time and money working on code quality, but you have no users, then what's the point? Are users interested but leaving your product because of bad quality? Then patch the code to bring it just above the threshold of usability and maintainability - but no more. That may seem controversial, but you need to optimize your resources (time + money) in a startup.

c) I don't know anything about YC startups. But I can tell you that writing tests and writing bad code are not mutually exclusive. Having said that, you should think about testing - always.

Having been at this startup for 3+ years now, I can tell you we've gone through 3-4 iterations of the same thing already, each time designing for more scale, more developers and more users. So I think that may just be par for the course.


I feel it depends on what you mean by "bad code".

We went through a very tough journey ourselves. When I started the company, I wanted us to just use out-of-the-box Rails. But some senior devs disagreed - we had huge arguments about it. We ended up spending months building a complex SOA, only to find 3 years later that it wasn't a great implementation, and rewriting it (now it's even more complex). Meanwhile, Shopify and others seem to be happily still using mostly stock Rails. And we're in a tough spot where finding developers who can work and be productive with our NIH stack is quite challenging.

I agree with what the others are saying here. Customers aren't buying our code; they are buying our product/service. Code should not be "bad" (i.e. there should be tests, etc.), but as a startup, I think velocity is more important, and we just have to weigh that. We can hack stuff together temporarily to ship or run experiments, but we have to deal with the debt if we keep it around.

If I had the opportunity to start all over again, I would:

- Stick to well-known frameworks. Use "boring" tech.

- Outsource as much as possible first and don't reinvent the wheel, e.g. don't write your own subscriptions/billing, just use Stripe/Braintree/Recurly/Chargebee; use Algolia (don't run your own Elastic) for search; etc. Move fast until you've figured out product/market fit, then optimize for costs.

- Stand your ground on rejecting NIH. Devs will complain because they want space to learn, try new tech, do NIH things (I want to hack stuff too!). IMO it's those NIH things that are often called "bad code" - they're not "bad", they were just written in a short amount of time to solve immediate problems, and they often don't account for all the strange edge cases.


Unless your startup is building some new tech product (like a new database technology, or a crypto/blockchain system) that is going to be sold to customers, software code quality doesn't matter much - at least initially for sure, and maybe not ever.

More likely, you are a startup building a software application or service that is augmenting, automating or orchestrating some real-world interaction (like e-commerce shopping or supply chain systems); then you care most about getting your product-market fit figured out.

What this means is testing your understanding of the potential customer's needs, selling your product's value to those customers (switching them from their existing way of life to your way of life), and figuring out the business model (what costs are you optimizing, how much it costs you to run it your way, who will pay for it, whether you can cross-subsidize something, how your business scales, at what scale it becomes viable, at what point you make profits, etc.).

This usually requires a lot of experimentation and product iteration. For this, you need very high feature-development productivity with very low cost for getting experiments wrong. For the past half decade, the way to achieve this has been to build no IaaS/PaaS stuff in-house and to use public cloud platforms instead.

Today a new movement is happening: the #lesscode or #nocode movement. You use frameworks and rapid application development tools that let you write very little or no code to create your applications and iterate quickly with minimal software engineering skill. This lets a startup go very far on very little burn while hunting for product-market fit.

Once you know you have a good product that is on the cusp of scaling, you can revisit your choices and figure out how to optimise costs through in-house software development. The bar for what makes sense to build in-house rises every year.


It depends how bad it is. I inherited a codebase that had wildly inconsistent data in its (schemaless) database, because there was little or no input validation. When I joined, the dev team was spending 50% of their time fighting fires and dealing with bugs reported by customers. This was all justified by "move fast and break things", but the reality was that the code quality issues were massively slowing down feature development.


Disclaimer: I've never worked at a startup. However...

Tech debt is like any other kind of debt: a way to increase leverage.

Some tech debt is like a mortgage. You get significant value, immediately, and can keep the payments manageable.

Some tech debt is like a payday loan. You get ahead by days, but behind by weeks.

Some tech debt is like margin trading. You make an educated bet about the future and if you're right, you've multiplied your success, but if you're wrong you've multiplied your failure.

There's a time and a place for each kind of debt, but taking on debt haphazardly can land you in a situation where you must choose between putting an inordinate amount of effort into paying the "interest", declaring bankruptcy, or risking a visit from the "repo agent" when you least expect it.

(And note that even "tech bankruptcy" isn't necessarily a bad thing, if you can do so in a way that limits the blast radius.)


Great answer along the same lines I myself look at tech debt as well.

Another important thing to keep in mind: while you can leverage tech debt to move the business forward all you want, be extremely aware of it and reduce it before it sinks you. It's very easy to develop a belief that "this has worked for 4 years, so it's solid and doesn't need to be looked at anymore" when in fact you could be three months from a total collapse of the system, because some aspect of the system/business started gaining traction non-linearly.

PS: I have worked at very large, medium and small companies that grew big. I haven't worked at a failed startup so far, so there's a bit of selection bias in my opinion.


Heyo,

I've worked in very small and very large companies, though never owned a startup myself.

A few things I have seen and experienced, personally or from close friends:

* It's OK not to be scalable from day 1, as long as you're not yet certain who your customer is, because you'll likely have to shift left, right and center, and scalability work might slow you down. But keep in mind that it will become an objective at some point.

* Your code should be reliable and high quality enough that you can refactor it fast and without headaches. I have lived through situations where a change in one part of the application created bugs somewhere completely different. I've also been in places where tests were forbidden (bugs never strike twice in the same spot, RIGHT?!). Not having tests will f* you hard, because you soon won't be able to move without breaking stuff, and because you won't be able to easily expand your team.

* Tangential to 1 and 2: do try to keep abstraction layers in place. That will make your life easier.

* You shouldn't be afraid to let new employees into the code, or to deploy. Otherwise you're a liability.

* Security is a tough one. It'll never be good enough, and it's usually a cost more than a revenue... Make sure all of your customers' data is safe, though; that should be the hard limit, because if you're successful and get hacked you might never recover from it.

I have seen a brilliant company that had a nice business model go down not because the code was not high quality, but because lack of tests and lack of design abstractions made every step of the way 100 times harder a few years down the line.

You seem to have a pretty good idea where you're going already :).

All in all you want to move as fast as possible while making sure you're not creating the shit of tomorrow. So if you write crap for whatever reason, make sure it's contained :).


> * Your code should be reliable and high quality enough that you can refactor it fast and without headaches. I have lived through situations where a change in one part of the application created bugs somewhere completely different. I've also been in places where tests were forbidden (bugs never strike twice in the same spot, RIGHT?!). Not having tests will f* you hard, because you soon won't be able to move without breaking stuff, and because you won't be able to easily expand your team.

A cost of sloppy code and move-fast practices and attitudes that isn't well accounted for in most places, I think, is that it makes it harder to add people to the project and get them contributing effectively. New hires, contractors, agencies: all will be less effective, for longer. This factor gets much worse the longer you operate in that mode and the more sloppy code goes to prod.

> I have seen a brilliant company that had a nice business model go down not because the code was not high quality, but because lack of tests and lack of design abstractions made every step of the way 100 times harder a few years down the line.

I suspect the "tech choices don't kill companies" wisdom is actually BS and it does happen often enough to worry about, it just doesn't often look like that's what killed them.


> A cost of sloppy code and move-fast practices and attitudes that isn't well accounted for in most places, I think, is that it makes it harder to add people to the project and get them contributing effectively. New hires, contractors, agencies: all will be less effective, for longer. This factor gets much worse the longer you operate in that mode and the more sloppy code goes to prod.

Yes, definitely! I have seen VERY FEW startups that feel at ease getting new people onboard the codebase. But as soon as your business model is validated, that's what will most likely happen, so you'd better be ready for it.

> I suspect the "tech choices don't kill companies" wisdom is actually BS and it does happen often enough to worry about, it just doesn't often look like that's what killed them.

I don't know about that. Not that I disagree; I really just don't know. In that specific case, though, it seems related (if in exactly the opposite way). They had essentially rebuilt everything: their own SOAP layer, their own XML parser, their own UI framework... That was OK when the company was created, because there were no alternatives. But they never made the move to mainstream solutions when those appeared. Wait a few years, and what takes you a day of work takes 30 minutes in other startups, given the current state of OSS.


Security as in being able to log in with any JWT you cared to create... just change the email address to anyone using the app and you're them, with no signature checked whatsoever :-/
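For anyone unfamiliar with this bug class, here's a minimal sketch of the difference, assuming a Node/TypeScript stack with the jsonwebtoken package (the function names and payload shape are invented for illustration). Decoding a JWT is not the same as verifying it:

  import jwt from "jsonwebtoken";

  // Broken: jwt.decode() only base64-decodes the payload; it does NOT
  // check the signature, so anyone can mint {"email": "victim@example.com"}.
  function whoAmIBroken(token: string): string | undefined {
    const payload = jwt.decode(token) as { email?: string } | null;
    return payload?.email; // trusted without any verification
  }

  // Checked: jwt.verify() validates the signature against the server's
  // secret and throws if the token was tampered with or has expired.
  function whoAmIChecked(token: string): string {
    const payload = jwt.verify(token, process.env.JWT_SECRET!) as { email: string };
    return payload.email;
  }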


Are you referring to something specific that I'm not aware of? :)


All I’m saying is there are levels of bad... I suppose this is covered by your comment! But seriously :-)


Managing technical debt is always a trade off. The company I work at is failing at it. We:

* Bootstrapped a startup, left ourselves tons of tech debt

* Glommed as many features onto the core product as possible to meet enterprise needs

* Got a ton of MRR and are the leader in our corner of the industry

* Never pivoted to being a mature company, never paid off the debt. Now the bugs are pretty unmanageable and the software is too complex. It’s hard enough keeping the service afloat, let alone adding new features.

* About 50% of our customers try the software and churn out within six months. Our client industry is only so big, and we’re actively pissing off a huge chunk of it.

* Now we have a PR problem. Industry people leave us bad Google reviews, which our company owners can usually get deleted. They also warn people in industry Facebook groups not to try our product.

If you don’t pay off tech debt eventually, it will catch up with you in lost growth.


About 50% of our customers try the software and churn out within six months. Our client industry is only so big, and we’re actively pissing off a huge chunk of it.

What that means is that you haven't yet found your product-market fit. Better not scale up, or you will burn.

Check "sell more faster" by Amos.


There is an easy rule: good enough is always good enough.

41-year-old developer here, who has worked on various projects ranging from solo efforts to ~50-person teams.

If you want to move fast, you have to hack stuff together. That is exactly what your CEO did.

In the end it all depends on your project. If you make a game, let your users find the bugs. If you make life-critical software, you'd better have some rigorous tests in place. A 1-person project can be really messy; a 5-person project can't.

Don't put effort into code that might be thrown away.

Most things are an investment, so always question how fast you get the ROI. It's always a balancing act.

But in the end, it always comes down to the same question: is it good enough? If yes, continue. If no, make the investment and move to the next level.


At the risk of sounding like an HN pedant, your questions are backwards. To make sound engineering trade-offs you need to understand the problem. I would start with questions like the following:

1.) What market are you targeting and what are the overall user expectations for features, quality, reliability, etc.?

2.) What is the minimum viable feature set (i.e., product) to get into that market?

3.) Is it more important to be fast to market or the best to market?

Products are built iteratively. Even if it's OK to deliver on the fast and crappy model you still need a path to fix things incrementally. This applies to just about every product I've ever seen.


a) Almost every startup I know of that failed, failed because of business reasons, not tech. Even when it was tech, the reasons were delays in feature delivery and production issues, not code maintainability or tech debt. Some companies paid dearly later on to fix tech debt, but if they hadn't moved fast in the first place, they wouldn't have had customers to lose.

b) This really depends on having a combo of a product manager who appreciates technology and an engineering manager or CTO who appreciates business. You have to weigh the benefit of shipping feature X now vs. later, in favor of tech debt T. Both sides need to be honest about the consequences of delaying X or T.

c) Not a YC startup, but always try to write tests and good code. Never abandon it. But in the early days when you're trying to gain traction, don't feel bad about having to compromise on them during crunch times (which is most of the time).


> a) Almost every startup I know of that failed, failed because of business reasons, not tech. Even when it was tech, the reasons were delays in feature delivery and production issues, not code maintainability or tech debt. Some companies paid dearly later on to fix tech debt, but if they hadn't moved fast in the first place, they wouldn't have had customers to lose.

Delays in feature delivery and production issues are usually symptoms of poor code maintainability and high tech debt. It's a really difficult balance to strike, but it's worth tackling low-hanging fruit as you work on the code, and introduce good practices for new features as you keep going if it doesn't impair development too much.


I know I might be in the minority here, but tech problems that are already affecting users (even indirectly, in the form of missing features) I tend to consider more than tech debt. The payment has already come due. If there is real-world impact beyond mere standards and best-practice compliance, we have to fix it right away.


Not really. Usually what happens is that the guy bringing in the investors overpromises, and the feature isn't possible given the resources the startup has.


> Almost every startup I know of that failed, failed because of business reasons, not tech.

My experience is that the business side promised novel products to investors and customers, but the engineers could not deliver miracles with the time and resources available.

Is that a failure of business or tech?

Well, I have seen others where there was just no market for the product, like a paid service on the web where users expect it to be free of charge. I guess that qualifies as a business failure.


> Is that a failure of business or tech?

See my point (b).


My advice is to not worry too much, but to follow the boy scout rule: leave code in slightly better shape than you found it. Code quality matters to no one - not even other developers - if the code in question is never revisited. The boy scout rule helps ensure that code that doesn't need to be good doesn't have time wasted on making it better, and code that needs to be higher quality naturally becomes higher quality.


This is fine advice, but it's really advice for an individual more than an org, and if you are in leadership you need to consider whether your teams and processes are set up in a way that both leaves room for your engineers to do this and also ideally even actively encourages their doing so.

To phrase it another way: Each engineer might have the best of intentions but if success is measured by new feature velocity, adherence to this rule becomes less likely.


I once worked as a PHP developer for a small e-commerce company in India. The code written by earlier engineers was so poor that it took me 3 weeks to understand how the whole site worked.

The variable names were random Bollywood movie names, there were no classes or functions, everything was hand-coded in core PHP, and it was too complex to add new code.


It depends on the definitions of the terms used.

Strictly following MISRA-like guidelines while developing web SaaS? Spending a day orchestrating mocks of this and that service so you can test some trivial class and tick the 100% code coverage box?

I think the code quality of individual methods matters less than the quality of the overall architecture. That requires some design planning and regular refactoring, which I imagine can hold you back from delivering in a tight time frame. It pays off in the longer term, I'm not debating that, but before it pays off it may be too late.


Great qs!

a) A really bad codebase like the one you're describing hasn't been the root cause of any failure that I've seen, and I have seen several companies recover from it and become very successful. Unfortunately what you're describing may be a symptom of a different root cause (poor judgment around what matters most to the business), and that can def kill you.

b) These things don't trade off against one another directly. Code quality helps feature velocity. In the early days the only thing that matters is getting product/market fit as that's an event horizon beyond which the future is unknowable; the way to get there is to iterate fast, which does require things like CI/CD and a coherent/non-spaghetti structure (even if the code itself is ugly).

c) I've seen both modes. My main view: what matters most pre-product/market fit is rapid iteration (see above). Once p/m fit has been achieved you need to be able to add features rapidly, which requires a different level of code quality (comprehensive tests, etc). There's no hard and fast rule here, but most products ultimately throw away most of their pre-product/market fit code within 1-2 years of scaling.

I actually recently wrote a blog post that touches on a lot of this here: https://staysaasy.com/engineering/2020/05/25/engineering-at-...

Good luck with whatever you're up to, whether at this company or elsewhere!


To me, the single biggest distinction between working in a startup versus a traditional organization is that all your work has immediate impact. There are points in the lifetime of a startup where code quality and test coverage have immediate impact, and other points where they don't. In a startup you have to learn how to budget your time and effort to create the most immediate impact to further the mission, so if code quality/testing aligns with that then it makes sense to spend time and effort to do it.

I've worked at a few startups and I'll give you some examples where it matters and where it doesn't.

I was hired as employee #4 at a stealth startup that would turn into a zombie and I was the last non-founder to leave when our runway ran out. At no point in my time there did code quality or test coverage matter even a little bit - our biggest problem was convincing people to pay us, which was particularly difficult because our value to the people we wanted to pay us was intangible (at least to them). This was why the startup failed, we tried to sell to the wrong people for too long (ie, misaligned our values with what our target market actually valued).

Code quality didn't matter because we essentially strung up demo after demo in different contexts, the core technology was basically finished within a few months of founding, and the rest of us worked to put it into different contexts to show people what they could do with it. Those demos would never reach production, and most of them had a single developer. Who cares if there were no tests or it was all spaghetti? We were just trying to show off.

I'm currently an early employee at another startup and spent a lot of time over the last six months developing CI/CD infrastructure; we're going to make a major push for testing/benchmarking coverage in the next month or so. The reason is that it has a tangible and immediate impact on our business, because it directly affects our value proposition.

So to answer your question: it depends. It all matters when it affects the bottom line, because code quality/testing doesn't make you money; it just costs you less money in the future. There is a very definite stage in the life of a startup where that matters, and as a developer in the org you have to budget your time to commit to it when it does.


Startups tend either to do many things right, or many things wrong. If doing x right were a coin flip, the distribution would be a bell curve peaking at doing half of them right. But it is not a coin flip.[0]

Code can be decently sloppy, but there's likely a strong correlation between good code and good startups. Not because the code made them a good startup, but because the good startups are good at most things.

Many startups will do fine with a rough codebase, and obviously you should value the code accordingly (if it's an API-as-a-service, highly; if it's a physical product with no software component, not so much). But be wary of any startup whose code comes close to "so bad you worry it might fail". Good founders will rarely let things tip that far.

Obviously there are loads of exceptions to this rule. But I think if you want to be a founder of a software driven startup or you want to find a great place to work as a software engineer, aim your expectations higher than feels reasonable and you'll probably land at a decent medium.

[0]https://twitter.com/paulg/status/1240308316808626176


My 2c is that it's not only your customers that you need to keep happy, but also your own employees.

Engineers look for projects that don't give them headaches while working on them, and those that help them learn the right things. At the same time, they do look for maintainable and extensible codebases that they can enjoy working on.

The problem with bad code that is a clustermess is that things leak everywhere, and fixing one bug leads to another. You won't have a product that is stable for your users, either. At some point your engineers will push to rewrite things from scratch, and management may stop them. That will ultimately force them to quit, so you'll also have to deal with the loss of people.

On the other hand, using your users for QA is terrible. They mostly don't report bugs; they get frustrated and spread the bad word. If they're paying users, at some point they will start looking for alternatives.

This is all a part of your business.


I define good code in terms of economics: the whole point of writing code is to generate some sort of benefit, so the nature of the utility created by code is heavily context-dependent. From this perspective I'd argue that code should never aim to be "bad"; if such a situation comes up, it strongly hints that a discussion about the goals of the code and why it exists is badly needed. Also, organizations that aim for "bad" in certain departments have a nasty tendency to generate cultural and political issues that become toxic as time goes on.

As for some of these questions:

a) Yes, I've seen a few companies go under because their code wasn't able to generate profits. A couple of times it was so bad that customers didn't immediately get what they needed as a result. But usually the failure modes from bad code are less dramatic. Sometimes it's like bad debt, in that it looks good initially but comes at an existential cost later. Other times it's more boring: lower velocity making the company uncompetitive or too expensive to run.

b) If you are thinking of trading features vs code quality you've already lost because this isn't something that can be traded.

c) Writing some tests tends to be a Pareto-optimal choice, in the sense that lower defect counts let you create more economic value from a limited software development staff in a given time frame. Frequently you'll find that having some tests lets you deliver features more efficiently than you would without them, while high defect counts tend to result in unmet requirements or unnecessary rework. There's a sweet spot for tests and test coverage: there are definitely diminishing returns, and getting to 100% coverage is very expensive, because the last few percent are disproportionately hard to reach while rarely being worth the cost.


> b) If you are thinking of trading features vs code quality you've already lost because this isn't something that can be traded.

My enterprise would like to have a word with you. This is a trade they make daily.


What I'm trying to say is that it's not a simple linear trade where "less quality" implies "more features" or "more quality" implies "fewer features". When I encounter this line of thinking, especially in its simplistic form, it usually does a lot of damage. The biggest damage tends to come when people who are less familiar with the fundamentals of software construction use it to allocate resources or make planning decisions.


If you feel there is a lot of technical debt and the work you are doing never seems to diminish it, then you have a problem. If some parts could still be improved but you are tackling technical debt constantly while revisiting features, you're OK.

Sometimes you didn't understand how the feature should be built until it was done. Sometimes you need to live with a suboptimal architecture until it clicks in your mind. Sometimes I read my own code and realize "this is bullshit". It might need some time to rest.

But refactoring is easiest right after you've worked on a feature, while everything is still completely in your mind: "striking while the iron is hot", as I call it. What you can refactor in minutes after checking off all of a feature's requirements can cost hours months later, when you no longer have a complete mental model.


The whole thing has no boundaries and is extremely difficult to add new features, but the CEO is extremely fast!


Some obvious points: each startup is at a different stage. What's unacceptably bad for a mid-size startup can be good enough for an early startup that barely has revenue or a path to profitability. All decisions are made from a business POV, where bad code is a form of debt; it can be used well or not.

I have some maybe non-obvious thoughts, though; some useful questions to ask:

1. "how difficult is this bad code going to be to cleanup later?".

For the vast majority of issues, it's usually not very difficult to clean up later. Only a few things, e.g. an API that many customers use, or the way core data is modeled/accessed, are hard to change later.

2. "how well encapsulated is the badness in this code?"

A shitty function or a janky microservice with a well-thought-out API is much better than a sprawling mess. The more you can split your architecture into independent pieces, the less bad code in any one piece matters, and the easier it is to reason about. Horrible code has no clear separation into layers; everything feels like one giant tangle, which genuinely slows down dev speed and makes building stuff feel risky.

Good engineers often write code that's bad but encapsulated well enough to change easily (see the sketch after this list).

3. "what are the business consequences if this code fails?"

Code quality on a feature not used by many people matters far less than on a core feature. Database code should be more stable than web-tier code. Code touching the core of a web server should be reviewed more carefully because it can cause downtime. A bug in a peripheral feature can often be fixed later without much impact on customers.

4. "how quickly and confidently can the people responsible for this code change it?"

Super spaghetti code is hard to change for everyone. In contrast, some code has some historical design baggage or intricate business logic which may be simple enough for experienced devs to change, even if it is hard for newcomers to understand.
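To make point 2 concrete, here's a tiny sketch (all names invented): the implementation below is knowingly bad, but callers only ever see the narrow interface, so it can be rewritten later without touching the rest of the app.

  // Narrow, stable contract the rest of the codebase depends on.
  interface Ranker {
    rank(productIds: string[]): string[];
  }

  // Quick-and-dirty implementation, quarantined behind the interface.
  // TODO: replace with real scoring once we know what "relevant" means.
  class HackyRanker implements Ranker {
    rank(productIds: string[]): string[] {
      // Reverse-alphabetical happened to look right in the demo.
      return [...productIds].sort().reverse();
    }
  }

  // Callers never know or care which implementation they got.
  const ranker: Ranker = new HackyRanker();
  console.log(ranker.rank(["apple", "zebra", "mango"]));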


As others have echoed, customers don’t buy code. They buy a tool that is reliable and solves their problem.

You could have 100s of tests, but the server could still fall over. So it's always a trade-off.

One thing I can say: if you sow the seeds early, it's easier to add a test with each new feature than to add 100 tests to a 2-year-old feature that nobody understands and that keeps falling over.

Some companies take this to extremes on both ends: either no tests at all, or everything needs 100% coverage, delaying the time it takes to get things into the hands of customers.

Most pragmatic places I have worked at invest in test infra once they have good product market fit. Make it easy to write tests and fast to run and debug them. If it’s easy to do the right thing, why not do it ?


> a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.

Yes, but it's more that the programmers are bad rather than the code. Bad code can be patched fast by a good programmer, but rapidly becomes unmaintainable in the hands of a bad one. A lot of techniques and style guides out there are designed to manage bad programmers.

> b) what is the best calculation to make when trading off code quality vs features?

I have two modes: prototype and production. Prototypes are disposable, and value speed/results above all else. They should be thrown away after. Treat them as a demo to get budget for a feature or a hack to solve a problem right now. Design it to be completely destroyed and replaced, instead of replaced gradually, although you can probably reuse interfaces/contracts in between these modules.

Production code is kept clean and as maintainable as possible, but keep the engineering to a minimum. If you have to ask whether something is overengineering, it probably is.

> c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?

I'm not sure about YC, but I don't write automated tests; I have a text file with all the manual tests I need to run. Features usually get scrapped hard in a startup. IMO it's better to release a broken thing to 1000 people who complain it's broken than a well-built thing to 100 people who think it's nice but won't pay for it.

> Should we just be chucking shit at the wall and seeing what sticks? Do most startups bin v1 and jump straight to v2 once they have traction?

Rule of thumb: you need dozens, if not hundreds, of prototypes, so optimize for speed and experimentation quality. You're like a prospector looking for ore. You don't want to build an entire mine where there is none, and you don't want to commit too hard until you know there's enough ore there.

But things are different for "ramen profitable" startups; at that point you should start looking into how to maintain better and add features faster.


In my experience it’s not about “bad” vs. “good” code, and not about a tradeoff between speed and quality.

It’s more about how much abstraction is built into the system. A mature codebase has a clear purpose and therefore can contain durable, high level, even beautiful abstractions. On the other hand, a founder doesn’t always (nor should they) know what their code will need to do in 6 months time, so they typically avoid writing abstractions.

You can still write good code as a founder—it’s just that good founder code looks different than good BigCo code.


A rule I use for testing: If a feature doesn't run correctly the first time it's manually tested, then an automated test should be created which checks if the feature is working. This rule typically means that tests save time, and that tests are created for the code that's likely to break.

I bypass this rule when it's obvious I'm going to want automated testing. For example, I needed a customer-facing DSL for importing data, with a lexer/parser/interpreter; there, manual testing was bypassed from the start.
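As an illustration of the rule, a minimal sketch (assuming Jest; the feature and module are invented): the bug found during the first manual test becomes a permanent automated check.

  // pricing.test.ts - written after the first manual test failed
  // (a 150% discount produced a negative total).
  import { applyDiscount } from "./pricing"; // hypothetical module

  test("discount is clamped so totals never go negative", () => {
    expect(applyDiscount(50, 1.5)).toBe(0);  // the case that originally broke
    expect(applyDiscount(50, 0.2)).toBe(40); // the happy path still works
  });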


This is totally context dependent. If the code is for controlling a nuclear power plant or security for thousands of customers then the core may need to be robust or the enterprise will be doomed. If the code is handling some basic business processes just a bit more reliably and efficiently than some existing but rotted code base then there may be enormously wide bounds for code quality.

And this isn't just startups or side issues. I don't know anyone who has looked seriously at OpenSSL without being completely horrified.


I am part of a startup for the first time and, coming from a project that was a bit messy in its own right but had some structural integrity, I feel a bit torn about the current code quality.

For starters, the project _must_ be done in a completely serverless manner (AWS was the chosen provider) and _nobody_ in the team had experience making a complete product just using this kind of architecture.

Since performance is the main concern, at the beginning we did very shallow research on our language options and on the factors relevant to lambda performance. One of those was cold start time, which bundle size influences. This led us to split our custom dependencies as much as we could, making development and testing more painful.

With both of the previous points in place, I can say our code quality is not good. As for velocity and delivering on time, we have had some issues because of planning mistakes and unforeseen inconveniences with AWS SAM and AWS CloudFormation. Nonetheless, we're "on time".

We have identified some pains that we would like to fix post-launch, but that moment seems like it's never going to come. I have a feeling we won't have time to do maintenance on the product and will just be bombarded with either bugs or new features.

As others have said before, customers will only look at the app's functionality and UX. And in our case the application looks amazing. The backend, not so much.


I have been in the startup world for like 13 years, and have been everything from an IC up to CTO. This is IMHO:

> a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.

No, but I have seen the massive velocity hits from short-term decisions live on over the years. Tech debt is real and can eat 20-60% of a team's output through bugs, issues, and lack of documentation and context. These places are miserable to work at.

> b) what is the best calculation to make when trading off code quality vs features?

Unfortunately this may not be a popular opinion, but here is what has worked best for me. You need a sound ARCHITECTURAL base from inception. To get this, the person in charge of the decisions needs to use tools/languages/etc. they are experienced with to develop a clean base to work from. It's not hard to set up CI/CD, unit testing, proper devops, and code decisions like inversion of control and proper service segregation from the outset IF you use technologies you are strong in. This lets you move quickly if need be, while the "bad" code stays limited to individual services/systems. It's easy to fix a single poorly coded, rushed class/function/file. It's a nightmare if the entire base you build on is crap.

Startups tend to be limited on time... and sadly they often hire inexperienced people who can't do the above, or experienced people who focus more on shiny new technologies than on things that work and can be executed quickly.

> c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?

Never been part of a YC startup, but my general experience is that while you're still figuring out your product/market fit, things like scale/code quality/architecture shouldn't matter... however, two things need to be kept in mind. The first is having an "escape hatch": this code is crap, we all know it, but it's the code we need right now; is there a way we could pivot/transition to a new system/architecture in a few weeks when we finally get funded or "grow"/"scale"? The second is identifying that pivot point and investing the time to create the first-generation foundation (if you go full unicorn/scale, you may need to deal with this yet again).

In conclusion, you need to do what gives you the most velocity for your effort. This means that when you are super small and still figuring out the basics, a costly foundation isn't worth much. Then, if you survive and shift into growth mode, you need to expend some effort/resources on a good base to keep that velocity alive.


Personal confession: I've been building something for more than a year. I hope to finally release it before the end of the year, although FYI I've pushed back the release date multiple times.

When I started coding I was disciplined and organized, writing tests, etc. As time has passed, I've had to sacrifice those guiding principles. At some point, changes to UX and logic to provide a better user experience took priority over well-tested code. I've changed and modified things so frequently that the tests I wrote would break; there are tests I wrote for code that is no longer in my codebase. It felt like a complete waste.

If you have a clear vision of your MVP, or a designer giving you requirements and wireframes, or you know exactly what you want when you start out (waterfall?), maybe you can stay true to all these well-established and proven software development principles.

But if you are flying by the seat of your pants and figuring it out as you write code, I'm not so sure doing all the right things should be your first priority. I feel that building your MDP (minimum DELIGHTFUL product) may be more important than building the MVP. And that might produce substandard code.

It could also be that I am a terrible developer and product manager and designer and entrepreneur.

If you are at all curious what the hell I'm doing, you can see my landing page - https://www.keenforms.com - it's a form builder with rules.


I think the issue is not bad code: when you have to deliver ASAP, writing shit sometimes happens everywhere, from university homework to big Fortune 500 companies and everything in between.

The biggest issue is not knowing your problems. If you are aware of your technical debt, it means you probably have a plan, or at least an idea of where to look when shit hits the fan. Otherwise people run around like crazy, deny the problems, miss deadlines and customer expectations, and ultimately fail.


a) Yes. All the time. But it has more to do with management than the programmers. If your code is approaching catastrophe, it's time to seriously reassess what it is you're trying to do and whether it's feasible for the programmers to understand, not just for you to build.

b) The best metric is what is most boring and most comfortable. Boring tech is good. Boring code is good. With languages, things are defined more by their failures than by their successes. You want to be defined by what doesn't happen in your code, because you made cogent decisions.

c) Do most YC startups write good code? Yeah, but that's not what defines their success. Clean code is presentable. Clean code sets a tone. Tests are sometimes snake oil, sometimes valuable; it's hard to assess how valuable a metric is once you become invested in increasing it. No, writing tests won't save you. But decent DevOps will hopefully reduce the cognitive load of managing features. Writing unit tests is, in my opinion, a nice reprieve in between coding sessions. I look at it as paid downtime.

d) As someone pointed out, there is survivorship bias to consider. It's pretty common for v1 to be a complete disaster where nobody knows what they are doing. Most fail and do not attempt v2. Eventually the to-dos and somedays just pile up and you lose to a competitor.

e) Another perspective: almost everyone's code will be some kind of dumpster fire. You'll realize perfect pipelines are always merely desirable, in the sense that nobody has one. The only code that is "bad" is the code you fail to take accountability for.


There are two ways to approach writing greenfield code, to my mind.

The decision between them is simply “once this is shipped, would you accept having to completely re-write it from scratch to add even the smallest feature?”

If the answer is no, decent tests will make you go faster. Your commit volume by SLOC will be roughly:

1. Refactoring (~50%)

2. Tests (~35%)

3. Actual impl code for features (~15%)

That is, you'll transact more than three times as many lines of code with your VCS just re-writing impl code to be smaller, cleaner and better organised than you will actually writing code to build the functionality.

You'll spend more than double the adds/deletes/changes on lines of test code than on adding features to the product.

You’ll implement new features at roughly the same speed today as tomorrow as next year. You can drip feed more devs into the team every 4 months or so to build out velocity further.

If you're willing to throw it away after the first release, you'd be silly not to ditch the tests, forget the architecture and just crank out something that works. This is best done by a solo dev; if you want to throw more devs at the problem, deploy each one in a solo fiefdom from the beginning.

In practice, almost all code is written as a mix between these two views and is slower and more expensive than either approach above because of it.


Never worked at a proper startup, but I've lent a hand several times. Usually it starts with textbook code, and that lasts around a month, month and a half. After that, deadlines start knocking on the door, as well as patches over patches to cover up things that either weren't required at the beginning or are extreme edge cases, and it's a race to the bottom from that point on as far as code quality is concerned.


Writing a service thinking that you'll throw everything away is a waste of time. And so is trying to get everything perfect, because you don't yet understand the business correctly.

In my experience you should not treat all parts alike: the more foundational the part, the more time you should dedicate to it.

It's important to think the DB schema through properly; anything else will cripple your development, and the longer it runs, the harder it will be to fix. You don't want to be sanitizing wrong data two years into the business.

If there's a library, it's better to spend time thinking through the proper API; the code can be improved later.

It's OK to have garbage as long as it can be isolated and you can keep going. For example, we had configuration files that had to be synced with the DB. That could have been automated, but it was OK to hardcode them in config files. It was not OK to hardcode them across the whole codebase. The first could be cleaned up in the future; the second would have been a mess.

Invest in tests, especially in setting up the process. At the beginning they can be just smoke tests (this API returns success); as the startup grows you'll have more room to add proper tests.
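A smoke test in that spirit can be just a few lines; a sketch, assuming Jest, Node 18+'s built-in fetch, and a made-up health endpoint:

  // smoke.test.ts - only proves the service boots and answers at all.
  test("health endpoint returns success", async () => {
    const res = await fetch("http://localhost:3000/api/health"); // hypothetical URL
    expect(res.status).toBe(200);
  });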


You always test before you write any code. That's the only way to make sure the code does what it is supposed to do.

And that's exactly what good startups do, they do business tests before they write any code. Don't write code at all unless it's providing value to the customers or helping you learn something you need to know to provide value to the customers.

Now that this is taken care of, we come to the problem of the code itself. Each bit of structure you add, whether it's a line of code or a database field on a table, is a bit of infrastructure you may have to maintain, possibly forever.

Some folks want to take their eye off the business tests and move directly to system tests, testing and then coding to make sure everybody can easily understand and maintain any code that's written.

Most startups fail because they never got the business tests working right. They either never got around to creating them and making them pass, or they came up with something that worked but were unable to flywheel it, or they lost the plot somewhere. Some startups have almost-perfect code that nobody wants; that's actually one of the most common ways of failing.

So the natural state of affairs is to always be experiencing some kind of stress between value discovery and code quality. Personally I believe you solve a lot of this by changing the way you code and the way you look at coding, but there's too much to go into here. The key thing for most programmers to remember is that if you're dying of thirst in a desert, you're not going to care very much if the guy selling glasses of water has glasses that leak or water that's muddy. The value proposition always comes before anything else.


The most stable codebase I've worked on, at a successful startup that does a great job delivering business value to its customers, didn't have a single automated test during the 7 years I was there.


I think with the current style of coding, that doesn't surprise me. Almost all of the testing we do is because we write code in far too complex a manner for the value (if any) it provides.

But that's a tough thing to explain to a person that doesn't know any better. We're teaching coding as if it were a stand-alone thing instead of simply a tool to get us other things we want.


a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.

Yes, I've seen this happen on some projects that I joined. The most hilarious story I have is a funded startup that spent 12 months with a team of 5, and the app would crash with a single user and little usage. I managed to rewrite a functional prototype/v3 in 3 months that worked much better. Other projects were so costly to refactor that they got shut down.

More often than not, it is the specification and company culture that create this chaotic outcome.

b) what is the best calculation to make when trading off code quality vs features?

This one, personally, I like to put on the dev team. Not having an exact spec is far from ideal, but the dev team should work with the business team to create a good-enough first version. If the code is garbage, you have to question the development team, period. If you do nano-refactors (around 20 minutes) every day before you push your code and follow the community guidelines for the stack you're using, technical debt won't become a problem in the first stage.

When you're asking this question you need to ask: why has the dev team written code that led to this situation? Do we have PR reviews? Coding conventions?

c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?

I don't know about YC startups, but I can tell you that I have yet to encounter a company with even 50% code coverage. Any time I mentioned writing tests, the other side looked at it as an unnecessary expense. Personally, I believe it is up to the dev team to identify key code components and write the tests. If you have a function that keeps breaking all the time, that is a great candidate for unit testing.

It is possible to write clean code in V1; this is what I do today. I've faced so many situations where I didn't, and it always ended up costing me more time and too many working hours. I would rather delay the release of v1 and have something stable than try to please the business team at all costs.

Believe it or not, the business team doesn't give a f* about your codebase. Many times I reported security vulnerabilities and they thought I was creating problems, lol. I've seen devs not reporting bugs because of the company culture.

As a developer, do your best and always keep learning and growing. If you do this, you'll produce better codebases naturally.

Negotiating with the business team is also key to a successful release.


Those last paragraphs hit the nail on the head.

Business often doesn't seem to treat developers (or any workers) as a first-class value. This is true in the small just as much as in the large. Hence the grotesque term "Human Resources".

Writing “clean” aka pragmatic, well abstracted, robust, performant and readable code becomes naturally less “expensive” if practiced regularly (who would have thought).

So it is also an investment in developers: their skill, communication, happiness and engagement.

Disregard of that is short sighted and cynical. Just paying someone (well) instead of investing in them and growing with them leads to unhappy, stressed, uncreative workers, erodes trust and limits engagement.


a) Knight Capital traded themselves to nothing due to a bad deployment. That’s close.

There's gonna be a big survivorship bias here. You won't hear about most of the startups that collapsed because the product just didn't work.

b) Keep the bugs non fatal, make sure the features are worth it.

c) I’m not in YC, but yes. There’s a really good reason why tons of startups duct tape shit together with node and ruby, only to rewrite it later in something else.


There is a spectrum:

Quality+Speed+Efficiency:

  cowboy coding |0-----1-----2-----3-----4-----5| perfect iPhone

I would never expect a startup to be operating above 4 or 4.5; that would suggest you are spending too much time future-proofing.

The best teams operate around 3 or above, but they can do so because they are experienced, disciplined, trust each other, have a set of tools they know very well, and can move at a quick pace because they have automated a lot, follow established code patterns, and are not reinventing the wheel or trying new frameworks for fun.

A LOT of startups are started by inexperienced developers who jump onto some new language or framework and end up doing a lot of non-core work due to inexperience and the choice of a nascent framework. This immediately puts them below 3, probably between 1 and 2.

If you are at a 2, I would say you are doing OKAY; any less than that and you are probably suffering from inexperience, bad choice of frameworks, no tests, etc.


> a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.

Yes. Velocity slows, features don't get out, new versions don't get released, investors don't see product progress, funding runs out.

> b) what is the best calculation to make when trading off code quality vs features?

Wrong question. Avoid code. Avoid implementing things at all, use other people's APIs, fake features with manual scripts that you eventually automate.

> c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?

Yes. expect(result).toEqual("hello world"). You don't have to do TDD if you don't want to, but it's not fucking hard to record the output once, save it, make a test, and then know what you broke later. Don't be lazy.
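Jest's snapshot testing is one ready-made version of that record-it-once approach, for what it's worth (the function under test is invented for illustration):

  import { renderInvoice } from "./invoice"; // hypothetical function under test

  test("invoice output stays stable", () => {
    // The first run writes the result to a snapshot file; later runs
    // fail loudly if the output ever drifts from what was recorded.
    expect(renderInvoice({ total: 42 })).toMatchSnapshot();
  });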


I think it should be pretty bad, tbh.

a) Not part of YC. I've never seen anything fail completely, but I have seen projects hit major delays over wanting to ensure quality.

b) I'm currently leaning towards features = 100, quality = 0. This changes when you can't implement something, or it takes too much time because of the state of the codebase, at which point you refactor. This is a bit of a judgement call, but features should have clear priority.

c) Of the startups I have been at, one had the cleanest code ever and the other some of the most incomprehensible code. The difference was due to who wrote it and made the framework choices. Some devs write clean code naturally; some platforms make it easier. Unit testing wasn't a thing at either.


The difference between a startup and a big company is not just dollars. Making a product in a startup is a process of discovery; a big company generally has a pretty well-defined picture of what it wants. When a startup says they want to do "X", I don't think that's where the big tradeoff between code quality and timeline lies. The problem comes when you decide you want to do "Y", but your codebase does, or is working towards, "X". In my experience that is where there are a lot of decisions to make about how soon you get something finished. And in a startup there are a lot of these changes in direction.


a) likely a symptom more than the cause

b) The system's sunset date is the tiebreaker: it's hard to justify shit code for a space probe, and it's hard to justify perfect code for an email collector.

c) Automated tests are a development tool. They're not there to make sure your code works; they're there to ensure your code is sufficiently decoupled, modular, maintainable, and easily scalable in the future. They're also frequently used to spike problems that are otherwise hard to solve.

How much importance you place on tests in your situation is super dependent on your devs. Some types of project I wouldn't write tests for; others I would. It depends on scope and experience.

d) yes, maybe


I have seen businesses thrive with bad code. But it's painful to work on that code. Near soul destroying.

I've also seen a good lead come in and rescue the direction of the code. That requires expertise in the language, a good understanding of how to rescue legacy code and political power within the organisation.

If you have to work with bad code, make sure you find ways to enjoy the work. Also, don't allow yourself to think you're a bad dev because you can't work fast; it's the code, not you. If no one will let you get tests in place and fix it, it's not your fault.


These comments are really excellent. I would only caution that this isn't uncharted territory; the solutions are well known... I think they have now had two people who weren't fast enough for them at producing features inside the codebase, and their junior is also quite unproductive.

I guess in the end I would still go as fast and make as many mistakes, but I tried to encourage them to keep clear boundaries around the components so that the really bad stuff can be rewritten. I guess they'll probably be a huge success, and that's the only thing that really matters!


Here's the thing: you can spend forever making the code pretty. It would never end.

I would say writing the code "with care" simply depends on the initial team. There are plenty of startups that take the extra 20% or so time to build it right and with care that are successful.

I worked at one company with a wonderful code base for seven years. They're about to hit 100m ARR. We wrote tests, mostly used Java, and cared about building reusable components and a platform. I would say hitting 100m ARR in that timespan is good.


Due to intense survivorship bias (say 0.01% of code goes viral) I think it is extremely difficult to get the real answer to that question by talking to all of us out here on a message board.


As long as the startup is in an early stage (e.g. no paying customers, no MVP), the code can be ugly and not performant.

But as soon as the startup is earning money and winning customers, a rewrite with better code quality standards must be planned. Unfortunately, maintaining high quality standards also means investing tons of time in setting up tools and the development environment, and sometimes that is pretty hard (especially when dealing with IDEs like IntelliJ and you want to use your own Checkstyle).


In my last startup, my cofounder pushed for writing no tests until we had paying customers. Ultimately I came to respect the level of discipline that this imposed.


> what is the best calculation to make when trading off code quality vs features?

In my opinion it's:

code quality = "How long are we going to need this feature" * "How much money are we getting from people who use this" * "Cognitive load added by the feature to the whole project"

V1 projects don't bring big profits, can be shut down at any time, and their codebases are relatively simple. I'd keep the code quality low until some of these factors begin to change.


A little lesson from Google: Amit Singhal rewrote the search algorithm that Larry and Sergey originally wrote. Your app should be able to do the very basic things it's advertised to do; once you get clients and funding, in the initial years you should add features and at the same time remove all the inefficient code.

Great code with no users is code that's never going to run.


There's a big problem with even asking this: it's asked as if there were objective answers, but no single person can possibly have worked at enough startups to have statistically significant knowledge in this area. Unless someone has done a scientific study, it's just a bunch of anecdotes and disagreements.


Startup code needs to eventually scale and it needs to be flexible. I have no problem with "crap" code as long as it makes sense in the context and works. In a lot of cases, it would be extremely counterproductive to write "perfect" code that then needed to be thrown away a few months later when your product changes.


a) None that I know.

b) Just take these into account: will the feature introduce major bugs, like database corruption, or just minor ones, like UI glitches? Also consider whether the feature is really necessary for the MVP and will have a considerable financial return. In my startup I definitely don't deliberately write bad code, but time and funds are limited, so it's OK for some of the code to be hacky (though never horrendous); I just put a TODO there to remind myself to fix it after the product release.

c) I favour an agile approach: implement the feature first without unit tests and see how it works with the overall architecture. I only unit test code that can cause major bugs or that involves heavy math.


a. No, but it has slowed down time to market.

b. It's all about extracting the max value out of your dev time. Will refactoring / improving code quality mean that future features get delivered quicker?

c. Most POCs don't have tests, in my experience. They are usually added later.


I think there's a non-zero chance I know the exact company/codebase you're talking about as I was involved with the same start-up with the same concerns... South of England? NextJS, Apollo, Prisma, Postgres, Heroku stack?


The way I've heard it: if you're not ashamed of the code quality of your MVP, you spent too long on it. Until you see traction with a clear willingness to pay, the MVP is practically a throwaway.


two things off the top of my head:

- the most expressive languages might not be the most readable. This is because a language that can match YOUR way of thinking and MY way of thinking might not lead to you being able to read MY code.

one example is Perl - where you can say:

  if ($foo) { bar(); }

  bar() if $foo;

  bar() unless !$foo;

  etc...
the takeaway here is: the most efficient way to get an idea out of my head and into code might be person-specific and hard to maintain.

- Working code can lead to survival. Only survival can lead to the time to do it "right"


The codebase affects the overall outcome of a business venture built on it in the same manner that the car affects the overall outcome of a drive.

The more specific an outcome you're looking for the more factors you'll have to consider. You can loosely think of the relationship between code and companies like the code is "the matrix" and the tangible business world is "the real world". It might help to think of the code as a child being raised in the matrix.

The first product-market fit stage is the hardest. Here it befits the code to be maximally extensible, so that you can most effectively steer it around the market landscape and capitalize on any discoveries made. But you also need it to work decently enough to have traction. This stage is like parenting a baby that needs to decide its life mission and begin it during the first few years of its life. Its main purpose is self-discovery, but it also needs to be set up to become whatever it discovers it wants to be. Here, luck is the name of the game.

The next stage is growth (farming the land you've staked, becoming the thing you've decided your codebase baby's life is about). Here you need less extensibility and more fidelity. You're clear on what your code needs to do, and you just need to make sure you do it well enough to last long term. But also things get more complicated at the org level. Now a team has to be built out. The codebase must now mature, and that means that it must gain a firmer grasp on its purpose (high fidelity architecture and infrastructure) and learn to interface with the world (be geared towards long-term maintainability).

After you exit that stage, you exit the startup stage entirely. Generally, if you're a businessperson and it's available to you, having good engineers (human communication skills above technical skills, who understand the holistic function of engineering within the context of the rest of the company) is the best solution to this problem. They will have the vision to assess the field and the communication skills to keep you informed about it.

You will feel the urge to carve down the unpredictability of the outcome with measurements, metrics, and calculations, but this is mostly a fool's errand. If you're doing something brand new there is no defined path; it is about pathfinding, not measuring your performance along a path. There are a ton of resources that all give opinions on the best way through this beginning patch of woods, but the reality is that getting through woods no one has ever gotten through can only be mapped in hindsight.


to answer this question, simply look at the tech debt analogy: you take on debt to get faster access to something.

in keeping with the analogy, every business has a different appetite for debt. the debt-to-equity ratio of your current position should leave you able to take on more debt when you need to. the debt should never grow so large that it cannot be paid down. and being entirely without debt means you aren't leveraging your ability to borrow.


If your startup is moderately successful, is it likely it would be acquired? For some startups, that's the goal. Is it yours?

I've seen acquisitions fail over code quality.


Until product/market fit, only care about your code quality enough to not make your best engineers quit.

Post product/market fit, care about it deeply and enforce strictly.


Friendster was one of the first social networks, way ahead of Facebook.

My understanding is that a major reason they failed was poor code and an inability to maintain performance.


The rewrites will continue until your team culture improves


Successful startup code quality is on par with HR quality and Legal quality. Far below Marketing quality.


Code quality enables fast iteration.

They're not at odds.


Does anyone have any best practices for onboarding new engineers into a situation like the one described?


Culturally, the engineers and management who created the mess should not be demonized. The situation is the enemy, not co-workers current and past. (Your private opinions can be unvarnished.)

Always steer towards how things should be looked at going forward: “we’re making this better for everyone as we add features/remove bugs”. “It probably looked like a good idea at the time” is a phrase I use a lot.

Newbies will see all the crap sooner or later. Knowing that they have more-senior allies in a shared battle, and having some ability to do stuff beyond fighting the crap, will help keep them on-board and engaged.


Be bad enough to ship.

Be good enough to not fail an ethics test re customer data.

Everything else is sales.


a. Sort of. Poor code quality really hampered Netscape, forcing them to pay down technical debt when they should have been focusing on fighting Microsoft on core features.


As bad as possible, but not any worse...


1. Launch product, make money.

2. Fix up your code.


if you're doing your job as a startup coder you shouldn't have to balance between insecurity, bugs + spaghetti

that's a false choice. in reality you can have all three

'get things done pretty fast' is the only red flag in your story -- if you want your life to be truly worthwhile you must make this codebase unproductive as well


As bad as it needs to be.


CTO of a small-ish startup here. Here is my take on it:

1) Code doesn't really matter as long as it solves the issue you are trying to solve. Don't expect your code to be beautiful from day 1. Be responsible and train your devs to be responsible as well, because in a startup you code, fix and deploy your own stuff. What does matter, though, is code complexity. Manage your complexity, don't overcomplicate things if you don't need to. No need to design a Ferrari when all you need is a horse and carriage.

2) Process matters. From day 1, make code reviews/pull requests the default. If you are the most senior dev, or a technical founder/CTO in a small startup, be prepared to spend about 50% of your time reviewing code and helping others. You won't get to code as much, but you'll sleep better at night knowing you've at least tried to catch some bugs before they reach production. In an early-stage startup you will have neither the time nor the resources to test everything, but this will give you peace of mind.

3) Tests matter. That being said, in the beginning only test mission-critical stuff. If you find a critical bug, fix it and then write a test for it (a rough sketch of that loop follows after this list). If a new feature breaks something that already works, that is a big no-no and might lose you customers. Testing will change as your startup progresses. Start by making it easy for the devs to run the tests locally. Then progress to having CI. Then, maybe, CD as well.

4) Worst case scenario: full rewrite. If a 6-to-12-month-old startup decides on a full rewrite, I'll give them the benefit of the doubt; maybe their whole use case has changed, maybe they DO need a rewrite. That's fine. But if you are a SaaS older than that, and your dev team of around 10 devs is all busy solving critical bugs and putting out fires, a rewrite might mean your death.

5) Architecture matters. This matters more than code, in my opinion. Say you have a horrible piece of mission-critical code; it is SLOW and begins affecting your business. That piece of code will need a rewrite, for sure. But what would you rather do: spend 30 days fixing it and lose customers, or just spin up another machine/add CPU/add RAM (a sketch of the latter follows after this list)? That is what good architecture buys you: it gives you time to think things through, lets your code run well, and, perhaps most importantly, lets your developers actually code.

Bad architecture is the leading cause of rewrites. Is that beautiful microservice architecture giving your small team headaches? Did you overcomplicate things, perhaps? You see, bad architecture is very hard to fix. People seem to underestimate how far a simple API + DB can scale, and they try to mitigate the risks by copying whatever FAANG does. Start small, scale later once you have the resources to do so.
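
On point 3, a minimal sketch of the fix-it-then-test-it loop (assuming Jest; applyDiscount and the bug are hypothetical):

  // Hypothetical regression test: we found and fixed a critical bug where
  // a 100% discount produced a negative total. Pin the fix down so a
  // future feature can't silently reintroduce it.
  import { test, expect } from "@jest/globals";

  function applyDiscount(total: number, discount: number): number {
    // The fix: clamp so rounding can never push the total below zero.
    return Math.max(0, total * (1 - discount));
  }

  test("a full discount yields zero, never a negative total", () => {
    expect(applyDiscount(49.99, 1.0)).toBe(0);
  });

  test("a partial discount is applied proportionally", () => {
    expect(applyDiscount(100, 0.25)).toBe(75);
  });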
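
And on point 5, one hedged illustration of "add CPU instead of rewriting": assuming a plain Node HTTP server, the built-in cluster module fans requests out across cores without touching the slow code. A sketch, not a universal fix:

  // A sketch: buy time by using more cores instead of rewriting.
  import cluster from "node:cluster";
  import http from "node:http";
  import { availableParallelism } from "node:os";

  // Stand-in for the slow but correct mission-critical code (hypothetical).
  function slowButCorrectHandler(): string {
    let acc = 0;
    for (let i = 0; i < 1e8; i++) acc += i; // simulate CPU-bound work
    return `done: ${acc}`;
  }

  if (cluster.isPrimary) {
    // Fork one worker per core; the primary distributes incoming connections.
    for (let i = 0; i < availableParallelism(); i++) cluster.fork();
  } else {
    http
      .createServer((_req, res) => res.end(slowButCorrectHandler()))
      .listen(3000);
  }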

TL;DR: Code quality doesn't matter if you solve your issue. What matters more is mitigating the risks that come with writing code in general. See above for some ideas that came from my own personal experience.



