
I think the real lesson to learn here is to never assume anything. They assumed performance wouldn't be a problem, so they didn't test for it when prototyping, and they didn't make those tests part of their development cycle.

These things are tough, especially when it's a feature you personally appreciate a lot. The article talks about the performance problems drawing focus away from actively marketing the feature, which makes me wonder: if properly marketed, would this have been a killer feature?

The fact that they decided to get rid of it suggests no. However, are they putting the code in the freezer for a while until they fix these issues and re-release it? Or is it simply a problem that can only be properly solved by giants like Google?

> They assumed performance wouldn't be a problem

We did pretty extensive performance tests, but not for long enough. We load tested for hours at a time, and the problems started to show up after days of production load and really compounded after that. It's probably a topic for another post, but the performance of database tables with a lot of churn (rows being frequently inserted and deleted) really degrades over time.
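For what it's worth, here's a rough sketch of the kind of long-running churn test that tends to surface this, as opposed to an hours-long load test. Everything in it is illustrative: the thread doesn't say which database is involved, so this just uses sqlite3 from Python's standard library with made-up table and column names.

    import sqlite3
    import time

    # Hypothetical soak test: constant insert/delete churn on one table,
    # with a latency probe logged periodically so degradation over days
    # (not hours) becomes visible.
    conn = sqlite3.connect("churn_test.db")
    conn.execute("CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, payload TEXT)")

    DURATION = 3 * 24 * 60 * 60   # run for days, not hours
    REPORT_EVERY = 60 * 60        # log a latency sample once an hour

    start = last_report = time.time()
    while time.time() - start < DURATION:
        # Simulate churn: the table never grows much, but its storage is
        # constantly being recycled.
        conn.execute("INSERT INTO jobs (payload) VALUES (?)", ("x" * 200,))
        conn.execute("DELETE FROM jobs WHERE id IN "
                     "(SELECT id FROM jobs ORDER BY RANDOM() LIMIT 1)")
        conn.commit()

        if time.time() - last_report > REPORT_EVERY:
            t0 = time.time()
            conn.execute("SELECT COUNT(*) FROM jobs").fetchone()
            print(f"{time.time() - start:10.0f}s elapsed, "
                  f"probe query took {time.time() - t0:.4f}s")
            last_report = time.time()

If the probe latency trends upward over the run, you've reproduced the problem in a test environment instead of discovering it in production.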


Guess that's my bad for assuming you thought performance wouldn't be a problem. :-)


> ... never assume anything ...

Maybe you meant "Question your assumptions, often." You can't get anywhere constantly verifying things ahead of time. In the extreme case, imagine writing a test suite for printf just to be sure it works as advertised.

The way forward for a startup is to make only the right assumptions. Failing that, make as many assumptions as possible, but correct the failed ones in a reasonably short amount of time. Some minimal cost/benefit analysis on the assumption would also be good.


> Failing that, make as many assumptions as possible, but correct the failed ones in a reasonably short amount of time. Some minimal cost/benefit analysis on the assumption would also be good.

As a corollary to this, try to arrange that any failures will occur as quickly and obviously as possible. For instance: Component X, which is essentially a black box to you, will be a small but critical part of your project. You don't have time to extensively test it, so you make assumptions about its behavior. Ideally, you should start integrating Component X into the full project immediately such that it's heavily used in development and testing environments, so that any violations of your assumptions will show up incidentally to other work.

On at least a couple occasions, I've been bitten by assuming that a Component X (which I thought I understood) would do what I expected and thus leaving the final integration until near the end of the project. This sounds like an obvious, easily avoided mistake, but it's surprisingly easy to make in the heat of the moment.
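A minimal sketch of what pinning those assumptions down might look like. ComponentX and the checks here are stand-ins I made up, not anything specific from the thread; the point is just that each assumption becomes an assertion that runs from day one.

    # Hypothetical example: ComponentX stands in for whatever third-party
    # black box the project depends on; swap in the real import.
    class ComponentX:
        def lookup(self, key):
            return {"a": 1}.get(key)

    def check_componentx_assumptions(cx):
        # Each assert documents an assumption we are relying on but have
        # never actually verified against the component's docs or code.
        assert cx.lookup("a") == 1, "known keys resolve to their values"
        assert cx.lookup("missing") is None, "unknown keys return None rather than raising"

    if __name__ == "__main__":
        # Run this in CI / the dev environment from the start, so a violated
        # assumption fails loudly long before final integration.
        check_componentx_assumptions(ComponentX())
        print("ComponentX assumptions still hold")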


In my opinion, a lot of The Software Problem(tm) can be traced back to this tendency to underestimate the difficulty of using magic black box X. Maybe this is because most programmers never try to make a real magic black box X that will be used by random person Y in random situation Z themselves. If they did, they might have a little more respect for the fact that it's really fucking hard, the interface will inevitably have all sorts of corners, and the result is never going to be as magic as you would like.


These are some really good points to keep in mind. I'm working on a fairly serious website system for a client right now, and reading the article and the comments here makes me wonder whether I should think further ahead and make sure I don't end up having to rewrite things.

At the moment I feel my code is fairly future-proof, but the truth of the matter is that when it comes to performance, it's hard to tell how things will work out in the long run.


