
I would really love to read their codebase.



typically deploying to production around a hundred times per day

That is insane!

We make a release once a week at work and things still go wrong sometimes. I am in awe how they are able to pull this off, especially at their scale.


The more frequently you release, the fewer (or less serious) bugs there typically are, and the earlier you catch them.

The faster development cycle also helps people invest in testing infrastructure more effectively.


I'm trying to get a sense of magnitude, here.

Let's say 10-20 commits per day per dev. Over 5-10 hours, that's on the order of one commit-test-release cycle every 15 to 60 minutes for each dev?

What do we actually write in that timeframe on average (thus including the ~90% of time we don't type code but think or read or test)? What's the "unit commit" here?

So I'm thinking... let's take an example: today I'll refactor a few functions to update our model handling, so the code reflects our latest custom types. It's a lot of in-place changes, e.g. from some list to a tuple or dict, plus the syntax that goes with it. No external logic change, but new methods mean slight variations in the implementation details.

- refactor one function: commit every testable change, like list to tuple? At least I'm sure I'm not breaking other stuff, since I run the whole test suite every time it "works for me" in my isolated bubble. So I might commit every 5-10 minutes in that case (something like the sketch at the end of this comment).

- Now I'm touching the API, so I can't break promises to clients: I actually need to test more rigorously anyway. I'm probably taking closer to 20-40 minutes per commit; it's more tedious. Assuming I commit every update of the model, even insignificant ones, I get immediate feedback (e.g. from a performance dump), so I know when to stop, backtrack, or try again? And it's always just one "tiny" step?

- Later I review some code and have to go through all these changes. I assume it's easier to spot elementary mistakes; but what of the big picture? Sure I can show a diff over the whole process — I assume you'd learn to play with git with such an "extreme" approach.

Am I on the right track here? I totally get your comment but I'm trying to get a feel for how it works. I typically commit-test-release to prod 3-4 times a day at most (on simple projects), and more typically once every 2-3 days, so 2-3 times a week. Which is "agile" enough, I reckon... So I'm genuinely interested here. I feel there's untapped power in this method that I'm only beginning to grasp.
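
To make the first bullet concrete, here's roughly what I picture one of those "unit commits" looking like. It's a made-up example (hypothetical function and data shape, nothing Instagram-specific), just the list-to-dict case above with a test that stays green:

    # One "unit commit": swap an internal list of pairs for a dict,
    # keep the public behaviour identical, keep the existing test green.

    # Before: models stored as a list of (name, version) pairs.
    # def latest_version(models, name):
    #     return max(v for n, v in models if n == name)

    # After: models stored as a dict mapping name -> list of versions.
    def latest_version(models, name):
        return max(models[name])

    def test_latest_version():
        assert latest_version({"ranker": [1, 2, 3]}, "ranker") == 3

Run the full suite, it's still green, commit, move on to the next function.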


Are you assuming the tests are all 100% automated? If QA needs to take a look, how is it possible to have a commit-test-release cycle every 15-60 minutes? It would take a human a few minutes just to read and understand what they need to test, wouldn't it?

The article talks about static analysis; I wonder if they do human code reviews at all?

Any way we slice this, it's incredible! Sure, Instagram is not a healthcare, transport, or banking application - nobody is going to die if the website goes down - but it is still an awesome achievement.


Indeed, and not only are their tests automated, they also rely on production traffic to expose failure cases, since some problems only show up at scale. They use canary deployments to slowly ramp up traffic to the new version, rolling back if they detect anomalies.
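
In case it helps to picture it, the ramp-up/rollback loop is conceptually something like this. It's a minimal sketch with made-up step sizes, thresholds, and hook functions (set_traffic_split, get_error_rate, rollback), not Instagram's actual tooling:

    import time

    RAMP_STEPS = [1, 5, 25, 50, 100]  # percent of traffic on the new version
    ERROR_THRESHOLD = 0.01            # assumed acceptable error rate
    SOAK_SECONDS = 300                # let real traffic hit each step

    def deploy_canary(set_traffic_split, get_error_rate, rollback):
        # Shift traffic to the new build step by step; bail out and restore
        # the old version as soon as a health metric looks anomalous.
        for pct in RAMP_STEPS:
            set_traffic_split(new_version_pct=pct)
            time.sleep(SOAK_SECONDS)
            if get_error_rate() > ERROR_THRESHOLD:
                rollback()
                return False  # anomaly detected, old version restored
        return True  # new version now serves 100% of traffic

The nice property is that a bad release only ever hits a small slice of traffic before it gets rolled back.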

Maybe you'll find this video interesting: https://youtu.be/2mevf60qm60


I think you've got it. Now put 10 people on the project, and have them all working at that pace.


Ah, awesome, thanks for the feedback.


What’s their stack?


Primarily I think it's a Django project: https://www.youtube.com/watch?v=lx5WQjXLlq8&t=10s


I am straining to hide how unimpressed I am.



