
That is interesting, because I am still trying to figure out how people do CD. CI I know, of course; we have it all set up. But we still work in sprints with manual testing and release every two weeks.

I could spend time marking features as 'frontend only, low impact', which we could deploy pretty much the same day. Still, there are quite a few features that need a bigger amount of work, where they might be 'done' by the dev but I am sure they are not actually done, because of security, because of error checking. Usually one dev has his untested feature merged to develop while another dev has a production-ready one; so if one feature is production ready, I would also have to put in time to make a release and cherry-pick only the changes for the accepted feature. I am not sure the additional work of checking what we can release 'right now' will pay off versus just taking the time for fixes on acceptance, done by the people who worked on the code, and then releasing (after one or two weeks, depending on how fast it gets done in the sprint).

So do you have people who work only on picking low-impact changes, or who work on making stuff production ready by picking from develop? Do you pick the changes yourself, or do you just defer manual testing to end users and rely on automated unit/integration tests?

P.S. The funny thing with automated tests is that they are good at keeping old stuff working, but not at testing newly developed features, where an actual tester can exercise the new GUI or new functionality. If you have a lot of GUI changes, you cannot automate the first round of tests...




A couple of obvious things stand out from your situation. First, only allow merges of code that has been reviewed and comes with enough tests. Second, have a pipeline that automatically runs all the tests whenever you merge and, if they pass, goes on to deploy automatically. It's really that simple. It's not easy to get to that point, but it is simple.
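In its crudest form the pipeline is just "test, then deploy only if green". A minimal Python sketch of that gate, assuming a pytest suite and a hypothetical deploy.sh; in practice this logic lives in your CI system's config (GitHub Actions, Jenkins, etc.), not a script:

    #!/usr/bin/env python3
    # deploy_gate.py -- run on every merge to main: test, then deploy.
    import subprocess
    import sys

    def run(cmd):
        print("$ " + " ".join(cmd))
        return subprocess.run(cmd).returncode

    if run(["pytest", "--maxfail=1"]) != 0:
        sys.exit("tests failed -- nothing gets deployed")
    if run(["./deploy.sh", "production"]) != 0:
        sys.exit("deploy failed -- alert whoever is on call")
    print("deployed")

The point is that there is no human in the loop between "tests pass" and "code is live".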

Mostly it comes down to organizational changes and everybody getting used to what constitutes "enough" tests.


It was pretty important to work with the Project Managers and "scrum-masters" to decouple our release cycle from our sprint cycle. In our case, like yours, they were coupled together for no technical reason. It's hard to sum up all the changes we made to allow it to happen, but mostly it boiled down to a few technical decisions -

* Every PR is treated as "production ready". This means if it isn't ready for user eyes, it gets feature flagged or dark shipped. Engineers have to assume their commit will go into prod right away, so feature flags become pretty important (see the sketches after this list).

* Product Owners and QA validate that code is "done" in lower environments (acceptance or staging, or even locally during the PR phase). This helped us decouple code being in prod from the "definition of done" in our sprints.

* All API changes and migrations follow the "expand and contract" model, which keeps the code shippable. E.g. even while we are building new features, we can ship our code at any time because the public API is only expanding.

* More automated quality checks at PR time: unit tests, integration tests, Danger rules, etc. These vary from codebase to codebase. A key part of this is trusting the owners of that code or service. To a degree, if they are happy with their coverage, then they can ship. (Within limits, of course. Having no unit tests at all would be a red flag.)
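To make the first point concrete, here is a minimal feature-flag sketch in Python, with hypothetical names throughout and env vars standing in for a real flag store (LaunchDarkly, Unleash, a config table, etc.):

    # flags.py -- minimal feature-flag check (hypothetical sketch).
    import os

    def flag_enabled(name: str, default: bool = False) -> bool:
        """Read a flag like FLAG_NEW_CHECKOUT=1 from the environment."""
        value = os.environ.get("FLAG_" + name.upper())
        return default if value is None else value == "1"

    def legacy_checkout_flow(cart):
        return "legacy checkout"   # what users see today

    def new_checkout_flow(cart):
        return "new checkout"      # merged and deployed, but dark

    def checkout(cart):
        # Unfinished work ships to prod but stays dark until the flag flips.
        if flag_enabled("new_checkout"):
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

And "expand and contract" in miniature, as an Alembic-style migration sketch for a hypothetical rename of users.name to users.full_name:

    # Expand step: old and new code both run against this schema,
    # so it can ship at any time.
    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.add_column("users", sa.Column("full_name", sa.Text(), nullable=True))
        op.execute("UPDATE users SET full_name = name WHERE full_name IS NULL")

    # The contract step (op.drop_column("users", "name")) is a separate
    # migration, shipped only once no deployed code reads the old column.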

Also, we still ship numerous times a day without a full automated test suite; we just make sure each release is really small (1-3 commits). The smaller the release, the easier it is to QA it manually. So nothing fancy is needed, just smaller releases.

So to answer your question: we don't pick and choose which work is "low" impact or "high" impact; all code gets shipped the same way and on the same cadence. It is our responsibility to ensure that when it goes to prod, it won't break anything.




