We used a very similar model to this at my last job, and I'm struggling to get my current team on board with this type of process. I think the main problem is that people don't trust continuously deploying master because there aren't enough tests. In my ideal world, every commit is tested (with Jenkins, Travis, Buildbot, etc.), and then if the PR includes tests for the code and the build passes, the reviewer says LGTM and the committer presses the merge button on GitHub. Once the button is pushed, a build of master is triggered. If the build passes, the code is automatically deployed.
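That last step doesn't need anything fancy. Here's a minimal sketch of the deploy-on-green hook, run by the CI server after each build of master (deploy.sh is a stand-in for whatever your deploy mechanism actually is):

    # post_build.py -- deploy master only if the build went green.
    # deploy.sh is a placeholder for the real deploy mechanism.
    import subprocess
    import sys

    def main():
        build_status = sys.argv[1] if len(sys.argv) > 1 else "failed"
        if build_status != "passed":
            print("build not green; skipping deploy")
            return 1
        # hand off to whatever actually ships the code
        subprocess.run(["./deploy.sh", "production"], check=True)
        return 0

    if __name__ == "__main__":
        sys.exit(main())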
My world really changed once I started working with code bases that had excellent test coverage from the get-go.
At my last shop we combined that with pair programming, feature switches, and a few other tricks, and we basically never branched. You'd pull, work for a few hours, push, and 10 minutes later your code would be live. It was in one sense freeing: the release overhead of other shops was gone. And in another, it inspired more discipline. Knowing that everything you were writing would shortly be live kept you on your toes. You could never leave something for later; there was no later. I loved it.
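For anyone who hasn't used feature switches: they're what make never-branching workable. Half-done work goes to production dark, behind a flag. A minimal sketch, all names invented:

    # Unfinished code ships live but stays dark behind a flag. The flag
    # store here is a dict; in practice it'd be config or a DB row you
    # can flip without deploying.
    FLAGS = {"new_checkout": False}

    def old_checkout(cart):
        return sum(cart)

    def new_checkout(cart):        # half-built, safe to ship while dark
        return sum(cart)           # work in progress

    def checkout(cart):
        if FLAGS["new_checkout"]:
            return new_checkout(cart)
        return old_checkout(cart)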
If I understand your question rightly, yes. Looking at the GitHub history, we did actually have 6 branches over the life of the project. All were extended technical experiments of one sort or another, like trying out a new templating approach. 2/6 were merged. But all normal coding was pushed to master with no branching (beyond a local checkout and local commits, which are a sort of branching, but none of those lasted longer than a day). There, any checkin triggered tests, and any build that passed the tests was pushed live.
If you want others to see the benefit, I'd encourage you to pick a specific area of the code, test the hell out of it, and make sure that a) tests are easy and quick to run on dev boxes, and b) every checkin is automatically tested. I'd start small, and one good place is a chunk of important business logic. It's even better if you use the tests to support refactoring and general cleanup of an area people know is messy.
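By quick I mean tests like these: pure logic, no database or network, so they run in milliseconds on any dev box. (Names are invented, and the function is inline just to keep the sketch self-contained; it runs under pytest.)

    # Fast, isolated tests over a chunk of business logic.
    def apply_discount(total, discount):
        """Discounted total, floored at zero."""
        return max(total - discount, 0.0)

    def test_plain_discount():
        assert apply_discount(10.00, 2.50) == 7.50

    def test_discount_never_goes_negative():
        assert apply_discount(10.00, 15.00) == 0.00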
If you do this right, then people will have two experiences coding on the project. In the tested code, it's pleasant and safe. In the messy code, it feels dangerous and scary. Over time they may get it.
Note that this is really hard to get off the ground in an established company and in an existing code base. So if they don't catch on, don't feel like it's you. (I generally cheat by being the first person on greenfield projects, so the first line of code written is a line of test code.) Also, if you get stuck while trying to clean up legacy code to make it testable, Michael Feathers' book "Working Effectively with Legacy Code" is very helpful.
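If you haven't read it, one of the book's core moves is introducing a "seam": a point where a test can substitute a hard dependency without editing the code under test. A made-up example:

    # The untestable dependency (a real mail server, say) gets pulled
    # behind a constructor parameter so a test can swap in a fake.
    # All names here are invented for illustration.
    class Notifier:
        def __init__(self, send=None):
            self._send = send or self._smtp_send  # default keeps prod behavior

        def _smtp_send(self, addr, msg):
            raise RuntimeError("would talk to a real mail server")

        def overdue_notice(self, addr):
            self._send(addr, "Your invoice is overdue.")

    # In a test, inject a recording fake instead:
    sent = []
    Notifier(send=lambda addr, msg: sent.append((addr, msg))).overdue_notice("x@example.com")
    assert sent == [("x@example.com", "Your invoice is overdue.")]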
Good luck, and feel free to drop me an email if you end up with more questions.
> If you want others to see the benefit, I'd encourage you to pick a specific area of the code, test the hell out of it, and make sure that a) tests are easy and quick to run on dev boxes, and b) every checkin is automatically tested. I'd start small, and one good place is a chunk of important business logic. It's even better if you use the tests to support refactoring and general cleanup of an area people know is messy.
Ah, this is really good advice, thanks. I'll give it a shot.
> In my ideal world, every commit is tested (with Jenkins, Travis, Buildbot, etc.), and then if the PR includes tests for the code and the build passes, the reviewer says LGTM and the committer presses the merge button on GitHub. Once the button is pushed, a build of master is triggered. If the build passes, the code is automatically deployed.
This isn't an unattainable utopia. It's what lots of teams are doing now. Try out continuous integration services (Travis or Koality).
And, given you have good instincts, ping me when you're looking for a new team ;)