How often does code go through security audits? Is every feature audited prior to deploying to live?
GitHub is making money by selling private repositories which will often contain very sensitive code, so ensuring nobody can gain unauthorised access to them is presumably one of the top concerns.
I'm interested in seeing how tight security requirements fit in with this almost continuous deployment strategy.
Every commit is reviewed by at least 1 person. Depending on the feature, several people may chime in. I find that reviewing smaller diffs is much easier. We also use Team Mentions (@github/api, for example) liberally to get more eyeballs.
We also have regular audits with external security firms.
Interesting. I saw something like this by looking at the https://github.com/mozilla/pdf.js/ project. Take a look at the closed pull requests: they have bots listening for commands in comments that do things like running unit tests and generating previews, and the result gets posted as a comment from the responsible bot. Another bot watches the master branch for changes and automatically builds and pushes to gh-pages. It seems to work very well, but I don't know how they built it.
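For what it's worth, a comment-command bot like that doesn't need much machinery. Here's a minimal sketch of the idea, assuming a Sinatra app registered as an issue_comment webhook; the /run-tests command and run_tests helper are made up, not what pdf.js actually uses:

    # Rough sketch of a comment-command bot (not pdf.js's real setup).
    require 'sinatra'
    require 'json'
    require 'octokit'

    # Hypothetical test runner -- replace with a real CI integration.
    def run_tests(repo, issue)
      "ran the suite for #{repo}##{issue} (stub)"
    end

    post '/webhook' do
      event   = JSON.parse(request.body.read)
      comment = event['comment']['body'].to_s
      issue   = event['issue']['number']
      repo    = event['repository']['full_name']

      if comment.start_with?('/run-tests')           # hypothetical command
        result = run_tests(repo, issue)
        client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
        client.add_comment(repo, issue, "Test results:\n#{result}")
      end
      status 200
    end

The gh-pages bot is presumably the same pattern, just triggered by a push webhook on master instead of a comment.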
If you're looking for a less complex model of this, you should try our Continuous Integration and deployment service: https://circleci.com. Over time we'll be providing the sort of complexity that GitHub provides here; right now we do about 70% of it.
Looks interesting. Is it free? It seems like it probably is but I'm not sure. Also when it leaves beta how much will it cost? I don't want to end up depending on something I can't afford.
You look awesome. I think I'll be setting this up on Friday. Do you have webhooks for pass/fail on the horizon? Or better yet, straight up git push with ssh key support when all the tests pass?
I might be wrong but for me this is almost a [Hack] -> [Prod] methodology...
Rolling back in 30 seconds is cool, but how do you manage data/schema migrations?
Do you also have a snapshot to roll back any data corruption the last hacking session could have introduced?
Heaven (our deployment tool) does deploy GitHub.com directly from the file servers. But most of our infrastructure relies directly on GitHub too (such as the Merge API from the blog post, service hooks, and a bunch of OAuth mini-apps).
Encouraging to know this model scales to 100 employees at least.
Purely out of intellectual interest, I wonder if a company the size of Google or Facebook could also ship in this way, or if the whole release manager/team is essential.
Sarbanes-Oxley puts a big damper on production deployments at big companies. I don't fully understand it so I won't try to explain it.
(I will complain though: the law says developers shouldn't have control over production systems. If that's a requirement, who's going to write the software?)
It hasn't slowed us down at Netflix. :) In fact what it has done is caused us to be really good about separating what needs auditing from what doesn't, so that only a very minimal set of services has to have separations and release processes that are in line with SOX controls.
I believe you're somewhat mistaken. SOX generally applies to finance systems and financial reporting at public companies. So if you're publicly traded you couldn't use this process for your accounting system. But if Facebook wants to let a junior engineer push out new code without independent review, SOX isn't stopping them.
I don't know if that's a SOX law. However, I do know that it is a PCI requirement. A single person shouldn't be able to introduce new code and then be able to push their own change out to production.
Can you point to the bit in the PCI spec that says that? My understanding is that people should only have access to the systems they require, but that doesn't stop a developer from having access to a continuous deployment server that can push code that meets requirements to production. That's based on my memory, though, and may not reflect reality.
Everyone just 'knows' what is in Sarbanes-Oxley, but when you ask them to point it out to you in the legislation they cannot find what they were so certain about two minutes prior. We have compliance people, and auditors are always coming in, but when someone claims something is required for SOX compliance, challenge them on it: 99% of the time it is a convention because someone told them, or they did it like that somewhere else once, rather than what is required by law. At the least it will make them justify the compliance overhead they are imposing on you as an engineer.
Here is the legislation if you want to read through it or use it to challenge someone's assumptions about Sarbanes-Oxley: http://www.sec.gov/about/laws/soa2002.pdf
A lot of this stuff is open to interpretation by auditors. SOX doesn't literally specify any of this sort of stuff.
In my experience, SOX usually ends up meaning that developers don't have access to production systems, or significantly limited access. However, a continuous deployment system should generally be very much in the spirit of SOX, in that it's pretty hard to do without well-defined, highly-repeatable, automated and auditable processes.
Dogfooding aside, the vast majority of the time is spent running tests and actually deploying the code. The time to hit the API to merge the commit is negligible in comparison.
Also, Janky and Heaven are both tiny apps that don't necessarily have access to the file servers.
GitHub itself is a GitHub project, so using anything other than the API would mean some code duplication. Merging, pull requests, branches, issues: all of this is already covered by the normal API.
I'm sure it could be "more efficient" to have code explicitly for this purpose, but then again you'd have to maintain two different code bases that do the same thing.
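To give a concrete flavour (my own sketch, not GitHub's code), merging a branch through the public API is a single HTTP call; the repo and branch names below are made up:

    # Rough sketch: merge a branch via POST /repos/:owner/:repo/merges.
    require 'net/http'
    require 'json'
    require 'uri'

    uri = URI('https://api.github.com/repos/someorg/somerepo/merges')
    req = Net::HTTP::Post.new(uri)
    req['Authorization'] = "token #{ENV['GITHUB_TOKEN']}"
    req['Content-Type']  = 'application/json'
    req.body = { base: 'master', head: 'my-feature',
                 commit_message: 'Merge my-feature via the API' }.to_json

    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
    puts res.code   # 201 if the merge commit was created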
GitHub has lots of things that Git doesn't have. As well as recording pushes and making them available over the API, it records fork information, has concepts of users and organizations which don't exist in Git, has pull requests, comments, and post-commit code review, an issue tracker, etc, etc, etc.
We don't do "company hack days". If you feel like hacking on something, hack on it.
We do have days where multiple people will be waiting in line for their chance to deploy their tweak.
That particular day consisted of staff deploys on multiple in-progress branches, some performance tuning, bug fixes, etc. Nothing crazy.
I'm also quite sure the number counts deploys across all of our applications. For instance, deploying a change to github-services counts as two, since I have to deploy changes to GitHub.com also.
Thanks for this, I enjoy hearing about GitHub as a company.
> I'm also quite sure the number counts deploys across all of our applications. For instance, deploying a change to github-services counts as two, since I have to deploy changes to GitHub.com also.
That might explain a lot. Still a lot of deploys, but a more sane count :-)
We did have a pretty amazing week right after the summit, where everyone was on fire to ship things and there were a lot of people "in line" to get things deployed. It was pretty awesome, actually, seeing so many things land within a week of the whole team gathering and discussing the future.
Yes, the testing/deployment cycle for Enterprise is totally different. We usually release a major version with new features every two or three months, and two or three minor versions with bug fixes in between.
We always keep the version of GitHub synchronized with master for development/testing, although we only release directly from master in major releases. For minor releases we avoid including major features from GitHub, to keep it as stable as possible.
We have a staging environment, but it's really only used for big changes that might need to be experimented with before being deployed. We can also deploy a branch to a single front end to observe how it behaves with a subset of the traffic, and roll it back quickly if needed. Also, most large user-facing features are released as "staff-only" first, so we as GitHub users are able to play around with them for a few days or weeks before enabling them for everyone.
No. There are simple conventions for adding feature flags (user.some_feature_enabled?). Features are enabled and disabled by changing the code and deploying. This works because deploying new code is fast.
We do use Rollout (https://github.com/jamesgolick/rollout) once in a while. Most of the time, though, we like having the history of flipped feature flags in Git.
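For anyone curious what that convention might look like, here's a rough sketch; the class layout and names are made up, not GitHub's actual code:

    # Hard-coded feature flag convention (hypothetical example).
    class User
      STAFF_LOGINS = %w[alice bob].freeze   # made-up staff list

      attr_reader :login

      def initialize(login)
        @login = login
      end

      # Flipping the flag means editing this method and deploying;
      # the flag's history then lives in Git like any other change.
      def some_feature_enabled?
        staff?
      end

      def staff?
        STAFF_LOGINS.include?(login)
      end
    end

    User.new('alice').some_feature_enabled?   # => true

Rollout keeps the same kind of flag in Redis instead (e.g. rollout.active?(:some_feature, user)), which trades the Git history for being able to toggle without a deploy.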