You say your devs' local development environments are different from prod. Are you letting each dev set up their own environment by hand, or have you provided a Puppet or Chef repo they can clone to get an exact replica up and running within minutes with Vagrant?
It's only somewhat different. We try to keep everything as similar as possible. We use Chef to bring new and old VMs up to date with our current setup, reusing recipes from prod in dev whenever possible. That said, every dev is allowed to modify their VM in any way they see fit. We recommend they speak with us before making any wild configuration changes that might cause Chef runs to start failing or make their VM a poor representation of production.
We also allow our developers to connect from their development environments to a proxy in front of our production MySQL shards in read-only mode. This lets them leverage the large data sets that are quite hard to replicate in our development architecture. There is also a limited read/write mode that we are working on (with the proxy filtering dangerous queries). But all that is another blog post for another day.
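The "filtering dangerous queries" idea can be sketched roughly like this. This is a toy illustration, not the actual proxy described above; the whitelist, the patterns, and the function name are all assumptions:

```python
import re

# Toy sketch of a read-only query filter a proxy might apply. The policy
# here is assumed, not real: forward only plainly read-only statements,
# and reject anything containing a write or DDL keyword.
ALLOWED = re.compile(r"^\s*(SELECT|SHOW|EXPLAIN|DESCRIBE)\b", re.IGNORECASE)
DANGEROUS = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|TRUNCATE|ALTER|GRANT)\b", re.IGNORECASE
)

def is_query_allowed(sql: str) -> bool:
    """Forward the query to the prod shard only if it looks read-only."""
    return bool(ALLOWED.match(sql)) and not DANGEROUS.search(sql)
```

A production-grade proxy would parse the SQL rather than regex-match it (comments and string literals can fool keyword matching), but the gating logic is the same shape.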
We also do not use Vagrant, opting for QEMU/KVM on physical hardware. The same tooling you saw in part 2 of my blog post creates our development VMs as well.
It doesn't matter. Developers almost never develop or test in an environment that mimics production: multiple load balancers, multiple app servers, multiple database servers, failover to a second data center, etc.
If your dev environment is not different from prod, you're either insanely rich or your server setup is trivial.
I would argue that a dev environment that is identical to prod, no exceptions, is too constraining. As the OP points out, having root access in the VM to go willy-nilly and try out new tools is a must for developers.
I'm kind of surprised they didn't have Jenkins set up from the start; I'm also a bit taken aback that they don't use automated code reviews before accepting patches to their "deploy" branch. Even for a small project, it's not that hard to set up Jenkins+Gerrit to reject patches that break tests (or fail whatever other hurdles you want).
We only really have one branch, "master". We encourage the engineers to push small changes, behind config flags if necessary, all the time, so there are never any huge merge conflicts, etc. This also means you don't push your code to master until you are up in our push queue.
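Pushing small changes "behind config flags" looks something like the sketch below. The flag name, config shape, and functions are made up for illustration; the point is that unfinished code can live on master without affecting users until the flag is flipped:

```python
# Assumed, minimal config-flag gate: the new code path ships dark and
# only runs once the flag is flipped to True in config.
CONFIG = {
    "new_checkout_flow": False,  # flip when the feature is ready
}

def legacy_checkout(cart):
    # Current production behavior.
    return sum(cart)

def new_checkout(cart):
    # In-progress replacement; placeholder logic here.
    return sum(cart)

def checkout(cart):
    if CONFIG.get("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Rolling back a misbehaving feature then means flipping one config value, not reverting commits.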
We've also had Jenkins set up for a long time now; we just used LXC to drastically improve its performance and scalability.
We use a review script that creates a temporary branch on GitHub and sends an email to everyone you specify to review it. We then kill that branch when the review is over. Any time you push code, you run our test suite on your changes and then create a review. Since changes are encouraged to be small and behind config flags so they don't affect all our users immediately, this happens quite often. Once feedback from the review is incorporated, you enter our push queue yourself and push the change out yourself. If the code is possibly dangerous, we recommend holding those pushes from Friday night to Monday morning for safety's sake.
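A review script like the one described might boil down to a couple of git commands. This is a hypothetical sketch; the branch-naming scheme, remote name, and function names are assumptions, and the real script also handles the review emails:

```python
# Assumed helpers for the temporary-review-branch flow: publish the work
# under a review/ ref on the remote, then delete that ref when done.

def review_branch(user: str, change: str) -> str:
    """Made-up naming convention for the temporary branch."""
    return f"review/{user}/{change}"

def open_review_cmd(user: str, change: str, remote: str = "origin") -> list:
    """git command that pushes HEAD to the temporary review branch."""
    return ["git", "push", remote,
            f"HEAD:refs/heads/{review_branch(user, change)}"]

def close_review_cmd(user: str, change: str, remote: str = "origin") -> list:
    """git command that kills the branch when the review is over."""
    return ["git", "push", remote, "--delete", review_branch(user, change)]
```

Returning the commands as argument lists (rather than shelling out directly) keeps the sketch testable; a real script would hand them to `subprocess.run`.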
Luckily, most of my projects fall into the "production is trivial" category. Unluckily, that tends not to stop the teams I work with from breaking parity somehow.