In my experience this is a pretty dangerous way to think about it. One of the best arguments for continuous deployment is that it lets you push small changes to production. Small changes are easier to debug when things go wrong, and they're less likely to make things go wrong in the first place. The core assumption behind this is that the system is really complicated and has a ton of moving parts that are virtually impossible to reason about all at once. Continuous delivery, by contrast, assumes that a preprod environment is highly likely to resemble the prod environment, and making that true is usually extremely hard at scale. Couple that with the fact that prod (especially for large, consumer-facing sites) will expose you to many more edge cases, and that the developer who pushed to preprod may not be around to help debug when the code finally reaches prod and breaks things, and continuous delivery looks like a pretty suboptimal alternative to continuous deployment.
It also seems rare to find bugs in production that can't be replicated on developer machines, so I don't think getting an exact match on the preprod environment is a huge obstacle.
If you have a small change, it is easily testable, and the chance of it breaking something is small. The goal isn't to discover bugs in production, it's to make bugs in production extremely easy to fix. Continuous delivery batches up many small changes, which makes it harder to figure out exactly what the issue is when you discover one. I have yet to see a process that releases bug-free code, so given that bugs reaching production is a fact of life, it pays to make your process one where those bugs are easy to fix. Ultimately you end up with fewer bugs; when you do have bugs they have a smaller impact, and when you need to fix them they are easy to track down and fast to change.
Getting an exact match on preprod is extremely difficult when you have a relatively complex data environment (sharding etc.), a relatively complex build environment (assets built differently in production than dev) etc.
Is continuous deployment just something for teams that don't know how to make a copy of their production server?
At Wealthfront, we prefer to push out everything; releasing a feature amounts to pushing out a single small change that flips whether the feature is hidden behind a user flag or experiment.
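A minimal sketch of that pattern, assuming a hypothetical flag store (the names and structure here are made up for illustration, not Wealthfront's actual system): the code for a feature is always deployed, but it stays dark until the flag flips or an experiment ramps up.

```python
# Hypothetical feature-flag sketch: code ships dark, a flag turns it on.
import hashlib

# In practice this would live in a database or config service, so flipping
# a flag doesn't require a deploy at all.
FLAGS = {"new_dashboard": {"enabled": False, "rollout_pct": 0}}

def is_enabled(flag, user_id):
    """Return whether `flag` is on for `user_id`."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # unknown flags default to off
    if cfg["enabled"]:
        return True  # fully launched
    # Deterministic per-user bucketing: the same user always lands in the
    # same bucket, so an experiment ramp is stable across requests.
    bucket = int(hashlib.md5(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_pct"]
```

Raising `rollout_pct` gradually exposes the feature to a slice of users, and setting `enabled` launches it for everyone; either way, the deploy itself carried no behavior change.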
We've stopped counting how many times per day we push changes at Codeship, as the number is meaningless in itself. If you feel you can't push regularly, you're giving up the fight before it starts.
It's just a very different and way more relaxed method of software development where you can actually focus on pushing the product further without getting stopped by infrastructure all the time.
Immutable Infrastructure will be the next big iteration in getting this 10x productivity I think.
Second, you should completely ignore Continuous Deployment until you have sane systems.
I do contract systems work for a living and have seen hundreds of live production systems. Nearly every system I see is in some kind of serious peril:
- no backups
- if backups, no docs on how to restore
- no monitoring (or very little)
- production passwords in the wild (former employees, etc)
- no configuration management
- no path to scale
- no path to replace defunct servers
- etc... (I could go on for hours)
Your systems are the foundation of your application and your business. You wouldn't build your office building on top of an active volcano or below sea level on the coast, yet many businesses happily do this for their systems.
You know all those constant news stories about massive outages, security breaches, crippled sites, etc? I'd bet 99% of them are due to them failing at the fundamentals of sane systems. Having seen behind the curtains of so many companies, I'm honestly surprised there isn't more massive systems failure in the news.
This is a critical problem in the tech world. Does it affect you? A few of you will have amazing solid secure systems, but the vast majority of folks reading this won't.
If you're using a PaaS like Heroku or Parse, then you have most of the systems worries taken care of for you. You can breathe a bit easier.
But if you are managing physical or virtual servers and aren't even using configuration management (puppet/chef/salt/ansible/etc), then your systems are probably in a very precarious situation. You might not think so, since everything just happens to be working at the moment, but you are essentially coding without using version control.
You would probably think that a developer who used email for his code version control is an idiot. You would be right. But consider that emailing code around for version control is actually better than what most companies do for their systems. Most companies set up their servers manually. They often don't have any docs, or even shell scripts.
At one client, it took them 2 weeks (!!!) to bring up a new server. The engineers thought it would take only 4 hours. Those engineers nearly killed that company.
Do you want to be a 10X or 100X systems engineer? Then use configuration management! After we put that client's systems in Puppet, we could bring up a new server in under 5 minutes. Yep, that's 4000X faster.
If you aren't using configuration management and don't know where to start, just use Ansible. It's by far the simplest and easiest to get going. I've written a quick intro to it here: http://devopsu.com/blog/ansible-vs-shell-scripts/
If you're curious about how the configuration management tools compare, check out my book: http://devopsu.com/books/taste-test-puppet-chef-salt-stack-a...
But I have needed to ensure that a file exists in a particular folder on the server, and an example of ensuring that would be more universal. Then you could explain why this beats a shell script: running a shell script a second time is error-prone or outright unsafe, while the configuration-management version can be run any number of times.
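The property being asked about here is idempotency, which is what tools like Ansible's file modules give you. A tiny illustration of the idea in Python (the function and file contents are made up, not Ansible's implementation): "ensure this file has this content" can run repeatedly and always converges to the same state, unlike a naive shell script that appends or re-creates blindly.

```python
# Idempotent "ensure state" sketch: safe to run any number of times.
import os

def ensure_file(path, content):
    """Make sure `path` exists with exactly `content`.

    Returns True if a change was made, False if the system was already
    in the desired state (the idempotent no-op case).
    """
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already correct: change nothing
    with open(path, "w") as f:
        f.write(content)
    return True
```

Running it twice makes a change the first time and does nothing the second, which is exactly the behavior that's hard to guarantee with an ad-hoc shell script.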
But I did get a lot of value out of this article, because now I know that Ansible exists. Thanks!
I, for one, would love to read more of these.
If your language has a way of calling an existing block of code from a new location and checking the results, you have enough to write unit tests.
I don't know of any languages that are incapable of doing those things.
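The claim above is easy to demonstrate: a unit test is nothing more than calling existing code and checking the result. A minimal sketch in Python, with no framework at all (`slugify` is a made-up function standing in for "an existing block of code"):

```python
# The bar really is this low: call the code, check the results.

def slugify(title):
    """Turn a title into a URL slug (the made-up code under test)."""
    return "-".join(title.lower().split())

def test_slugify():
    # Each assert is a "call it and check the result" unit test.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Continuous   Deployment ") == "continuous-deployment"

test_slugify()  # raises AssertionError on failure; silence means pass
```

Frameworks like `unittest` or `pytest` add discovery and reporting on top, but the essential mechanism is just this.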
And when your office has been built around this model for years (decades, likely), introducing unit testing for new desktop apps (support apps like the ones I make) is hard, because they don't see the value of it. They have a process; I should just stick to it (that's their attitude, not mine).
But hey, most programmers today (wow does that sound like an old fart) don't know there's a difference between event-driven programming and procedural programming, because all they've ever worked on is event-driven.
I am pro continuous integration and continuous delivery to staging. I think it takes extra care to do continuous deployment to production. There need to be a number of systems in place, some of which @mattjaynes mentioned; others really depend on having a very solid and accessible deployment pipeline.
From what I heard, Google does an amazing job at this. If I remember the Google talk correctly, there is a pre-build/testing process before peer review, and while a change waits to be reviewed, a build should be triggered (or should sit in a CI queue waiting for testing).