DevOps is two things:
1. Applying the methods of modern software development (version control, automation, DSLs...) to operations (provisioning, config, deployment, monitoring, backups...).
2. Reducing silo barriers between devs and ops groups so that everyone is working together as a team, rather than blaming each other for poor communication and the resulting messes.
Then there are all the DevOps hijacking attempts, such as equating it to Agile or Scrum or XP, or insisting that it's a way to stop paying for expensive operations experts by making devs do it, or a way to stop paying for expensive devs by making ops do it, or a way to stop paying for expensive hardware by paying Amazon/Google/$CLOUD to do it.
No matter what your software-as-a-service company actually does, it will need certain things:
- have computers to run software
- have computers to develop software
- have computers to run infrastructure support
You can outsource various aspects of these things to different degrees. Anywhere you need computers, you have a choice of buying computers (and figuring out where to put them and how to run them and maintain them), or leasing computers (just a financing distinction), or renting existing computers (dedicated machines at a datacenter) or renting time on someone else's infrastructure. If you rent time, you can do so via virtual machines (which pretend to be whole servers) or containers (which pretend to be application deployments) or "serverless", which is actually a small auto-scaled container.
Docker is a management scheme for containers. VMware provides management schemes for virtual machines. Kubernetes is an extensive management (orchestration) scheme for containers.
A continuous integration tool is, essentially, a program that notices you have committed changes to your version control system and tries to build the resulting program. A continuous deployment system takes the CI's build output and tries to put it into production (or, if you're sensible, into a QA deployment first).
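To make that loop concrete, here's a toy sketch in plain shell. The repo path, the "build" step, and the QA directory are all made-up stand-ins; a real CI/CD system would be triggered by a commit hook and deploy to actual environments.

```shell
#!/bin/sh
# Toy CI/CD loop: a "repo" with a build script, a CI step that builds it,
# and a CD step that copies the artifact into a deploy environment.
# All paths and the build itself are placeholders for illustration.
set -eu

REPO=./demo-repo        # hypothetical working copy
DEPLOY_DIR=./qa         # deploy to QA first, production later

mkdir -p "$REPO" "$DEPLOY_DIR"
echo 'echo "hello from build $1"' > "$REPO/build.sh"

# "CI": build the current state of the repo into an artifact.
ci_build() {
    sh "$REPO/build.sh" "$1" > artifact.txt
}

# "CD": take CI's output and put it into an environment.
cd_deploy() {
    cp artifact.txt "$DEPLOY_DIR/app.txt"
}

ci_build "rev-42"
cd_deploy
cat "$DEPLOY_DIR/app.txt"
```

The point is only the shape of the pipeline: commit triggers build, build produces an artifact, deployment moves that artifact somewhere.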
When you boil down the Cloud, DevOps, CloudOps, SecOps, *Ops, CI, CD, Containers, VMs, and all the other technologies we've devised over the past ten years, you always end up at the same basic building blocks.
You eventually come to the conclusion that all we're really doing with all these new tools is adding software layers on top of those building blocks in an attempt to make them easier and faster to consume.
And how have we done overall?
Not bad, if you ask me. Some solutions are overkill for most people (K8s is overkill for a startup, and even for an SME). But Terraform, Ansible and GitLab (CI) are what I'm currently developing a highly opinionated video training course on, because I believe they strike the right balance of improving on prior experience without taking the absolute piss.
I did a write-up on how I used it on my blog: https://heuristicservices.co.uk/2019/08/13/staging-and-produ...
The workflow worked really well, provisioning Vagrant servers in staging and Digital Ocean droplets in production.
I moved away from Vagrant in favour of Terraform, but I agree Vagrant still holds its own and is a great choice (HashiCorp really nailed it, eh?)
I believe in one tool to do one job really well.
Terraform is excellent at provisioning and managing infrastructure, due in part to its dependency graph (DAG) and HCL. Ansible, on the other hand, has been tuned over the years for managing the configuration and state of anything and everything from the OS upwards.
I also believe in using building blocks to get to where you're going, and these two bad boys click together quite well.
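A minimal sketch of that glue in plain shell: Terraform creates the machines, and their addresses get written into an Ansible inventory. The droplet IPs here are hard-coded stand-ins for what `terraform output -json` would normally supply, and the `[web]` group and `deploy` user are made up for illustration.

```shell
#!/bin/sh
# Sketch: turn Terraform-provisioned host IPs into an Ansible INI inventory.
# In real use the IPs would come from `terraform output -json`; they are
# hard-coded here so the example is self-contained.
set -eu

HOSTS="203.0.113.10 203.0.113.11"   # placeholder droplet IPs

{
    echo "[web]"
    for ip in $HOSTS; do
        echo "$ip ansible_user=deploy"
    done
} > inventory.ini

cat inventory.ini
# Ansible would then take over from the OS upwards, e.g.:
#   ansible-playbook -i inventory.ini site.yml
```

Each tool stays in its lane: Terraform owns the infrastructure, Ansible owns everything running on it, and the inventory file is the joint between them.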
Once your VM or container hits a complexity point above trivial, Ansible is very much a useful tool for provisioning container states, and specifically for patching container images to, e.g., include security updates.
...beyond that, as in the intended use case of dynamically updating multiple live machines in parallel... dunno, I don't use Ansible for that. But it beats the hell out of having a single monolithic batch script to set up a container, and I use it for that purpose all the time.
For host-based security patches (if I'm in an environment where the servers aren't managed), adding an item to the crontab in user data usually handles that; any fleet-wide changes would usually be propagated by updating the user data, pushing out the change, and having automation rotate the fleet.
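A sketch of what that user-data fragment might look like. The schedule and the `unattended-upgrade` command are illustrative (the patching command varies by distro), and the cron entry is written to a local file here so the example is self-contained; on a real instance it would go under `/etc/cron.d/`.

```shell
#!/bin/sh
# Sketch of a user-data fragment that schedules unattended security patches.
# A real instance would install this under /etc/cron.d/; we write to a local
# file so the sketch can run anywhere.
set -eu

CRON_FILE=./security-updates.cron   # stand-in for /etc/cron.d/security-updates

# Apply security updates nightly at 03:17 (command varies by distro;
# unattended-upgrade is the Debian/Ubuntu tool).
echo '17 3 * * * root unattended-upgrade' > "$CRON_FILE"

cat "$CRON_FILE"
```

Changing the schedule or the command then means editing the user data and rotating the fleet, rather than logging into machines.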
- DevOps is a peer with Agile and Lean. Scrum and XP are Agile implementations. Scrum doesn't prescribe ways to code, XP does.
- 90% of what people develop or run today should be in containers, and not because containers are great, but because of the DevOps patterns of IaC, immutability, reproducibility, homogeneous environments. Whether you run them on your laptop, a VPC, AWS Fargate, a K8s cluster, etc. depends on your business needs.
- Continuous Integration and Continuous Delivery aren't so much tools as a practice, and they're more complicated to implement at scale than just using a tool. There are some great books on the subject.
Sorry, but no. Containers are __a__ way of achieving a small part of what you are talking about, but not the only way.
Break it down:
- IaC: how do you containerise a load balancer? Terraform gives you infrastructure as code without containers.
- immutability: VMs, AMIs are immutable just like containers are (discounting the entropy that happens in every OS)
- reproducibility: Same, VMs, AMIs, Terraform, Ansible all give you that
- homogeneous environments: Not sure what you mean by that; your Cisco or Juniper firewalls are not running in Docker, so I'm pretty sure you already have a "heterogeneous" environment, if that's what you meant
I absolutely disagree with this idea that we need containers for the reasons you just mentioned.
1) Terraform should be run in a container so that it actually behaves the way you expect, and 2) containers are application environments built from a Dockerfile, which makes them IaC.
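A sketch of that Dockerfile-as-IaC point: the entire tool environment is described in code, with the tool version pinned so every run behaves identically. The base image and Terraform version here are illustrative choices, not recommendations (the download URL follows HashiCorp's release layout).

```shell
#!/bin/sh
# Sketch: a Terraform runtime environment described entirely in code.
# The base image and pinned version are illustrative.
set -eu

cat > Dockerfile <<'EOF'
FROM alpine:3.19
# Pin the tool version so every run of this image behaves identically.
RUN wget -q https://releases.hashicorp.com/terraform/1.7.5/terraform_1.7.5_linux_amd64.zip \
 && unzip terraform_1.7.5_linux_amd64.zip -d /usr/local/bin \
 && rm terraform_1.7.5_linux_amd64.zip
ENTRYPOINT ["terraform"]
EOF

cat Dockerfile
# Building and using it would be:
#   docker build -t my-terraform .
#   docker run --rm -v "$PWD:/work" -w /work my-terraform plan
```

Since the whole environment lives in a file under version control, it is reviewable, diffable, and reproducible, which is the IaC property being claimed.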
> - immutability: VMs, AMIs are immutable just like containers are (discounting the entropy that happens in every OS)
True. But containers are easier and more portable, which is important to supporting the other aspects involved. Containers thus are a better general solution.
> - reproducibility: Same, VMs, AMIs, Terraform, Ansible all give you that
Containers and VMs just... work. They're just collections of files. Very reproducible. Not 100% - you may need different guest drivers/kernels, different arguments to run your container in your particular system. But they're conceptually and operationally simple.
Terraform and Ansible are garbage fires of reproducibility and immutability. I could write a book on all the different ways these tools fail (most of it stemming from people trying to use them as interpreted programming languages, but also their designs are crap). There are whole frameworks built around Terraform and Ansible just to make sure they work right. They are overcomplicated, fragile bash scripts, and I'm quite frankly sick of using them. I think their entire existence is evidence of a huge gap in understanding how we should be operating systems today. [/rant]
> - homogeneous environments: Not sure what you mean by that [..] I am pretty sure you already have a "heterogeneous" environment
Those are opposites; homogeneous means "of uniform structure or composition throughout", heterogeneous means "consisting of dissimilar or diverse ingredients or constituents".
A homogeneous environment in a DevOps sense is when all environments have the same components and are operated the same way, and thus provide the closest results possible. This is incredibly important to prevent the classic "Well, it worked on my machine!" dev->production breakdown.
Homogeneous environments apply to lots of different things, but in the context of containers, they ensure that the environment the dev used to build the app is the same as what is in production. They also ensure that any scripts, tools, etc will use the same environment, if they are run in containers. I've wasted so much time in my career "correcting" heterogeneous environments in a bunch of different ways, whereas with containers the equivalent fix is "Please run the correct container version. Thanks"
The more systems you have, the more important this gets. At a certain point, the best choice is just to use baked VMs or containers for everything, everywhere, and containers are just so much easier, almost exclusively because Docker shoved so much extra useful functionality in. (I'll add that I do not necessarily like containers, but I do find them to be the most useful solution, because they solve the most problems in the most convenient ways)
If it helps, my core point is:
DevOps is the name we give to two philosophical ideas. The first idea is that the tools and methods of software development can be used to improve our ability to do operations work. The second idea is that siloing people with operational skill away from people with development skill is a terrible practice.
Along the way, I specifically denounced the idea that DevOps is a single methodology, or that some tools are more DevOps than others, or that DevOps makes prescriptions about what you should do. Those are all things that you immediately advocated.
Look at it this way: The Toyota Production System isn't about cars. It was developed specifically to produce cars as well as they could be, but it doesn't address "car problems"; it addresses business problems, production problems, workflow problems. It applies methods as practices in ways that are specific to the production of cars, but you can apply the principles of TPS to things other than building cars (as we do with Lean).
DevOps is comparable to TPS (well, Lean), but for software instead of cars, and it borrows from other systems, and it has a few of its own ideas specific to software.
> Along the way, I specifically denounced the idea that DevOps is a single methodology, or that some tools are more DevOps than others, or that DevOps makes prescriptions about what you should do. Those are all things that you immediately advocated.
I advocated using containers because they help reinforce DevOps principles better than alternatives. You don't have to use them, but that doesn't make them inapplicable to DevOps. There are different levels to DevOps, and one of them is "practices": particular ways of doing things that DevOps encourages, such as Infrastructure as Code, Immutable Infrastructure, Homogeneous Environments, Continuous Integration & Delivery, etc. Things that containers are more useful at accomplishing than, for example, VMs.
You don't have to use Kanban to run a car production line. But it's more TPS than the alternatives.