As an outsider, I feel the whole DevOps thing came from a few web people spinning up some slapdash service on AWS or the like and calling it a day. Then something breaks and it's firefighting from that day onwards.
I wonder if one can draw parallels to the BYOD thing.
In both cases it seems to boil down to someone bypassing the IT admins because their manager wants to see results ASAP, while IT is going "sorry, budget and/or regulations say we can't at this time".
It seems to have stemmed from Ops teams asking "What can we do to let dev teams build what they want, but in a way we can support?"
e.g. a system needs an entry in the /etc/hosts file (a bad example, I know, but a nice simple one for the basic use case)
old method: Dev files a Jira ticket, and the Ops team makes it part of their golden images for production.
new method: Dev makes a change in the Config Management DB. It gets reviewed by a member of Ops, who either approves it or suggests a change. The change gets tested against staging, then merged, and the system gets updated (something like the sketch after this example).
Ops knows what the system needs; Dev knows how long it will take and has an understanding of how it is deployed in real life.
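To make that "change in config management" step concrete, here is a minimal sketch in plain Python of an idempotent /etc/hosts entry. It assumes nothing about any particular tool (Ansible, Puppet, etc. would express the same thing declaratively), and the hostname and IP are made up:

    from pathlib import Path

    HOSTS_FILE = Path("/etc/hosts")
    # hypothetical entry taken from the reviewed change
    ENTRY = "10.0.3.17  billing-db.internal"

    def ensure_hosts_entry(entry: str = ENTRY) -> bool:
        """Add the entry if it is missing; return True if the file changed."""
        lines = HOSTS_FILE.read_text().splitlines()
        if any(line.strip() == entry for line in lines):
            return False  # already present, nothing to do
        HOSTS_FILE.write_text("\n".join(lines + [entry]) + "\n")
        return True

    if __name__ == "__main__":
        print("changed" if ensure_hosts_entry() else "no change")

The point is that the entry lives as reviewable code/data that can be tested against staging, rather than as a manual step in someone's runbook.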
My view is slightly different. I see the rise of DevOps as coming from realizations about how modern applications are structured and how that's incompatible with the way operations used to be done. Think standard n-tier architecture vs a micro-services architecture that auto-scales depending on load.
Applications used to run on a fairly static number of instances that were maintained by an operations team. When more capacity was needed, operations would provision new machines. But now, especially with the increasing shift toward the cloud, infrastructure is seen more as something that's created programmatically, or even in response to events (load or otherwise) in the network. It's a powerful conceptual shift to think of infrastructure as something that isn't physical, but is more ephemeral. Just like a process on a server may die, a server on a cloud hosting platform may die. But you can engineer for failure at the infrastructure level the same way that you'd engineer for failure at the system level.
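As a rough illustration of "infrastructure created programmatically", here's a minimal sketch assuming AWS and the boto3 library; the AMI ID, instance type, and tag values are made-up placeholders:

    import boto3  # assumes AWS credentials are already configured

    ec2 = boto3.resource("ec2")

    def add_capacity(count=1):
        """Launch `count` identical, disposable instances from a golden image."""
        return ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical image baked by your pipeline
            InstanceType="t3.micro",
            MinCount=count,
            MaxCount=count,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "web"}],
            }],
        )

    if __name__ == "__main__":
        # e.g. scale out in response to a load event
        print([i.id for i in add_capacity(2)])

If one of those instances dies, you don't nurse it back to health; you run the same code again.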
With this change, you can no longer rely on a fleet of sysadmins to manage everything for you. Instead, you rely on scripting everything..."Infrastructure as Code." You create recipes to create each and every part of your infrastructure and tools to orchestrate everything. Gone are the days when you actually ssh to boxes in a terminal...there are far too many boxes to make this practical. Instead, ssh becomes the protocol that your orchestration uses to communicate with its fleet of machines. Things are much more dynamic and you start to view your infrastructure more holistically rather than as a collection of individual machines.
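And for "ssh as the protocol your orchestration uses", a toy sketch assuming the paramiko library; the hostnames, username, and key path are made up, and real tools add parallelism, retries, and reporting on top of this idea:

    import paramiko  # ssh library

    # hypothetical fleet; in practice this list comes from your inventory
    FLEET = ["web-01.internal", "web-02.internal", "web-03.internal"]

    def run_everywhere(command):
        """Run one command on every host and collect its output."""
        results = {}
        for host in FLEET:
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username="deploy", key_filename="/path/to/key")  # placeholder credentials
            _, stdout, _ = client.exec_command(command)
            results[host] = stdout.read().decode().strip()
            client.close()
        return results

    if __name__ == "__main__":
        print(run_everywhere("uptime"))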
Once you've started to think this way, you realize that neither the traditional development mindset nor the traditional operations mindset works in isolation. You need to meld the two together so that operations better understands how to automate everything and development understands more about the environment in which their code will run.
What you describe is coping with high infrastructure complexity and unreliability (many servers, servers coming and going, failures, etc.) by adopting advanced scripting and more abstract, sophisticated administration tools.
It is an interesting technology trend, but I fail to see any significant social or organizational aspect beyond demanding that everyone be more competent.