A DevOps person isn't someone who develops and also does Ops. It's someone who does only Ops, but through development.
It's not about startups vs. enterprise; it's about one person writing programs versus five people doing things by hand.
The older, more foundational problems were getting automated back then. Now that those are solved, and with more and more people running large and/or virtual infrastructure, a new problem domain has emerged around spinning up machines and deployment.
The current coding investment is infrastructure because it's the current pain point. In a decade (or whenever permanent solutions exist for infrastructure) the current way will be considered "by hand", and operations coding efforts will just move on to whatever problem is only visible now that infrastructure is no longer a time sink.
You can say that some ops is just admins running already existing software and operating everything by hand, but there will be admins doing exactly that in a decade too.
Then dotcom happened and every kid with a Linux box in their bedroom passed themselves off as an SA. And in the '10s, people think SAs who code are an amazing new invention.
And back in the 60s, IBM had "systems programmers"... Same thing.
The big difference I see in devops is that people started taking system management seriously enough to do first-class development rather than treating it as an afterthought.
I still am, but the "DevOps Movement" is here to point out that this artificial dichotomy is considered harmful.
Generally, a sysadmin has slightly different skills from a developer - they might code in a highly imperative style, always keeping the actual machine/system being targeted in mind - but I've never known a half-decent sysadmin who cannot write code.
If that's the term the market wants to use, fine. As far as I'm concerned, a senior sysadmin who can't write in a couple of scripting languages isn't senior.
I know a reasonable amount about sysadmin work (all my computers run Linux primarily; I only keep Windows around for checking hardware issues and a couple of specific apps I need to run once or twice a year).
I wouldn't apply for sysadmin jobs, because I wouldn't feel my knowledge is enough. I have, however, seen devops jobs that seem to match my skill set - developer with a bit more. I hadn't really heard of the term until I saw the job ad.
As to DBAs: I can't help feeling that the OP hasn't worked with "real" DBAs. That's a whole different ballpark, and I've yet to meet a sysadmin or developer who could make even a passable DBA.
I've always thought the hierarchy goes: DBA -> Ops -> Developers, with the last two really about equal.
When I think about expected earnings, I would say your hierarchy is correct.
Nothing you described is outside the realm of what your typical Linux admin does. I don't have to be a senior Python dev to do my job, and I've managed 5500+ virtual machines by myself (puppet/chef, bash, some python, persistent data/object storage).
Agree with the author; it's just shoving more hats onto fewer people.
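For a sense of the "glue" that managing a fleet like that involves, here's a minimal sketch in Python - assuming the hosts are reachable over ssh and already run the puppet agent; the host names are made up for illustration:

    #!/usr/bin/env python
    """Kick off puppet runs across a fleet, a few hosts at a time."""
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["web01", "web02", "db01"]  # illustrative; really pulled from inventory

    def puppet_run(host):
        # 'puppet agent --test' runs once in the foreground; exit code 0
        # means no changes were needed, 2 means changes applied cleanly.
        proc = subprocess.run(["ssh", host, "sudo", "puppet", "agent", "--test"],
                              capture_output=True, text=True)
        return host, proc.returncode

    with ThreadPoolExecutor(max_workers=10) as pool:
        for host, rc in pool.map(puppet_run, HOSTS):
            print(host, "ok" if rc in (0, 2) else "failed (%d)" % rc)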
A DevOps engineer can expect a bigger salary, while a company hiring one can expect far more productive candidates than if it asked for ops alone.
Oddly this is much more like the 'developers' of old. If you sat down at a workstation you needed to know how to be your own system administrator, and you needed to write code.
Automation has enabled a fairly new class of engineer: someone who has no idea how the pieces all fit together, but who can assemble them, with the help of a framework and a toolkit, into useful products. They become experts at debugging the toolkit and framework but have little knowledge of how everything else actually works.
The problem with this new type of coder is that they can write syntactically correct impossible programs. I didn't understand that until I taught myself VHDL (a hardware description language). VHDL was the first "language" I knew of where you could write syntactically correct "code" that could not be synthesized into hardware. The language's expressiveness exceeded the hardware's capabilities (and sometimes you would need a time machine). Imagine a computer language where 1/0 was a legitimate statement, not caught by the compiler, but always blew up in your executable.
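A rough analogy in everyday software terms (Python here, and the failure mode is far tamer than unsynthesizable hardware): code that every tool in the chain accepts, but that can never do anything except blow up.

    def impossible():
        # Parses and byte-compiles without complaint; nothing in the
        # toolchain objects until the moment it actually executes.
        return 1 / 0

    impossible()  # ZeroDivisionError, every single time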
So we have folks who can write code that is grossly inefficient or broken on "real" systems.
Google had started a program when I was there to have developers spend time in SRE (their DevOps organization); the point was to instill in them an understanding of what went on in the whole stack so they could write better products. The famous "latency numbers every programmer should know" list from Jeff Dean was another such tool. You cannot get too far away from the systems that are going to run your code if you want to write performant code.
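For reference, a commonly circulated version of that list looks roughly like this - the values are order-of-magnitude figures and drift with hardware generations, so treat them as illustrative:

    # Approximate latencies, in nanoseconds (orders of magnitude only).
    LATENCY_NS = {
        "L1 cache reference":                 0.5,
        "branch mispredict":                  5,
        "L2 cache reference":                 7,
        "main memory reference":              100,
        "send 1 KB over 1 Gbps network":      10000,
        "read 1 MB sequentially from memory": 250000,
        "round trip within same datacenter":  500000,
        "disk seek":                          10000000,
        "read 1 MB sequentially from disk":   20000000,
        "packet round trip, CA to Netherlands to CA": 150000000,
    }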
In 2009, it seemed like DevOps was finally a reasonable answer to Taylorism. Engineers and Programmers and Hardware Technicians and Support Representatives were not cogs in a machine, but humans who could collaborate outside of rigid boundaries. Even at the lowest levels of the support organization, individual workers along the chain could now design their own tiny factories.
From there, it's just a matter of communicating tolerances properly up and down the chain. I am probably over-romanticising these notions, but it certainly felt exciting at the time. Not at all like the "fire your IT department" worldview it turned into.
... isn't that all languages?
Second, my only point was that the example given was a piss poor example of the difference between hardware and software. Obviously a bad example doesn't disprove the claim it's supposed to support.
There's been an awkward growing phase of the technology industry that has led to technicians who don't have any real understanding of the systems they maintain. Compare and contrast Robert De Niro's character in Brazil with the repairmen he has to clean up after. We could be training those poor duct maintenance guys better.
Chuck points out that abstracting the Developer's work too far away from the system in question means the Developer doesn't really understand the system as a whole. Jeff refers to "purely development roles" and other "pure" roles that aren't necessarily natural boundaries.
The example of VHDL is not about hardware and software, but about learning that you didn't actually know something you thought you knew.
The repairmen in Brazil do not realize (or necessarily care) what they don't know about duct repair. The system allows them to function and even thrive, despite this false confidence in their understanding.
At one point at least, Google was investing in (metaphorically) having DeNiro cross-train those guys, instead of umm... Well, the rest of the movie.
The initial detail was that VHDL, unlike "software" languages, has very different consequences. Can you imagine a language where (1 / 0) wasn't defined away as a DIVERR, but otherwise managed to remain mostly self-consistent? Where something can be logically / syntactically coherent, but not physically possible?
And if that example didn't hit home for you, so it goes, but there was plenty of detail unrelated to the specific example that I thought was more important / interesting to discuss. :shrug:
And, contrary to the stated intentions, I've directly observed developers making crappy, band-aid fixes to ongoing production problems in the interest of "making the pages stop". This is the mindset when you are on call and being paged at all hours.
In theory, DevOps is supposed to put those that can best fix things closest to the problems, but in reality a slight separation from the firestorm of ops actually produces better, more thoughtful solutions in the long run.
The best balance is to have a first-tier Ops on-call and a second-tier engineering on-call, with any alerting issue getting attention within 24 hours by moving to the front of the work queue. But indiscriminately assigning everyone "pager duty" rotations leads to lower-quality solutions in the end.
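Sketched as data (purely hypothetical names and fields, not from any real paging tool), that policy looks something like:

    # Hypothetical two-tier on-call policy, roughly as described above.
    ESCALATION = [
        {"tier": 1, "rotation": "ops-oncall", "ack_within_minutes": 15},
        {"tier": 2, "rotation": "eng-oncall", "ack_within_minutes": 30},
    ]
    # Anything that alerted moves to the front of the work queue and
    # must get engineering attention within this window.
    FOLLOWUP_DEADLINE_HOURS = 24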
• It increases pager coverage and reduces any one person's pager obligations. Simply anticipating the pager is a mental burden after a while.
• It creates a stronger incentive for response procedures: what are the expected obligations of response staff, what's considered sufficient effort, what's the escalation policy, who is expected to participate, what are consequences of failure to respond?
• Cross-training. Eng learns ops tasks, ops has a better opportunity for learning what eng is up to and deals with.
• It makes engineering more aware of the consequences of their actions: is insufficient defensive engineering causing outages (say, unlimited remote access to expensive operations), are alerts, notification mails, and/or monitoring/logging obscuring rather than revealing anomalous conditions? Are mechanisms for adjusting, repairing, updating, and/or restarting systems complex and/or failure prone themselves?
My experience at one site: I was a recent staff member (and hence unfamiliar with policies, procedures, and capabilities), systems went down starting at 2am, and I was unable to raise engineering or my manager. The response at the next staff meeting to my observation of this was pretty much "so what", which did not endear me to the organization (I left it shortly afterward).
Note that what I'm calling for isn't for eng to be the sole group on pager duty, but for eng and ops to share that responsibility.
Within the right framework, keeping everyone on pager rotation can lead to much smoother operations, because everyone stays familiar with the system as a whole. This was going around recently, and captures the essence of the philosophy: http://catenary.wordpress.com/2011/04/19/naurs-programming-a...
At one place I worked we had a two-person support shop. We would claim time and again that this or that affected customers or made support hard. The devs would pick and choose what was fun to work on. I ended up leaving and the other guy went on a prearranged month-long vacation. Everyone else had to pick up support (~5 devs) for a month, and I'm told that they had so much trouble with the normal support load that development actually stopped for that month. Apparently when the other guy got back, they started listening a bit more to his concerns, having had a taste of what happens on the pointy end.
In a similar vein, there's a wine distributor where all employees spend their first week half on the phones and half in the packing department, to give everyone a feel of what the core function is and what customers complain about. The guy telling me said that everyone gets the treatment, except the new CEO, who got away with only doing a day rather than a whole week.
This is honestly why I've gone with PaaS - mostly Heroku - for several months now when deploying a new application. Why on Earth developers do anything other than work on the core features of their program, I don't know. All of the things you need to set up - testing pipeline, containerized automatic deployment, load balancing, databases - are now available as cloud services. There is absolutely no need for the developer to be doing administration and provisioning tasks at this point.
If you think you need to set up your own server infrastructure ask yourself one question: is there any specific technical requirement that my application has that can't be fulfilled by existing cloud services? If there isn't, and there probably won't be, you shouldn't be doing ops yourself, especially not in a startup setting where time is absolutely at a premium and you need to be spending all of it on making the best product you can make.
And before everyone tells me that PaaS is more expensive - it's only more expensive if your time is worth nothing. But your time isn't worth nothing - it's probably worth over $100/hr if you are a developer working in the United States. So Heroku ends up not being more expensive at all - especially not before you have to scale.
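To be concrete about how little there is to set up: for a typical Python web app on Heroku, the whole deployment story is roughly a one-line Procfile plus a couple of commands (a sketch; details vary by stack):

    # Procfile -- a single line telling Heroku how to run the app
    web: gunicorn app:app

    # then, from the project directory:
    $ heroku create
    $ git push heroku master   # build, release, and route traffic
    $ heroku logs --tail       # live production logs, no ssh needed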
Core banking, batch processing, highly sensitive data stores? Probably not great candidates for public cloud consumption.
Web properties and services which don't rely on said functionality? Absolutely great candidates.
And the reality is, whether the IT guys like it or not, developers inside of these orgs are consuming cloud because getting a VM in a traditional sense takes forever (for good reason.)
As a result, we're seeing a shift in the industry where large corps / financial institutions / government are being pragmatic about the idea of a 'hybrid cloud.'
The reality is, adoption of cloud in the enterprise is growing, not shrinking.
Say you could be paid $100/hr at some other company, but instead your startup is paying you $5/hr because that is all it can currently afford.
Now say you can save yourself a week's work by going with a hosting option that is twice as expensive, where the total additional cost would fund another two weeks of you working at $5/hr. That isn't a great way to judge the trade, because the "value" of that labor is both unknown and irrelevant at this point. It might be $0/hr or it might be $10,000+/hr.
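Spelling out that arithmetic (assuming a 40-hour week purely for illustration):

    wage = 5                     # $/hr the startup can actually pay
    extra_cost = 2 * 40 * wage   # $400: the pricier hosting, two weeks' wages
    time_saved = 40              # one week of your work, in hours
    breakeven = extra_cost / time_saved  # $10/hr
    # The trade only pays off if your time is "worth" more than $10/hr --
    # and that figure is exactly what's unknown (and arguably irrelevant).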
So even at $5/hr I can't justify doing this work myself at this point, since I can get the infrastructure for my minimum viable product going for free. I can and should spend my time focusing on developing the features.
Being an SA, what rankles me most is the attitude (unfortunately common in the industry) that "As a developer, I could naturally be the world's greatest Sysadmin, if only it weren't such a waste of my amazing talents."
I've suffered through enough three week deployment cycles because the prod environment is almost nothing like dev and everything is done manually. I think I know what "doing it wrong" looks like there.
In the old days, it was an Architect/Team Lead/Sr Developer who figured out how to distribute a product; the maintenance and upkeep of the installation scripts was later handed to developers of “less” capability, but the architect still reviewed and was kept abreast of installation changes. An Architect/Team Lead/Sr Developer should set up the initial design, scripts, etc. for use as “DevOps”, but DevOps is not a new engineering discipline. The DevOps tools are trivial to understand (kinda like InstallShield’s VB scripts) and easy to master. However, it does require engineering discipline. Kids who are used to pushing buttons in NetOps can’t suddenly become experienced coders … they know little of classical software development.
I disagree with the author’s implication that you should hire a DevOps engineer to do this work so that coders can “code”. The economies of scale are way off; this is something that a junior developer masters so that he can spend more time coding. I wouldn't recommend companies spend so much on an employee performing a task that is trivial at best.
No, this task is a Developer’s job, and as a Developer, you had better know about these tools. I won’t argue that mastering them isn’t time consuming, but if you want to master development, they had better be on your roadmap. Software engineering is a new discipline; what we need to know increases over time, and mastering this field is getting harder and harder … kinda like natural selection.
DevOps is about developer empowerment. It's about creating the systems and tools that give developers more control over the operation of their applications. It's about removing Ops as a technical barrier between new code and live code.
At least, (as someone whom most would label as "DevOps") that's what it means to me.
That's not true. There are many scenarios where devs also understand ops and do both. It's getting rid of the "throw it over the wall" mentality and incorporating ops within dev itself IMO. And devs love to automate things so we have created a lot of tools for that.
The OP article is entirely missing the point, and you've set it straight. DevOps and the "Full Stack Developer" are entirely separate problems. DevOps can be specialized as well.
Like, I'm a full stack developer [e.g. I provision my own prod boxes, write the services that run on them] at my $DAY_JOB. I'm not seeing that as a bad thing unless it gets out of hand and I'm doing that for more than a small cluster of backend services.
One solution is to train up your ops guys in basic Ruby & Chef (for example).
It's anti-silo. Instead of group A building software in isolation and then tossing it over the wall to group B to deploy, those groups merge, cross-pollinate, or at least communicate frequently. That way the group building the software is aware of the needs/issues of the group deploying/running it, and vice versa.
This can extend to other groups' functions (QA, maintenance, sales).
It doesn't matter if one person does a bit of each role or if there is a person for each role, as long as they work closely together.
Sure seems that way at many places.
Like, I could use an Op for my stuff, but I also could automate "him" away.
When I was studying CS, there were a bunch of people, who didn't like to code, so they became Ops.
If Ops is now about programming, they can't even resort to this branch of CS...
The problem is that employers demand specialists, especially for senior positions. At the same time, once they've acquired an employee, they refuse to respect specialties from that point on. Machine learning expert? Sorry, but we need a ScrumDrone over at desk 21-B. Being a software engineer means resolving the fight between your job and your career, which is probably a big part of why this industry is so political.
Employers are remarkably inconsistent in this regard. They want sharp people who can interview like real computer scientists, but get in the way of their continuing sharpness (by assigning smart people to dumb work) as soon as they're on board.
The insight that the OP has is that employers over-hire for crappy work, and he's completely right, but DevOps didn't do it.
5-10 years ago a full-stack developer was a very meaningful distinction. Today, every hacker wannabe Uber driver that went to a dev bootcamp for 3 months calls themselves a full-stack developer. "DevOps" avoids this fate only because the subject matter is slightly heavier and harder to fake.
That's when DevOps gets really helpful and valuable. But if you haven't worked in environments like that, you have no idea what it's like.
Yes, we had automation in the '90s — I wrote quite a bit of it myself! — but the landscape has drastically improved. For one thing, the industry is now embracing it, and with the embrace, a name. The name is not being "sold" in any way that I can see — no one is getting rich by bandying around the buzzword. It's not being sold by anyone I'm aware of as some kind of silver bullet, and anyone who believes in an IT panacea deserves what he gets. It is however being used to sell an idea, that automation in the '90s and before was a good thing, and that we should probably do more of it. DevOps means more than just automation, and in large part, these are also improvements in the industry. We're better now, partly because we have to be.
For that matter, the cloud is just a name that describes the commodification of computing resources (whether that be actual compute, storage, whatever). Yes, yes, the marketing blowhards of the world have misused and bastardized the word, but that doesn't mean they've ruined it, or that it never meant anything.
I have no idea why you think Chef and Puppet are broken piles of ruby scripts, but for the record, they're free. Also, in neither case is anyone implying or saying that they invented automation. Having a nice framework to use is a definite improvement, though.