Was (Unix) ops ever not coding? I honestly don't know, I haven't been around that long. But all the old guys I know were "perl is unix duct tape" ops guys.
The older, more foundational problems were getting automated back then. Now that they're solved problems, and combined with more and more people running large and/or virtual infrastructure, a new problem domain exists around spinning up machines and deployment.
The current coding investment is infrastructure because it's the current pain point. In a decade (or whenever permanent solutions exist for infrastructure) the current way will be considered "by hand" and operations coding efforts will just move on to whatever problem is only visible now that infrastructure is no longer a time sink.
You can say that some ops is just admins running already existing software and operating everything by hand, but there will be admins doing exactly that in a decade too.
You're leaving out the other 80% of the industry – yes, IBM shops had systems programmers but every single one of them also had operators who were following a big run-book of canned procedures and diagnostic trees which sometimes ended with “Call <sysprog>”. Most Unix shops I've seen had a few decent developers and a bunch of people who used tools written by the first group.
The big difference I see in devops is that people started taking system management seriously enough to do first-class development rather than an afterthought.
It wasn't really that clear-cut. I started in Ops in the '90s, too, in SV, and there were plenty of SAs I knew who were proud of the fact that they weren't coders. Yes, they knew the shell, and maybe they knew a tiny bit of Perl. But as a guy who was an SA and a coder (Perl, C) I was a rarity.
I still am, but the "DevOps Movement" is here to point out that this artificial dichotomy is considered harmful.
Yeah, I was never aware of a sysadmin who couldn't code.
Generally, a sysadmin has slightly different skills from a developer - they might code in a highly imperative style, always keeping the actual machine/system being targeted in mind - but I've never known a half-decent sysadmin who cannot write code.
I consider myself a developer (though I call myself software engineer, due to the incompetence of other "developers" I work with).
I know a reasonable amount of sysadmin (all my computers run Linux primarily, I only keep Windows on for checking hardware issues, and a couple of specific apps I need to run once or twice a year).
I wouldn't apply for sysadmin jobs, because I wouldn't feel my knowledge is enough. I have however seen devops jobs that seem to match my skillset - developer with a bit more. I hadn't really heard of the term until I saw the job ad.
It's about more than just "unix duct tape". It's about 'Infrastructure as Code', treating servers like programming objects.
It's about using configuration management tools like Chef and Puppet instead of writing bash scripts which only work on one system.
"DevOps" here, by which I mean an IT Ops Manager/Linux Admin/Network Admin doing this for more than a decade.
Nothing you described is outside of the realm of what your typical linux admin does. I don't have to be a senior python dev to do my job, and I've managed 5500+ virtual machines by myself (puppet/chef, bash, some python, persistent data/object storage).
Agree with the author; just shoving more hats onto fewer people.
It's a nice, and really needed, reaction to the Microsoft view that ops people only need to be able to set options in a GUI. Obviously that never was the case, on Unix or Windows, but their marketing tried to make it look that way, and lots of people hiring and looking for a job believed it.
A DevOps engineer can expect a bigger salary, while a company hiring one can expect far more productive candidates than if they asked for ops alone.
At least in my world view this is a much better definition of DevOps. Folks who make the world run, and through automation can keep a larger portion of the world spinning. It requires someone who can analyze failures, figure out how to predict and mitigate them, and then code automation to do so.
Oddly this is much more like the 'developers' of old. If you sat down at a workstation you needed to know how to be your own system administrator, and you needed to write code.
Automation has enabled a fairly new class of engineer which I think of as someone who has no idea how the pieces all fit together but they can assemble them with the help of a framework and a toolkit into useful products. They become experts at debugging the toolkit and framework but have little knowledge of how everything else actually works.
The problem with this new type of coder is that they can write syntactically correct impossible programs. I didn't understand that until I taught myself VHDL (a hardware description language). VHDL was the first "language" that I knew where you could write syntactically correct "code" which could not be synthesized into hardware. The language expressiveness exceeded the hardware's capabilities (and sometimes you would need to have a time machine). Imagine a computer language where 1/0 was a legitimate statement, not caught by the compiler, but always blew up in your executable.
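That imagined language is, at runtime, how most mainstream languages already behave: division by zero passes the parser and the compiler and only blows up in the running program. A quick Python illustration:

```python
# Syntactically correct, compiles without complaint, and yet one input
# can never execute successfully -- the failure only appears at runtime.
def average(total, count):
    return total / count  # nothing flags count == 0 before execution

try:
    average(10, 0)
except ZeroDivisionError as exc:
    print("blew up in the executable:", exc)
```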
So we have folks who can write code that is grossly inefficient or broken on "real" systems.
Google started a program when I was there to have developers spend time in SRE (their DevOps organization), the idea being to instill in them an understanding of what went on in the whole stack so they could write better products. Jeff Dean's famous list of numbers every programmer should know was another such tool. You cannot get too far away from the systems that are going to run your code if you want to write performant code.
When Flickr did their DevOps talk in 2009, most of the infrastructure engineers I worked with at the time saw the trend in reverse. The people wearing the Developer hat were relying on our team's ability to automate anything, so the Ops team ended up being the team that best understood how the whole system worked.
In 2009, DevOps seemed like there was finally a reasonable answer to Taylorism. Engineers and Programmers and Hardware Technicians and Support Representatives were not cogs in a machine, but humans that could collaborate outside of rigid boundaries. Even at the lowest levels of the support organization, individual workers along the chain could now design their own tiny factories.
From there, it's just a matter of communicating tolerances properly up and down the chain. I am probably over-romanticising these notions, but it certainly felt exciting at the time. Not at all like the "fire your IT department" worldview it turned into.
Of course it is, and there are collections of letters that are pronounceable words, but it doesn't give them meaning. The equivalent in English would be a spell checker that didn't flag "douberness" and passed it along. Sure you can pronounce it if you look at it phonetically, but it doesn't mean anything. It is syntactically correct but broken. VHDL has a lot of things that can be written but not actually expressed in hardware.
Sure, I've no doubt it's more common there - that's very much my understanding. The wording of the above just struck me very much as if it were meant to be hypothetical, which I found amusing given that it's nothing of the sort.
In this case, a 'run-time failure' would be completely unacceptable, as the 'run-time' environment is your $X000 hardware manufacturing run. Hardware development isn't in the same league as software. It's not even the same sport. Like comparing football to rugby. Both played on a gridiron, but entirely differently.
First, there exist software environments where errors cost significantly more than a hardware run. Obviously, those environments contain hardware as well, but "cost of a runtime error" is clearly not the only important thing here.
Second, my only point was that the example given was a piss poor example of the difference between hardware and software. Obviously a bad example doesn't disprove the claim it's supposed to support.
Everyone's piling on you because that wasn't the point of the example. Automation grants humans extraordinary powers, as long as humans aren't simply steps within the automatic system.
There's been an awkward growing phase of the technology industry that has led to technicians that don't have any real understanding of the systems they maintain. Compare and contrast Robert De Niro's character in Brazil with the repairmen he has to clean up after. We could be training those poor duct maintenance guys better.
The article is about how DevOps is killing the role of the Developer by making the Developer be a SysAdmin.
Chuck points out that abstracting the Developer's work too far away from the system in question means the Developer doesn't really understand the system as a whole. Jeff refers to "purely development roles" and other "pure" roles that aren't necessarily natural boundaries.
The example of VHDL is not about hardware and software, but about learning that you didn't actually know something you thought you knew.
The repairmen in Brazil do not realize (or necessarily care) what they don't know about duct repair. The system allows them to function and even thrive, despite this false confidence in their understanding.
At one point at least, Google was investing in (metaphorically) having De Niro cross-train those guys, instead of umm... Well, the rest of the movie.
I've read this a few times and it still doesn't really have any bearing on the aside I was making, which was that something was presented as a hypothetical (Imagine ...) that is the overwhelmingly typical case, and in some measure that amused and confused me.
Well, it helped that I'd been discussing the topic out of band not that long prior to the original comments...
The initial detail was that VHDL, unlike "software" languages, has very different consequences. Can you imagine a language where (1 / 0) wasn't defined away as a DIVERR, but otherwise managed to remain mostly self-consistent? Where something can be logically / syntactically coherent, but not physically possible?
And if that example didn't hit home for you, so it goes, but there was plenty of detail unrelated to the specific example that I thought was more important / interesting to discuss. :shrug:
Yes, "every language" was glib. In any language we could avoid it, actually, by hiding division behind something that gave a Maybe or Option or similar. My point, though, was that his "Imagine..." was actually representative of virtually all of the languages that virtually all of us work in virtually all of the time. It is therefore a poor example of a way in which HW is different.
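The Maybe/Option approach mentioned above is easy to sketch in Python: move division behind a function whose return type admits failure, so callers must handle the undefined case instead of meeting a ZeroDivisionError at runtime.

```python
# Sketch of "hiding division behind something that gave a Maybe or Option":
# the Optional return type makes the undefined case part of the contract.
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """Return a / b, or None when the result is undefined."""
    return None if b == 0 else a / b

result = safe_div(1, 0)
print("handled explicitly" if result is None else result)
```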
well, in many places DevOps is implemented as "developers on PagerDuty". When I (the developer) have to be on-call for 7 day rotations, phone by bedside, paged at all hours, then I'm most definitely acting as operations - probably NOT what I signed up for.
And, contrary to the stated intentions, I've directly observed developers making crappy, band-aid fixes to ongoing production problems in the interest of "making the pages stop". This is the mindset when you are on call and being paged at all hours.
In theory, DevOps is supposed to put those that can best fix things closest to the problems, but in reality a slight separation from the firestorm of ops actually produces better, more thoughtful solutions in the long run.
The best balance is to have a first tier Ops on-call, 2nd tier engineering on-call, and any alerting issues get attention within 24 hours, moving to the front of the work-queue. But, indiscriminately assigning everyone "pager-duty" rotations leads to lower quality solutions in the end.
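The tiering described above can be sketched as simple routing logic; the tier names and structure here are illustrative, not a real paging API.

```python
# Sketch of a tiered on-call policy: first-tier Ops takes the page,
# engineering is second tier, and anything that alerts jumps to the front
# of the work queue. All names are illustrative.
from collections import deque

TIERS = ["ops-oncall", "eng-oncall"]

def route_page(ack_attempts):
    """Escalate through tiers until someone acknowledges the page."""
    for tier, acked in zip(TIERS, ack_attempts):
        if acked:
            return tier
    return "incident-manager"  # every tier missed the page

def promote_alert(work_queue, issue):
    """Alerting issues move to the front of the work queue."""
    work_queue.appendleft(issue)
    return work_queue

work_queue = deque(["feature-a", "feature-b"])
print(route_page([False, True]))                 # eng-oncall
print(list(promote_alert(work_queue, "alert: disk full")))
```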
As the guy who's usually on pager rotation (and too often with far too few bodies to share it), I disagree. I wrote a detailed comment a few days ago explaining my rationale here, in conjunction with overtime / off-the-clock responsibilities:
• It increases pager coverage, and reduces any one person's pager obligations. Simply having pager anticipation is a mental burden after a while.
• It creates a stronger incentive for response procedures: what are the expected obligations of response staff, what's considered sufficient effort, what's the escalation policy, who is expected to participate, and what are the consequences of failure to respond?
• Cross-training. Eng learns ops tasks, ops has a better opportunity for learning what eng is up to and deals with.
• It makes engineering more aware of the consequences of their actions: is insufficient defensive engineering causing outages (say, unlimited remote access to expensive operations), are alerts, notification mails, and/or monitoring/logging obscuring rather than revealing anomalous conditions? Are mechanisms for adjusting, repairing, updating, and/or restarting systems complex and/or failure prone themselves?
At one site where I was a recent staff member (and hence unfamiliar with policies, procedures, and capabilities), systems went down starting at 2am and I was unable to raise engineering or my manager. When I pointed this out at the next staff meeting, the response was pretty much "so what", which did not endear me to the organization (I left it shortly afterward).
Note that what I'm calling for isn't for eng to be the sole group on pager duty, but for eng and ops to share that responsibility.
In my experience, it also leads to better solutions because devs who don't get woken by issues with their own code are people who don't particularly care about such faults. I've done on-call before where I've begged the devs to fix issues because they were waking me up needlessly. The devs were nice, but somewhat lazy, and my fix wasn't on their radar. Stick them on on-call, and all of a sudden it's more important to fix.
At one place I worked we had a two-person support shop. We would claim time and again that this or that affected customers or made support hard. The devs would pick and choose what was fun to work on. I ended up leaving and the other guy went on a prearranged month-long vacation. Everyone else had to pick up support (~5 devs) for a month, and I'm told that they had so much trouble with the normal support load that development actually stopped for that month. Apparently when the other guy got back, they started listening a bit more to his concerns, having had a taste of what happens on the pointy end.
In a similar vein, there's a wine distributor where all employees spend their first week half on the phones and half in the packing department, to give everyone a feel of what the core function is and what customers complain about. The guy telling me said that everyone gets the treatment, except the new CEO, who got away with only doing a day rather than a whole week.
Right - and I guess the point is that the person who is working on features ends up also being that one person who does the automatic provisioning and testing pipeline administration work, as well.
This is honestly why I've gone with PaaS - mostly Heroku - for several months now when deploying a new application. Why on Earth developers do anything other than working on the core features of their program I don't know. All of the things you need to set up - testing pipeline, containerized automatic deployment, load balancing, databases - are now available as cloud services. There is absolutely no need for the developer to be doing administration and provisioning tasks at this point.
If you think you need to set up your own server infrastructure ask yourself one question: is there any specific technical requirement that my application has that can't be fulfilled by existing cloud services? If there isn't, and there probably won't be, you shouldn't be doing ops yourself, especially not in a startup setting where time is absolutely at a premium and you need to be spending all of it on making the best product you can make.
And before everyone tells me that PaaS is more expensive - it's only more expensive if your time is worth nothing. But your time isn't worth nothing - it's probably worth over $100/hr if you are a developer working in the United States. So Heroku ends up not being more expensive at all - especially not before you have to scale.
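A back-of-the-envelope version of that argument can be put in a few lines; every number below is an assumption chosen for illustration, not data from the thread.

```python
# Build-vs-buy break-even sketch. All inputs are assumed figures.
def months_to_break_even(dev_rate, paas_premium, setup_hours, upkeep_hours):
    """Months until self-hosting's saved fees outweigh the ops time spent."""
    setup_cost = setup_hours * dev_rate                      # one-time ops work, $
    monthly_savings = paas_premium - upkeep_hours * dev_rate  # $/month
    if monthly_savings <= 0:
        return None  # self-hosting never pays for itself at these rates
    return setup_cost / monthly_savings

# At an assumed $100/hr, saving a $50/mo PaaS premium while spending
# 4 hrs/mo on upkeep never breaks even:
print(months_to_break_even(dev_rate=100, paas_premium=50,
                           setup_hours=40, upkeep_hours=4))  # None
```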
Banks are leery of the cloud for anything they do because even a basic web site can have sensitive (hackable) links to logins for account data. Healthcare companies have the same concerns around private health data.
If you're working on a startup that is not yet profitable (or at least compensating you for your time), your time is currently worth close to nothing. That same time may or may not have a higher value later.
That's only true if you have no money and the means to live completely for free. Otherwise, if like most human beings you have some expenses, then regardless of whether your time is producing value it certainly has a cost, and that can be directly compared against the cost of a PaaS.
That depends though on the current value of the start up, which could very well be close to zero.
Say you could be paid $100 p/h at some other company, but instead your start up is paying you $5 p/h because that is all it can currently afford.
Say you can save yourself a week's work by going with one hosting option that is twice as expensive, but the total additional cost would allow you to work for $5 p/h for an additional two weeks. That isn't a clear-cut trade, because the "value" of that labor is both unknown and irrelevant at this point. It might be $0 p/h or it might be $10000+ p/h.
A single 1X dyno on Heroku is free, with a Postgres server and a free Redis instance.
So even at $5/hr I can't justify doing this work myself at this point, since I can get the infrastructure for my minimum viable product going for free. I can and should spend my time focusing on developing the features.
Actually finally created a Hacker News account to come here and say something similar to this. The article is remarkably misguided in several ways, including a fundamental misunderstanding of what "DevOps" means.
Being an SA, what rankles me most is the attitude (unfortunately common in the industry) that "As a developer, I could naturally be the world's greatest Sysadmin, if only it weren't such a waste of my amazing talents."
Yeah. As a developer I used to have a similar attitude, until I saw really good sysadmins (and DBAs) up close. Most developers are kidding themselves if they think they could replace either without at least as much extensive training as it would take one of them to replace the developer.
I can't imagine an environment, other than say a juggernaut company (Google, Amazon, RackSpace, etc), that would require a full-time DevOps engineer. Most companies are delivering a simple service with modest needs. The reality in these small environments is that their world is fairly small. They rarely go over 50 machines and a handful of services … so how many times can you automate something? How many new DevOps tasks can be created daily? How many different patterns can a DevOps engineer come across? I've done all of these and I quite frankly don't see a great challenge in this field.
In the old days, it was an Architect/Team Lead/Sr Developer who figured out how to distribute a product; the maintenance and upkeep of the installation scripts was later handed to developers of "less" capability, but the architect still reviewed and was kept abreast of installation changes. An Architect/Team Lead/Sr Developer should set up the initial design, scripts, etc. for use as "DevOps", but DevOps is not a new engineering discipline. The DevOps tools are trivial to understand (kinda like InstallShield's VB scripts) and easy to master. However, using them well does require engineering discipline. Kids who are used to pushing buttons in NetOps can't suddenly become experienced coders … they know little of classical software development.
I disagree with the author implying that you should hire a DevOps engineer to do this work so that coders can "code". The economies of scale are way off; this is something that a junior developer masters so that he can spend more time coding. I wouldn't recommend companies spend so much on an employee performing a task that is trivial at best.
No, this task is a Developer's job, and as a Developer, you better know about these tools. I won't argue that mastering them isn't time consuming, but if you want to master development, they had better be on your roadmap. Software engineering is a new discipline; what we need to know increases over time, and mastering this field is getting harder and harder … kinda like natural selection.
This is not the only variant of DevOps. There are companies where the developers are responsible for creating the automation scripts to deploy their code into production as services and expected to keep in running in production.
If your production deployments are fundamentally different than your dev deployments, you're doing it wrong. For the most part, you should only be localizing a common deployment pattern.
I've suffered through enough three week deployment cycles because the prod environment is almost nothing like dev and everything is done manually. I think I know what "doing it wrong" looks like there.
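One way to read "localizing a common deployment pattern" is that dev and prod share a single deployment routine and differ only in parameterized settings. A minimal sketch of that idea, with all names and values illustrative:

```python
# One deployment routine for every environment: the steps are identical,
# only the localized settings differ. (Illustrative names and values.)
SETTINGS = {
    "dev":  {"hosts": ["localhost"],     "workers": 1, "debug": True},
    "prod": {"hosts": ["app1", "app2"],  "workers": 8, "debug": False},
}

def deploy(env):
    """Return the deployment steps for an environment."""
    cfg = SETTINGS[env]
    steps = []
    for host in cfg["hosts"]:
        # Same steps everywhere; only the values vary per environment.
        steps.append(f"push build to {host}")
        steps.append(f"start {cfg['workers']} workers on {host} "
                     f"(debug={cfg['debug']})")
    return steps

print(deploy("dev"))
```

Because prod is just dev with different values, a three-week manual prod deployment cycle has nowhere to hide.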
Strongly disagree. DevOps is not the simple automation of operational infrastructure. That has been done for decades.
DevOps is about developer empowerment. It's about creating the systems and tools that give developers more control over the operation of their applications. It's about removing Ops as a technical barrier between new code and live code.
At least, (as someone whom most would label as "DevOps") that's what it means to me.
"A DevOps person isn't someone who develops, and who does Ops. It's someone who does only Ops, but through Development."
That's not true. There are many scenarios where devs also understand ops and do both. It's getting rid of the "throw it over the wall" mentality and incorporating ops within dev itself IMO. And devs love to automate things so we have created a lot of tools for that.
Not really. Devops isn't a role or a person or even a process. It's a way to structure releasing software.
It's anti-silo. Instead of group A building software in isolation and then tossing it over the wall to group B to deploy, those groups merge, cross-pollinate, or at least communicate frequently. That way, the group building the software is aware of the needs/issues of the group deploying/running it, and vice versa.
The same can extend to other groups' functions (QA, maintenance, sales).
It doesn't matter if one person does a bit of each role or if there are separate people for each role, as long as they work closely together.
DevOps means different things to different people it seems. I've spoken with dozens of companies and each one seems to have a different definition. Some view it as an SA with a brain. Others view it as a full stack developer, still others view it as guys focused on developing configuration management and just that. Others consider DevOps tools developers. Some just say DevOps and they mean "guy we can ask to just do whatever needs to get done, helpdesk, networking, configuration management, training, etc"
The OP has a point, but his choice of DevOps as the bugbear is clumsy. Maybe the bastardization of it by the business is what he's mad about. "Full stack" is, perhaps, a better target. It's a completely meaningless, useless phrase.
The problem is that employers demand specialists, especially for senior positions. At the same time, once they've acquired an employee, they refuse to respect specialties from that point on. Machine learning expert? Sorry, but we need a ScrumDrone over at desk 21-B. Being a software engineer means resolving the fight between your job and your career, which is probably a big part of why this industry is so political.
Employers are remarkably inconsistent in this regard. They want sharp people who can interview like real computer scientists, but get in the way of their continuing sharpness (by assigning smart people to dumb work) as soon as they're on board.
The insight that the OP has is that employers over-hire for crappy work, and he's completely right, but DevOps didn't do it.
"Full stack" is useless in the same way that "Agile" is useless. Specifically, it's useless because it was hopelessly cargo-culted and overused due to the original power of the idea.
5-10 years ago a full-stack developer was a very meaningful distinction. Today, every hacker wannabe Uber driver that went to a dev bootcamp for 3 months calls themselves a full-stack developer. "DevOps" avoids this fate only because the subject matter is slightly heavier and harder to fake.
I think a lot of people in this thread have never actually worked on a Really Big Project. Once you have two or three offshore teams, a hundred developers, associated support staff, multiple product teams, competing customer voices, multiple production environments in different locations with different support, "standards" imposed from external orgs that make no sense for the project at hand...
That's when DevOps gets really helpful and valuable. But if you haven't worked in environments like that, you have no idea what it's like.
Saying "DevOps person" is like saying "Agile person". I think it's a fundamental misunderstanding. DevOps is about getting development and operations to communicate directly and work cooperatively, without impediments beyond inherent complexity.
Sounds like DevOps is new and, like Agile, isn't always implemented as intended once it gets into the wild at companies. I've experienced Agile and Enterprise Agile at companies where meetings are called scrums but are 50 minutes long. It sounds like there is DevOps and Enterprise DevOps, or some other bastardization occurring out in the wild. Oh process, oh process, save me from these hardships.
That's completely inaccurate, DevOps is most definitely about getting Developers to think about their software being Operated, and it is not about 'Ops Guys' coding for the first time ever - we always wrote automation tools.
Ya, it would have been better if the title reflected the true target of the rant [Full Stack Developers]. However, I kind of think this reflects a local culture issue and not really a broad one.
Like, I'm a full stack developer [e.g. I provision my own prod boxes, write the services that run on them] at my $DAY_JOB. I'm not seeing that as a bad thing unless it gets out of hand and I'm doing that for more than a small cluster of backend services.
The problem is there's as many ideas of what "devops" means as there are people saying "devops". The concept of devops as you just described it makes absolutely no sense to me for example. Operations has always been about automation. CFengine has been around for a long time. So to me, your version of devops just seems like a fad buzzword applied to the same old same old.
Usually worse, it seems most people use chef or puppet. I understand what you are saying, but I am saying I believe otherwise. Operations has always relied heavily on automation. We managed several thousand unix machines with 3 or 4 people back in the 90s and it was totally normal. Devops started for the same reason cloud started: to put a new buzzword on what people have done forever, so that it can be sold as a new silver bullet.
I'm just not that cynical. We're all tempted to grow cynical after a number of years in this business, but you don't have to let it color your whole world. :-)
Yes, we had automation in the '90s — I wrote quite a bit of it myself! — but the landscape has drastically improved. For one thing, the industry is now embracing it, and with the embrace, a name. The name is not being "sold" in any way that I can see — no one is getting rich by bandying around the buzzword. It's not being sold by anyone I'm aware of as some kind of silver bullet, and anyone who believes in an IT panacea deserves what he gets. It is however being used to sell an idea, that automation in the '90s and before was a good thing, and that we should probably do more of it. DevOps means more than just automation, and in large part, these are also improvements in the industry. We're better now, partly because we have to be.
For that matter, the cloud is just a name that describes the commodification of computing resources (whether that be actual compute, storage, whatever). Yes, yes, the marketing blowhards of the world have misused and bastardized the word, but that doesn't mean they've ruined it, or that it never meant anything.
It has nothing to do with cynicism. This is literally taking an existing industry, and slapping a new name on it to sell products, consulting, etc. Go look at any of the current product's websites. Which ones demonstrate an understanding that automation is as old as computers? Cause they certainly all look to me like they want to give the impression that they invented the concept, so you should buy their broken pile of ruby scripts instead of the other guy's broken pile of ruby scripts.
The IT operations industry is not dying. Automation is old, but this isn't being billed as "new." What's new is 1) it being widespread, which despite your contention, has not been the case, and 2) automation via open source frameworks and tools, of which there has previously been a dearth.
I have no idea why you think Chef and Puppet are broken piles of ruby scripts, but for the record, they're free. Also, in neither case is anyone implying or saying that they invented automation. Having a nice framework to use is a definite improvement, though.
The market is maturing. Take a look at a market that is similarly structured. Look at construction.
You have general contractors and then you have subs that work under them. A general contractor is a jack of all trades, master of none. Exactly what a full stack developer is.
This isn't the end of specialization. It's the beginning of project management steered by developers who intimately understand all of the work involved, even if they aren't as competent as the specialists.
Having a team consist of all full-stack developers is just stupid. Having a full-stack developer as the head on a project, with specialists on the team, is a great idea.
A software lead is one type of generalist, as they have to manage resources that can do X, Y and Z. The generalist position described in this article is someone who can and will dive in and do X, Y and Z themselves. Very different roles.
It's the beginning of project management steered by developers who intimately understand all of the work involved, even if they aren't as competent as the specialists.
People with a wide breadth of general development knowledge, once employed as developers but now managing development? I wouldn't exactly call that a new idea. Or are you talking management that also develops across all components of the software? If management takes up so little of a single developer's time, then you're just using different words to describe the small-team constraints that the article does.
I don't think this is the best analogy. Understanding a "full stack" isn't so you can manage specialists, it's necessary to _be_ a specialist so you can build something that you know doesn't have an obviously inherent pitfall. It's so you can prototype something without needing the time and attention of another specialist.
You can't really draw a hard line between administration and development, in the end you are just building a system and the more you know about it from all angles the better design decisions you can make and the easier it is to fix issues.
I diagnosed a few problems over the years that arose as apparent issues with a web application but that I gradually narrowed down to things like network issues, or kernel bugs, or system misconfiguration, or database issues etc. Modern stacks are very complicated and the interactions can get really messy; it is close to impossible for someone who doesn't understand the whole thing to find issues that aren't neatly isolated. I know perfectly well that I do not have the full qualifications of a sys-admin proper, and would not like to do a sys-admin job full time, but in those particular cases a pure sys-admin would not (and often actually could not) find those issues. As an example, I can remember many situations where the application showed different behaviour depending on which application server you hit, and typically both "pure" developers and "pure" sys-admins were having a hard time finding the issue.
Good sysadmins have to learn, at the least, C programming, shell scripting, and network protocols and programming, so it should not be a big deal to add some Rails/Django/Node to their skillset. Good developers likewise have to know things about hardware, networks, protocols and so forth. You do want to have people who are specialized in one area or the other and focus on it on a day-to-day basis, but you also want people who can understand a particular aspect of the system top to bottom when such a need occurs, and it does happen quite often.
I don't know - I need to know the kernel, the shell, the hardware, networking, programming, all of the services that are in prod and automation tools and how to manage a code base. Now I need to learn how to write production quality code in Node?
I'm all for a tighter integration between Ops and Devs, and infrastructure-as-code can help bridge that gap, but I don't know that doing each others jobs is the solution.
Thanks for voicing this so well. This has been my experience as well, and it's probably safe to say our systems will continue to evolve to be more complex as the tools that enable us to deal with that complexity co-evolve. Having a full picture of things will remain a requirement.
But there is a difference between "having a full picture of things" and actually painting the picture yourself. Here I mean that to have a general overview of the different parts of the system is beneficial for everyone involved. However when you need to set up and interact with all components of the system on a daily basis it becomes a very time consuming task.
Pure developers are a problem because they lack the information to do their job well.
I go back a few years, to an old, waterfall-like job. I was handed work by an analyst, who was handed a task to analyze by an engagement lead, who might at some point talk to someone using the application. The work was always handed out on time, but the product often failed, not because it was buggy, but because nobody actually had much of an idea of what we were really trying to solve.
So us developers got much work done, but the work didn't actually solve real problems: the force was applied to the wrong vector. Then the product fails, and the blame game begins: changes are too expensive, because the developers didn't know what the real invariants were. Queries are slow, because the database architect wasn't told about the questions that the database had to answer. The business analysts just wrote documents. It was all a big catastrophe.
That company moved to Scrum, the terrible way: Here, have this self organizing team full of specialists that don't know anything outside of their domain. They are still failing to this day, but now they blame each other in retrospectives.
So I'd much rather be stuck coding less but aware that my code is actually solving a problem for someone, than just writing castles in the sky because everything I've been told about what my userbase needs comes from a game of telephone.
I think the idea is not necessarily to have developers run production systems, but they still should know what production looks like and at least have basic knowledge on how to configure all of the moving parts of the system.
Having developers be 'full stack' imho reduces the amount of "works on my machine". How can developers test the software they're building if they can't at least get close to a production environment?
Automated provisioning is just one of the usual 'devops' things that I can't imagine how a proper software engineering process would work without.
I would say that at least 20% of the people I graduated with can create software that works mostly OK when they hit the little green "run" icon in Eclipse. They were, however, incapable of figuring out why their jar file doesn't work in Tomcat on a Linux server somewhere.
Usually it was because they were using a local database with root credentials instead of a remote database with multiple users, or they had some file stashed away somewhere in their classpath, or some binary installed in $PATH that made the whole thing work.
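Those failure modes are exactly what externalized configuration is meant to avoid. Here's a minimal sketch, in twelve-factor style, of reading database settings from the environment with development defaults; the variable names and the default values are hypothetical, not from any particular project:

```python
import os

def database_url() -> str:
    """Build a connection URL from the environment, so the same jar/script
    behaves identically on a laptop and on a production Linux box where
    the real values are exported."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "app_dev")
    user = os.environ.get("DB_USER", "app")  # note: never "root" by default
    return f"postgresql://{user}@{host}:{port}/{name}"
```

With nothing exported this yields the safe local default; in production, ops just sets `DB_HOST`, `DB_USER`, and so on, and the code is untouched.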
I think just wanting to be a developer and not know about the stack that your application runs on is like being a painter but refusing to buy paint because you can't see what going to the store has to do with painting.
There is also an aspect that I enforced in my devop team: you built it, you will deploy it and you will keep it online and running. Don't develop something that's a pain to deploy, because you'll be the one deploying it. Don't develop something that's crazy on the database side, because you'll write the database script. As the devop, you'll have to balance the ease of development, deployment and maintenance yourself, there's no "someone else will deal with it".
That limits the crazy stuff like esoteric package functions for the database, crazy port openings on the machines, esoteric daemons, or tools that can only be deployed from source. Because the guy deciding that will be the one updating and debugging the VM creation scripts (for 2 different cloud providers).
But at the same time, they have access to all the tools they want, they can evaluate whether something is best done on the admin or development side, they use mostly the same tools, code conventions and languages for administration and product development, and they get immediate feedback on how much logging to send from the application.
> Having developers be 'full stack' imho reduces the amount of "works on my machine".
Not necessarily, it can also be worsened. I think Conway's Law is especially appropriate here: "[O]rganizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."
When everybody shares IT responsibilities "because they can", it leads each of those developers to run their own little customized fiefdom, with subtle (or not-so-subtle) version mismatches, mysterious lurking cronjobs, and mystery scripts that do per-app housekeeping.
That, right there, is the attitude that leads to unmaintainable software. If you want the position of general contractor, then you'd better know how to interface with zoning authorities, draw architecture, and perform maintenance on your creation over time, or at least work very closely with the people who do know and do care about that stuff. You are not above any of those things. Ever.
All too often I see feature developers say "well, I have an operations team, I'll let them figure it out," and they (a) never leverage said operations team for advice during development, (b) don't consider operational concerns such as sharding, deployment, logging, and monitoring at all during development, (c) file a ticket against operations with three weeks to go until their deadline to perform all of those things, and (d) call for rolling operational heads when their service does not perform to their expectations (using the author's "totem pole" as their rationale). As an operations engineer, I can count way too many fucking times I've been on the other end of that from developers with attitudes like yours. It is the absolute worst part of my job.
As someone who has been doing DevOps for 20 years, since long before it was called "DevOps"...
First, DevOps has degenerated into a meaningless buzzword to rival "Agile", despite the good ideas and good intentions. Every day, I have recruiters looking for "DevOps". A couple of years ago, they'd never heard the word.
Second, DevOps is actually getting strongly biased toward Ops, often to the exclusion of Dev. In the eyes of recruiters and much of the industry, it's become synonymous with "Chef/Puppet/Ansible automation", a set of automation tools. That's stupid.
Third, and this is what matters to me... DevOps is (or was meant to be/should be) more about organizational structure than skills. As the author points out here, specialization is good and necessary. But specialization comes with bureaucratic compartmentalization that makes working across org boundaries very difficult. When you have to climb four or five (or more) layers up the org chart to find common management for both the dev and ops sides of a project, then the dev team has no authority over and very little way to communicate with ops, and vice versa. For most large organizations, the dev/ops separation is necessary - developers get locked out of production systems to keep them from legal exposure to customer data (HIPAA, PII, etc), and to keep them from accidentally or intentionally altering production in a way that it might break.
Read Gene Kim's excellent quasi-fiction book, The Phoenix Project. It covers a lot of the issues of DevOps as fixing communication patterns in large organizations. You'll see how little of it is about tooling or "full-stack", and how much is about clearing bureaucratic obstacles to effective communication.
You have to ask whether you're talking about a senior executive ("Chef de Cuisine", "Head Chef", "Executive Chef"), a subject matter expert or mid-level professional ("Saucier", "Pastry Chef", "Line Cook"), a junior person ("Prep Chef"), or a low man on the totem pole ("Busboy", "Dishwasher").
"Chef" is a very vague term when applied to large restaurants. Obviously, in a little family owned place, a lot of these roles would collapse.
I found it very interesting that Facebook apparently hired programmers for all its roles in the early days - even e.g. the receptionist. I think the point that this article misses is that a 'devops' person - that is to say, someone with both sysadmin and development skills, whichever side of the fence they originated on - can do the job better than someone who is "just a sysadmin" and incapable of programming. When you look at modern ops infrastructure like Puppet, you're looking at programs, written in programming languages, and it's foolish to pretend otherwise. So like it or not, you need to hire someone who can program to manage it. If you imagine you can get a cheaper non-technical ops person to handle this and save money, you're going to get inferior results.
I think this is going to happen to more and more careers. Already a profession like surgery or piloting a modern airliner is starting to require some of the skills we think of as programming. Software is eating the world - that doesn't make domain expertise irrelevant, but it means you need people with domain expertise and programming skills. That applies to non-programming roles in the software industry just as it applies to other industries.
It's definitely more complicated than the post implies, and it most definitely is NOT only for startups. Soon after I started out - at a mature company already making plenty of money - I was a full-stack engineer. There were a number of reasons:
- New development happened sporadically; day-to-day work was a mixture of maintenance development and admin work
- Culture. They started with a small team, and never grew it. Having more people didn't fit with the way the company saw itself.
- Difficulty hiring specialists. Various reasons for it, but still valid.
At another company I worked at, there was a lot of "integration development", where your time was spent connecting various internal and external systems together, software-wise, developing tools that support systems work (i.e., tools for sysadmins), and developing other tools that are for end users but have a heavy systems component (management software for DNS, for example). That meant understanding each part of the stack from both a development and a systems perspective. Another factor is interest level. A few of us were full-stack developers because we studied more than just development in our free time, and we took that with us to work. This wound up benefiting everyone. It also led to us being the go-to people (that is, the top level of internal support) for both the more specialist internal developers and the sysadmins, as we had deep knowledge of the internal systems from the bottom to the top of the stack, and the knowledge and experience to explain and troubleshoot problems for people in those other roles.
The author is correct in that this may be more /common/ at startups (the previous startup I worked at did in fact operate as the post describes), and is sometimes done out of necessity. It is by no means limited to those environments, however.
Edit: I'd also separate DevOps from full-stack engineer. They sound like the same thing, and if you squint from far enough away, they look like the same thing. The terminology may be fluid, but I think (as some other comments state), that DevOps is more centered around "coding for systems automation", whereas "full-stack engineering" is a much more general term which can encompass a variety of different types of tasks in different environments with varying levels of knowledge/experience in the different parts of the stack/tools.
Developing systems, writing programs, and quality assurance are all fairly different skill sets. A gross, possibly inaccurate simplification could be:
- systems is about thinking how different things fit together
- development is about building something
- quality assurance is about figuring out how to blow something up
The thing is, I know technical people that I wouldn't trust to do one of the above, let alone all three at once. And I know people who can rock it in multiple disciplines. When it all comes down to it, focusing on the quality of the people you work with and helping them thrive is a solid plan.
Because of an abnormal learning style (I'm severely dyslexic), I have never fit into corporate environments. Looking past the egregious spelling errors, being a slow learner isn't a winning talent in a job interview. As a result, I've fallen into the trap of full-stack (jack of all trades) developer consultant for a little over a decade now. I never got very good at anything in particular. Thus, I have battled burnout for many years now, and am passively seeking another career outside of Internet technologies. The point of the article hits close to home.
The burnout aside, there is a plus to someone being proficient at many related tasks; having a somewhat in-depth knowledge of how all these technologies come together. The point is not all jobs require the best, most expert techniques. As in the case of the jack-of-all-trades carpenter, as long as he knows when to call the specialist, he is still getting the jobs, as am I.
This is an interesting rant. I had never seen DevOps as being "for" developers. My impression has always been that it is sysadmins' quest for a high degree of automation and streamlining that allows them to manage hundreds of systems without waking up in the middle of the night sweating. And when you're looking for a sophisticated tool to control something, you inevitably find yourself writing software.
The author is missing the fact that good developers can actually automate away a lot of those "lower on the totem pole" roles, or at least reduce the amount of repetitive stuff down to the point where the remaining work is quite abstract and basically just more programming.
This isn't counter to specialization -- in a big organization, people are certainly still going to specialize. But the "DBA" equivalent people are just programmers who have fresh expertise on the storage layer, and the "QA" people are just programmers who have expertise on the automated build and test systems.
The dentist analogy doesn't hold in software. A dentist handling secretarial work is just an expensive waste of time, due to comparative advantage. But a programmer replacing secretarial work with automation often reaps big long-term dividends.
I don't really think a good developer can replace a good sysadmin. The reverse is true too, this is not a flamebait! :P
I don't see "DevOps" as a way to replace some roles, but as a way to make everyone work better together. Instead of everyone living in their own bubble (which, in my admittedly pretty limited experience, is what always happens), everyone has to know, at least a little, what someone else does. It really helps everyone at the end of the day.
And the developer can keep coding without me screaming at him because he placed the database connection string in a configuration file that sits inside a .jar that sits inside a .war and so on.
It's really helpful to have developers know at least a little systems administration, and vice versa. When doing web development, at least, there's a fair number of problems that you should just let a web server, caching server, database server or even the operating system handle for you. If developers know nothing about systems administration, they sometimes solve non-problems.
I'm just as guilty as anyone else of trying to write code to fix a problem that's better handled by existing infrastructure and servers. I worked on a project to deliver invoices to customers in downloadable form. In the end pretty much everything was thrown out and I just needed to write the authentication part, because the sysadmins pointed out that the existing Stingray boxes could handle everything else (http://www.riverbed.com/products-solutions/products/applicat...).
It's not that it doesn't make sense to have dedicated developers and systems administrators, as the developer you just need to know enough to be able to talk to and understand the admins position and thoughts.
this is pretty much spot on. when you have strict separation of dev & "ops", you get what I would argue is bad service+stack design and wasted resources.
"devops" is having developers sitting with, understanding, architecting, and in the end programming solutions to what were traditionally ops/sysadmin problems. and operators sitting with, understanding and participating in service architecture, teaching about livesite realities, coding where possible, and appropriately buffering devs from noise on the livesite.
the unfortunate thing is that many companies swing the pendulum too far one way or the other. neither all devs nor traditional dev + IT/ops orgs are the best way to build a great product and run a world class service.
... this is right on. I've seen my own role in "DevOps" as being one that is less task-oriented and more toward bridging skill sets. The drive toward specialization (mentioned by the author) is leading us toward having "Ops" administrators that are completely incapable of understanding how an object-oriented system is constructed and "Devs" who seem almost oblivious to how computers (web servers, middleware containers, databases, etc.) actually work.
I don't think that's completely true. A good developer must have a good knowledge of the stack he's working on. So he should be capable of managing that stack, shall the necessity arise.
However, it's obviously better to separate matters and offload management/administration tasks to a separate team/person. Thus, a good developer in a good company (which has a separate sysadmin roles) indeed can't really replace good sysadmin because the latter has niche practical knowledge on handling various situations (especially emergencies) quickly.
Nonetheless, one can be both a good developer and a good sysadmin at the same time.
In my own experience I don't think developers were ever pushed to become devops (as the article asserts).
Instead, about 40% what was called 'sys admins' were pushed to become devops. The 'sys admin who knew cfengine' became a 'devops person who knew ansible'. Deploys and cloud APIs just became another thing to automate.
The bottom 60% - the shit ones who got paid 120,000GBP to copy paste commands they didn't understand from word documents into Solaris 8 boxes in 2010 because they couldn't actually automate anything - left the industry.
from what I've seen, the term 'devops' is generally used to pay a developer less than you would otherwise while getting more from them. I'm not sure how the math of that works out, but based on what I've seen, that's what happens.
I wish. They are still here and still copy+pasting commands they don't understand. The entire linux world is still dominated by these people. The whole "howto" culture is still very much alive and kicking.
I'm a terrible system administrator. Everything I've learned about it has come from necessity because startups. I don't want to be a system administrator and have no desire to be good at it. So I learn the minimum I need in order to get it to do what I need to do and hope that I've done it right.
I might only be slightly better than someone who's new to system administration only because I've written system-level code and understand operating systems and things of that nature.
However a good system administrator understands the entire architecture from a holistic point of view. They know the compiler switches to use, the run-time switches to tweak, the security implications of various configurations and all of the other details it takes to keep a cluster secure.
I often work well with a good system administrator to debug and optimize workloads due to the overlap in our skills. I find this to be the optimal relationship.
Learning and practicing system administration takes away from my ability to learn and be a better programmer (and the opposite is true as well). I don't know about most people but I find I can't be good at both. And I know which one I'd rather be better at (programming).
I don't think the author has hit the nail on the head but I agree that effective teams can't expect one person to manage an entire application from code to managing a secure deployment.
DevOps is a rather overloaded term at the moment. I've seen it refer to any of the following:
- Encouraging collaboration between your Dev, Ops, and QA teams, with some cross-training so they can work together better
- Merging those teams under the same manager to try to improve that collaboration
- Making your developers responsible for all those roles, and never hiring a dedicated sysadmin or QA engineer
I personally think any of those is fine. Startups will err toward having fewer people and all of them be developers, while in a larger company it probably makes sense to specialize more and make "DevOps" mean close collaboration between those teams.
Of course, I've also seen "DevOps" as a job title for what would have previously been a "system administrator" or "site reliability engineer", and I have much less patience for that. :) Occasionally I see a job posting for a role that is actually dev + ops, but most often a "DevOps" posting means "we need a sysadmin, but we don't think sysadmins are cool enough to work here."
I work at a large enterprise company and for a while I was part of the DevOps team as a software engineer.
Some of our goals included:
- Building the continuous integration/delivery pipeline
- Moving codebases from one source control system to another
- Creating programs/systems to automate tagging of builds
- Automating the deployment processes of multiple applications onto non-production servers
- Implementing and maintaining the functional testing frameworks and server grids
The more I look at these goals, the more I realize that the developers who work on feature delivery should not worry about these anyway. So I disagree that DevOps is killing the developer. In fact, DevOps is helping the developers focus on what's important.
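As a sense of scale for one of those goals: automating the tagging of builds can start as something as small as a deterministic tag generator. This is a hypothetical sketch, not the pipeline described above; the tag format is an assumption:

```python
from datetime import datetime, timezone

def build_tag(version: str, commit: str, now: datetime = None) -> str:
    """Derive a unique, sortable tag from the app version, a UTC
    timestamp, and the short commit hash, instead of having a human
    invent tag names by hand."""
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%d.%H%M%S")
    return f"v{version}-{stamp}-{commit[:7]}"
```

A CI job would call this once per green build and push the result as a git tag, so every deployed artifact traces back to an exact commit.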
I'm not so sure your usage of the term "full-stack engineer" is accurate here. I consider myself full-stack, but I don't know half the stuff about Chef that our DevOps guy does and I'm ok with that. To me, a full-stack engineer means that I'm capable of coding both things that make magic happen in the browser and things that make magic happen on the server side of the application. It doesn't mean I'm a jack of all trades.
That said, I don't think that the increased prevalence of DevOps is bad. And I don't think it means "everyone is doing everything" either. It's a new role that is borrowing elements from both development and operations. Not one person doing both roles.
I think DevOps is very much a web-application thing (where web-application includes intranets... basically anything that speaks TCP). I seriously see the need there. I still remember the days when developers would build an application that worked on their system and then hand it off to Ops, hoping to never hear about it again. I interviewed developers who could not tell me which web server or application server their company was running in production, even though capabilities and performance characteristics differ wildly. The DevOps role is trying to bridge the gap; it's the jack of all trades who knows enough of every piece of the system to debug issues that happen at those boundaries. Is this DB problem a machine issue, do we just need new hardware? Is it an application problem (n+1 queries), and where could those be? How can I structure my stack in a way that hands off tasks to the place where they can be solved efficiently? The implementation of those solutions can be handled by domain experts, but someone needs to keep all those pieces from breaking apart at the seams. In the web world, that's the DevOps person.
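The n+1 query pattern mentioned above is a good example of a boundary problem: the DBA sees query load, the developer sees clean-looking code. A toy illustration follows, with the database faked by a call counter so the difference is visible; none of this is any real ORM's API:

```python
class FakeDB:
    """Stand-in for a database layer that counts round trips."""
    def __init__(self, authors):
        self.authors = authors
        self.queries = 0

    def author_for(self, post):
        # One query per post: the "n" in n+1.
        self.queries += 1
        return self.authors[post["author_id"]]

    def authors_for(self, ids):
        # One batched query covering every post on the page.
        self.queries += 1
        return {i: self.authors[i] for i in set(ids)}

def render_naive(db, posts):
    # Looks innocent, issues one query per row.
    return [(p["title"], db.author_for(p)) for p in posts]

def render_batched(db, posts):
    # Fetch all authors up front, then join in memory.
    authors = db.authors_for([p["author_id"] for p in posts])
    return [(p["title"], authors[p["author_id"]]) for p in posts]
```

Both functions return the same page; the naive one costs one round trip per post, the batched one costs a single round trip regardless of page size, which is the difference a "full picture" person spots.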
I personally think DevOps is terribly misunderstood. I think the best way to describe DevOps is that it broke down the traditional Ops/QA/Developer roles into different roles, namely SRE, Platform Engineer, and Developer.
Developers take on the new responsibilities of being able to independently deploy their code, instrument and monitor stability and own test/QA.
Platform Engineering is about building a robust infrastructure and the tooling needed for Developers to handle the new responsibilities. This includes packaging, monitoring, deployment, AB testing, etc.
Site Reliability Engineering is about dealing with fires outside of the codebase. Hardware failures, network connectivity issues, etc.
I don't think any of these roles becomes a "jack of all trades, master of none" situation. It does, however, cut out some of the more typical engineering roles. While developers just took on additional responsibilities, QA engineers and traditional Ops are forced to repurpose their skill set.
"The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were."
DevOps is not about startups, DevOps is about avoiding the pitfalls of big companies who completely fail and leave all of their employees jobless by focusing on all of the wrong decisions and initiatives.
It's about outlawing cowboy coding and other bad habits that people pick up as hobbyists, and intertwining business and technical objectives reasonably.
Why is a full-stack developer important? Why is eroding the difference in responsibility between Dev, Ops, and QA important? Because traditionally along these boundaries have been opportunities for individuals to absolve themselves of responsibility. More than anything, DevOps is about not living in that world anymore.
Some people won't survive outside that world. Those who want to will read "The Phoenix Project" by Gene Kim.
Before you start to complain, I am a fan of collaboration, but DevOps might just be the best joke ever! The truth is it means something different to every person. For years I have defined DevOps as engineers trying to get Ops out of the way and pushing forward without those pesky sysadmins. You think I'm overblowing it? I have been in Silicon Valley for the boom of DevOps and I hear it all the time: "We don't need ops, we can just have a developer do it". The number of new startups who use AWS, thus allowing them to forgo a system administrator, never ceases to amaze me. My biggest problem with this is that you're cutting the legs out from under yourself, but you're assuring me job security, so maybe I should keep my mouth shut.
I have been an operations engineer for over ten years now, and honestly developers and ops engineers have different ways of functioning. To me, a good software engineer has long-term focus, and can get deep into a project and crunch on the same code for extended durations. Give a good coder a project that will take weeks or even months and they will put their head down and solve your problem. As a generalization, these people do not handle interrupt-driven work well, and they often do not handle high-pressure situations well either.
Operations people, on the other hand, do the majority of their work under massive interruption and constant pressure. Tell an operations engineer the site is down and they will not focus on what the origin of the problem is; they will focus on getting the product back online and come back later to fully understand why. This does not mean they do not troubleshoot, but they are trying to identify the immediate cause, not the who or the root cause. One might argue this is short-sighted, but when you're stuck waiting for someone to figure out why the web servers went down, you're killing your customer experience. I would argue: restart the web pool, get the product back online, and then start to look at root cause once you have identified the customer-impacting problem and completed the shortest-path solution.
When you start off by having your engineers run operations, you never allow new ops people to start from the ground up and develop their skills, learning the pain points as the system grows, thus ensuring that when you grow to the point of needing an operations engineer, there is a shortage of trained people available. One might argue that some of the developers who started the company by running operations will become your operations engineers and will cover this, but to me that's like using vise grips to remove a bolt.
From my point of view, this is due to lack of tech education. There just are not enough people graduating/learning the technical skills necessary for medium to large size software companies to employ.
I am a manager/developer/architect at a relatively large software company, and we have to task our developers with devops-type tasks constantly. Not because we want our developers spending time outside of coding, but because for lack of ability to hire the competency needed.
As you stated, good developers can generally perform these tasks so when you have nobody lower to perform them they become a weight on the developers' shoulders.
No, it isn't necessarily fair, and yes, I believe in the future specialization will come back as the education system starts to realize there are many jobs in tech, not just Comp Sci degree jobs.
>>> Not because we want our developers spending time outside of coding, but because for lack of ability to hire the competency needed.
This is known as the "odd man out" syndrome. I currently work in a medium sized company who are doing a huge ERP switch over. I'm a front-end developer by trade, but know .Net as well. One part of the contract stated our company needed to have X amount of company resources (people) to have on the project.
The downside is I hate my job now and am actively looking to get out of here. They told me recently after the release, I'll be one of the ongoing "resources" to help manage post-release defects.
So I agree on your last point as well. It's not fair and, unfortunately, it's a no-win situation for the developers. If I do a shitty job as a JDE developer, they get pissed and might eventually fire me. If I do a good job, then I get tasked with all kinds of stuff I have no desire to do.
Six months in and I hate working in JDE but all the contractors think after two weeks I should be a pro with it since I'm a "developer".
Full-stack doesn't mean being a 'god of all things', except that it does in fact mean exactly that.
It means that no part of the stack remains a mystery: all parts of the stack should be understood and controllable by the developer.
Guess what - this doesn't produce 'worse developers' .. it produces better stacks. The fact is that the fracture and delineation between the cultures of code, rather than the actual code itself, is the true danger. Getting 'the db guy' to talk to 'the front-end guy' is a posers game. Get rid of it.
Instead, get your guys to move across the tree of responsibility that a full-stack approach requires. In truly professional development there will always be new things to learn and new tools to use to manipulate the machines. This is turned to an advantage in the full-stack approach, since it requires adherence to a real policy: you just don't care about 'the culture of the tech'. You read the docs, you write the code, you read a hell of a lot of code, and you don't put limits on what you can and cannot understand; those limits are instead expressed in working code, at any layer of the stack. The 'cultural excuses' for why things are borked 'over there' are no longer relevant in this approach; if you're a real full-stack guy, you'll get along - source or no source, but hopefully mostly with the source.
It is a political approach, but it works - especially in industry. There are a few other principle-based disciplines in the world where an 'all-embracing' privilege exists, in this case we are lucky that computers, as grand engines of word and significance, are a form of literature. Study well, and study all .. to the end!
DevOps, at least IMO, is not about technology. It is about culture, and about applying practices that speed up the various loops across organizational groups (marketing, sales, developers, ops). Of course there will always be trade-offs: if you don't have the budget to hire both an expert in the technologies that, say, speed up configuration management and prevent snowflake servers AND someone to develop the code for the product, then the person you do hire will have to either pull double duty, or the org will have to plan for the fact that it is probably going to be doing "stuff" slower.
"If you are a developer of moderately sized software, you need a deployment system in place. Quick, what are the benefits and drawbacks of the following such systems: Puppet, Chef, Salt, Ansible, Vagrant, Docker. Now implement your deployment solution! Did you even realize which systems had no business being in that list?"
I'm not understanding this, you can deploy with Puppet, Chef, Salt, Ansible, Vagrant and Docker. With Vagrant you can deploy a bare image and use Chef (or one of the others) or you can just deploy a fully setup box file (like with Docker).
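For instance, a minimal Vagrantfile can boot a bare box and then hand provisioning off to Chef (or just a shell script); the box name and recipe below are placeholders, not a recommendation:

```ruby
# Hypothetical Vagrantfile sketch: boot a bare Ubuntu box, then provision it.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"     # placeholder box name

  # Option 1: let Chef Solo converge the machine from local cookbooks.
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "webserver"         # hypothetical recipe
  end

  # Option 2: or skip config management and run a plain shell script.
  # config.vm.provision "shell", path: "bootstrap.sh"
end
```

Either way, `vagrant up` gives you a fully set-up machine, which is exactly the overlap between these tools that makes the original quiz feel like a trick question.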
If you're a developer and want to stay relevant I suggest you read up.
With distributed systems becoming the norm rather than the exception, developers will need to understand how and where their code runs in production (and how it gets there) to be able to debug issues or write better behaving code.
This may be a simple oversight and I hope I don't sound too pedantic, but you may need to broaden your definition of developer a bit. Developers who work on OSes, games, embedded systems, or professional applications (think CAD) are not very likely to need these anytime soon. More knowledge is always good, so I'll check some out anyway!
I would say there's a strong tendency towards that, depending on how exactly one defines 'average' and 'distributed'. Using a distributed datastore and/or a messaging queue in one's app is pretty common already; the logical next step is for app components to follow the trend.
Are any other companies besides (well funded) startups actually hiring people as "full stack developers"? I mean, yeah, it's normal to look for candidates with full stack experience, but not to hire them in an actual job position that requires them to do full-stack work... it's a big difference.
(sorry if the q is off topic, I don't really understand what OP is ranting about with the devops problems, so I'm referring to the only part of the article that makes any sense to me, that about the full-stack devs...)
+1 I don't agree, but an interesting article anyways.
I have worked with DBAs who had PhDs and could have still done development, but they moved past that to concentrate on schema development, scaling, etc. Toss into the mix modern programs of master data development inside organizations and people who are characterized as DBAs have a very sophisticated role.
Also, for small projects, devops makes all the sense in the world to me. Deeply understanding how an entire system works is valuable.
It's not so much that DevOps is killing the developer as it's the expectation that you can have your regular general purpose developers do your DevOps on the side.
I can relate to the downsides pretty well - I'm the only developer in my group and my job is mostly to develop web apps, but the IT side doesn't have much knowledge of modern tools - they live in the era 'just use Drupal and Apache' so I'm often the one who ends up having to figure out the deployment of the applications I work on (and also help with random problems from their OTB apps) and such.
To be honest, I don't mind when it's DB stuff because I'm pretty comfortable with it and have plenty of background with various SQL DBs, and it's not a time black hole, but when it comes to configuring servers and deployment I hate having to deal with the DevOps because there are so many pieces I never have the time to really become comfortable with them all and I feel very inefficient. Accomplishing something doesn't always take long itself, but it can require spending a day of reading wikis and documentation to accomplish something simple when you've got a lot of moving parts. And the worst part is that you have to deal with the DevOps bits so infrequently it's like you have to relearn them each time.
Agreed. The concept of "waterfall model" was given form by Winston Royce as a strawman for an argument he was making in an article, and it's disingenuous to say that it was ever a promoted model on its own merits.
I prefer to think of it as the consequence of software development being tacked onto most other existing processes, without regards for the practical needs of software development.
You're talking about the original waterfall model. But waterfall as a way of developing software in phases that tended to be long and were not amenable to changing requirements was in fact practiced in most enterprises for many years. And, while not the most efficient process, it certainly did deliver good, working software. Just not efficiently. Phone switches, manned space flight, most businesses all ran on software developed this way.
Overspecialization is the source of organizational smells in a lot of medium-sized engineering companies - a lot of the time it's better to have generalist engineers with some specializations in what you need to do than a bunch of specialists, for a number of reasons, among them:
- (pure) Specialists often don't understand how their decisions affect other systems (and middle management or communication isn't always a solution)
- (pure) Specialists tie you to a particular technology when in reality you may need to evolve to use other technologies.
- If you need a bunch of different specialists to get something simple done (perhaps something you don't do all the time so don't have a process in place), just because they are siloed, it's a lot more complex and usually ends up badly designed (because it's harder to be iterative across teams). Generalists can get simple things done that require different skill sets to accomplish.
I will disagree with you on your first point. One of the characteristics of specialists is that they know in practice how the software in which they specialize interacts with other software, where a generalist might not. For example, most specialists in any performance-critical software are pretty intimately familiar with the behavior of the Linux kernel when it comes to things like I/O scheduling and cache eviction, because of how it affects their program of choice. Generalists, on the contrary, rarely know any part of the system in enough depth to be able to quickly diagnose such problems. Often companies without suitable onsite expertise will reach out to specialists in these situations to resolve such problems.
I'll agree with your second point, to some extent. Generalists are rarely tied to any one technology and therefore can be very good at getting your organization to use the right tool (avoiding the common "hammer-nail" problem). However, just as frequently, I see generalists picking the wrong tool for the job, because they again aren't intimately familiar enough with the tools already at their disposal to understand all their capabilities or be able to make an informed decision about whether the gains of the new tool are worth the added complexity of introducing another layer to their stack. And, of course, nobody feels the pain of adding extra layers to the stack quite like DevOps do.
I'm not sure what to make of your third point. Isolated processes talking to each other is just a different strategy from a monolithic design. There are advantages and disadvantages to each. It's unclear to me that monolithic means "better designed," and in fact there are good security arguments to the contrary. But maybe I'm misreading what you're saying.
It seems like the OP is advocating the surgical team approach to software development. This seems very consistent with DevOps. Have a group of specialists that are good at automating operations surround the key developers.
I think there is a lot of misunderstanding here; to me, DevOps is not just automation (we've had that for a long time: Perl, cron, cfengine, etc.).
It's much about applying the same processes you would apply in development to Ops. For example committing changes into version control and only using that, not live patching things, much like you wouldn't live edit a website.
Also, it can be about letting developers get the exact same environment for development/testing at no additional time cost - which in turn makes it more likely that code changes can go live without problems or delays.
IMO, Amazon gets 'DevOps' right. It's mostly just called 'ownership' over in Amazon. (source: I used to work in Amazon as a systems engineer)
You still have specializations - SDE's, systems engineers, DBA's, etc. However, if you write code and it ends up in production, you are responsible for the proper function of that code in production. As a friend of mine put it in terms of developers who don't want to be on-call: 'what, you don't trust the quality of your code?'
DevOps is simply a nicer way of just saying, "own your damn code." The corollary to this is that the organization must help you in getting to that state where you can effectively own your code - this means collaboration (so that you build maintainable systems) and building tools that enable fairly frictionless code ownership.
I've worked among devs who don't want to own their code in production, they'd rather just code and then throw new code over the wall for the sys admins to deploy.
The anti-DevOps developers don't understand how databases work so they want an ORM to make it easy for them, and they don't know how to configure a web server so they want PaaS solutions to let them do 1-click deploys, and system command prompts are scary to them. Frankly such developers just plain suck. They don't like DevOps because they don't have the skills to be DevOps.
I disagree with the central thesis of his argument that being generalized is a detriment and that operations and other factors should remain siloed at your average large company. I've worked at both large companies (10k+ employees) and small companies and many things in between.
In general a Full Stack DevOps oriented approach always tends to be more efficient. You have less monolithic hard to maintain applications because you force the teams to be small and agile. People will have their specialties (operations, backend, frontend, etc.) but still remain generalized enough to have an idea of the big picture. If your application has issues where the frontend developer doesn't know the general idea of how Varnish and Nginx in your stack are setup then perhaps your application is too big and complex.
I couldn't disagree more about his portrayal of DevOps. There are companies misusing any and all paradigms of development. Google "cowboy coding agile" to see what I mean.
When I think of DevOps, I don't think of having everyone know everything. Ops staff have to know enough code to write deployment automation scripts and dev staff need to know enough system administration to step up and help when the monitoring or deployment automation breaks.
It's meant to be a partnership to maintain a system rather than the old practice of throwing code over the wall. It really harms morale to have the developers all enjoying wonderful weekends while ops is on red alert because app changes they don't understand broke everything in production.
The description of DevOps from the article describes what I do at a large multinational software company really well. In our project we have 5-7 developers who test each other's code and functionality, and one DevOps person who handles the build/test environment, databases, release management, change management, and impact analysis with regression testing, and who fixes bugs but rarely develops new features. It's done not because of startup culture, which we do not have, but for efficiency. Every request to the DB team, even a minor one, takes at least three days to process. We do not have that much time to waste, so we have to do everything ourselves, unless it's something that requires an actual expert in the particular topic to accomplish.
The guy is missing the point by 10,000 miles. DevOps is about getting together with devs and focusing on best practices from day one. Keep in mind that you need to deploy your software in a timely, reliable manner, and that it is going to run on a network of computers where part of your system might be down or showing elevated latency. I could not believe how non-trivial these things were until I saw with my own eyes that most of the software out there still makes the following assumptions: a zero-latency network with unlimited bandwidth, 100% server uptime, and memory and CPU as things you can keep adding to computers forever. My experience is that when people talk about DevOps, what they really mean is site reliability or systems engineers: people who understand networks and operating systems in depth and can read and write code, yet whose primary focus is not delivering customer-facing services but rather developing tools that improve deployments, automating error-prone processes, and tuning operating systems for better performance. In my humble opinion, developers should be aware of the architecture of the system they are writing software for, but it seems we need another breed of engineer who is more focused on that as of today. Let's call them DevOps... :)
I love this article and couldn't agree with its central premise more. I can think of no other industry that demands an individual wear as many brain-intensive hats as that of the developer today. These jobs, which used to be distributed, are quickly becoming the baseline by which an individual applicant is judged. I for one believe that if we focus, we can become a true master of a skill AND combine that with an understanding of the "whole stack", while never being forced to maintain more than our fair share of that stack.
In the same way that being able to cook dinner doesn't make me a chef, while it's true that a developer can be a sysadmin, QA or DBA, they won't do a very good job.
To suggest otherwise shows a complete lack of understanding of the nuance of those roles.
As for suggesting that "DevOps" is killing the developer - the only thing "DevOps" is doing is polluting our common language with a term that doesn't actually mean anything concrete. It's perfect consultant speak.
As a developer of course it's tempting to agree with the author's hierarchy. Masters of the IT world! But really it's over-simplified. As a dev with many years of experience there's no part of the stack I can't work in and figure out what I need to do. But that doesn't replace actual operational experience and oversight. You make do in startup or small team because you have to, so I guess ultimately I agree with the piece.
While I get the need for page views, I really wish problematic aspects of any tech movement could be discussed in a way that actually improves things rather than tears them down.
You hate xyz? OK, but apparently xyz has enough merit to get the attention of quite a few people, so let's identify the problem areas and make xyz better rather than resorting to hyperbole and melodrama.
The role of DevOps is to help developers work more efficiently, not give developers more work to do. An example of this could be a TFS administrator who works on TFS build template changes and configuration to make the build and deploy process as automated as possible. Nothing to do with being a startup, or trying to get more work done with fewer people.
Every place I've seen DevOps, seems that developers bear the brunt of the work - learning the infrastructure and understanding deployments and such. I've never seen Ops people learning the codebase or even the software architecture / data structures.
"As a sysadmin, I would like developers to pay any damn attention to what happens in live before deploy without me having to cattle-prod them into doing so after deploy, so I don't have due cause to set them on fire."
The author couldn't be more off-base in his understanding of how devops came to be, and his attitude is exactly the kind of cost-ineffective developer behavior that led to the partial unification of development and operations in the first place.
It has nothing to do with limited startup resources, and everything to do with managing externalities.
Specifically, developers have an enormous amount of control over the stability and deployability of their software: technical decisions made at almost all levels of the stack directly and significantly impact the costs of QA and Operations.
The people best suited to automating deployment and ensuring code quality are the people writing the code.
If you entirely externalize the costs of those two things, natural human laziness takes over, and developers punt bugs and deployment costs to the external QA and operations teams, ballooning the overall cost to the company.
The OP makes a relatively uncontroversial point (that people will be specialized, and better, at a finite set of skills)...so I think "killing the developer" is a little dramatic.
However, I think as with most things that involve computational thinking and automation, this is not a zero-sum game. A developer who can apply deterministic, testable processes to server-ops may be able to reap an adequate amount of benefit for significantly lower cost than a specialized sysadmin. In addition, the developer is augmenting his/her own skills in the process. Yes, that dev was not able to focus all of their time on...whatever part of the stack they are meant to specialize in...on the other hand, the time spent studying dev ops is not necessarily a sunk cost.
For my own part, I've tried to stay away from sys-admin as much as possible...but when I've been pushed into it, I've gotten something out of it beyond just getting the damn server up. For example, better familiarity with UNIX tools and the importance of "text-as-an-interface"...which does apply to high-level web development...nevermind the efficiency you gain by being able to stay in the shell when most appropriate (rather than, say, figure out how to wrangle server commands in a brittle capistrano script).
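As a concrete (and admittedly toy) illustration of that "text-as-an-interface" payoff, here is the kind of pipeline that becomes second nature once you're comfortable staying in the shell; the log file and its format are invented for the example:

```shell
# Build a tiny fake access log (hypothetical sample data, not a real format),
# then count requests per status code with a classic UNIX pipeline:
# project a column, sort, count duplicates, rank by frequency.
printf 'GET /a 200\nGET /b 500\nGET /a 200\nGET /c 404\n' > /tmp/access.log

awk '{print $3}' /tmp/access.log | sort | uniq -c | sort -rn
```

The same project-sort-count-rank shape answers a surprising number of ad-hoc ops questions without ever leaving the terminal.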
But hell, even the end product itself, just being able to deploy a server with some confidence...is kind of empowering. For me, it opens up new ways to run scripts and jobs...It sounds dumb and maybe it's just the way my brain poorly functions, but the concepts of server-oriented architecture become so much clearer when you can spin up different machines to play with and experiment with delegation.
The author needs to read or reread The Mythical Man Month. Even in a large organization there are important benefits to having fewer people on a team. Even if this means that someone is sometimes doing work that they are overqualified for.
He makes some good points but he misses the value of needing fewer people to accomplish the same thing.
It seems to me that the OP's real objection isn't to "DevOps" but with the reality of the software industry. He's upset that developers often are asked to do "lower" work. I find that a bit simplistic on his part. If anything, DevOps at its best is about elevating the ops work (by recognizing automation possibilities).
The issue is that employers are horribly inconsistent. They demand specialism in hiring, but refuse to respect specialties once they've pulled people in. Thus, you end up having to interview like a real computer scientist, only to find that most of the work is mind-numbing for a serious programmer, but that there's no one around at-level for it because "we only hire A players".
DevOps didn't do this. The problem is the industry, not one concept.
The problem with DevOps is that it's a meaningless term. Look at all the comments here, all starting off with what "DevOps is," or "Devops isn't." Instances of people arguing past each other based on different interpretations.
You can't have a fruitful discussion when everybody uses it differently.
This is a really badly written article but I know the point he's trying to make.
DevOps is stupid because it fractures expertise and makes it more difficult to get work done. By splitting up roles you get more domain-specific knowledge, have more time to work on a single problem, and provide support for your co-workers who also have different specific roles. I would much prefer to work with specialists than generalists.
YCombinator is like a reverse link aggregator for businesses. Instead of readers coming to this site for information, people troll with their latest business bullshit and expect free solutions in the comments.