
DevOps isn't about making Developers be Ops guys. It's about the fact that automation eats everything, and a significant part of 'ops' is now coding.

A DevOps person isn't someone who develops, and who does Ops. It's someone who does only Ops, but through Development.

It's not about start ups vs Enterprise, it's about 1 person writing programs or 5 people doing things by hand.

Was (Unix) ops ever not coding? I honestly don't know, I haven't been around that long. But all the old guys I know were "perl is unix duct tape" ops guys.

The older, more foundational problems were getting automated back then. Now that they're solved problems, and combined with more and more people running large and/or virtual infrastructure, a new problem domain exists around spinning up machines and deployment.

The current coding investment is infrastructure because it's the current pain point. In a decade (or whenever permanent solutions exist for infrastructure) the current way will be considered "by hand" and operations coding efforts will just move onto whatever problem is only visible now that infrastructure is no longer a time sink.

You can say that some ops is just admins running already existing software and operating everything by hand, but there will be admins doing exactly that in a decade too.

Yes, the devops movement is silly. In the 90s when I started, every SA knew, and used, Perl and C in their daily jobs.

Then dotcom happened and every kid with a Linux box in their bedroom passed themselves off as an SA. And in the 10s people think SAs who code are an amazing new invention.

And back in the 60s, IBM had "systems programmers"... Same thing.

You're leaving out the other 80% of the industry – yes, IBM shops had systems programmers but every single one of them also had operators who were following a big run-book of canned procedures and diagnostic trees which sometimes ended with “Call <sysprog>”. Most Unix shops I've seen had a few decent developers and a bunch of people who used tools written by the first group.

The big difference I see in devops is that people started taking system management seriously enough to do first-class development rather than treating it as an afterthought.

It wasn't really that clear-cut. I started in Ops in the '90s, too, in SV, and there were plenty of SAs I knew who were proud of the fact that they weren't coders. Yes, they knew the shell, and maybe they knew a tiny bit of Perl. But as a guy who was an SA and a coder (Perl, C) I was a rarity.

I still am, but the "DevOps Movement" is here to point out that this artificial dichotomy is considered harmful.

Back in the early to mid 90s most Unix sysadmins I knew started out as computer science students, so they could code (the most practical language being C) but ended up coding less over time.

Yeah, I was never aware of a sysadmin who couldn't code.

Generally, a sysadmin has slightly different skills from a developer - they might code in a highly imperative style, always keeping the actual machine/system being targeted in mind - but I've never known a half-decent sysadmin who cannot write code.

The last time I looked for a senior sysadmin -- less than a year ago -- I didn't get anyone who was comfortable programming in Perl/Python/Ruby until I started using the term DevOps.

If that's the term the market wants to use, fine. As far as I'm concerned, a senior sysadmin who can't write in a couple of scripting languages isn't senior.

I consider myself a developer (though I call myself software engineer, due to the incompetence of other "developers" I work with).

I know a reasonable amount of sysadmin (all my computers run Linux primarily, I only keep Windows on for checking hardware issues, and a couple of specific apps I need to run once or twice a year).

I wouldn't apply for sysadmin jobs, because I wouldn't feel my knowledge is enough. I have however seen devops jobs that seem to match my skillset - developer with a bit more. I hadn't really heard of the term until I saw the job ad.

There's a difference between "Can cobble together some python" and "knows how to use a python package from pypi"

Agreed. In my experience a senior sysadmin can work as an above-passable developer. But a senior developer can rarely function as a sysadmin.

As to DBAs: I can't help feeling that the OP hasn't worked with "real" DBAs. That's a whole different ballpark, and I've yet to meet a sysadmin or developer who could make even a passable DBA.

I've always thought the hierarchy goes: DBA -> Ops -> Developers, with the last two really being about equal.

Thing is a good DBA is probably better than a crap developer. And a good developer is probably better than a crap DBA. Likewise with Sysadmins.

When I think about expected earnings, I would say your hierarchy is correct.

...you've never met a sysadmin that took an algebra class in college?

It's about more than just "unix duct tape". It's about 'Infrastructure as Code', treating servers like programming objects. It's about using configuration management tools like Chef and Puppet instead of writing bash scripts which only work on one system.
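For a flavor of that distinction, here is a minimal Python sketch (the `ensure_line` helper is hypothetical; real tools like Puppet and Chef provide this declaratively) of the idempotent "describe the desired state, converge to it" style, as opposed to a one-shot bash script that appends blindly:

```python
import os

def ensure_line(path, line):
    """Converge a file toward containing `line`, idempotently:
    running this twice has the same effect as running it once."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # already in the desired state; nothing to do
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # state was changed to converge
```

Running it twice is safe, which is exactly what a blind `echo >> file` in a shell script is not.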

"DevOps" here, by which I mean an IT Ops Manager/Linux Admin/Network Admin doing this for more than a decade.

Nothing you described is outside of the realm of what your typical Linux admin does. I don't have to be a senior Python dev to do my job, and I've managed 5500+ virtual machines by myself (puppet/chef, bash, some python, persistent data/object storage).

Agree with the author; it's just shoving more hats onto fewer people.

It's a nice, and really needed, reaction to the Microsoft view that ops people only need to be able to set options in a GUI. Obviously that never was the case, on Unix or Windows, but their marketing tried to make it look that way, and lots of people hiring and looking for a job believed it.

A DevOps person can expect a bigger salary, while a company hiring one can expect far more productive candidates than if they asked for ops alone.

At least in my world view this is a much better definition of DevOps. Folks who make the world run, and through automation can keep a larger portion of the world spinning. It requires someone who can analyze failures, figure out how to predict and mitigate them, and then code automation to do so.

Oddly this is much more like the 'developers' of old. If you sat down at a workstation you needed to know how to be your own system administrator, and you needed to write code.

Automation has enabled a fairly new class of engineer which I think of as someone who has no idea how the pieces all fit together but they can assemble them with the help of a framework and a toolkit into useful products. They become experts at debugging the toolkit and framework but have little knowledge of how everything else actually works.

The problem with this new type of coder is that they can write syntactically correct impossible programs. I didn't understand that until I taught myself VHDL (a hardware description language). VHDL was the first "language" that I knew where you could write syntactically correct "code" which could not be synthesized into hardware. The language expressiveness exceeded the hardware's capabilities (and sometimes you would need to have a time machine). Imagine a computer language where 1/0 was a legitimate statement, not caught by the compiler, but always blew up in your executable.

So we have folks who can write code that is grossly inefficient or broken on "real" systems.

Google had started a program when I was there to have developers spend time in SRE (their DevOps organization); it was meant to instill in them an understanding of what went on in the whole stack so they could write better products. The famous "latency numbers every programmer should know" list by Jeff Dean was another such tool. You cannot get too far away from the systems that are going to run your code if you want to write performant code.

When Flickr did their DevOps talk in 2009, most of the infrastructure engineers I worked with at the time saw the trend in reverse. The people wearing the Developer hat were relying on our team's ability to automate anything, so the Ops team ended up being the team that best understood how the whole system worked.

In 2009, DevOps seemed like there was finally a reasonable answer to Taylorism. Engineers and Programmers and Hardware Technicians and Support Representatives were not cogs in a machine, but humans that could collaborate outside of rigid boundaries. Even at the lowest levels of the support organization, individual workers along the chain could now design their own tiny factories.

From there, it's just a matter of communicating tolerances properly up and down the chain. I am probably over-romanticising these notions, but it certainly felt exciting at the time. Not at all like the "fire your IT department" worldview it turned into.

"Imagine a computer language where 1/0 was a legitimate statement, not caught by the compiler, but always blew up in your executable."

... isn't that all languages?

Of course it is, and there are collections of letters that are pronounceable as words, but that doesn't give them meaning. The equivalent in English would be a spell checker that didn't flag "douberness" and passed it along. Sure, you can pronounce it if you look at it phonetically, but it doesn't mean anything. It is syntactically correct but broken. VHDL has a lot of things that can be written but not actually expressed in hardware.

Sure, I've no doubt it's more common there - that's very much my understanding. The wording of the above just struck me very much as if it were meant to be hypothetical, which I found amusing given that it's nothing like.

Whether it's detected at compile time or runtime, a statement that evaluates to DIVBYZERO can be handled. Taking the result as an ordinary value that blows up your program, on the other hand...

"All languages" was a bit tongue in cheek, but even in Haskell, (div 1 0) gives a run-time failure.

In this case, a 'run-time failure' would be completely unacceptable, as the 'run-time' environment is your $X000 hardware manufacturing run. Hardware development isn't in the same league as software. It's not even the same sport. Like comparing football to rugby. Both played on a gridiron, but entirely differently.

First, there exist software environments where errors cost significantly more than a hardware run. Obviously, those environments contain hardware as well, but "cost of a runtime error" is clearly not the only important thing here.

Second, my only point was that the example given was a piss poor example of the difference between hardware and software. Obviously a bad example doesn't disprove the claim it's supposed to support.

Everyone's piling on you because that wasn't the point of the example. Automation grants humans extraordinary powers, as long as humans aren't simply steps within the automatic system.

There's been an awkward growing phase of the technology industry that has led to technicians that don't have any real understanding of the systems they maintain. Compare and contrast Robert DeNiro's character in Brazil with the repairmen he has to clean up after. We could be training those poor duct maintenance guys better.

... what?

If you haven't seen Brazil, you can safely ignore that part of the post. But you should see it.

I love Brazil, I'm just not tracking how all of that fits into the above.

The article is about how DevOps is killing the role of the Developer by making the Developer be a SysAdmin.

Chuck points out that abstracting the Developer's work too far away from the system in question means the Developer doesn't really understand the system as a whole. Jeff refers to "purely development roles" and other "pure" roles that aren't necessarily natural boundaries.

The example of VHDL is not about hardware and software, but about learning that you didn't actually know something you thought you knew.

The repairmen in Brazil do not realize (or necessarily care) what they don't know about duct repair. The system allows them to function and even thrive, despite this false confidence in their understanding.

At one point at least, Google was investing in (metaphorically) having DeNiro cross-train those guys, instead of umm... Well, the rest of the movie.

I've read this a few times and it still doesn't really have any bearing on the aside I was making, which was that something presented as a hypothetical (Imagine ...) is actually the overwhelmingly typical case, and in some measure that amused and confused me.

Well, it helped that I'd been discussing the topic out of band not that long prior to the original comments...

The initial detail was that VHDL, unlike "software" languages, has very different consequences. Can you imagine a language where (1 / 0) wasn't defined away as a DIVERR, but otherwise managed to remain mostly self-consistent? Where something can be logically / syntactically coherent, but not physically possible?

And if that example didn't hit home for you, so it goes, but there was plenty of detail unrelated to the specific example that I thought was more important / interesting to discuss. :shrug:

Nah, in dependently typed functional programming languages you can prevent this at compile-time.

Yes, "every language" was glib. In any language we could avoid it, actually, by hiding division behind something that gave a Maybe or Option or similar. My point, though, was that his "Imagine..." was actually representative of virtually all of the languages that virtually all of us work in virtually all of the time. It is therefore a poor example of a way in which HW is different.
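As a concrete sketch of "hiding division behind something that gave a Maybe or Option", here is the idea in Python using `Optional` (Python only enforces the annotation via a type checker, not at compile time, so this is weaker than what dependent types offer, but it shows the shape):

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """Total division: a zero divisor yields None at the call site
    instead of an exception at run time."""
    if b == 0:
        return None
    return a / b
```

The caller is forced (by the type checker, at least) to handle the `None` case explicitly rather than discovering the failure in production.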

I went to dependent types specifically because I figured we meant static avoidance without resorting to checked arithmetic. (better performance)

Sure, that would be a good reason to go there. I didn't mean to cast aspersions at dependent types. I was just confused/amused at the typical case being cast as a hypothetical.

Well, in many places DevOps is implemented as "developers on PagerDuty". When I (the developer) have to be on call for 7-day rotations, phone by bedside, paged at all hours, then I'm most definitely acting as operations - probably NOT what I signed up for.

And, contrary to the stated intentions, I've directly observed developers making crappy, band-aid fixes to ongoing production problems in the interest of "making the pages stop". This is the mindset when you are on call and being paged at all hours.

In theory, DevOps is supposed to put those that can best fix things closest to the problems, but in reality a slight separation from the firestorm of ops actually produces better, more thoughtful solutions in the long run.

The best balance is to have a first tier Ops on-call, 2nd tier engineering on-call, and any alerting issues get attention within 24 hours, moving to the front of the work-queue. But, indiscriminately assigning everyone "pager-duty" rotations leads to lower quality solutions in the end.

As the guy who's usually on pager rotation (and too often with far too few bodies to share it), I disagree. I wrote a detailed comment a few days ago explaining my rationale here, in conjunction with overtime / off-the-clock responsibilities:


• It increases pager coverage, and reduces any one person's pager obligations. Simply having pager anticipation is a mental burden after a while.

• It creates a stronger incentive for response procedures: what are the expected obligations of response staff, what's considered sufficient effort, what's the escalation policy, who is expected to participate, what are consequences of failure to respond?

• Cross-training. Eng learns ops tasks, ops has a better opportunity for learning what eng is up to and deals with.

• It makes engineering more aware of the consequences of their actions: is insufficient defensive engineering causing outages (say, unlimited remote access to expensive operations), are alerts, notification mails, and/or monitoring/logging obscuring rather than revealing anomalous conditions? Are mechanisms for adjusting, repairing, updating, and/or restarting systems complex and/or failure prone themselves?

My experience at one site, where I was a recent staff member (and hence unfamiliar with policies, procedures, and capabilities): systems went down starting at 2am, and I was unable to raise engineering or my manager. The response at the next staff meeting to my observation of this was pretty much "so what", which did not endear me to the organization (I left it shortly afterward).

Note that what I'm calling for isn't for eng to be the sole group on pager duty, but for eng and ops to share that responsibility.

I'm glad you have had a positive experience, but it feels like your outlook is unique among many of the developers I talk to daily. Could be selection bias, though! Good things to think about.

To be clear: I'm generally on the ops/systems side, not engineering / development.

Giving everyone pager duty can lead to higher quality solutions. The band-aid fixes crop up when ownership of a whole system eventually spreads too thinly.

Within the right framework, keeping everyone on pager rotation can lead to much smoother operations, because everyone stays familiar with the system as a whole. This was going around recently, and captures the essence of the philosophy: http://catenary.wordpress.com/2011/04/19/naurs-programming-a...

In my experience, it also leads to better solutions because devs who don't get woken by issues with their own code are people who don't particularly care about such faults. I've done on-call before where I've begged the devs to fix issues because they were waking me up needlessly. The devs were nice, but somewhat lazy, and my fix wasn't on their radar. Stick them on on-call, and all of a sudden it's more important to fix.

At one place I worked we had a two-person support shop. We would claim time and again that this or that affected customers or made support hard. The devs would pick and choose what was fun to work on. I ended up leaving and the other guy went on a prearranged month-long vacation. Everyone else had to pick up support (~5 devs) for a month, and I'm told that they had so much trouble with the normal support load that development actually stopped for that month. Apparently when the other guy got back, they started listening a bit more to his concerns, having had a taste of what happens on the pointy end.

In a similar vein, there's a wine distributor where all employees spend their first week half on the phones and half in the packing department, to give everyone a feel of what the core function is and what customers complain about. The guy telling me said that everyone gets the treatment, except the new CEO, who got away with only doing a day rather than a whole week.

Sounds like somebody in the hierarchy doesn't quite "get it".

Right - and I guess the point is that the person who is working on features ends up also being that one person who does the automatic provisioning and testing pipeline administration work, as well.

This is honestly why I've gone with PaaS - mostly Heroku - for several months now when deploying a new application. Why developers do anything other than work on the core features of their program, I don't know. All of the things you need to set up - testing pipeline, containerized automatic deployment, load balancing, databases - are now available as cloud services. There is absolutely no need for the developer to be doing administration and provisioning tasks at this point.

If you think you need to set up your own server infrastructure ask yourself one question: is there any specific technical requirement that my application has that can't be fulfilled by existing cloud services? If there isn't, and there probably won't be, you shouldn't be doing ops yourself, especially not in a startup setting where time is absolutely at a premium and you need to be spending all of it on making the best product you can make.

And before everyone tells me that PaaS is more expensive - it's only more expensive if your time is worth nothing. But your time isn't worth nothing - it's probably worth over $100/hr if you are a developer working in the United States. So Heroku ends up not being more expensive at all - especially not before you have to scale.

Try convincing the ops at a large bank, insurance company or government that they can run their infrastructure in the public cloud, and watch as you get laughed out of the building.

I think it depends on your definition of infrastructure. Let me illustrate:

Core banking, batch processing, highly sensitive data stores? Probably not great candidates for public cloud consumption.

Web properties and services which don't rely on said functionality? Absolutely great candidates.

And the reality is, whether the IT guys like it or not, developers inside of these orgs are consuming cloud because getting a VM in a traditional sense takes forever (for good reason).

As a result, we're seeing a shift in the industry where large corps / financial institutions / government are becoming pragmatic about the idea of a 'hybrid cloud.'

Banks are leery of the cloud for anything they do because even a basic web site can have sensitive (hackable) links to logins for account data. Healthcare companies have the same concerns around private health data.

I fully understand that, having worked in that world for the better part of the last decade.

The reality is, adoption of cloud in the enterprise is growing, not shrinking.

In Australia gov (state & Federal) is now "Cloud First".

Two, three years ago you'd be right. Rightly or wrongly more and more people are thinking differently about that. The CIA for instance.

If you're working on a startup that is not yet profitable (or at least compensating you for your time), your time is currently worth close to nothing. That same time may or may not have a higher value later.

That's only true if you have the means to live completely for free. Otherwise, if like most human beings you have some expenses, then regardless of whether your time is producing value, it certainly has a cost, and that can be directly compared against the cost of a PaaS.

Your time is priceless to a startup - it can't afford to pay you your value and yet entirely depends on your output. The startup literally can't exist without you.

That depends though on the current value of the start up, which could very well be close to zero.

Say you could be paid $100 p/h at some other company, but instead your start up is paying you $5 p/h because that is all it can currently afford.

Say you can save yourself a week's work by going with one hosting option that is twice as expensive, where the total additional cost would cover an extra two weeks of working at $5 p/h. That isn't obviously a great trade, because the "value" of that labor is both unknown and irrelevant at this point. It might be $0 p/h or it might be $10000+ p/h.

A 1X dyno on Heroku is free, with a Postgres server and a free Redis instance.

So even at $5/hr I can't justify doing this work myself at this point, since I can get the infrastructure for my minimum viable product going for free. I can and should spend my time focusing on developing the features.

+1 :)

Actually finally created a Hacker News account to come here and say something similar to this. The article is remarkably misguided in several ways, including a fundamental misunderstanding of what "DevOps" means.

Being an SA, what rankles me most is the attitude (unfortunately common in the industry) that "As a developer, I could naturally be the world's greatest Sysadmin, if only it weren't such a waste of my amazing talents."

Yeah. As a developer I used to have a similar attitude, until I saw really good sysadmins (and DBAs) up close. Most developers are kidding themselves if they think they could replace either without at least as much extensive training as it would take one of them to replace the developer.

I have a somewhat different mindset. "As a developer, I am a demonstrably capable sysadmin, but the organization's discrete silos have made it impossible for me to contribute to ops"

This is not the only variant of DevOps. There are companies where the developers are responsible for creating the automation scripts to deploy their code into production as services, and are expected to keep it running in production.

I'm inclined to think they're 'doing it wrong', especially when you cross into the "keep it running in production" territory.

If your production deployments are fundamentally different from your dev deployments, you're doing it wrong. For the most part, you should only be localizing a common deployment pattern.
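One way to read "localizing a common deployment pattern" is that dev and prod run the same deploy code and differ only in parameters. A minimal Python sketch (all names and settings here are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """Per-environment parameters; the deploy logic itself is shared."""
    name: str
    replicas: int
    db_url: str

# Only the parameters differ between dev and prod, never the procedure.
ENVIRONMENTS = {
    "dev":  Environment("dev",  replicas=1, db_url="postgres://localhost/app"),
    "prod": Environment("prod", replicas=4, db_url="postgres://db.internal/app"),
}

def render_deploy_plan(env_name):
    """One common deployment pattern, localized per environment."""
    env = ENVIRONMENTS[env_name]
    return ["start app-%d --db %s" % (i, env.db_url) for i in range(env.replicas)]
```

Because both environments exercise the same `render_deploy_plan`, a three-week prod-only deployment cycle has nowhere to hide.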

I've suffered through enough three week deployment cycles because the prod environment is almost nothing like dev and everything is done manually. I think I know what "doing it wrong" looks like there.

I can't imagine an environment, other than say a juggernaut company (Google, Amazon, RackSpace, etc), that would require a full-time DevOps engineer. Most companies are delivering a simple service with modest needs. The reality in these small environments is that their world is fairly small. These small environments rarely go over 50 machines, with but a handful of services … so how many times can you automate something? How many new DevOps tasks can be created daily? How many different patterns can a DevOps engineer come across? I’ve done all of these and I quite frankly don’t see a great challenge in this field.

In the old days, it was an Architect/Team Lead/Sr Developer who figured out how to distribute a product; the maintenance and upkeep of the installation scripts was later handed to developers of “less” capability. But the architect still reviewed, and was kept abreast of, installation changes. An Architect/Team Lead/Sr Developer should set up the initial design, scripts, etc. for use as “DevOps”, but DevOps is not a new engineering discipline. The DevOps tools are trivial to understand (kinda like InstallShield’s VB scripts) and easy to master. It does, however, require engineering discipline. Kids who are used to pushing buttons in NetOps can’t suddenly become experienced coders … they know little of classical software development.

I disagree with the author's implication that you should hire a DevOps engineer to do this work so that coders can “code”. The economies of scale are way off; this is something that a junior developer masters so that he can spend more time coding. I wouldn't recommend companies spend so much on an employee performing a task that is trivial at best.

No, this task is a Developer’s job, and as a Developer, you had better know about these tools. I won’t argue that mastering them isn't time-consuming, but if you want to master development, they had better be on your roadmap. Software Engineering is a new discipline; what we need to know increases over time, and mastering this field is getting harder and harder … kinda like natural selection.

Strongly disagree. DevOps is not the simple automation of operational infrastructure. That has been done for decades.

DevOps is about developer empowerment. It's about creating the systems and tools that give developers more control over the operation of their applications. It's about removing Ops as a technical barrier between new code and live code.

At least, (as someone whom most would label as "DevOps") that's what it means to me.

"A DevOps person isn't someone who develops, and who does Ops. It's someone who does only Ops, but through Development."

That's not true. There are many scenarios where devs also understand ops and do both. It's getting rid of the "throw it over the wall" mentality and incorporating ops within dev itself IMO. And devs love to automate things so we have created a lot of tools for that.

DevOps is about improving the systems surrounding the work of software development, so that you (and the whole organization) may produce at a higher level of quality.

The OP article is entirely missing the point, and you've set it straight. DevOps and the "Full Stack Developer" are entirely separate problems. DevOps can be specialized as well.

Ya, it would have been better if the title reflected the true target of the rant [Full Stack Developers]. However, I kind of think this reflects a local culture issue and not really a broad one.

Like, I'm a full stack developer [e.g. I provision my own prod boxes, write the services that run on them] at my $DAY_JOB. I'm not seeing that as a bad thing unless it gets out of hand and I'm doing that for more than a small cluster of backend services.


One solution is to train up your ops guys in basic Ruby & Chef (for example)

Not really. Devops isn't a role or a person or even a process. It's a way to structure releasing software.

It's anti-silo. Instead of group A building software in isolation and then tossing it over the wall to group B to deploy, those groups merge, cross-pollinate, or at least communicate frequently. So the group building software is aware of the needs/issues of the group deploying/running the software, and vice versa.

Extend that to other groups' functions (QA, maintenance, sales).

It doesn't matter if one person does a bit of each role or if there are persons for each role, as long as they work closely together.

DevOps means different things to different people it seems. I've spoken with dozens of companies and each one seems to have a different definition. Some view it as an SA with a brain. Others view it as a full stack developer, still others view it as guys focused on developing configuration management and just that. Others consider DevOps tools developers. Some just say DevOps and they mean "guy we can ask to just do whatever needs to get done, helpdesk, networking, configuration management, training, etc"

Saying "DevOps person" is like saying "Agile person". I think it's a fundamental misunderstanding. DevOps is about getting development and operations to communicate directly and work cooperatively, without impediments beyond inherent complexity.

Sounds like DevOps, like Agile, isn't always implemented as intended once it gets into the wild at companies. I've experienced Agile and Enterprise Agile at companies where meetings are called scrums but are 50 minutes long. Sounds like there is DevOps and Enterprise DevOps, or some other bastardization occurring out in the wild. Oh process, oh process, save me from these hardships.

That's completely inaccurate, DevOps is most definitely about getting Developers to think about their software being Operated, and it is not about 'Ops Guys' coding for the first time ever - we always wrote automation tools.

Agreed. And as far as I know, 'fullstack' developer stands for being able to write high level as well as low level code. Hence the word 'stack'.

"fullstack" seems to mean server as well as client these days, since that's such a typical architecture to have. I think the high level+low level thing was the original meaning, though.

> DevOps isn't about making Developers be Ops guys

Sure seems that way at many places.

To me it always seemed like DevOps is about Devs who do Ops.

Like, I could use an Op for my stuff, but I also could automate "him" away.

When I was studying CS, there were a bunch of people, who didn't like to code, so they became Ops.

If Ops is now about programming, they can't even resort to this branch of CS...

The OP has a point, but his choice of DevOps as the bugbear is clumsy. Maybe the bastardization of it by the business is what he's mad about. "Full stack" is, perhaps, a better target. It's a completely meaningless, useless phrase.

The problem is that employers demand specialists, especially for senior positions. At the same time, once they've acquired an employee, they refuse to respect specialties from that point on. Machine learning expert? Sorry, but we need a ScrumDrone over at desk 21-B. Being a software engineer means resolving the fight between your job and your career, which is probably a big part of why this industry is so political.

Employers are remarkably inconsistent in this regard. They want sharp people who can interview like real computer scientists, but get in the way of their continuing sharpness (by assigning smart people to dumb work) as soon as they're on board.

The insight that the OP has is that employers over-hire for crappy work, and he's completely right, but DevOps didn't do it.

"Full stack" is useless in the same way that "Agile" is useless. Specifically, it's useless because it was hopelessly cargo-culted and overused due to the original power of the idea.

The genesis of the "full-stack developer" is that in the early days of the web you had an established community of programmers on one side, and a budding community of web designers and JavaScript developers homesteading the new medium on the other. For many years there was an awkward gap in skills: a good programmer would probably not be able to build a decent HTML/CSS website, and a good DHTML developer or web designer would be completely lost on the server side.

5-10 years ago a full-stack developer was a very meaningful distinction. Today, every hacker wannabe Uber driver that went to a dev bootcamp for 3 months calls themselves a full-stack developer. "DevOps" avoids this fate only because the subject matter is slightly heavier and harder to fake.

I think a lot of people in this thread have never actually worked on a Really Big Project. Once you have two or three offshore teams, a hundred developers, associated support staff, multiple product teams, competing customer voices, multiple production environments in different locations with different support, "standards" imposed from external orgs that make no sense for the project at hand...

That's when DevOps gets really helpful and valuable. But if you haven't worked in environments like that, you have no idea what it's like.

"Full stack" in the entire 6 years I've been developing has meant front and backend. Not, "I can spawn VM instances and code."

The problem is there are as many ideas of what "devops" means as there are people saying "devops". The concept of devops as you just described it makes absolutely no sense to me, for example. Operations has always been about automation. CFengine has been around for a long time. So to me, your version of devops just seems like a fad buzzword applied to the same old same old.

Better automation than cfengine, with any luck. But in most companies, Operations does not mean automation; that's the very reason the DevOps movement started.

Usually worse; it seems most people use Chef or Puppet. I understand what you are saying, but I believe otherwise. Operations has always relied heavily on automation: we managed several thousand Unix machines with 3 or 4 people back in the '90s, and it was totally normal. Devops started for the same reason cloud started: to put a new buzzword on what people have done forever, so that it can be sold as a new silver bullet.
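For readers who haven't used cfengine, Chef, or Puppet, the core idea these tools share is convergence: declare the desired state of a resource, compare it to the actual state, and act only on the difference, so re-running is always safe. A minimal, hypothetical sketch of that model in Python (all names here are illustrative, not from any of those tools):

```python
import os
import tempfile

def ensure_file(path, contents):
    """Converge `path` to hold exactly `contents`; return True if a change was made."""
    try:
        with open(path) as f:
            if f.read() == contents:
                return False  # already in the desired state, do nothing
    except FileNotFoundError:
        pass  # file absent: fall through and create it
    with open(path, "w") as f:
        f.write(contents)
    return True

if __name__ == "__main__":
    # Use a fresh temp file so the demo is deterministic.
    fd, target = tempfile.mkstemp()
    os.close(fd)
    desired = "Managed by automation; manual edits will be reverted.\n"
    print("changed:", ensure_file(target, desired))  # first run converges the file
    print("changed:", ensure_file(target, desired))  # second run is a no-op
    os.remove(target)
```

Whether you write this by hand in Perl, as the '90s crowd did, or declare it in a Puppet manifest, the idempotent check-then-act pattern is the same; the frameworks mostly add a declarative syntax and a library of pre-built resource types.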

I'm just not that cynical. We're all tempted to grow cynical after a number of years in this business, but you don't have to let it color your whole world. :-)

Yes, we had automation in the '90s — I wrote quite a bit of it myself! — but the landscape has drastically improved. For one thing, the industry is now embracing it, and with the embrace comes a name. The name is not being "sold" in any way that I can see — no one is getting rich by bandying around the buzzword. It's not being sold by anyone I'm aware of as some kind of silver bullet, and anyone who believes in an IT panacea deserves what he gets. It is, however, being used to sell an idea: that automation in the '90s and before was a good thing, and that we should probably do more of it. DevOps means more than just automation, and in large part those additions are also improvements in the industry. We're better now, partly because we have to be.

For that matter, the cloud is just a name that describes the commodification of computing resources (whether that be actual compute, storage, whatever). Yes, yes, the marketing blowhards of the world have misused and bastardized the word, but that doesn't mean they've ruined it, or that it never meant anything.

It has nothing to do with cynicism. This is literally taking an existing industry and slapping a new name on it to sell products, consulting, etc. Go look at any of the current products' websites. Which ones demonstrate an understanding that automation is as old as computers? Because they certainly all look to me like they want to give the impression that they invented the concept, so you should buy their broken pile of Ruby scripts instead of the other guy's broken pile of Ruby scripts.

The IT operations industry is not dying. Automation is old, but this isn't being billed as "new." What's new is 1) it being widespread, which despite your contention, has not been the case, and 2) automation via open source frameworks and tools, of which there has previously been a dearth.

I have no idea why you think Chef and Puppet are broken piles of ruby scripts, but for the record, they're free. Also, in neither case is anyone implying or saying that they invented automation. Having a nice framework to use is a definite improvement, though.
