I don't understand. I want to develop code. I don't want to become an AWS/S3/GitHub/Jenkins/Actions/Terraform/etc expert. I know enough of this to be dangerous, but not at a level that passes as professional. Yet I am regularly tasked with maintaining the full deployment of code.
There's a reason to have a team of people doing this "DevOps" work, just like we have a team of people who do SRE. It creates a standard and a single point which all work flows through. Then you don't wander onto a new project only to realize they use $BESPOKE_DEPLOYMENT_METHOD because "it's what we used 6 months ago". Or worse, you don't have a developer playing with a massive, nuclear-powered footgun like Terraform and accidentally destroying infrastructure.
Making DevOps/DevSecOps/$BUZZWORD the responsibility of developers is a cost-cutting measure, not a responsibility measure.
It just doesn't work for the most part. Maybe as an Ops person I just want to do Ops and not have to understand your code, but Lambda has specific limits on how long code can run. I can't allocate CPU and memory resources in Kubernetes without a deep understanding of the application. S3 has limitations on how files can be distributed and accessed. Integrating with CI gets complicated quickly and requires understanding the code being integrated. A lot of the time when building things in Terraform, I spend three times as long getting the information out of a developer as they would spend doing it themselves. Yes, maybe 60% of my job involves lower-level infrastructure that doesn't touch a dev, and we do need ops engineers for that, but the other 40%? It works both ways: a good "devops engineer" needs to understand code, but if we don't have a shared language we both just end up banging our heads against the wall.
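To make that concrete, here's a minimal Terraform sketch of the kind of thing I can't fill in without the developer. Everything in it is illustrative (made-up names, an IAM role assumed to exist elsewhere); the only hard fact is that Lambda caps the timeout at 900 seconds:

    # Hypothetical function; names and paths are invented.
    resource "aws_lambda_function" "report_job" {
      function_name = "report-job"
      handler       = "main.handler"
      runtime       = "python3.12"
      filename      = "build/report_job.zip"
      role          = aws_iam_role.lambda_exec.arn  # role assumed to be defined elsewhere

      # Only the developer knows safe values here. Lambda hard-caps
      # timeout at 900 seconds, so anything slower has to move to
      # ECS/EC2, and memory_size also scales the CPU the function gets.
      timeout     = 300
      memory_size = 1024
    }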
If you're building a CRUD app in a common framework with low volume, sure, you can toss that over the wall.
At companies I've worked for where they actually do this, the DevOps people are generally assigned to a team. So you might have 10 developers on a project and one person managing just the DevOps, precisely to solve this problem. In smaller companies I can see where this is a problem. I agree, and there's nothing wrong with knowledge sharing. This could even be as simple as PR descriptions including benchmarks/code/etc. and making sure the DevOps people make their points clear during design/planning.
In theory, devops shouldn't be responsible for maintaining the performance of code. The specification should say what it should run on, the devops folks set up a pipeline and manage that thing, and the developers are the ones taking heat for not hitting that goal. If the devops folks are taking the heat for that, it sounds more like cost-cutting measures flowing the other direction.
Yeah, and this is a great solution (aside from the bus factor). Maybe the person I responded to was really concerned less about knowledge and more about expertise (the word they actually used was "expert"). At a company I worked with we called these T-shaped engineers: deep in one thing, but broadly knowledgeable. Devs have to have knowledge of ops, but not "expertise"; that is ultimately what ops is for. We may just be fighting over where the knowledge line is sufficient and what constitutes "expertise" :-). I, for instance, think Terraform is not that much of a footgun and provides good rails for developers.
I think you make a strong case for ops who can dev, but a fairly weak one for devs who can ops, so I think you and the parent are actually agreeing. And this mirrors my experience pretty well: I need to know the code to be able to ops effectively, but it's much rarer that devs need to know how to ops to dev effectively.
And in some ways this is by design: I want some distance between dev and ops because it gives me the freedom to rearrange infrastructure transparently. I can move workloads between Lambda, ECS, and EC2 based on observed performance characteristics without anyone being the wiser.
This is going to depend on a lot of things: size of company, cloud native or not, org structure. Everyone would love to live in the Google world where a team of SREs runs everything. But even in the world where a DevOps engineer is embedded on a team, there's the bus factor to consider.
I think most modern AWS services are more the equivalent of an API or a microservice than they are a server, and you need to understand the limitations of the services you integrate with. If you're a cloud-native company and don't have a mature platform engineering team, devs are going to have to know a lot about AWS.
If the developer has all the information necessary to create an S3 bucket, Lambda function, Kinesis stream, etc., does it make sense for them to offload 10 pieces of information to me, or to learn HCL and interact with it themselves, especially if it is something they do often? Especially if there's a central dev/ops team and they're the limiting factor.
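Those "10 pieces of information" are often just the HCL, more or less verbatim. A minimal sketch with invented names, to show the scale of what the dev would own:

    resource "aws_s3_bucket" "uploads" {
      bucket = "example-team-uploads"  # hypothetical bucket name
    }

    resource "aws_kinesis_stream" "events" {
      name             = "example-events"  # hypothetical stream name
      shard_count      = 1   # only the dev knows the expected throughput
      retention_period = 24  # hours; depends on how consumers behave
    }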
Devops taking over every infrastructure change for a broad team of devs is inefficient and expensive. It's also probably frustrating for devs that are aware of the above factors. Lots of devs I know would prefer to do it themselves.
And none of this even touches on the fact that infrastructure, and integrations with infrastructure, are not static. Should I be reviewing every PR that touches the client code for an AWS service, because the dev team doesn't want to learn how that service works? Maybe, but in the end I don't know if this makes anyone happy.
Platform engineering is certainly the goal: ops creates a platform, and dev just consumes that platform. But I don't know if it is realistic. Every system on rails is great until you try to take the rails off, and most devs I know hate rails :-) Fundamentally, "devops" is meant to solve human problems, not tech problems, so it will have to be dynamic.
This makes me wonder if ops allocating compute resources is really a good use of time when it requires precise details of an app (details which can and do evolve). This isn't a slam against ops, either; it's a knock against the tech itself, which forces all this incidental complexity on you.
Yeah, I mean, fundamentally it's so complex because you have to make tradeoffs, and people hate tradeoffs:
"I don't want to have to worry about what machine my app runs on" vs "kubernetes is to complex"
"Dependencies change to often" vs "I don't have time to maintain this thing I wrote myself"
"I just want the infrastructure to figure out what I need" vs "I want to be able to build whatever I want with a bespoke language/framework/database/architecture"
> it's a knock against the tech itself, which forces all this incidental complexity on you.
If I were to say that Kubernetes is the magic secret sauce that fixed all the incidental complexity, I would get laughed out of the room. There is no magic secret sauce for incidental complexity; the more we try to fix it, the more we create. (Or it's the cheap-fast-good problem: this would probably be easy if there were no cost limitations.)
Our devops team runs the infrastructure, but details like "what resources do we allocate where" are decided primarily by the software devs. I don't really see the conflict here. I don't know and don't care how to put together the infrastructure required so that I can change the CPU allocation on a Kubernetes pod, but I also don't expect devops to know jackshit about our code.
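That split is easy to see in IaC form. A hedged Terraform sketch (deployment name and image invented, not from this thread): the devs only ever touch the resources block, and devops owns everything around it:

    resource "kubernetes_deployment" "app" {
      metadata {
        name = "example-app"  # hypothetical name
      }
      spec {
        replicas = 2
        selector {
          match_labels = { app = "example-app" }
        }
        template {
          metadata {
            labels = { app = "example-app" }
          }
          spec {
            container {
              name  = "app"
              image = "example-app:latest"  # hypothetical image
              # The only part the devs need to care about:
              resources {
                requests = { cpu = "250m", memory = "256Mi" }
                limits   = { cpu = "500m", memory = "512Mi" }
              }
            }
          }
        }
      }
    }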
> I don't understand. I want to develop code. I don't want to become an AWS/S3/GitHub/Jenkins/Actions/Terraform/etc expert.
They’re tools for the job, like your compiler and the language you use to program.
That’s like saying “I like to use Python and couldn’t care less for Java”. It’s fine to disagree with the team’s choice of tools, but one needs to eventually commit to the choice, even if it’s not your preferred one!
There’s an old adage that “if you’re writing clever code then you may not be clever enough to debug it”. It’s true! Operations requires a deep understanding of the code running in production, the business rules, and the customer. The person who operates your code will eventually be smart enough to develop it entirely too, cutting out the “dev”. I’ve personally seen this happen over and over.
DevOps came about because of developers who "just wanted to write code". They would write something, then throw the dead cat over the wall and say "you figure out how to run it". That... doesn't really work. Somebody needs to explain to the Ops people how to run the code. Hence: DevOps... a way to get Dev and Ops to avoid throwing dead cats over walls.
If you don't want to think about AWS S3, GitHub Actions/Jenkins, Terraform, etc., then we need to work together. All those tools and services exist because the software developers are sitting in their sandbox and don't want to come out and play. The systems and tools that we run your code with... suck. A lot. We need programmers to make the systems better. We (in Ops) are a little busy just trying to figure out how to run your apps without them falling down. We don't have a lot of time to reinvent the state of the art of computer systems.
For example, we need a distributed operating system. Not some fucked-up kludge of a monolith of microservices overseen by a company that has more engineers than brains... but an honest-to-god, stable-ABI, simple, composable, general operating system. We need Linux to come out of the box ready to run distributed applications, in a way that doesn't require a PhD. Once we have that, then you - yes, you, the developer! - will be able to make applications that automatically scale so easily that we will never need to utter the phrase "container" ever again. You will rarely ever need us again, because the system will be so simple, so general, that anybody who can use the terminal can build and deploy applications without ever learning anything outside of your programming framework.
But we need you to make that distributed operating system. Until you do, we will just have more stupid kludges, more bizarre unnecessary complexity, in the futile attempt to constrain all the crazy shit we want to do with technology, while trying to run your apps for you. Please, I'm begging you - put me out of a job.
> Yet I am regularly tasked with maintaining the full deployment of code.
Same shoes, but a different perspective. I can figure out where it's not working, and if it's my/our team's area of responsibility then we go fix it.
Since we handle infrastructure (as code) and deployments within the team, along with all the development, most problems are handled by us, unless it's clear that they can't be, e.g. some API that we consume keeps throwing 500s; we can't fix that.
Our operations work is helped by hundreds of automated tests and thousands of metrics. I always thought that this is DevOps, but it sounds different from what most people here are alluding to.
> Making DevOps/DevSecOps/$BUZZWORD the responsibility of developers is a cost-cutting measure, not a responsibility measure.
My background is large multinationals, so my view here is a bit biased, but I don't think cost cutting is the driver.
Large orgs get large change management processes and procedures. Over time, these change management teams become overwhelming behemoths with minds of their own.
I think "DevOps" was designed as a way to "bypass" the bureaucracy?
"We just use this CI/CD pipeline and no need to sit on a 3 hour change management review call..."
Almost everywhere I've worked where I helped run software in production had a step in CD that filed an automatically-approvable change request via automation, just like the automated deploys.
It becomes just robots pushing around paper for compliance.
Alan Kay said that “people who are really serious about software should make their own hardware.” How can you be a good developer if you don’t understand the architectural limitations and choices?
When I design a backend system, I need to think about how the front-end developers are going to interact with it, about my data storage characteristics and scaling, about whether I’m designing anything that’s hard to deploy, and about how logging will work and be aggregated. I have to be able to think about the entire system.
It’s not just “cost cutting”; at a certain point in your career you are expected to know more than just “how to code”. I’m not saying learn AWS, but I would expect any senior developer to know what their code runs on top of.
I don’t think you’re disagreeing with me. Developers should know what their code runs on. They shouldn’t have to add managing that to an already full schedule of work. That’s the difference.
Back when I was in the real world [1] working for a startup, I would build your typical serverless solution with Lambdas, S3, SQS, etc. I couldn’t just use ClickOps, create everything in the console, and expect someone else to recreate everything with IaC. I had to know how to do it.
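For a sense of scale, wiring a queue into a Lambda is only a few lines of IaC once you know the services. A hedged, Terraform-flavored sketch with invented names (the real thing could equally be CloudFormation or CDK):

    resource "aws_sqs_queue" "jobs" {
      name = "example-jobs"  # hypothetical queue name
    }

    # Hook the queue up as an event source for an existing function;
    # "example-worker" is an assumed Lambda, not from the original post.
    resource "aws_lambda_event_source_mapping" "jobs_to_worker" {
      event_source_arn = aws_sqs_queue.jobs.arn
      function_name    = "example-worker"
      batch_size       = 10  # depends on how the handler processes messages
    }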
I think the pushback is rightfully coming from the “ops” part. I consider creating the CloudFormation/CDK/Terraform code part of “development”, part of coding.
If you use Docker, wouldn’t you consider creating the Dockerfile as part of development?
Yes, I knew AWS pretty well by the time I left, I needed to know it to be a good developer in that context, and I designed most of the processes around it. But I refused to do “operations” - i.e. “infrastructure babysitting”.
There is a huge distinction between “I don’t think I should have to know how everything works” and “don’t call me in the middle of the night when something goes down”.
[1] I’m the first to admit that I left the “real world” once I started working in the cloud consulting department at $BigTech
> I think the pushback is rightfully coming from the “ops” part. I consider creating the CloudFormation/CDK/Terraform code part of “development”, part of coding.
> If you use Docker, wouldn’t you consider creating the Dockerfile as part of development?
Sure, you could argue a developer could, or even should, create these things in theory. The problem is that when it goes down, I've made two problems out of one: now I have to manage both the infrastructure of a system and what is running on it. Realistically, even in my current job, it's actually several systems. Now when something breaks I have to pray I can fix it. Instead of a team of infrastructure professionals at least ensuring the hardware is working, my 8-hour day turns into 14 or 16 very quickly the second one thing goes wrong.
So if I need to create a bunch of Lambdas, queues, SNS topics, a few DynamoDB tables, an S3 bucket, etc. and tie it all together, are you proposing that the developer should just create everything in the console and then call over someone else to come behind them and write the infrastructure as code?