It's becoming hard for me to justify staying in the 'DevOps' space with amazing solutions like this coming out regularly from AWS. I can hitch myself to the AWS wagon for a while, but eventually, it seems, dedicated operators just won't be needed for most small to medium deployments. It's tough, because I can see that this is genuinely moving the industry forward, but it's also negating the need for skills I've worked for over a decade to build. I guess this is what carriage builders felt like 100 years ago.
Sure, I just created a CodeStar project which I guess hooked me into 5 or more of AWS services but now what? What if I want to create a DynamoDB or move off of CodeCommit? How do I know if it scales or is redundant? How much does this beast I just created in a few clicks cost / month - can it be cheaper? etc. etc.
It may make DevOps jobs easier or help people get into DevOps but it certainly doesn't replace a trained AWS architect.
But on my last project (much bigger), Elastic Beanstalk started to really show its rough edges, and I was pretty close to tearing it down and setting things up by hand, which gets into becoming the guy who knows AWS Good Enough™.
But I could've easily seen that project scaling way up and being tempted to go the Kubernetes route, which would've required a true DevOps person to maintain.
Just based on my anecdotal experience, I don't think either of the previous projects were paying for the DevOps person in the first place, and I don't think CodeStar is going to replace them now either. This is just low-hanging fruit.
It was a node.js project and the db was separate, and keep in mind my knowledge of AWS is/was limited. But then, this seems to be the target audience for EB, yes? I'm sure a lot of this could be fixed but...
1) Logging

It somehow took me significant time to figure out how to send meaningful logs anywhere but poorly organized flat text files in S3 without adding some kind of logging sink to the code itself. Sure, I could've just plugged in Winston and spat out everything to MongoDB, but the team couldn't understand why we couldn't just get logs out of EB. And yeah, we could grep through the S3 bucket, but that meant syncing the whole bucket and trawling through an absolute ton of log files with no more precision than your knowledge of grep and regex. Not the best thing.
I eventually dumped all the logs into CloudWatch, but the documentation for doing so was so severely lacking (from my perspective) that dealing with CloudFormation template files made me consider ditching EB entirely -- if I'm dealing with this garbage anyway, why don't I just go whole hog and get more control?
Even after dumping the logs to CloudWatch, I was still "the guy" for doing any good searching of these logs. Maybe this was just a team quirk, but what they really wanted was logs in MongoDB or something they know, rather than something AWS specific. Again, a logging sink would've fixed this, but there were concerns about uncaught exceptions and attempting to send to MongoDB in an unknown state.
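For what it's worth, the in-process logging sink we debated doesn't have to mean Winston-plus-MongoDB; the pattern itself is small. Here's a rough sketch in plain Node -- the flush target is just a callback, and all the names are illustrative, not from any real setup we had -- including a guard against the sink itself blowing up:

```javascript
// Minimal in-process logging sink: buffer structured entries and flush
// them in batches to whatever store the team can actually query.
// The flush target is just a callback -- swap in MongoDB, CloudWatch,
// or anything else; nothing here is EB-specific.

class LogSink {
  constructor(flush, { maxBatch = 50 } = {}) {
    this.flush = flush;      // receives an array of entries
    this.maxBatch = maxBatch;
    this.buffer = [];
  }

  log(level, message, fields = {}) {
    this.buffer.push({ ts: new Date().toISOString(), level, message, ...fields });
    if (this.buffer.length >= this.maxBatch) this.drain();
  }

  drain() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    try {
      this.flush(batch);
    } catch (err) {
      // Never let the sink take the app down -- the concern above about
      // sending to MongoDB in an unknown state applies here too.
      this.buffer = batch.concat(this.buffer);
    }
  }
}

// Usage: collect batches in memory for the example.
const stored = [];
const sink = new LogSink(batch => stored.push(...batch), { maxBatch: 2 });
sink.log('info', 'request handled', { path: '/health', status: 200 });
sink.log('error', 'upstream timeout', { path: '/api', status: 504 });
sink.drain();
console.log(stored.length); // 2
```

The try/catch around flush is the whole point of the design: the sink fails soft and re-buffers rather than throwing into request-handling code.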
2) Deployment Time & "Health Checks"
The team was used to pulling the latest changes manually and restarting the node.js process. Introducing CI ironically increased friction, as our deploys were taking upwards of 10 minutes (20 if immutable). This doesn't sound like much of a problem in a traditional environment, but going from a "cowboy coder" environment to "why the hell does it take 20 minutes to deploy!?" was a stretch. My initial answer was "health checks!" but then we discovered that the health checks weren't actually doing much for us... I had erroneously assumed that part of the health check meant monitoring the responses of the web service and aborting deployment upon massive failure. We had a deploy that crashed and returned 500 on every request, yet somehow didn't abort. Part of this may have been because the developer involved in the bug did everything possible to prevent a "crash" instead of letting the process die upon unrecoverable error (part of a frontend JavaScript mindset, I assume), but trying to sort out how to make the health check look for 100% 500s brought me even further into "why not set it up myself?"
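To be concrete about what I expected the health check to do: watch the 5xx rate and fail fast so the deploy tooling sees a dead instance instead of a "healthy" one serving errors. A sketch of the in-app guard I eventually wanted -- the window size and threshold are arbitrary numbers I picked for illustration, not anything EB provides:

```javascript
// Rolling 5xx tracker: if every one of the last N responses is a 5xx,
// give up and let the process die, so the load balancer and deploy
// tooling actually register a failure.

class ErrorWatchdog {
  constructor({ window = 20, threshold = 1.0 } = {}) {
    this.window = window;       // how many recent responses to consider
    this.threshold = threshold; // fraction of 5xx responses that triggers failure
    this.recent = [];
  }

  record(statusCode) {
    this.recent.push(statusCode >= 500);
    if (this.recent.length > this.window) this.recent.shift();
  }

  shouldDie() {
    if (this.recent.length < this.window) return false; // not enough data yet
    const errors = this.recent.filter(Boolean).length;
    return errors / this.recent.length >= this.threshold;
  }
}

const watchdog = new ErrorWatchdog({ window: 5 });
for (let i = 0; i < 5; i++) watchdog.record(500);
console.log(watchdog.shouldDie()); // true -- time to crash and roll back
```

In a real app you'd call record() from your response middleware and process.exit(1) when shouldDie() flips, which is exactly the "let the process die on unrecoverable error" behavior the developer was working around.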
3) Deployment Failure
We hit a hard limit of deployments and had to clean out a list of artifacts we didn't even know existed before being allowed to deploy again. WTF? This is when the team started to really question the value EB was providing.
4) I ended up "the guy" anyway, so what does it matter?
The documentation was too dense for my team, and the quirks were plentiful enough that I ended up "the guy" anyway. One day I came in late and a deployment had gone wrong. EB had been rolling out bad code for 20 minutes and they didn't know how to fix it. I had to clean up the mess. That was really the "fuckit" moment for me. Hand managed EC2 instances wouldn't have caused this problem. Regardless of knowledge problems within the team itself, the whole point of adopting EB was to avoid needing infrastructure specific knowledge to scale & deploy. Now EB was doing more harm than good.
These are just the things I remember off-hand. I'm sure there are solutions to these problems, but it just wasn't worth it for me to investigate when really my purpose was to write and ship code, not futz with infrastructure. We would've run into fewer problems with a naive, manual setup in the end.
You can ping me at rtannerf dot com if you want to know more. I'm sure I can dredge up a few more annoyances and clarification from an old coworker. Hope that helps.
More and more work can be outsourced either to services like the ones AWS is building or to cheaper IT professionals anywhere in the world.
The only question here is when taking the lead outweighs the costs.
If this statement is not true, then Amazon would say that AWS is failing to fulfill its mission.
I could definitely see fewer ops engineers needed in some companies that move to AWS from some in-house platform, but overall "devops" is rapidly increasing in demand.
Bezos is maniacal about winning, and I'm sure this will win where they want it to win.
Much of my value as a consultant is that I can drop into a shop, understand the tooling and the stack, and develop a solution that makes sense, using AWS tools where appropriate or general-purpose ones where the AWS ones don't fly, and I can make it approachable for developers (not operators, but developers) with less experience and make it easy for them to hammer on it and expand it as needed.
That's not going away--if anything, the steadfast drive towards understanding less of how anything you do works makes people who do understand it more valuable when the shit hits the fan. And it always does.
I should mention it's a very small team and the option to split the DevOps work does not exist. I'm still evaluating GCE and friends, because the amount we pay for Heroku is ridiculous.
Edit: Also I feel like a need an AWS <-> English dictionary
The set of shops that actually benefit from k8s or the like are a rounding error and so many person-hours are burned building stuff like this that it hurts. Simplicity is better than complexity until your needs require complexity.
If you'd like to chat further, feel free to reach out - email's in my profile.
I'd be interested to hear about experiences using CodeCommit, CodeDeploy, etc. but I doubt AWS found the One True Workflow.
I admit, it wasn't the easiest thing to set up. But it's nice. When I commit my changes to my mainline, it deploys the code to the host and configures everything for me, and keeps all my hosts up to date.
A coworker uses it all to kick off Lambda functions.
CodeCommit is just a basic Git setup, so nothing special. But working with CodeDeploy and Pipelines is OK. I wouldn't say it's much better or worse than other products, and you can use them to deploy to non-AWS environments too.
I don't know, I can't really sell it well. It does what it does, and it works well enough for me.
Is this because the further they move up the stack, the more UX matters? That's Amazon's perennial weakness.
Remember that Bezos says, "Your margins are our opportunity." They will get around to gobbling your market, unless it requires something that doesn't scale for them.
AWS's strategy is not looking for the one true workflow; it's to envelop enough common use cases that it dominates nearly the whole field, giving you as custom a service as you want.
I call it the "Atlassian strategy": be mediocre at enough things that you become enormously successful.
It's a long way from a death knell to competitors though.
AWS needs these to prevent other players from encroaching on their ecosystem. The higher an offering sits in the tech stack, the lower the quality required for it to be dominant. But the ecosystem will secure the customers.
It's not that you cannot beat AWS in one or two offerings, but you cannot beat them as an ecosystem.
Very true! If you only offer one piece, then you have to
1. Integrate with them and other providers
2. Be substantially better (2x? 5x?) in order to beat the default provided by AWS
3. Compete with open source options as well (where the cost is the time to set up a solution, but there's no cash outlay)
As is, I wouldn't use it without an extreme organizational requirement.
Naah... it will probably remain, for more than 10 years, just good enough for prototyping your applications.
Ops engineers have two choices: adopt a symbiotic relationship with the tooling, where they can be even more productive than before while wielding the tools, or eventually lose their jobs.
CodeStar (and its competitors) doesn't eliminate the future prospects of ops engineers. Instead, it reduces the grunt work necessary to get something working. (After all, how much fun is it to actually set up and configure Jenkins... for the hundredth time?)
So it's possible that over time, ops engineers get more time to worry about things that are harder (or impossible) to automate. For example, organizations run similar, but not exactly the same dev workflows. So understanding your existing dev workflow, and how to tune it is an ongoing thing. And with something like microservices, maybe you have dev workflows that vary by team. Or how do you tune your monitoring infrastructure to let you root cause faster, instead of just taking stuff out of the box. Etc.
I could be wrong, of course. Curious what others think.
You just need to shift from "I know how to edit this /etc config file" to "I know how to build this on aws|gc|azure". It's actually quite liberating.
Pivotal Tracker for life!
Really, if you're not okay with your job description evolving substantially every 5-10 years, then a career in IT is going to be a source of a lot of stress for you.
I love this stuff. I deeply enjoy watching technology develop and become more efficient. This feeling is something akin to watching your kid drive off to college: I'm probably not needed as much anymore, but I'm pretty excited to see how far it has all come. That doesn't mean there isn't a selfish pang of nostalgia for how things once were (and a little bit of existential fear for my future financial wellbeing).
> it's starting to look like I may not be needed on the bleeding edge at all.
If it's any consolation I'm pretty sure this has been a theme of worry for hundreds of years, if not since the beginning of technology. Thankfully humans are extraordinarily adaptable (and bad at future prediction) so a combination of evolving our skills and failed promises means there's probably going to be a lot of work for us to do for a very long time.
> I'm probably not needed as much anymore.
I think this too is just part of successfully doing your job. And getting old. Hopefully you get to move on to new things as you hand off or automate the old. And then someday you realize that the twenty/thirty-somethings are really sharp and maybe woodworking ain't so bad after all. :-p
From first pass, saying CodeStar is going to kill DevOps is like saying that Lightsail (the VPS thing) is going to kill DevOps. At the end of the day it gets you in the door with AWS until your company can be adult enough to build something for real.
If anything, I think it's good for DevOps. CodeStar could mean less hacked-together junk at the get-go. So when the real DevOps person gets hired, there's less to clean up and unf'k.
But DevOps? Wasn't/isn't it always just a matter of time, for most small/medium sites, until someone hides all of the complexity behind a nice, smart interface and leaves very little for people to do?
The nice thing is the technology moves. In fact I imagine that's what attracted you to the industry in the first place. As these things roll forward we start doing more and the pie just grows.
Maybe the future of DevOps is in managing, deploying and optimizing AI stacks? Or maybe just managing the -cost- of AI? Or how do we manage and exchange personal health data on these platforms? Or biotech experiments? Or global IoT stacks? Maybe it's managing APIs, or solutions that are federations of third-party services. So much stuff in the technology pipeline it blows my mind.
Some of the harder skills may become less relevant (they'll never go away, in actuality). The principles and soft skills you have still have a long way to go.
This stuff is not as easy as vendors or AWS would have you believe.
Anyone here have experience transitioning from being ops to a behind-the-scenes cloud engineer?
I imagine that market is plump full of CCIEs, etc.
mainframe, timeshare, client server, cloud, it goes around and around
Elastic Beanstalk: runs in a managed application environment
EC2: runs on virtual servers that you manage
Lambda: runs serverless
As long as there is text with pictures for weird video-averse engineers like me.
I actively avoid videos.
Few things are more frustrating than having to skip back 15 seconds multiple times in a video because I didn't quite get something, or trying to hunt around ahead in the timeline because the current stuff being presented is trivial introductory fluff.
Now I'm not against videos, I just want good text (not just transcripts) to be available for people like me.
This really made me laugh out loud, so thank you. I understand exactly what you mean and I agree. Cheers.
This may also mean that I will never have to touch devops again. I may never have to use Terraform, Kubernetes, Docker, CoreOS, and all that other stuff. At least, not for small to medium-sized deployments. It also means that I'll never need to touch Dokku or Flynn. I was about to say that some company just wasted a lot of money on Deis, but then I remembered that it was Microsoft. So maybe that was a good move, although I feel like Azure is the Bing of cloud providers.
I'm actually angry that this took so long. This should have been available 5 years ago.
True, you may never have to as long as you're okay putting 100% of your faith in the benevolence of a single company. Remember, a good portion of AWS services are completely proprietary and serve as a pretty fancy set of golden, jewel-encrusted handcuffs.
Fully open source projects like those you mentioned, on the other hand, are positioning developers who know them to treat the cloud providers of today as just another source of CPU cycles, storage, and networking. For the services I run on Kubernetes in AWS, I can lift and shift those in a day if/when the day comes that Amazon is a little less benevolent with their pricing and service levels. If we were asked to do the same with services built expecting AWS functionality, it would take months to years. Guaranteed.
Yes, there's a trade-off: I don't get 100% managed services, and I have to know a little about the workings of the platform. But I would be making that same commitment to AWS the second it didn't do everything for me, and in my experience, that moment usually occurs somewhere around the 10-minute mark.
Amazon is great today, but Bezos won't live forever and who knows what happens when his successor takes over. It could be great, or it could be a mass exodus of people dumping years of investment. Personally, I'd rather hedge my bets and use their low-cost VMs as my source of compute today, but leave myself plenty of dead-simple exit options and sleep a little more easily at night. That's what open source software is all about.
Also, not all doomsday scenarios are terribly far fetched. AWS might one day have to discontinue a service after losing an intellectual property lawsuit, for example.
Everything old is new again.
Years ago I wrote a blog post on how to set up git commit hooks, with a sample shell script to update an app, etc. I did that myself for a year or two, then went back to using Heroku and other services.
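The hook setup was nothing fancy. A git post-receive hook gets lines of "oldrev newrev refname" on stdin and decides whether the pushed ref warrants a deploy; here's a sketch of that decision half in Node (the branch name is a placeholder, and the actual update step -- pull plus restart -- would follow):

```javascript
// A git post-receive hook receives lines of "oldrev newrev refname" on
// stdin. This helper decides whether a push should trigger a deploy:
// only pushes to the deploy branch count, and branch deletions (newrev
// of all zeros, per git convention) are ignored.

function shouldDeploy(stdinLines, deployBranch = 'master') {
  return stdinLines.some(line => {
    const [, newRev, refName] = line.trim().split(/\s+/);
    return refName === `refs/heads/${deployBranch}` &&
           newRev !== '0000000000000000000000000000000000000000';
  });
}

// Example: a push to master triggers a deploy; a feature branch doesn't.
console.log(shouldDeploy(['abc123 def456 refs/heads/master']));       // true
console.log(shouldDeploy(['abc123 def456 refs/heads/feature/logs'])); // false
```

From there, the update script is just "git pull && restart the process", which is the part that eventually made Heroku's git-push deploy feel worth paying for.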
There are a lot of small projects that live inexpensively on Heroku. That said, I would personally not do a large multi "server" deployment there because of cost, but for the small stuff I think that they are the 'best of breed.'
What this has me thinking now is that a micro beanstalk doesn't cost more than Heroku initially and will scale much more economically if my product takes off; and I can actually achieve `git push` deployment without a lot of yak shaving.
I had a similar scare yesterday when Google launched Google Earth in the browser with a bunch of technology that, on the surface, competes with my start-up. But digging deeper, there is often still an opportunity, and they've validated your market.
CodeStar is more like AWS Elastic Beanstalk, which I still don't know how to fit into my toolchain and development flow.
Now, AWS CodeStar requires a bunch of clicks, wizards, permissions configuration, choosing a template, and then more changes...
To get people started, they needed to show a bunch of screenshots, while something like Heroku can explain how to use the whole thing in 2-3 commands in the terminal. That's all.
However, I hope something like CodeStar gets more polished and mature so I can have one place to bill and one thing to worry about, and not Docker, Kubernetes, Mesos, Ansible, SSHing into machines, database backups, etc...
edit: Just found out for web applications they use Elastic Beanstalk as deploy step.
I think this looks great.
You can always tell when someone's never worked in an "IBM shop" or an "Oracle shop".
Amazon refuses to be pinned down on how small (it's "low to moderate" bandwidth on their instance matrix), but you will feel the pinch with extended usage.
But still, you can hop on an m4.xlarge for only 150 a month. Get 14GB of available cache on 4 cores with high networking.
FWIW, redis is maybe an outlier here, but it's pretty stark. The prices are just so far out of reasonable range it feels like. Their postgres pricing seems more in line (assuming heroku's support is good, I have no experience with such).
The real question is what does the cloud platform future look like when that slack goes away, but I think we're pretty far from that reality.
Edit: Looking into the CloudFormation templates created, there are CodeStar resources, so, I assume, the new resource types are just not documented yet. Not that it needs much documentation, but I will try creating a template with those to test if it works.
Edit 2: Here are the new resource types:
    Description: Starting project creation
    ProjectDescription: AWS CodeStar created project
    ProjectId: !Ref 'ProjectId'
    ProjectName: !Ref 'AppName'
    StackId: !Ref 'AWS::StackId'

    Description: Adding application source code to the AWS CodeCommit repository for the project
    CodeCommitRepositoryURL: !GetAtt [CodeCommitRepo, CloneUrlHttp]

    Description: Adding the AWS CodeCommit repository to your AWS CodeStar project.
    ProjectId: !Ref 'ProjectId'

    DependsOn: [SeedRepo, CodeBuildProject, ProjectPipeline, SyncInitialResources]
    Description: Adding all created resources to your AWS CodeStar project
    ProjectId: !Ref 'ProjectId'
I don't mean to knock it, it's great to see this functionality make it to AWS, I was just wondering if there were any significant differences to Azure's offering?
(Thanks for the heads up)
Definitely a cool thing: a good way to have something that's easy to start with, but that lets you add more complex infrastructure as you need it, without incurring the usual "learn everything at once" cost.
But I think this might already be true if you're comparing Heroku with AWS.
It's pretty trivial to do this kind of thing with Jenkins or Gitlab CI and those don't have that issue. And then you can deploy however you want to any provider.
Maybe I don't really understand the concept of lock-in. I've never felt "locked in" to anything on AWS, unless you're talking about contracts for reserved instances.
> It's pretty trivial to do this kind of thing with Jenkins or Gitlab CI
I think you might be trivializing what's being offered here. This is bigger than a CI server.
It basically just seems like an AWS-specific deployment recipe as a service, for people who might not be familiar with a tool like Ansible. Is that accurate?
You only lock yourself in if you allow yourself to lock yourself in.
Disclaimer: I'm the founder at distelli
Not using their code hosting, use GitHub for code + Wiki.
My question is: is CodeStar only worth using when you need to get a basic "template" app out the door? What happens when our requirements mean changing web server config, firewall rules, and all the actual customizations that happen in practice? If you start on CodeStar, are you "stuck" with it, or can you easily get off it but keep the underlying services? I know it's just EC2 horsepower under the hood. The last thing I need is a service that isn't customizable; if that's the case, maybe I'd be better off sticking with straight Elastic Beanstalk?
That being said, after poking around I have a few criticisms:
1. It doesn't seem like the example templates include Docker. At this point, I think Docker should be considered a must-have for any new ops tool. It makes your application much more portable and eases the learning curve for new developers.
2. Getting things set up and working is still too complex. It took me 20 minutes to get my first project working fully (even just using their standard tools).
3. There doesn't seem to be any ability to bring in an existing project. It'd be awesome if I could just provide a GitHub URL and Amazon would automatically set up all the ops tools I need to have a production-ready deployment of that project.
I welcome the competition though. Let humans focus on unique work while computers automate things.
 In fact, I founded my startup on that premise: http://getgandalf.com/
>>each project using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.<<
As a developer I don't need to know all of that. All I need to know is one command to deploy my deployment package for the project. That way I am productive quickly. Then give me the ability to configure the VM/container size and autoscaling. Now that my app is up and running, give me something like Azure WebJobs to run my worker tasks.
Amazon needs to have a better PaaS story than what Elastic Beanstalk is. Beanstalk is very flexible, but in return it ends up being too complicated to set up.
Compared to Heroku and Azure App Service (which I am using in a .NET project and was surprised to find very easy to get started with), AWS Elastic Beanstalk is very complicated. With Heroku and Azure App Service, the developer-friendly PaaS UX is more or less a cracked problem. AWS just needs to copy it well.
Plus, it's a template, not a closed solution - after it runs, you can observe what was created and tweak to your liking. If you do so much configuration as you're implying, you probably have some templates of your own.
I have no experience with devops and want to learn to manage my applications directly (mostly use services like now or heroku currently), but looking at learning aws always seemed intimidating with the 600 different services they run.
It depends what you mean by that. In a big way, this negates the need for a dedicated ops engineer for small to medium deployments on AWS. If you're a developer looking to operate your own small application, this is a great entry point.
Ops people were already largely negated in small to medium deployments. This doesn't change much about that.
Developers who can handle operations tasks are, as always, in huge demand.
So Amazon moved from letting you develop from the CLI to a GUI code editor.
Me too. Hopefully support for Visual Studio 2017 will come out soon, as the toolkit for it is still in preview.
Doesn't look like they've quite nailed the ease-of-use in the way Heroku has yet, however.
It seems to just be something that makes it easier to get started with AWS. A bit like how you can choose Azure when creating new projects in Visual Studio, which will take care of Azure-related boilerplate
Surprised they don't plug that in front of this.
I get that the graphics aren't the important part, but jeez.
I'd say they are orthogonal. Craigslist's graphics are bad/nonexistent but I like its UX a lot.
This is despite their terrible UX, not a result of good UX.
CPanel looks nicer than their UI.
I understand the urge for some people to use a control panel, but cPanel and its like seem specifically designed for people who want to be webhosts without knowing anything required to be an actual webhost. I think that's reflected in the UI/UX.
And yet, while reasonably pretty, cPanel sucks. It's actively harmful to effectively administering a webserver.
Maybe how it works matters more than how it looks?
(Not to mention, most tasks you should script/do from the CLI, once you got the hang of things.)
My vision for AWS is being able to login to AWS and see c9 with one click deploy and configuration.
I'm patiently waiting for this. It was one thing that really woke me up to Azure, with their wizard form directly from Visual Studio. After AWS re:Invent 2016, the Azurphoria wore off and I was an AWS fanboy once more. Having the ability to edit/deploy inside my browser would be a huge tide-turning moment.