The idea is to stop/shut down any resources (such as EC2 instances, RDS databases, ECS tasks, etc.) when you're done for the day and bring them back up the next time you continue.
Currently it's in a very early stage, a hobby side-project :)
Ideally I'm thinking of providing a CloudFormation template or Terraform config, for example, to provision a scheduled Lambda with predefined "start of work" and "end of work" times that does this for you automatically.
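As a rough sketch of what I mean (not the tool's current implementation; the "auto-stop" tag and the EventBridge cron trigger are just assumptions for illustration), the "end of work" Lambda could be as small as this:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find running instances that opted in via a tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:auto-stop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        # Stop, not terminate, so everything comes back tomorrow morning.
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```

A mirror-image "start of work" function would call start_instances with the same filter.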
I'd like to know whether something like this would be useful for you too, and what you think about it.
I'm aware that Capital One was found guilty of not properly securing customer information when they started moving it to "the cloud" (AWS). I'm also aware that the person who orchestrated the data breach, by exploiting the bank's AWS (mis)configuration, had been hired by Amazon as a developer. Maybe the bank's IT staff created "cloud custodian" after the incident.
Custodian was already there; they have a strict pipeline. But what I hear is that the environment where the hack took place bypassed the normal organisational pipeline governance and didn't even have Custodian coverage on the account. Basically shadow IT. I still put a lot of blame on the way AWS IAM makes it incredibly hard to stop the use of credentials outside the VPC in the event they are stolen. For example, source-IP restrictions don't work if the bucket is using KMS encryption, because KMS will decrypt on the user's behalf, appearing to come from a different IP than the user's. The CalledVia condition is a farce that only works with about 4 services out of hundreds, and the metadata service v2 with its IP-level TTL is a marketing gimmick.
Thanks for sharing this. Looks like a sophisticated solution. It might be too complex and enterprise-y for what I needed as a sleepy developer, but it's definitely worth a look :)
Or an API gateway. I have something similar (though very bespoke for me) that has a simple API Gateway web page as its front-end. It also opens up a firewall entry for my IP so I can connect to the machine.
Depending on my mood I either use the web page or some simple curl aliases to start and stop an instance.
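For anyone curious, the shape of that Lambda behind the API Gateway could look roughly like this (the instance and security group IDs are placeholders, and the source-IP lookup assumes a REST API with Lambda proxy integration):

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"          # placeholder
SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # placeholder

def handler(event, context):
    # With a REST API + Lambda proxy integration, the caller's IP is here.
    caller_ip = event["requestContext"]["identity"]["sourceIp"]
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    # Open SSH for the caller's address only. Note this call raises an
    # error if the rule already exists; a real version should handle that.
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": f"{caller_ip}/32"}],
        }],
    )
    return {"statusCode": 200, "body": f"started; SSH open for {caller_ip}"}
```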
I had been looking at creating a Lambda to check for the existence of pods, services, alternate namespaces, etc., as I learn K8s on AWS, to shut down the cluster when I'm not working on it.
Thanks. Shutting down a K8s cluster when it's not in use is a really good idea. Maybe one day I'll play around with AWS's k8s offering and add the trick to aws-cost-saver.
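If it's an EKS managed node group, one possible trick (an untested sketch; the cluster and node group names are made up) is to scale the worker nodes to zero overnight and back up in the morning:

```python
import boto3

eks = boto3.client("eks")

def park_nodegroup(cluster: str, nodegroup: str) -> None:
    # Managed node groups accept minSize/desiredSize of 0, which stops
    # all worker nodes while keeping the control plane running.
    eks.update_nodegroup_config(
        clusterName=cluster,
        nodegroupName=nodegroup,
        scalingConfig={"minSize": 0, "maxSize": 2, "desiredSize": 0},
    )

park_nodegroup("learning-cluster", "default")  # hypothetical names
```

Keep in mind the EKS control plane itself is billed per hour regardless, so this only saves the node costs.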
I'm very interested in learning about your workflows, like what does a typical day look like? I do all my work on a local PC and only use servers for production, so I'm interested in how/why you use the cloud stack.
It really depends on the company and the developers' preferences. In our company's case we do have a "dev" AWS environment for a couple of reasons:
1) To avoid having a side IaC for your dev env (e.g. using docker-compose/local k8s/localstack), which could take as much time to prepare as your prod IaC (Terraform/CloudFormation). Instead of spending hours preparing/debugging local dev, we can work on great features or pay down some tech debt in the real code.
2) Since we have many other teams and upstream dependencies, it's much easier to deploy on AWS when testing complicated features. In an ideal world you shouldn't need this, but unfortunately it's never an ideal world.
3) Local dev solutions can easily end up in "but it worked on my laptop" territory, for reasons ranging from Docker host configs to localstack's deviations from real AWS resources.
But at the end of the day you should see what makes more sense for you :) I guess for many people, spending time on a great local dev setup using docker-compose/k8s/swarm/localstack is cheaper than using AWS for dev anyway.
Not OP, but in my company we do develop locally; however, merged feature branches are deployed to an AWS environment for testing. The environment is identical to production, but with smaller instances. Betas are deployed to yet another identical environment for QA to approve the changes. All three environments are managed by the same Terraform configuration to ensure they are identical.
Though I find a stronger argument in using containerisation to ensure an identical application run-time environment no matter the infra it’s running on.
If we are talking about infra engineering, then sure. That’s where I find localstack helps
I would like to know how and why you use resources like ECS et al. The reason I don't like to use cloud/serverless is that I find debugging much easier on a local computer or a server I have root/metal/hardware access to. And serverless has a very slow iteration rate; it can take minutes just to run a simple test that would be instant locally.
The reason I'm asking is that in my free time I'm developing a cloud IDE, but I'm having trouble understanding the market. I believe cloud dev services will be a huge market, but I don't understand how people use them practically in their dev setups.
I use it (and similar tech) because I don't want to think about individual boxes at scale, rather instances of an application. Containers + orchestration allow me to develop and test in the exact same environment my apps will have in prod, which lets me mostly remove the hardware and OS as variables that could affect runtime behavior. They also make it easier to spin up a local copy of my services' runtime environments for testing, as I need only describe it in a simple yaml file and spin it up w/ docker-compose in a single command.
I found testing arduous at first, but you learn to adapt. In my case, I increased the amount of debug logging my application emits (which can also be useful to enable in prod when hunting down an issue) and invested time in creating unit/integration tests that run outside of the container/lambda to validate specific paths/features. Worst case there's always docker exec and language-specific debugging when I want to dig in manually.
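To make the "tests outside the lambda" point concrete, here's a toy example (the handler and event shape are entirely hypothetical): because a handler is just a function, pytest can exercise it directly with a fabricated event, with no container or deploy step involved.

```python
import json

def handler(event, context):
    # Stand-in for a real Lambda handler, importable by the test suite.
    body = json.loads(event["body"])
    if not body.get("items"):
        return {"statusCode": 400, "body": "empty order"}
    return {"statusCode": 200, "body": f"{len(body['items'])} items accepted"}

def test_rejects_empty_order():
    event = {"body": json.dumps({"items": []})}
    assert handler(event, context=None)["statusCode"] == 400

def test_accepts_order():
    event = {"body": json.dumps({"items": ["widget"]})}
    assert handler(event, context=None)["statusCode"] == 200
```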
I'm still pretty skeptical about serverless (I mostly use it for cloud-specific automation), but as much as people like to shit on Docker it's been pretty great to use from an ops perspective. I interpret most negativity about it as curmudgeonly reactions to the admittedly excessive hype in the technologies, with some fairly pedantic complaints about the (imo minimal) overhead such technologies incur thrown in as an uninteresting justification.
If they don't work the same on a local computer as on ECS, perhaps Docker failed to deliver what it promised?
BTW: I absolutely hate debugging in AWS, it's such a nightmare. Docker was the first thing that made things hard even on premises, but AWS pushed it to 11.
I think the key to sanity, if you have to use public cloud, is to put in the extra effort to discover all bugs locally before you deploy to public cloud. Which probably is not a bad thing.
Ok, I don't know how else to ask and I'm seriously not trying to be an arse but how do you? I'm genuinely interested as it's something I've been struggling with myself lately.
Interesting work! I feel like this can't be a new problem. What comparable tools exist for doing this today? It's basically just spinning up a staging env while you work and spinning it down when you knock off, right? Almost like a Heroku dyno spinning up for you.
That's a good alternative for EC2/RDS instances, but when it comes to other cases like DynamoDB provisioned throughput, Kinesis shards, or ElastiCache, it wouldn't help much.
Ops people need a dev environment to test in, especially for infrastructure as code. With increased connectivity between developer code and infrastructure, I prefer that my team finds out everything in a development environment that's exactly like staging and prod. That way deployment to staging is proof of the IaC change. There's less risk.
This also lets developers try out new services in the cloud without impacting staging... (This is on top of a sandbox for research, but integrating research takes time and an environment too.)
I guess it depends on the size of the company. How else would you constantly test integrations between different services? Especially if some services can't easily be run on a laptop.
Also, if you add Terraform/Ansible and all that jazz to the table, where else? The infra team needs dev/stage as well.
It’s super useful to be able to spin up a dev environment for doing things like experimenting with the performance impact of different configs. It’d be annoying and potentially disruptive to other team members to do that on staging.
It looks like a nice tool doing its job well :-) I'd also consider services like provisioned DynamoDB tables or Kinesis streams in this context, because there you can also waste lots of money depending on your setup. For example, you could decrease the provisioned read/write capacity for a DynamoDB table or decrease the shards of a Kinesis stream overnight. I've discussed these topics in a blog post, in the context of a CloudFormation stack, if anyone's interested: https://www.sebastianhesse.de/2018/04/22/shut-down-cloudform...
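For reference, both of those scale-downs are one boto3 call each (a sketch only; the table/stream names are placeholders, and the table must be in provisioned billing mode):

```python
import boto3

dynamodb = boto3.client("dynamodb")
kinesis = boto3.client("kinesis")

def scale_down_for_the_night(table: str, stream: str) -> None:
    # Drop a provisioned table to the 1 RCU / 1 WCU floor.
    # Note AWS limits how often you can decrease capacity per day.
    dynamodb.update_table(
        TableName=table,
        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
    )
    # Shrink the stream. UpdateShardCount can only halve the shard count
    # per call, so going from many shards down to 1 takes repeated calls.
    kinesis.update_shard_count(
        StreamName=stream, TargetShardCount=1, ScalingType="UNIFORM_SCALING"
    )

scale_down_for_the_night("dev-orders", "dev-events")  # placeholder names
```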
Great addition to the pile of scripts to take with you on problem-solving missions. I have something similar, but written in Terraform.
WRT AWS cost visibility, I find it is usually faster to spin up and grab most of the relevant information using mlabouardy's [komiser](https://github.com/mlabouardy/komiser) than to try and standardize yourself.
Not shilling, just a tool I like.
If you're using IaC and have it set up in a CI/CD pipeline, you could also achieve the same by having a cronjob set a flag outside of work hours, and use conditionals in your IaC based on the value of that flag (e.g., for Terraform `count = var.scale_down ? 0 : 1`)
While this only works with instances, I usually just cron the shutdown command with a future time. The nice thing is, if I'm working late I give myself a couple-hour window to cancel the shutdown on any nodes I'm still using.
I find it price gouging that AWS doesn't already implement this on their own within their pricing structure, and that you need 3rd-party tooling to do this.
It allows you to create automatic start and stop schedules for your Amazon EC2 and Amazon RDS instances. It uses a combination of CloudWatch/Lambda/DynamoDB to check against tagged instances.
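Not that solution's actual code, but the tag-driven approach it describes boils down to something like this (the "Schedule" tag key is illustrative):

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

def handler(event, context):
    # EC2: describe_instances can filter on tags directly.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag-key", "Values": ["Schedule"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)

    # RDS: tags have to be read per instance via its ARN.
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=db["DBInstanceArn"]
        )["TagList"]
        if db["DBInstanceStatus"] == "available" and any(
            t["Key"] == "Schedule" for t in tags
        ):
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```

Per the description above, the actual solution also stores schedule definitions in DynamoDB and triggers via CloudWatch, which this sketch omits.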
If you don't like their prices, why don't you just switch to another provider? Plenty of good clouds. And even more dedicated / VPS servers, if you want truly crazy savings.
The downside of shutting down resources is that there may be none available in that sizing or at that price when you want to start them back up. I suppose they also don't want to risk being liable for that.
https://github.com/aramalipoor/aws-cost-saver