
Manage AWS costs on non-production environments - askaquestion01
https://microtica.com/aws-cost/
======
boredgamer2
Woah, no! Stop. Their homepage claims _"Grant us least privilege permissions
to your AWS account(s)."_

But then their docs [1] give you this admin policy to use.

    
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "*",
                "Resource": "*"
            }
        ]
    }

[1]
[https://microtica.atlassian.net/servicedesk/customer/portal/...](https://microtica.atlassian.net/servicedesk/customer/portal/1/article/109215872?src=-140889264)

~~~
radedespodovski
Since Microtica can provision any kind of AWS service, the policy in the
documentation is deliberately left open so you can try things out quickly;
once you figure out what you need, the access can be reduced.

Ultimately, the user has complete control over what access they give to
Microtica.

Since this policy was primarily intended for the DevOps module, and the Cost
Optimizer needs only a subset of those permissions, we will update the
documentation to avoid the confusion.
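
For scheduling EC2 and RDS, a policy along these lines would be much closer
to least privilege. This is a sketch; the exact action list is illustrative,
not the final documented one, and it's printed as JSON from Python:

```python
# Illustrative scheduling-only policy (the action list is an assumption, not
# Microtica's official minimal policy). The Describe* actions don't support
# resource-level permissions, hence Resource "*" here.
import json

SCHEDULING_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "rds:DescribeDBInstances",
                "rds:StartDBInstance",
                "rds:StopDBInstance",
            ],
            "Resource": "*",
        }
    ],
}

if __name__ == "__main__":
    # Emit the policy document ready to paste into the IAM console.
    print(json.dumps(SCHEDULING_POLICY, indent=2))
```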

~~~
a012
No, please, just no. Don't give anybody outside of your org full admin
permissions. Publishing a bad example is harmful, and it also shows their
incompetence. At the very least they could put a giant red warning box there,
rather than expecting everybody to know not to do it.

~~~
scarface74
I avoid giving _myself_ admin permissions except when absolutely necessary. I
created a “read only role” with no permissions and then started adding
permissions to it as I ran into issues.

I log into our management account and switch to the read only role for our
prod account. If I have to switch to admin role I have the toolbar display as
red.

If I’m that paranoid about me making a mistake, why would I trust a third
party with those rights?

~~~
radedespodovski
Just realized that the CLI example in the docs has the right least-privilege
policy. Somehow the part with the full access was overlooked. We just updated
the documentation.

I completely agree with your approach; we also encourage our users to start
with base permissions and grant more only when necessary. Even better, to
grant access only to resources provisioned by our system. Since we
automatically tag all resources, this can easily be done using IAM policy
conditions. Control is always on the user's side.
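
Such a tag-scoped statement might look like this (the tag key
`microtica:managed` is a hypothetical name for illustration;
`ec2:ResourceTag` is the EC2 condition key that matches tags on the target
instance):

```python
# Sketch of restricting access to only the resources a tool has tagged.
# The tag key "microtica:managed" is hypothetical, used here for illustration.
import json

TAG_SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # Only instances carrying the tag can be started or stopped.
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/microtica:managed": "true"}
            },
        }
    ],
}

if __name__ == "__main__":
    print(json.dumps(TAG_SCOPED_POLICY, indent=2))
```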

------
MaxBarraclough
Perhaps there's some value in this offering, but it's not hard to configure
automatic scheduled startup/shutdown of EC2 instances just using what EC2
offers. You can leverage AWS tagging to single out which instances are to be
started up each morning. Shutdown is easy, as any operating system can be
configured to auto-shutdown in the evening.

This isn't as straightforward as it might be - it takes a blog-post to explain
it: [https://schen1628.wordpress.com/2014/02/04/auto-start-and-
st...](https://schen1628.wordpress.com/2014/02/04/auto-start-and-stop-your-
ec2-instances/) , [https://www.thinkforwardmedia.com/automating-
ec2-instances-a...](https://www.thinkforwardmedia.com/automating-
ec2-instances-aws-lambda/)

My own notes and a relevant AWS Lambda script, in Python:
[https://pinboard.in/u:MaxBarraclough/b:f0b059256f32](https://pinboard.in/u:MaxBarraclough/b:f0b059256f32)
,
[https://gist.github.com/MaxBarraclough/211e569cb57b46c0ddb48...](https://gist.github.com/MaxBarraclough/211e569cb57b46c0ddb481f6adcefdd1)
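
A minimal sketch of such a tag-driven morning start-up Lambda, triggered by a
CloudWatch Events cron rule (the `AutoStart` tag convention and helper names
here are my own, not from the linked posts):

```python
# Morning start-up Lambda sketch: start every stopped instance carrying the
# AutoStart=true tag. The tag key/value are an illustrative convention.

TAG_KEY, TAG_VALUE = "AutoStart", "true"

def stopped_ids(reservations):
    """Pure helper: IDs of stopped instances in a describe_instances response."""
    return [
        inst["InstanceId"]
        for res in reservations
        for inst in res.get("Instances", [])
        if inst.get("State", {}).get("Name") == "stopped"
    ]

def lambda_handler(event, context):
    import boto3  # local import keeps the helper above dependency-free

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]}]
    )
    ids = stopped_ids(resp["Reservations"])
    if ids:
        ec2.start_instances(InstanceIds=ids)
    return {"started": ids}
```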

~~~
radedespodovski
If you have only a couple of accounts and infrastructure units, you could
probably set up a similar configuration that would do the job. We at
Microtica also started with that approach, but as we expanded to more AWS
accounts and multiple infrastructure units it became hard to manage all of it.

Imagine having to use different tags as selectors, with different schedule
times for each. You would have to manage each of them through CloudFormation,
or even log into the EC2 instance manually to update the script.

In addition, the solution provides a few more benefits:

1. Auto-tagging
2. Native integration with infrastructure provisioned with Microtica
3. Cost and Saving dashboard
4. No need to manage your own infrastructure to handle start/stop operations
for EC2 and RDS
5. Schedule notifications

~~~
scarface74
Why would I trust any company that is reckless enough to recommend giving
their service full access to my account - with _any_ access to my account?

------
nostrebored
While I think this is a cool idea, I'm really not sure how it works after
reading through the features page and the landing page. I can't tell which
resources it spins up and down. The listed supported components like DynamoDB
and S3 seem like strange choices here -- I'm guessing S3 shifts objects to
Glacier and restores them and DynamoDB changes provisioned capacity, but I've
walked away with more questions than answers.

If anyone here is on the team, can you provide a bit more detail on what's
happening behind the scenes?

Maybe I am misunderstanding the product and it's really an orchestration tool
around when an environment is spun up and down, but then I'm having a hard
time seeing the benefit over CloudWatch Events -> Lambda -> Pipeline.

I discuss issues like this with customers regularly, and I'd really love to be
able to introduce this into conversations, but I need a bit more to go on :)

~~~
radedespodovski
Hi, I am Rade from Microtica.

The benefit of Microtica Cost Optimizer over CloudWatch+Lambda shows up when
you have to manage many different schedules. With a CloudWatch+Lambda
solution you also have to manage the scheduling infrastructure yourself; no
matter how simple that infrastructure is, this can become an overwhelming,
repetitive task.

We added additional features like:

1. Automated resource tagging
2. Enable/disable schedule
3. Manual start/stop of resources assigned to a particular scheduler
4. Cost and saving dashboard
5. Notifications

Cost Optimizer is just one feature of the Microtica product. Microtica is a
DevOps automation tool that you can use to provision complete cloud
infrastructure and deliver applications on Kubernetes using CI/CD.

The components you are referring to are ready-made pieces of infrastructure
that you can pick up, combine, and provision in the cloud.

Behind the scenes, we register a schedule in our system. When a start/stop
event fires, we first obtain temporary credentials (assume role) for the AWS
account where the scheduled resources reside, then call AWS APIs to perform
the scheduling operations on EC2 and RDS.
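
In boto3 terms, that flow might look roughly like this. This is a sketch of
the described mechanism, not the actual implementation; the role name,
session name, and tag key are all my assumptions:

```python
# Cross-account schedule sketch: assume a role in the target account, then
# stop the instances tagged for the schedule. "CostOptimizerRole", the
# session name, and the tag key/value are hypothetical placeholders.

def scheduled_ids(reservations, wanted_state="running"):
    """Pure helper: instance IDs in `wanted_state` from a describe_instances response."""
    return [
        inst["InstanceId"]
        for res in reservations
        for inst in res.get("Instances", [])
        if inst.get("State", {}).get("Name") == wanted_state
    ]

def stop_scheduled_instances(account_id, tag_key="schedule-id", tag_value="nightly"):
    import boto3  # local import keeps the pure helper above dependency-free

    # Temporary credentials for the customer account (assume role).
    creds = boto3.client("sts").assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/CostOptimizerRole",
        RoleSessionName="saving-schedule",
    )["Credentials"]

    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    resp = ec2.describe_instances(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )
    ids = scheduled_ids(resp["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

The start path would be symmetric: filter for stopped instances and call
`start_instances` instead.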

~~~
nostrebored
Thanks for the explanation!

------
scarface74
We just went through an exercise on saving costs by shutting down resources in
our development environments. I looked over our bill and found the following:

\- most of our costs for Aurora were data. Shutting down the database didn't
save us much of anything. We were already running some of the smallest
instances. Aurora Serverless wasn't an option; it was missing features we
needed.

\- We only had a few small T* pet servers in DEV that we could shut down.

\- Most of our greenfield projects are either Lambda (no cost when you aren't
using it, very cheap when you are) or Fargate (Docker). We could iterate
through all of our clusters and set them to 0 at night and spin them back up
as needed, I guess.

\- You can't just shut down your ElasticSearch cluster.

\- Do you really want to deprovision load balancers and your NAT gateway?

Did I mention that we have an entire team that works opposite hours than we
do?


------
xmdx
A problem I see with this in big teams is people working different hours who
want to use an environment in the middle of the night.

You don't even need a big team, actually; it just takes one person who
decides they want to do some work during a 'sleep cycle'. Pretty common where
I work, you never know when someone might be interested in picking up some
work.

~~~
draugadrotten
For some lab environments, we've simply configured automatic shutdown after
hours if and only if nobody is working. If the system is in use, it is not
shut down. Then the system is started again on demand. Spin up time is so
quick that there is no value in pre-starting.
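
The check might look roughly like this, run from cron on the machine itself.
It's a sketch; the 19:00-07:00 window and the use of `who` to detect active
logins are assumptions about the setup, not a universal recipe:

```python
# Idle-aware shutdown sketch: power off only when it's after hours AND nobody
# is logged in. The hours and the `who`-based idle check are assumptions.
import subprocess
from datetime import datetime

def after_hours(hour, start=19, end=7):
    """True if `hour` falls in the overnight window [start, 24) or [0, end)."""
    return hour >= start or hour < end

def active_sessions():
    # `who` prints one line per logged-in user; empty output means idle.
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines() if line.strip()]

def maybe_shutdown(now=None):
    # A cron entry would call this every 15 minutes or so.
    now = now or datetime.now()
    if after_hours(now.hour) and not active_sessions():
        subprocess.run(["sudo", "shutdown", "-h", "now"])
```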

This is working well. A few guys used to complain about having to wait for
that first boot, but in reality it's almost always the same guy, first in the
office, who gets it started, so the complaints quickly went away.

~~~
xmdx
Yeah this makes sense, better than a system where you have to enter dead hours

------
ykevinator
Why flagged? This is a good service. There is also Skeddly, which does this;
we rolled our own because we needed auto-scaling and sleep, and then we
needed load-based auto-scaling.

------
yhvh
How is this different to aws instance scheduler?

~~~
radedespodovski
Hi,

The scheduling part is close to the AWS Instance Scheduler solution. The
added value of Microtica Saving Schedules is that it's much easier to manage
than AWS Instance Scheduler, where you have to update CloudFormation whenever
you need to change the configuration. That becomes hard to manage, especially
when you have multiple accounts and multiple resource groups with different
schedule configs.

On-demand start/stop of resources associated with a schedule is also hard to
deal with when the solution is built on CloudFormation.

We also added additional features like:

1. Automated resource tagging
2. Enable/disable schedule
3. Cost and saving dashboard
4. Notifications

------
scarface74
How did spam end up on the front page of HN?

~~~
MaxBarraclough
How is it spam? Do you think their product adds no value? Posting startups
isn't just permitted, it's part of the point of HackerNews -
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
scarface74
The link was to an uninformative marketing page. I couldn’t tell for the life
of me what it actually does or how it works and I know the ins and outs of AWS
pretty well.

~~~
radedespodovski
We are a startup trying to get more and more feedback and provide a solution
that helps the developer community and businesses. I don't understand how
sharing a feature from a product can be considered spam.

You can find more information about our cost optimizer in this blog post:
[https://microtica.com/reduce-aws-costs-on-non-production-
env...](https://microtica.com/reduce-aws-costs-on-non-production-
environments/)

~~~
scarface74
This page is at least somewhat informative. But it wasn't until I saw the
more granular permissions, after you edited the policy, that I got some idea
of what you are doing.

If you are aiming at a technical audience, the first thing we all did was look
at what permissions were required because our first concern was security.
Especially if we are handing over cross account permissions.

The second thing I want to know is how you are doing it.

But honestly, if you have a cloud native implementation like we do, just
controlling EC2 and RDS only scratches the surface. We have DynamoDB, Fargate
services, ElasticSearch clusters, Redshift clusters, etc.

~~~
radedespodovski
Actually, we launched cost optimization when we realized most of our users
needed a simple, out-of-the-box solution that would save on infrastructure
provisioned by Microtica; they mostly spend on EC2 and RDS compute resources.
Because it is fully integrated, they can start saving with a couple of
clicks, with no need for additional tools or custom solutions. But we
realized it could work for any infrastructure.

We wanted to start somewhere. There are other aspects of cloud cost
optimization, like rightsizing, unused resources, and use of spot and
reserved instances, that are as important as instance scheduling, as well as
the additional services you've mentioned, which we are planning to support in
the future.

The solution is fully serverless. The scheduling, as you are already aware,
is based on triggers that call certain AWS APIs to manage the state of the
resources. On top of that, we added support for enabling/disabling a
schedule, on-demand start/stop of resources in case somebody needs the
environment immediately, auto-tagging, cost per schedule, etc., all
integrated in one solution.

There are similar tools on the market that can handle more complex saving
workflows, but they are also more complicated to manage and pretty expensive.

If you are interested in how it works, feel free to try it out; it's free.
Any feedback is welcome.

