Show HN: Komiser – Detect potential AWS cost savings (github.com)
166 points by mlabouardy 8 months ago | 42 comments



The best way to reduce costs as a startup is to not use them in the first place. I know it's not popular to say, but both AWS and GCS are too expensive for most startups. Better to just use a decent local cloud provider. You can always move to GCS/AWS when the time is right.


It's expensive if you misuse it. Those cloud providers offer services and computing models, such as Serverless and Containers, that cannot be found on low-cost providers.


If you are a startup with non-Facebook growth (i.e. most startups), what you want to spend money on is functionality, not effort spent working around the technical constraints of restricted infrastructure.

Basically, AWS (or any cloud provider) is great for young startups with no money, medium-sized companies that can optimize to some degree, and big companies too large to efficiently organize the logistics of buying servers and instrumenting their infrastructure. That leaves the mid-sized startups, which have an interest in buying their own infra, but migrating away from a cloud provider also has a cost of its own.

It's not as clear cut as I am stating it; you also have cheaper alternatives to AWS, Azure and GCE. You can go for OVH or DigitalOcean: fewer possibilities and a smaller technical offering, but cheaper prices that can compete with hosting things yourself in the mid-term.


I agree, most young startups go with a PaaS (Platform as a Service) like Heroku, OVH or even Elastic Beanstalk. Even we started with that, and then migrated to IaaS and FaaS.


How about the savings of moving regular, anticipated loads to traditional colo or managed infrastructure? That type of usage is much, much cheaper (CapEx+OpEx) than shoving it all on AWS for million$ per month like Apple or Netflix (who can afford it).

Cloud *aaS is best suited to several major use-cases:

0. Experimental projects of limited duration

1. "Peaking" overflow capacity for burst of transactions or daily sinus maximum load (/.-resistance)

2. Batch jobs (ephemeral computing)

3. Disaster Recovery/Business Continuity (DR/BCP)

4. Informal IT to bypass bureaucracy

Source: Hi, I'm a former client-facing Fortune 200 AWS consultant from back in the day. I don't own Amazon stock or have any current conflicts-of-interest.


I found the PaaS model a great fit for a young startup; however, when you want to scale your business, you will have trouble scaling your app with a traditional PaaS (no monitoring, lack of flexibility...).


Another way to cut cloud costs is to get the cheapest resources to begin with. I've been working on https://cloudoptimizer.io, which is a free service to find the cheapest CPU/GPU/memory resources in the cloud.


I agree; however, there are more things you can do to reduce cost, like deleting snapshots, unused disks, unassigned Elastic IPs... that's the purpose of this tool :)
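
(For anyone curious what those checks look like, here is a rough Python/boto3 sketch of the same idea. It is not Komiser's actual code (Komiser is written in Go), and the region is just an example.)

    # Rough illustration of "unused disks" and "unassigned Elastic IPs";
    # not Komiser's implementation.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # EBS volumes not attached to any instance
    unused_volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]

    # Elastic IPs allocated but not associated with anything
    addresses = ec2.describe_addresses()["Addresses"]
    unassigned_ips = [a for a in addresses if "AssociationId" not in a]

    print(len(unused_volumes), "unattached EBS volumes")
    print(len(unassigned_ips), "unassigned Elastic IPs")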


It would be good to put a few examples of the recommendations the tool makes in the README. Right now it looks like a cost explorer.


I'm working on it. I will write some posts on Medium and my blog to describe how to use the tool to reduce costs.


I'd love to hear some feedback on how to improve Komiser.


I used to do this in a spreadsheet for an old client. The spreadsheet combined two sources into a single flat table:

1. AWS resource list, tags and spend.

2. Datadog utilisation.

From this sheet a derivative sheet was created that had functionality on it, so that the data sheet could be regularly updated. The sheet was sorted in order of cost, and a cumulative sum totalled up all the spend. The column next to that gave a cumulative percentage of total spend so you could quickly see how spend was distributed.

There was a set of indicator columns at the end of the table, calculated by formula, which showed 0 or 1 depending on whether the indicator applied. The indicators were things like:

1. Can down-grade instance.

2. Can kill instance.

3. Consider for contract.

4. Cheaper on Azure.

5. Can be on-demand.

6. Is unreliable.

etc.

As we thought of new things I'd add them to the spreadsheet. This way of working was very effective.
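
Roughly the same flat table and indicator columns could be expressed like this (a pandas sketch with invented column names and thresholds, not the original spreadsheet formulas):

    import pandas as pd

    # 1. AWS resource list, tags and spend; 2. Datadog utilisation.
    aws = pd.read_csv("aws_resources.csv")      # resource_id, service, tags, monthly_cost
    datadog = pd.read_csv("datadog_util.csv")   # resource_id, avg_cpu, avg_mem

    df = aws.merge(datadog, on="resource_id", how="left")

    # Sort by cost; cumulative sum and percentage show how spend is distributed.
    df = df.sort_values("monthly_cost", ascending=False)
    df["cumulative_cost"] = df["monthly_cost"].cumsum()
    df["cumulative_pct"] = 100 * df["cumulative_cost"] / df["monthly_cost"].sum()

    # Indicator columns: 1 if the rule applies, 0 otherwise.
    df["can_downgrade"] = ((df["avg_cpu"] < 10) & (df["avg_mem"] < 30)).astype(int)
    df["can_kill"] = (df["avg_cpu"] < 1).astype(int)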


Have you been in touch with Amazon? When I worked at a company that used AWS, they had a couple support engineers come out to our office for a day or two to help us cut costs. They said their whole job was going around and helping people spend less on AWS. They might like a tool like this, and have some good ideas for you, too.


I didn't try to contact them yet; I'm working on some cool upcoming features, and I will also release GCP support this week. Once that's done, I will put more effort into the marketing part :)


How is this tool going to help me reduce cost better than AWS’s built in cost explorer and related tools? Not really seeing the added value, but I might be missing something.


AWS Cost Explorer gives you only the proportion of cost spent on each service you use, and it requires a deep understanding of AWS to get value out of it (like adding tags, buying reserved instances, etc.). This tool, however, can be used by everyone: it's user friendly, it gives you an overview of all the services you're using, shows the regions you're using on a map, and gives you recommendations. You can even deploy actions, like a Lambda function to delete unused disks or snapshots (this feature will be added in the upcoming days). Also, this tool supports multiple cloud providers, so with one tool you can analyze and detect potential cloud cost savings.
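
To make the "deploy actions" idea concrete, a cleanup Lambda for old snapshots could look roughly like the sketch below. This is a hypothetical example, not the upcoming feature itself; the 90-day threshold and the DRY_RUN flag are assumptions.

    from datetime import datetime, timedelta, timezone
    import boto3

    DRY_RUN = True                 # flip to False only after reviewing the candidates
    MAX_AGE = timedelta(days=90)   # assumed threshold, not a product default

    def handler(event, context):
        ec2 = boto3.client("ec2")
        cutoff = datetime.now(timezone.utc) - MAX_AGE
        # Only snapshots owned by this account (pagination ignored for brevity)
        snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
        old = [s for s in snapshots if s["StartTime"] < cutoff]
        for snap in old:
            if DRY_RUN:
                print("Would delete", snap["SnapshotId"], "from", snap["StartTime"])
            else:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
        return {"candidates": len(old)}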


If you are running Kubernetes, http://valence.net might be interesting.


Autoscaling is both simpler than this and more complex than this.

For predictive autoscaling, boring old-fashioned forecasting techniques appear to work fine and are very fast and very cheap.

For reactive autoscaling, boring old-fashioned control loops appear to work fine and are very fast and very cheap.

There's still no substitute for understanding your application's design.

If you tried to design and build a factory the way folks propose to rely on autoscalers and ML ("we'll ignore what's in the building and rely on a thin, unintelligible gas of floats!!"), you would be fired.
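
As a minimal illustration of how cheap "boring old-fashioned forecasting" can be, exponential smoothing over recent request rates is enough to pick capacity for the next interval. All numbers here (alpha, per-instance throughput, headroom) are invented:

    import math

    def forecast_next(rates, alpha=0.5):
        """Exponentially smoothed estimate of the next interval's request rate."""
        level = rates[0]
        for r in rates[1:]:
            level = alpha * r + (1 - alpha) * level
        return level

    def instances_needed(forecast_rps, rps_per_instance=200, headroom=1.3):
        """Capacity for the forecast plus a safety margin, at least one instance."""
        return max(1, math.ceil(forecast_rps * headroom / rps_per_instance))

    recent = [800, 950, 1100, 1250, 1400]   # requests/sec over the last intervals
    print(instances_needed(forecast_next(recent)))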


Yep. Autoscaling strategies are traditionally hand-coded, often based on the costs of not completing transactions or of the site going down, taken from a business impact analysis (BIA). Adding AI/ML would require a number of business rules, metrics, automation parameter limits and (a) fitness function(s).

Examples of hard-coded autoscaling strategies:

Shopping cart or ad delivery network -> anticipate load with extra capacity by scaling up instances before they're needed and keeping them until they are no longer beneficial.

Neighborhood social network:

- scale +1 instances after avg latency > X0 ms for Y0 minutes

- scale -1 after avg latency < X1 ms for Y1 minutes
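
Written out as the kind of "boring old-fashioned control loop" mentioned upthread, that last pair of rules is only a few lines. X0/Y0/X1/Y1 stay as parameters, and get_avg_latency_ms / set_instance_count are hypothetical hooks into your metrics and orchestration systems:

    import time

    def autoscale_loop(get_avg_latency_ms, set_instance_count, instances,
                       x0_ms, y0_min, x1_ms, y1_min, check_every_s=60):
        high_since = low_since = None
        while True:
            latency = get_avg_latency_ms()
            now = time.time()
            if latency > x0_ms:
                high_since = high_since or now
                low_since = None
                if now - high_since >= y0_min * 60:
                    instances += 1                     # scale +1
                    set_instance_count(instances)
                    high_since = None
            elif latency < x1_ms:
                low_since = low_since or now
                high_since = None
                if now - low_since >= y1_min * 60 and instances > 1:
                    instances -= 1                     # scale -1
                    set_instance_count(instances)
                    low_since = None
            else:
                high_since = low_since = None
            time.sleep(check_every_s)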


> Autoscaling strategies are often traditionally hand-coded to depend on the costs associated of not completing transactions or site going down from a business impact analysis (BIA).

Adding on this, my current thinking is that it would be possible to better surface this tradeoff by modeling autoscaling as an inventory problem. There's a stockout cost and a holding cost; for a given probability of hitting a cold start and for a given cold start lag, it should be possible to compute the optimal instances on-hand.
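
A concrete version of that framing is the classic newsvendor formula: with a stockout (cold start) cost and a holding (idle instance) cost, keep enough instances warm to cover the critical-ratio quantile of the demand distribution. The costs and the Poisson demand below are invented for illustration:

    from scipy.stats import poisson

    stockout_cost = 5.00   # assumed cost of a request hitting a cold start
    holding_cost = 0.10    # assumed cost of keeping one warm instance per period
    critical_ratio = stockout_cost / (stockout_cost + holding_cost)

    expected_concurrency = 40                # assumed mean simultaneous requests
    warm_instances = poisson.ppf(critical_ratio, expected_concurrency)
    # smallest n with P(demand <= n) >= critical ratio
    print(warm_instances)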

Then there are, as you point out, many special business rules. Where I work we have a pool system for testing environments. It works reasonably well until a particular team, sitting upstream of approximately half the company, makes a release. Then suddenly everyone's pipelines kick off simultaneously. So now they have a system which watches for signs of impending release and pre-emptively scales up.

Autoscaling should not be your first line or final line of defence. It is a powerful tool, not a magic wand.


Autoscaling strategies can help you reduce your cost; however, people often forget that small things like unused disks, snapshots and NAT gateways can cost you a fortune.


https://valence.net/ works, but it looks like HTTP gets the default nginx page.


I'm pleased to announce the new release of Komiser 2.1.0 with beta support for GCP. You can now detect potential cost savings on both AWS and GCP in one tool. https://github.com/mlabouardy/komiser


Thanks guys for the feedback. GCP (Google Cloud Platform) support will be released this week, so with one tool you can reduce your cost and optimize your cloud environment's security :)


Hey, this looks really cool! I've been working on a very similar product: https://cloudcosts.io/. It's hosted only, with no open source version of the software, and much simpler.

It has a few users and I haven't figured out yet if it could be monetized. The basic thing that I wanted was a daily email of my cost changes from AWS.

How have you found trying to integrate Google Cloud and AWS billing? That's where the real value in this type of product is to me.


Thanks for the feedback; yes, I will be releasing GCP support this week :)


Nice - we just got a demo/beta from https://www.prosperops.com (disclaimer: I worked with the ProsperOps folks at a previous company). They're focused on helping you save money by managing your RIs. It's a pretty slick setup. The best way for me to describe it is that they're basically a Wealthfront-style roboadvisor for RIs.


Cool. Komiser is about optimizing the cost of all AWS services, not only Reserved Instances :)


It depends on how well/dynamically your application is designed, but if some parts are fairly static (let's say, the DB side), consider reserved instances; you can get a 20 to 40% reduction in cost for them.

True, it requires committing for 1 to 3 years, but it can reduce your bill by a significant amount.


The README contains this dead link:

https://s3.amazonaws.com/komiser/aws/policy.json

Also, wasn't AWS deprecating links like this?


It's at the top of the repo itself.


thanks for pointing this out


At the end of Sept 2020.


I fixed it.


The Youtube video linked to in your GitHub readme doesn't seem to exist.


Hmm, are you sure? I just tried it and it's working. Anyway, here is the link: https://www.youtube.com/watch?v=DDWf2KnvgE8


Need this but for Eth smart contracts


The readme contains path-style S3 links, which need to be changed https://news.ycombinator.com/item?id=19821406
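
(Concretely, the change is from the path-style form, with the bucket in the path, to the virtual-hosted form, with the bucket in the hostname. For the policy file above that would roughly be:)

    https://s3.amazonaws.com/komiser/aws/policy.json      # path-style
    https://komiser.s3.amazonaws.com/aws/policy.json      # virtual-hosted-style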


They're supported until September 2020; I don't think they need to be changed now.


Some engineers at Mozilla said the same thing about their certs that expired yesterday.


Haha, that's why I updated the URL.


I fixed the link, thanks for the feedback



