
Show HN: Komiser – Detect potential AWS cost savings - mlabouardy
https://github.com/mlabouardy/komiser
======
holoduke
The best way to reduce cloud costs as a startup is to not use the big
clouds in the first place. I know it's not popular to say, but both AWS
and GCP are too expensive for most startups. Better to just use a decent
local cloud provider. You can always move to GCP/AWS when the time is
right.

~~~
mlabouardy
It's expensive if you misuse it. Those cloud providers offer services and
computing models, such as serverless and containers, that cannot be found
on low-cost providers.

~~~
kakwa_
If you are a startup with non-Facebook growth (i.e. most startups), what
you want to spend money on is features, not effort spent working around
the constraints of restricted infrastructure.

Basically, AWS (or any cloud provider) is great for young startups with no
money, for medium-sized companies who can optimize to some degree, and for
big companies too large to efficiently organize the logistics of buying
servers and instrumenting infrastructure. That leaves mid-sized startups,
which do have an interest in buying their own infra, though migrating away
from a cloud provider also has a cost of its own.

It's not as clear cut as I am stating it; there are also cheaper
alternatives to AWS, Azure and GCE. You can go for OVH or DigitalOcean:
fewer possibilities and a smaller technical offering, but prices that can
compete with hosting things yourself in the mid-term.

~~~
mlabouardy
I agree, most young startups go with a PaaS (Platform as a Service) like
Heroku, OVH or even Elastic Beanstalk. We started with that too, and then
migrated to IaaS and FaaS.

------
bayareanative
How about the savings of moving regular, anticipated loads to traditional
colo or managed infrastructure? That type of usage is much, much cheaper
(CapEx+OpEx) than shoving it all on AWS for million$ per month like Apple
or Netflix (who can afford it).

Cloud *aaS is best suited to several major use-cases:

0\. Experimental projects of limited duration

1\. "Peaking" overflow capacity for bursts of transactions or the daily
sinusoidal maximum load (/.-resistance)

2\. Batch jobs (ephemeral computing)

3\. Disaster Recovery/Business Continuity (DR/BCP)

4\. Informal IT to bypass bureaucracy

Source: Hi, I'm a former client-facing Fortune 200 AWS consultant from back in
the day. I don't own Amazon stock or have any current conflicts-of-interest.

~~~
mlabouardy
I found the PaaS model a great fit for young startups; however, when you
want to scale your business, you will have trouble scaling your app with a
traditional PaaS (no monitoring, lack of flexibility...)

------
freediver
Another way to cut the cloud cost is to get the cheapest resources to begin
with. I've been working on
[https://cloudoptimizer.io](https://cloudoptimizer.io) which is a free service
to find the cheapest CPU/GPU/Memory resources in the cloud.

~~~
mlabouardy
I agree; however, there are more things you can do to reduce cost, like
deleting snapshots, unused disks, unassigned Elastic IPs... that's the
purpose of this tool :)

------
genie514
It would be good to put a few examples of recommendations the tool makes
in the README. Right now it looks like a cost explorer.

~~~
mlabouardy
I'm working on it; I will write some posts on Medium and my blog
describing how to use the tool to reduce costs.

------
mlabouardy
I'd love to hear some feedback on how to improve Komiser.

~~~
ken
Have you been in touch with Amazon? When I worked at a company that used AWS,
they had a couple support engineers come out to our office for a day or two to
help us cut costs. They said their whole job was going around and helping
people spend less on AWS. They might like a tool like this, and have some good
ideas for you, too.

~~~
mlabouardy
I haven't tried to contact them yet; I'm working on some upcoming cool
features, and I will also release GCP support this week. Once that's done,
I will put more effort into marketing :)

------
jmacd
If you are running Kubernetes [http://valence.net](http://valence.net) might
be interesting

~~~
jacques_chester
Autoscaling is both simpler than this and more complex than this.

For predictive autoscaling, boring old-fashioned forecasting techniques appear
to work fine and are very fast and very cheap.

For reactive autoscaling, boring old-fashioned control loops appear to work
fine and are very fast and very cheap.
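
As a sketch of how boring that reactive control loop can be: the rule
below is roughly the proportional step that Kubernetes' Horizontal Pod
Autoscaler uses (the replica counts and CPU target are illustrative):

```python
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float) -> int:
    """One proportional-control step: scale the replica count by the
    ratio of the observed metric to its target."""
    return math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)

# 4 replicas running at 90% average CPU against a 60% target -> 6 replicas
print(desired_replicas(4, 90, 60))
```

Run on every tick, a loop like this converges the observed metric toward
the target with no model of the application at all.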

There's still no substitute for understanding your application's design.

If you tried to design and build a factory the way folks propose to rely on
autoscalers and ML ("we'll ignore what's in the building and rely on a thin,
unintelligible gas of floats!!"), you would be fired.

~~~
bayareanative
Yep. Autoscaling strategies are traditionally hand-coded, based on the
costs, taken from a business impact analysis (BIA), of not completing
transactions or of the site going down. To add AI/ML you would need a
number of business rules, metrics, automation parameter limits and (a)
fitness function(s) in order to use it.

Examples of hard-coded autoscaling strategies:

Shopping cart or ad delivery network -> anticipate load with extra
capacity by scaling up instances before they're needed, keeping them
until they're no longer beneficial.

Neighborhood social network:

\- scale +1 instances after avg latency > X0 ms for Y0 minutes

\- scale -1 after avg latency < X1 ms for Y1 minutes
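
The second rule pair can be sketched in a few lines; the concrete X0/Y0
and X1/Y1 thresholds below are made up for illustration:

```python
# Illustrative thresholds: the X0/Y0 and X1/Y1 values are invented.
SCALE_UP_LATENCY_MS = 200     # X0: latency above this is "too slow"
SCALE_UP_WINDOW_MIN = 5       # Y0: ...sustained for this many minutes
SCALE_DOWN_LATENCY_MS = 80    # X1: latency below this means over-provisioned
SCALE_DOWN_WINDOW_MIN = 15    # Y1: ...sustained for this many minutes

def scaling_decision(samples, now_min):
    """samples: (minute, avg_latency_ms) pairs. Returns +1, -1 or 0."""
    up = [lat for t, lat in samples if t > now_min - SCALE_UP_WINDOW_MIN]
    down = [lat for t, lat in samples if t > now_min - SCALE_DOWN_WINDOW_MIN]
    if up and all(lat > SCALE_UP_LATENCY_MS for lat in up):
        return +1   # sustained high latency: add an instance
    if down and all(lat < SCALE_DOWN_LATENCY_MS for lat in down):
        return -1   # sustained low latency: remove an instance
    return 0
```

The asymmetric windows (scale up fast, scale down slowly) are the usual
guard against flapping.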

~~~
jacques_chester
> _Autoscaling strategies are often traditionally hand-coded to depend on the
> costs associated of not completing transactions or site going down from a
> business impact analysis (BIA)._

Adding on to this, my current thinking is that it would be possible to better
surface this tradeoff by modeling autoscaling as an inventory problem. There's
a stockout cost and a holding cost; for a given probability of hitting a cold
start and for a given cold start lag, it should be possible to compute the
optimal instances on-hand.
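
A minimal sketch of that computation, using the classic newsvendor
critical ratio and assuming a normal demand forecast (all the costs and
distribution parameters below are invented for illustration):

```python
import math
from statistics import NormalDist

def optimal_instances(stockout_cost: float, holding_cost: float,
                      demand_mean: float, demand_std: float) -> int:
    """Newsvendor solution: hold instances up to the critical-ratio
    quantile of the (assumed normal) demand forecast."""
    critical_ratio = stockout_cost / (stockout_cost + holding_cost)
    quantile = NormalDist(demand_mean, demand_std).inv_cdf(critical_ratio)
    return math.ceil(quantile)

# If a cold start costs 10x what holding an idle instance costs,
# provision to the 10/11 ~ 0.91 quantile of a N(20, 5) demand forecast.
print(optimal_instances(10.0, 1.0, 20.0, 5.0))
```

The cold-start probability and lag from the comment map onto the stockout
cost; the price of idle capacity is the holding cost.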

Then there are, as you point out, many special business rules. Where I work we
have a pool system for testing environments. It works reasonably well until a
particular team, sitting upstream of approximately half the company, makes a
release. Then suddenly everyone's pipelines kick off simultaneously. So now
they have a system which watches for signs of impending release and pre-
emptively scales up.

Autoscaling should not be your first line or final line of defence. It is a
powerful tool, not a magic wand.

------
mlabouardy
I'm pleased to announce the new release of Komiser 2.1.0 with beta
support for GCP. You can now detect potential cost savings on both AWS
and GCP in one tool.
[https://github.com/mlabouardy/komiser](https://github.com/mlabouardy/komiser)

------
mlabouardy
Thanks, everyone, for the feedback. GCP (Google Cloud Platform) support
will be released this week, so with one tool you can reduce your costs
and optimize your cloud environment's security :)

------
kodebrew
Hey, this looks really cool! I've been working on a very similar product:
[https://cloudcosts.io/](https://cloudcosts.io/). It's hosted-only, with
no open-source version, and much simpler.

It has a few users and I haven't figured out yet if it could be monetized. The
basic thing that I wanted was a daily email of my cost changes from AWS.

How have you found trying to integrate Google Cloud and AWS billing? That's
where the real value in this type of product is to me.

~~~
mlabouardy
Thanks for the feedback; yes, I will be releasing GCP support this week :)

------
pandemicsyn
Nice - we just got a demo/beta from
[https://www.prosperops.com](https://www.prosperops.com) (disclaimer: I
worked with the ProsperOps folks at a previous company). They're focused
on helping you save $ by managing your RIs. It's a pretty slick setup.
The best way for me to describe it is that they're basically a
Wealthfront-style robo-advisor for RIs.

~~~
mlabouardy
Cool. Komiser is about optimizing the cost of all AWS services, not only
Reserved Instances :)

------
kakwa_
It depends on how well/dynamically your application is designed, but if
some parts are fairly static (say, the DB side), consider Reserved
Instances: you can get a 20 to 40% cost reduction there.

True, it requires committing for 1 to 3 years, but it can reduce your
bill by a significant amount.
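
As a back-of-the-envelope sketch (the hourly prices below are
hypothetical; actual RI discounts vary by instance type, term and payment
option):

```python
# Hypothetical hourly prices for the same instance class.
ON_DEMAND_HOURLY = 0.096  # on-demand price, $/hour
RESERVED_HOURLY = 0.060   # with a 1-year reservation, $/hour

HOURS_PER_YEAR = 24 * 365
on_demand_yearly = ON_DEMAND_HOURLY * HOURS_PER_YEAR
reserved_yearly = RESERVED_HOURLY * HOURS_PER_YEAR
savings_pct = 100 * (1 - reserved_yearly / on_demand_yearly)
print(f"${on_demand_yearly - reserved_yearly:.0f}/year saved "
      f"({savings_pct:.1f}%)")
```

The math only works out if the instance really runs around the clock,
which is why it fits static workloads like a database tier.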

------
xtracto
The README links to this dead link:

[https://s3.amazonaws.com/komiser/aws/policy.json](https://s3.amazonaws.com/komiser/aws/policy.json)

Also, wasn't AWS deprecating links like this?

~~~
babo
It's at the top of the repo itself.

~~~
mlabouardy
thanks for pointing this out

------
adar
The Youtube video linked to in your GitHub readme doesn't seem to exist.

~~~
mlabouardy
Hmm, are you sure? I just tried it and it's working. Anyway, here is the link:
[https://www.youtube.com/watch?v=DDWf2KnvgE8](https://www.youtube.com/watch?v=DDWf2KnvgE8)

------
ryanmarsh
Need this but for Eth smart contracts

------
judge2020
The readme contains path-style S3 links, which need to be changed
[https://news.ycombinator.com/item?id=19821406](https://news.ycombinator.com/item?id=19821406)
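
For reference, the rewrite from path-style to virtual-hosted-style is
mechanical; a sketch that ignores regional endpoints:

```python
from urllib.parse import urlparse

def to_virtual_hosted(path_style_url: str) -> str:
    """Rewrite a path-style S3 URL (s3.amazonaws.com/bucket/key) to the
    virtual-hosted style (bucket.s3.amazonaws.com/key)."""
    parts = urlparse(path_style_url)
    bucket, _, key = parts.path.lstrip("/").partition("/")
    return f"{parts.scheme}://{bucket}.{parts.netloc}/{key}"

print(to_virtual_hosted("https://s3.amazonaws.com/komiser/aws/policy.json"))
# https://komiser.s3.amazonaws.com/aws/policy.json
```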

~~~
ollyculverhouse
They're supported until September 2020, so I don't think they _need_ to be
changed right now.

~~~
sanjams
Some engineers at Mozilla said the same thing about their certs that
expired yesterday.

~~~
mlabouardy
Haha, that's why I updated the URL.

