
API Gateway seems quite expensive to me. I guess it has its use cases, but mine isn't one of them.

I run a free API, www.macvendors.com, that handles around 225 million requests per month. It's super simple and has no authentication or anything, but I'm also able to run it on a $20/month VPS. Looks like API Gateway would be $750+ plus data transfer. Bummer, because the ecosystem around it looks great. You certainly pay for it though!
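
For rough context, a back-of-the-envelope sketch in Go, assuming API Gateway's roughly $3.50 per million requests tier (actual pricing varies by region and volume):

    package main

    import "fmt"

    func main() {
        // Illustrative numbers only: ~225M requests/month at an assumed
        // $3.50 per million API Gateway requests (tiers and regions vary).
        const requestsPerMonth = 225_000_000
        const pricePerMillion = 3.50

        cost := float64(requestsPerMonth) / 1_000_000 * pricePerMillion
        fmt.Printf("Estimated API Gateway request cost: $%.2f/month\n", cost)
        // Roughly $787.50/month, before data transfer charges.
    }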




It's a steal for mediocre organizations where traditional development of even a simple proxy to some other AWS service would mean a couple of months of planning, a late launch with show-stopping bugs, and a couple of man-months eaten before reaching stability, ultimately serving only a few thousand requests before getting shelved.

In circumstances like those, I've estimated some systems' per-request costs at tens of dollars.


You should definitely look at Kong[1] - it's an open-source API management tool. They also have their own commercial product, Mashape[2], whose pricing is $250 plus 20% of API revenue[3], and one more option called Gelato[4] that provides Kong integration.

[1] https://github.com/Mashape/kong

[2] https://www.mashape.com/

[3] https://market.mashape.com/pricing/providers

[4] https://gelato.io/pricing


Curious to know your setup.


From a load perspective, if the request pattern is even, 225 million req/month is only about 85 req/s (assuming ~730 hrs/month). Any $5/mo VPS running even a very heavy and suboptimal web framework can handle that.
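
As a quick sanity check on that figure (730 hours/month is just the average month length):

    package main

    import "fmt"

    func main() {
        // 225 million requests spread evenly over an average month.
        const requestsPerMonth = 225_000_000.0
        const hoursPerMonth = 730.0 // ~365 days / 12 months * 24 hours

        reqPerSecond := requestsPerMonth / (hoursPerMonth * 3600)
        fmt.Printf("Average load: %.1f req/s\n", reqPerSecond) // ~85.6 req/s
    }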

It's likely spikier than that (i.e., peak/off-peak times), but certain server-to-server loads have a very consistent pattern. For example, at Userify[1] (SSH key management), servers check for updates every 90 seconds or so (every 10 seconds on premium plans or self-hosted), so the load pattern is literally a flat line: extremely predictable. We'll probably switch clients that can handle it over to websockets and maybe hashed ETags and cut that load into oblivion, but for now it works, and the code is simple, auditable, and extremely reliable, which is a very important factor in our case.
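
A minimal sketch of the hashed-ETag idea in Go, assuming a plain net/http handler (the handler and payload here are illustrative, not Userify's actual code):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "net/http"
    )

    // currentState stands in for whatever payload the agents poll for.
    var currentState = []byte(`{"keys":["example"]}`)

    func updatesHandler(w http.ResponseWriter, r *http.Request) {
        // Hash the payload so unchanged state always yields the same ETag.
        sum := sha256.Sum256(currentState)
        etag := `"` + hex.EncodeToString(sum[:]) + `"`

        if r.Header.Get("If-None-Match") == etag {
            // Client already has this version: no body, minimal work.
            w.WriteHeader(http.StatusNotModified)
            return
        }

        w.Header().Set("ETag", etag)
        w.Write(currentState)
    }

    func main() {
        http.HandleFunc("/updates", updatesHandler)
        http.ListenAndServe(":8080", nil)
    }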

To GP's point, spiraling costs are definitely a factor with most of Amazon's services, such as DynamoDB and especially Lambda. They are sometimes an effective use of dev time, especially at the beginning of a project (and when dealing with a mature platform like DynamoDB, maybe not so much Lambda), but you have to carefully consider the cost factor as you scale. For example, Lambda is often literally several orders of magnitude[2] more expensive than an equivalent ELB (i.e., it can be more than 100x more expensive). For small-scale or maintenance tasks that may not matter; for a heavy/core service, it definitely matters! So, as they taught us in AWS SA training, design for cost: use the more advanced services when it makes sense, but optimize across all axes, not just dev time.
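
A rough sketch of that back-of-the-envelope math in Go; the prices and the 512 MB / 100 ms figures are assumptions for illustration, since real Lambda billing depends on region, memory, duration, and free tiers:

    package main

    import "fmt"

    func main() {
        // All prices and parameters here are assumptions for illustration.
        const (
            requests         = 225_000_000.0 // per month
            lambdaPerMillion = 0.20          // $ per 1M invocations (request charge)
            lambdaGBSecond   = 0.0000166667  // $ per GB-second of compute
            memGB            = 0.5           // 512 MB function
            billedSeconds    = 0.1           // billed duration per request
            flatMonthlyCost  = 20.0          // e.g. a fixed small instance, $/month
        )

        usageBilled := requests/1_000_000*lambdaPerMillion +
            requests*memGB*billedSeconds*lambdaGBSecond
        fmt.Printf("usage-billed (Lambda-style): ~$%.0f/month\n", usageBilled)
        fmt.Printf("flat-rate instance:          ~$%.0f/month\n", flatMonthlyCost)
        // The usage-billed figure grows linearly with traffic, memory, and
        // duration; the flat-rate one doesn't, which is where the large
        // multiples at scale come from.
    }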

TL;DR: most cheap cloud instances can do 85 req/s.

1. https://userify.com

2. https://twitter.com/JamiesonBecker/status/802185522139582464


Hmm, interesting. I guess I haven't really thought of this from a cost point of view. It is nice knowing that I can have a Flask endpoint up on AWS Lambda + API Gateway that can scale using only my credit card. That peace of mind (once you write the code, you won't have to worry about scaling) seems like a pretty good deal and should carry a premium.

I'd be happy to have 85 req/s; hopefully by then I'd be charging a lot of money.


Exactly - and don't forget about things like Elastic Beanstalk as you grow. Those automate even the setup of things like the ELB/ALB, etc., trading a higher fixed monthly cost for a lower cost per transaction. Still, if your business model supports it and it ain't broke, just leave it where it is... AWS does give you many good choices.


I've just begun to look at ELB and I'm curious how it's able to scale my PHP app straight out of the box.

Having said that, it's a struggle to figure out how to get Laravel running on ELB.


Handling the spikes is the tricky part. People apparently like to run cron jobs on the hour, so a few times a day I need to handle up to around 1,200 req/s.


How did you handle the spikes?


Just made sure it was fast enough to handle it. I've benchmarked it, and it should be able to do close to 2,250 req/s. Beyond that I'll have to drop a load balancer in front, I guess.
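
For anyone curious how to measure that kind of number, a minimal Go benchmark sketch (in a _test.go file) using the standard library's httptest; the handler and route are made up, and this exercises the handler in-process only:

    package main

    import (
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // lookupHandler stands in for whatever the real endpoint does.
    func lookupHandler(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Example Vendor Inc"))
    }

    // go test -bench=Lookup reports ns/op; 1e9 / ns_per_op gives an upper
    // bound on req/s for the handler alone (network and TLS overhead will
    // lower the real number).
    func BenchmarkLookup(b *testing.B) {
        req := httptest.NewRequest(http.MethodGet, "/lookup/44:38:39:ff:ef:57", nil)
        for i := 0; i < b.N; i++ {
            rec := httptest.NewRecorder()
            lookupHandler(rec, req)
        }
    }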


It used to run on PHP/Redis, but when the load got too high I rewrote the server in Go. No database is needed, as the server downloads the files and loads them into memory.
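
Roughly the shape of that approach, as a sketch in Go; the file format, route, and field layout are assumptions, not the actual macvendors.com code:

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "os"
        "strings"
    )

    // vendors maps an OUI prefix (e.g. "44:38:39") to a vendor name.
    // It is loaded once at startup; no database is touched at query time.
    var vendors = map[string]string{}

    // loadVendors reads a "prefix<TAB>vendor" file into memory.
    // The file layout is an assumption for this sketch.
    func loadVendors(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            parts := strings.SplitN(sc.Text(), "\t", 2)
            if len(parts) == 2 {
                vendors[strings.ToUpper(parts[0])] = parts[1]
            }
        }
        return sc.Err()
    }

    func lookup(w http.ResponseWriter, r *http.Request) {
        // Expect /v1/<mac>; use the first three octets as the OUI.
        mac := strings.ToUpper(strings.TrimPrefix(r.URL.Path, "/v1/"))
        parts := strings.Split(mac, ":")
        if len(parts) < 3 {
            http.Error(w, "bad MAC", http.StatusBadRequest)
            return
        }
        vendor, ok := vendors[strings.Join(parts[:3], ":")]
        if !ok {
            http.NotFound(w, r)
            return
        }
        fmt.Fprintln(w, vendor)
    }

    func main() {
        if err := loadVendors("oui.tsv"); err != nil {
            panic(err)
        }
        http.HandleFunc("/v1/", lookup)
        http.ListenAndServe(":8080", nil)
    }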



