Under the new pricing model, charges are based on the number of instances you have running. This is largely dictated by the App Engine Scheduler, but for Python we will not be supporting concurrent requests until Python 2.7 is live. Because of this, and to allow all developers time to adjust to concurrent requests, we have reduced the price of all Frontend Instance Hours by 50% until November 20th.
Our app (Python runtime) is going from about $9 a month to a projected $270 a month, almost entirely due to frontend instance hours. I have to think that once Python 2.7 is live and concurrent users != concurrent frontends, this will get waaay more reasonable.
"""The Frontend Instance costs reflect a 50% price reduction active until November 20th, 2011. Learn more about the price reduction and ways to reduce your costs permanently."""
So come winter, I'm looking at > $500 a month... here's hoping concurrent requests on the python runtime will make a huge difference, otherwise, sheesh.
For the kinds of multiples in price increases being talked about, I'm hoping GAE also brings online some professional support people with a 24-hour turnaround time. The only time we've been upset with GAE was over the lousy support-via-web-form (a form whose instructions don't seem to address how we should communicate with the GAE team at all) when something wasn't going right on your end that directly cost us revenue.
I think the turnaround on a response to our support problem took >2 weeks. It was successfully resolved, but it was the kind of issue where we were looking at having to essentially restart our entire enterprise from scratch if it wasn't taken care of.
This is a bait-and-switch, plain and simple.
Are you doing the "any part of an hour is billed as an hour" thing that EC2/Azure do?
That would be extremely useful to understand the new pricing scheme better.
Am I guessing right that this means no concurrency at all within a given instance - so an instance chewing CPU time and an instance blocked waiting for IO to complete are treated no differently in this model?
Google is charging per datastore operation, and the last I saw, each entity returned was defined as one operation.
If you were building a social network or twitter clone using the techniques they recommend in this talk at GoogleIO: http://www.google.com/events/io/2009/sessions/BuildingScalab...
...you could easily pull back a hundred entities each time you pull someone's feed. Each entry is a separate entity. It'd cost you a buck for every pageview.
Is that an accurate assessment?
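Taking the parent comment's numbers at face value (100 entities per feed fetch, and one read operation per entity returned), the back-of-the-envelope math looks like the sketch below. The per-read price here is a made-up figure chosen to reproduce the "buck per pageview" claim, not Google's actual rate card:

```python
# Back-of-the-envelope datastore cost per pageview.
# PRICE_PER_READ is a hypothetical figure for illustration only,
# NOT Google's published rate.
PRICE_PER_READ = 0.01  # dollars per entity read (assumed)

def cost_per_pageview(entities_fetched, price_per_read=PRICE_PER_READ):
    """Each entity returned counts as one datastore read operation."""
    return entities_fetched * price_per_read

# A 100-entity feed fetch at the assumed rate comes to $1.00 per pageview.
print(cost_per_pageview(100))
```

Plug in whatever the real per-operation price turns out to be and the per-pageview cost scales linearly with entities fetched, which is why fetch counts matter so much under this model.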
Btw, a dashboard showing handlers sorted by average latency would be a big help! I can use Appstats to get close to this, but given that concurrent frontends are the dominant factor in pricing going forward, having it built in would be great.
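In the meantime, something close can be hacked together over your own request data. This is just a sketch: it assumes you've already collected (handler path, latency in ms) pairs somehow (your own logging, an Appstats export, etc.); the collection side is left out:

```python
from collections import defaultdict

def average_latency_by_handler(samples):
    """samples: iterable of (handler_path, latency_ms) pairs.
    Returns a list of (handler_path, mean_latency_ms) tuples,
    sorted slowest-first."""
    totals = defaultdict(lambda: [0.0, 0])  # path -> [sum_ms, count]
    for path, ms in samples:
        entry = totals[path]
        entry[0] += ms
        entry[1] += 1
    return sorted(((path, total / count) for path, (total, count) in totals.items()),
                  key=lambda item: item[1], reverse=True)

print(average_latency_by_handler([("/feed", 120), ("/feed", 80), ("/home", 30)]))
# [('/feed', 100.0), ('/home', 30.0)]
```

Not a dashboard, but sorted slowest-first it at least tells you which handlers to attack first.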
Which, in my opinion, is insane.
It's not clear to me, yet, what impact the new 2.7 concurrency features will have.
With the new changes, API time doesn't count, so all you're left with is pure CPU time. Since a single instance can service one request while waiting for another request's API calls to return, you're pretty much limited only by the CPU time you use.
I hope to be proved right, anyway.
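For what it's worth, the way you'd opt into concurrent requests on the announced 2.7 runtime looks like a `threadsafe` declaration in app.yaml. A minimal sketch (the app name and handler module are placeholders):

```yaml
# app.yaml sketch: `threadsafe: true` is what lets one instance
# serve multiple requests concurrently on the python27 runtime.
application: yourapp
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app   # 2.7 points at a WSGI application object, not a script file
```

If that works as advertised, an instance blocked on datastore or URL-fetch calls can pick up other requests instead of sitting idle, which is exactly the IO-bound case people are worried about above.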