I want to give good feedback. But I also want to be a little bit of a jerk. I'm pretty sure that I woke up this morning and produced 20MB of data before my coffee finished brewing. Here are some things that have more than 20MB of storage:
Ok...enough silliness. 20MB? If you have any indexes on your data, that's really more like 12MB of actual data. I'll admit that the 20MB is free and enough to get started with a design, but there's no way you'd be able to run any sort of MVP on that amount of space. Once web traffic logging is in place, a couple hundred visitors would fill that up in a few days. $10/month isn't that expensive for the full 10GB...actually a great price point...but the 20MB just looks silly.
If I were you all I'd up it to 1GB. 20MB just looks so...1992ish? Heck even 100MB would look more appealing.
Please don't take this as too harsh a criticism. I'm using the service and love it so far. I'm sure you have some spreadsheet somewhere that says the free DB should be 20MB, and I'm sure there is a very logical reason for picking that number. But it is a bad number.
In short, there's a free tier, and then it's $0.05/hour.
There isn't really a concept of what you get for that (process/memory/disk/io/bandwidth...).
At that hourly price, it's ~40% the cost of a small EC2 instance, ~170% the price of a micro instance and ~5% the cost of an extra large.
If you aim for approx 15 apps on 1 extra large, that gives 1GB of RAM per app and somewhere around 1/2 an EC2 compute unit each.
Redundancy can be added by replicating the apps to a fallback server but not routing any traffic to them when everything is ok. If you had 15 EL servers and distributed each app's fallback server randomly, having 1 EL server go down would mean your 14 remaining instances would be handling 16, instead of 15, apps - not unreasonable.
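The failover arithmetic above works out as a quick sketch (using the comment's assumed numbers: 15 extra-large servers, 15 apps each, fallbacks spread uniformly at random):

```python
# Failover load sketch. All numbers are the assumptions from the comment,
# not measured figures: 15 servers, 15 apps per server.
servers = 15
apps_per_server = 15
total_apps = servers * apps_per_server  # 225 apps across the fleet

# If one server dies, its 15 apps fail over, spread across the 14 survivors.
survivors = servers - 1
load_after_failure = apps_per_server + apps_per_server / survivors

print(f"{total_apps} apps total")
print(f"~{load_after_failure:.1f} apps per surviving server")  # ~16.1
```

So each survivor picks up about one extra app (16 instead of 15), which is what makes the "no-traffic fallback replica" scheme cheap to run.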
Drop the EC2 prices to reserved instances, and there's suddenly room to grow+profit.
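As a rough sanity check of the margin claim, here is the same arithmetic in code. The EC2 figures are assumptions on my part (circa-2011 Windows on-demand prices: micro $0.03/h, small $0.12/h, extra large $0.96/h), not quoted rates:

```python
# Margin sketch for $0.05/hour per app. EC2 prices below are assumed
# circa-2011 Windows on-demand rates, used only for illustration.
platform_price = 0.05  # $/hour charged per app
ec2 = {"micro": 0.03, "small": 0.12, "xlarge": 0.96}

for name, price in ec2.items():
    # Roughly matches the ~170% / ~40% / ~5% figures in the thread
    print(f"vs {name}: {platform_price / price:.0%} of the instance price")

# Pack ~15 apps onto one extra large:
apps = 15
revenue = apps * platform_price       # $0.75/hour
margin = revenue - ec2["xlarge"]      # negative vs on-demand xlarge
print(f"revenue ${revenue:.2f}/h, margin ${margin:.2f}/h before other costs")
```

Note the margin against an on-demand extra large comes out slightly negative, which is exactly why dropping to reserved-instance pricing is what opens up room to grow and profit.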
Without knowing what you are actually getting (EBS? LB? S3?) it's impossible to tell if this is a good or bad value.
Personally, deployment through git/mercurial isn't worth even a minute price premium over straight-up EC2. Heroku has autoscaling, Varnish and a reverse proxy, possibly at higher margins - which I think is a large part of what makes Heroku, well, Heroku.
I've updated the pricing page to reflect that IO, load balancing, traffic and repository hosting are included.
I think that part of being a PaaS is to continually improve the infrastructure for optimal performance for our users, so stuff like Varnish will be added to that list once it's implemented.
It's not just Git deploy. If I ran my own EC2 instances, I'd have to maintain Windows. They have some other value adds too (free small MSSQL dbs and basic CI in the dashboard, etc).
What performance can I expect from a single instance? I know that you're doing shared hosting on AWS instances, but I'm not sure which EC2 type, nor how many instances are being deployed to each. Without knowing that, it's hard to comment on whether five cents an hour is worth it or not. (Also, that's time the application is deployed, correct, not compute time?)
We're monitoring the performance of our beta users' apps and will figure out a reasonable estimate of what to expect before we begin charging. In terms of processing power we're currently aiming at performance roughly equivalent to 1/2 an EC2 compute unit.
The prices are for the time the application instance is live rather than compute time.
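Since billing is for time deployed rather than compute time, an idle app costs the same as a busy one. A quick back-of-envelope using the $0.05/hour figure from the thread (the usage patterns below are made-up examples):

```python
# Deployed-time billing: you pay for every hour the instance exists,
# idle or not. $0.05/hour is the price from the thread; 30-day month assumed.
hourly = 0.05

always_on = hourly * 24 * 30
print(f"always-deployed instance: ${always_on:.2f}/month")  # $36.00

# Hypothetical app torn down outside business hours (10 h/day, 22 weekdays):
business_hours_only = hourly * 10 * 22
print(f"business-hours only: ${business_hours_only:.2f}/month")  # $11.00
```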
I really don't want this to come across as rude, and maybe I have missed something, but this sounds like you've set pricing before you actually know what hosting is going to cost you yourselves?
That's a tough decision that all PaaS providers face: offer simple predictable pricing, and face the risk of unpredictable, possibly even negative margins? Or safely convey the infrastructure's variable costs, leaving Amazon in control of the relationship with your customer?
The AppHarbor guys are right to focus on pricing as early as they can: it's the hardest part. I advise the myriad of Heroku clones out there to do the same.
I'll second that. Especially in the .net ecosystem, it will be much easier for me to sell a PaaS provider to enterprise customers with a pricing model than without.
It should be noted that these are preliminary prices. I've updated the page so this is clearer. That being said we're quite certain about the pricing structure from what we've gathered so far.
Background worker and job pricing is probably more interesting to me, academically speaking. Have people been able to test those yet? I didn't think so, and I would expect them to be more variable in their data usage.
Do you know yet what kind of restrictions/capabilities a background worker process may have? External ports/consistent URI, for example?
1987: http://en.wikipedia.org/wiki/File:Macintosh_SE_b.jpg
1995: http://en.wikipedia.org/wiki/File:Zip-100a-transparent.png
2002: http://reviews.cnet.com/pc-card/kodak-20-mb-compactflash/170... (Note the Discontinued tag)