The pricing page doesn't indicate how much storage or processing power you get for each price plan. I'm also wondering if they allocate enough resources for a site that does scientific computing on the back end.
Defining requirements in a YAML file is nice for auto-install, but the scripts tend to break for me more often than "hand installing" via pip.
The reason I'm moving the app to a micro EC2 instance is that dotCloud actively masks the debug interface, so I can't tell why my app is breaking on their servers but not on my local machine. There's also the fact that dotCloud/Heroku is largely overkill for small, team-focused apps: I can't really justify dropping $99/mo for something only 50 people will be using.
Otherwise, it was an incredibly smooth experience.
EDIT: requirements.txt, not YML
First, which requirements are you putting in your dotcloud.yml? In general, pip-installable requirements should just go in a standard requirements.txt file, and dotcloud.yml should be used to describe the higher-level structure of your application (i.e., which service types you need).
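To illustrate the split, a minimal setup might look like this (the service name below is illustrative, not from this thread):

```yaml
# dotcloud.yml — high-level application structure only,
# e.g. one Python web service (service name "www" is an example):
www:
  type: python
```

The pip-installable packages then live in a plain requirements.txt next to it (one pinned package per line, e.g. `Flask==0.9`), which pip can also consume locally via `pip install -r requirements.txt`.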
Also, I'm not sure what you mean by "masking the debug interface": all your Flask/WSGI and nginx logs are available via the "dotcloud logs" command, and you can also log into your service instance directly to debug with "dotcloud ssh" (in which case logs can be found in the /var/log directory). You can certainly use Flask in debug mode as well, by setting "application.debug = True" in your wsgi.py.
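As a sketch, enabling debug mode in a minimal wsgi.py could look like this (the `application` object name matches what a WSGI container conventionally looks up; the route itself is purely illustrative):

```python
# wsgi.py — minimal sketch of a Flask app with debug mode enabled.
from flask import Flask

application = Flask(__name__)

@application.route("/")
def index():
    # Illustrative endpoint, not from the original thread.
    return "Hello from dotCloud"

# Log detailed tracebacks on errors. Note: the *interactive*
# browser debugger still won't work in a forking production
# environment -- that's a Flask limitation, not a platform one.
application.debug = True
```

With debug on, unhandled exceptions produce full tracebacks in the logs, which is usually enough to see why an app breaks on the server but not locally.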
It is true that you may have trouble using the interactive, browser-based debugger with the default setup on dotCloud, but this is a Flask limitation, not a dotCloud one. From http://flask.pocoo.org/docs/quickstart/#debug-mode: "the interactive debugger does not work in forking environments (which makes it nearly impossible to use on production servers)".
Hope that helps, and if you have any other questions, drop us a line: firstname.lastname@example.org
Consider my one issue moot! Next time I'll read the docs better before complaining...
Any docs changes we could make to avoid others hitting the same problem?
Thanks again for taking the time to set me straight on this.
Pricing is still the same and admittedly not perfect for small-ish projects on a budget. Our deliberate strategy is to charge businesses for a high-value service (and in our experience they gladly pay for it!), then find ways to subsidize hackers, students and bootstrapped startups with something free or cheap that doesn't cause us to bleed money. That is, in my view, the only way to truly help hackers in the long run: what good is our free service if it gets snatched up and digested by another, unrelated business within the year?
As to whether it makes sense for scientific computing, that depends a lot on the type of computation. On the upside, the vast majority of current users are memory-bound, so there is a fair bit of idle CPU sitting around. But even with bursting into other users' unused CPU allocations, if you really need the absolute highest CPU throughput, you may have to run your own machines.
The resources are sold in "scaling-unit" bundles, each essentially comprising 300MB of (burstable) RAM and 10GB of (soft-limit) disk, with a "reasonable" amount of CPU and bandwidth usage ("reasonable" essentially means unmetered, but monitored for abuse).
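To make the bundle arithmetic concrete, here is a hedged sketch of estimating a unit count from those numbers (the helper name, and the simplification of sizing purely on RAM and disk, are mine, not dotCloud's):

```python
# Illustrative helper: how many "scaling units" cover a given
# RAM and disk requirement, using the bundle sizes quoted above
# (300 MB burstable RAM and 10 GB soft-limit disk per unit).
import math

RAM_MB_PER_UNIT = 300
DISK_GB_PER_UNIT = 10

def units_needed(ram_mb, disk_gb):
    """Smallest unit count satisfying both the RAM and disk needs."""
    return max(math.ceil(ram_mb / RAM_MB_PER_UNIT),
               math.ceil(disk_gb / DISK_GB_PER_UNIT))
```

For example, an app needing 900MB of RAM but little disk would land on 3 units, since RAM is the binding constraint.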
That being said, if you have really unusual requirements, drop an email to email@example.com, and I'm sure we can work something out.
Full disclosure: I work at dotCloud.
I'll admit that the one downside of having an engineering support rotation is that the quality of the written English does vary a bit from day to day because some of our engineers aren't native English speakers. But I'd much rather have a thorough and technically accurate (if somewhat poorly written) support response than a well-written but technically inept one.