The API is so simple it's absurd. Looking at the design, I keep asking myself, "but how will it perform?" And the answer is: I don't know. But what I do know is that, because of the simplicity of the design, if this compute model catches on it'll be the start of a new computing paradigm, probably the first genuine attempt at something we can all agree is cloud.
The essence of the model is that "caching", in whatever form, is abstracted away. It is up to the system's architects to ensure it performs well across a wide variety of compute scenarios. Let me explain. In a model like App Engine, you usually worry about how you'll represent your data in the Datastore, how you'll shard, how you'll use Memcache, and what batch jobs you'll run to reduce the amount of dynamic computation done per request. On Manta, you just store your data in their object store and process it as needed. You let the system figure out data locality, what to cache, which node to run the compute on, and so on.
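To make that concrete, here is roughly what the workflow looks like with Manta's CLI tools (`mput`, `mjob`). This is a sketch based on the canonical examples; the log file name and the `$MANTA_USER` variable are placeholders, and the commands assume the Manta SDK is installed and configured:

```shell
# Store an object. No schema design, no sharding, no cache to warm.
mput -f access.log /$MANTA_USER/stor/access.log

# Run a map phase on the object in place. The system decides which
# storage node runs the command; -o waits for the job and prints output.
echo /$MANTA_USER/stor/access.log | mjob create -o -m 'grep 404 | wc -l'
```

The striking part is that the map command is just a Unix pipeline executed where the data already lives; there is no separate "load your data into the compute tier" step.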
As I said, I have no idea how, or if, this will perform, but as far as abstraction level goes, it's perfect. This is where pricing becomes an issue. At 40µ$/GB/s it works out to 14.4¢/GB/hr, with no mention or guarantee of the system's performance. If the compute is slow, or the node is overloaded with tasks, what can a customer do? What if the compute is limited not by CPU but by access to the underlying object store? Sure, they say the compute is done at the edge, but what if they stick 4TB of data on a node that's the compute equivalent of an Intel Atom? So many questions about performance!
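The unit conversion above is easy to verify: 40 microdollars per GB-second, times 3600 seconds in an hour, times 100 cents per dollar.

```python
# Sanity-check the quoted Manta compute price conversion.
usd_per_gb_second = 40e-6  # 40 microdollars per GB-second
cents_per_gb_hour = usd_per_gb_second * 3600 * 100
print(cents_per_gb_hour)  # 14.4 cents per GB-hour
```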
All that said, congratulations to the Joyent team. The HN response so far has been muted, but I think time will reveal Manta to be an important step towards true cloud computing.
Hopefully, Manta will mature enough to get a chance to kiss a girl.
As far as pricing goes, you're charged for the wall clock time spent running inside the zone, so if your task queues up because the system is jammed, you're not paying for that. Task runtime can be affected on busy systems, but this has been in the noise for the jobs we've looked at. We're also looking to see how the system gets used in order to understand whether it would be useful to have other pricing options (e.g., tiers of service with guaranteed resources).
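A toy illustration of that billing model: only wall-clock time spent running inside the zone is billed, so time spent queued costs nothing. The 40µ$/GB/s rate comes from the pricing discussion earlier; the specific job numbers here are made up, and billing per GB of zone memory is my reading of the unit, not a confirmed detail.

```python
# Rate from the pricing discussion: 40 microdollars per GB-second.
RATE_USD_PER_GB_S = 40e-6

def task_cost(queued_s, running_s, memory_gb):
    # Queue time is free; only in-zone runtime is billed.
    return running_s * memory_gb * RATE_USD_PER_GB_S

cost = task_cost(queued_s=30, running_s=120, memory_gb=1)
print(f"${cost:.4f}")  # $0.0048 -- the 30s spent queued cost nothing
```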
As for performance: we're using the system internally every day, and we've found the performance very good. Of course, the best way to find out is to evaluate it on your own workload. :)
Finally, you'll probably be interested to read Keith's summary of our hardware choices here: