(Tack on another order of magnitude or two if we're talking about something like GPU computation, which I would not yet trust in WebGL, even ignoring the problem of bandwidth for shipping the problem specification. If you work the math, it is difficult to overcome orders of magnitude in performance by throwing lots of slow resources at a problem: past a certain point, the additional communication overhead makes it impossible to make up the difference.)
This basically limits this "supercomputer" to embarrassingly parallel problems with very small specifications, for which performance is no particular concern; after all, you do not even know how many resources you will have for the problem on a second-by-second basis. This is not a large niche, but it can be fun.
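To make the "work the math" point concrete, here is a back-of-envelope sketch. All the numbers are made up for illustration (a 100x-slower browser node, 1 MB/s to each volunteer, a 1 GB problem specification); the point is only that the transfer term does not shrink as you add nodes.

```javascript
// Hypothetical, purely illustrative numbers.
const work = 1e12;      // total operations in the job
const fastRate = 1e9;   // ops/sec on one supercomputer node
const slowRate = 1e7;   // ops/sec on one browser node (100x slower)
const specBytes = 1e9;  // size of the problem specification
const bandwidth = 1e6;  // bytes/sec to each volunteer

// One fast node with local data:
const fastTime = work / fastRate; // 1000 s

// n slow nodes, each of which must first download the specification:
function clusterTime(n) {
  const transfer = specBytes / bandwidth;  // 1000 s, paid regardless of n
  const compute = work / (n * slowRate);   // shrinks as n grows
  return transfer + compute;
}

// Even with a million volunteer nodes, the transfer time alone
// already matches the single fast node:
console.log(clusterTime(1e6)); // ≈ 1000.1 s, never beats fastTime
```

With these (assumed) numbers, no amount of volunteers gets the cluster below the supercomputer's time, because the communication term is a floor.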
These are thoughtful comments; here are a couple more.
> This basically limits this "supercomputer" to embarrassingly parallel problems that have very small specifications
Note the support for map-reduce, so it goes a step beyond "embarrassing" (map).
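For readers unfamiliar with the distinction: a map-only job is "embarrassing" because every slice is independent, while map-reduce also combines the partial results. A minimal sketch of the pattern (illustrative names only, not QMachine's actual API):

```javascript
// Map step: applied independently to each element of a slice.
const mapFn = x => x * x;
// Reduce step: combines two partial results.
const reduceFn = (a, b) => a + b;

// Pretend each slice is handled by a different volunteer browser.
const slices = [[1, 2], [3, 4], [5, 6]];

// Each "volunteer" maps its slice and reduces it to one partial value...
const partials = slices.map(slice => slice.map(mapFn).reduce(reduceFn));

// ...and the coordinator reduces the partials into the final answer.
const result = partials.reduce(reduceFn);
console.log(result); // 91 = 1 + 4 + 9 + 16 + 25 + 36
```

The reduce step is what requires coordination between volunteers, which is why it is a step beyond the purely embarrassing case.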
> It's hard to imagine a workload in which this would be the truly best solution.
Biology offers a few examples - typically, (alignment-free) sequence analysis is limited by memory, not by communication, so being able to reach a large number of volunteers may be more significant than the time it takes for each of them to communicate their piece of the problem. Here's an example: http://www.almob.org/content/7/1/12
asm.js may overcome that to some extent, but as I mentioned elsewhere, the real problem is bandwidth. Frankly, you could run things effectively instantaneously on the client side and still end up with wildly worse performance on this sort of cluster than on a real supercomputer.
Google V8's regular expressions can outrun C for real bioinformatics workflows, and it's actually not even close. I routinely write JS that outruns C and C++, and you haven't seen it in reports because my papers haven't been accepted yet. I doubt that I'm the only person capable of writing "amazing" code. Do you write JS?
As for bandwidth, there is no reason to believe that bandwidth is an issue for all parallel architectures. Stream processing works very well under the given constraints, for example.
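To illustrate the stream-processing point: instead of downloading a whole input before computing, each chunk is processed as it arrives, so computation overlaps with the transfer and only a small running summary stays in memory. The chunking generator and the GC-counting workload below are invented for the sketch:

```javascript
// Yield fixed-size chunks of the input, as a stand-in for data
// arriving incrementally over the network.
function* chunks(data, size) {
  for (let i = 0; i < data.length; i += size) {
    yield data.slice(i, i + size);
  }
}

// Toy workload: count G/C bases in a sequence, one chunk at a time.
// Memory use is bounded by the chunk size, not the input size.
let gc = 0;
for (const chunk of chunks("GATTACAGGCC", 4)) {
  gc += (chunk.match(/[GC]/g) || []).length;
}
console.log(gc); // 6
```

Because each chunk contributes to a running total and is then discarded, throughput is governed by the slower of bandwidth and per-chunk compute, rather than by their sum.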
I classed that in with doing GPU computations in JS; it may work in a carefully selected browser and GPU combo today, but it's going to be a while before it's out there.
And even if your JS is running at native speeds, it doesn't solve the bandwidth problem, which is nowadays arguably the distinguishing characteristic of a supercomputer, since the compute nodes of a supercomputer are hardly any faster than a commodity desktop anymore. (Some, yes, sure, but nowhere near the multiples there used to be.)
Hello, this is Sean Wilkinson, the author of QMachine. I'm not sure what "excepted scientific computing" means, but there is an example study available at http://q.cgr.googlecode.com/hg/index.html. It will probably fail if everyone on HN tries it at once, though, because the data source (the NCBI) will probably crash ...