My older brother taught me to play when I was eight, and we played fairly equally for about ten years. Then, within a year's time, I was consistently giving him nine stones.
Also in the late 1970s, I wrote a Go-playing program, written in UCSD Pascal, that I called Honnibo Warrior. It played poorly, but I sold it cheaply for the Apple II and actually made some real money selling the source code.
The idea was more about the framework than the actual worker machines. The ability to add and remove different problem solvers on the fly appealed to me. One of the problem solvers, for example, could be your program, or GNU Go, or a program that runs ten Go-playing AIs simultaneously and returns the "best" move from each.
All of the different solvers ran on the same machine, as far as I know.
It turns out that the Monte Carlo engines are doing far, far better than the grand hierarchical neural net designs of the past, though.
I especially like the idea that this could be applied to anything, not just Go. I did something in this direction a few years ago with an a-life data processing platform that used SOAP to shard problems to groups of machines.
At that point there are two issues: first, the protocol becomes a bottleneck if you are dealing with large datasets; second, if you divide something up spatially, you still have to aggregate your results. It would be the same if you divided the work up by problem domain for Go moves, unless each algorithm could give a "confidence" indicator reliable enough that the master engine would not have to Monte Carlo each result set. That could be done by moving the Monte Carlo step to the sub-servers, which would test their own moves before sending them back, complete with win percentages for direct comparison in the master engine. It would chew up a lot more cycles and mean more machines, but it would remove the post-processing bottleneck.
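A minimal sketch of that sub-server self-test, assuming a `simulate` callable that plays one random game from a candidate move and reports a win or loss. All names here (`evaluate_moves`, `fake_simulate`, `MOVE_STRENGTH`) are illustrative, and the stand-in simulator just fakes playout results:

```python
import random

def evaluate_moves(position, candidate_moves, simulate, playouts=100):
    """Monte Carlo self-test on the sub-server: run random playouts
    from each candidate move and record the win percentage, so the
    master engine can compare results directly without re-testing."""
    results = []
    for move in candidate_moves:
        wins = sum(simulate(position, move) for _ in range(playouts))
        results.append({"move": move, "win_pct": 100.0 * wins / playouts})
    # Sort best-first so the master can take the top of the list.
    return sorted(results, key=lambda r: r["win_pct"], reverse=True)

# Stand-in for a real playout engine: each move has a hidden "strength"
# and the simulator returns True on a simulated win.
MOVE_STRENGTH = {"D4": 0.6, "Q16": 0.5, "K10": 0.4}

def fake_simulate(position, move):
    return random.random() < MOVE_STRENGTH[move]

ranked = evaluate_moves(None, ["D4", "Q16", "K10"], fake_simulate, playouts=500)
```

The extra cycles are all spent inside `evaluate_moves` on each sub-server, which is exactly the trade described above: more machines, but no post-processing left for the master.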
A list of ten moves from each of 30 different machines, even using a heavy-weight protocol like SOAP, would transfer nearly instantaneously. I would favour a lighter-weight format such as JSON, though. For other applications, I agree, network bandwidth could be a bottleneck.
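To make the size argument concrete, here is what one engine's JSON reply might look like. The field names (`engine`, `moves`, `rank`) are illustrative, not a fixed spec:

```python
import json

# Hypothetical response: ten ranked moves from one engine,
# first move most important.
response = {
    "engine": "fuseki-1",
    "moves": [{"move": m, "rank": i + 1}
              for i, m in enumerate(
                  ["Q16", "D4", "Q3", "D16", "C3",
                   "R4", "C16", "Q5", "O17", "K10"])],
}
payload = json.dumps(response)
```

The serialized payload is a few hundred bytes, so even 30 such replies total only a few kilobytes on the wire.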
My initial idea was to have each engine return ten moves, in sequence, with the first move being the most important; the moves need not arrive simultaneously. The Master selects the lowest-scoring move that was picked by multiple engines. Other ideas include weighting (the fuseki engine's input is not very important in chuban or yose) and imperative moves (the death engine forces a play to save a 30-point group).
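That selection rule might be sketched like this, assuming each engine's score for a move is simply its rank in that engine's list (1 = first choice), so lower totals are better. `select_move` is a hypothetical name, and weighting and imperative moves are left out for brevity:

```python
def select_move(engine_lists):
    """Pick the lowest-scoring move nominated by more than one engine.
    A move's score is the sum of its ranks (1 = first choice)
    across all the engines that proposed it."""
    scores, counts = {}, {}
    for moves in engine_lists:
        for rank, move in enumerate(moves, start=1):
            scores[move] = scores.get(move, 0) + rank
            counts[move] = counts.get(move, 0) + 1
    shared = [m for m in scores if counts[m] > 1]
    if not shared:
        return None  # no consensus; fall back to some other policy
    return min(shared, key=lambda m: scores[m])

engines = [
    ["D4", "Q16", "K10"],   # opening engine
    ["Q16", "R14", "D4"],   # territory engine
    ["R14", "Q16", "C3"],   # fighting engine
]
best = select_move(engines)  # "R14": picked by two engines, ranks 2 + 1 = 3
```

A weighting scheme would multiply each rank by a per-engine factor before summing, and an imperative move would simply bypass the scoring altogether.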
The Master would request moves by submitting a board position and whose turn it is to play.
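In a JSON encoding, that request might look like the following. The field names and the row-of-strings board encoding are assumptions for illustration, not part of any defined protocol:

```python
import json

# Hypothetical request from the Master: the position and whose turn it is.
request = {
    "board_size": 19,
    # One string per row: 'b' black, 'w' white, '.' empty (here, an empty board).
    "board": ["." * 19 for _ in range(19)],
    "to_play": "b",
    "moves_requested": 10,
}
wire = json.dumps(request)

# Each engine parses the position and replies with its ranked moves.
parsed = json.loads(wire)
```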
What I really like about the idea, though, is that anyone could develop an engine (in any language) that adheres to the protocol, and add it to the AI network. At any time.