
In the late 1970s I played both the women's world champion and, a year later, the South Korean national champion in exhibition games. They both put me at about 2 kyu, but the experience of getting bruised up by a 2-dan amateur makes me think my rating is lower than that - so I would not be a good opponent for you.

My older brother taught me to play when I was eight, and we played fairly equally for about 10 years. Then, within a year's time, I was consistently giving him 9 stones.

Also in the late 1970s, I wrote a Go-playing program I called Honnibo Warrior, which played poorly. I sold it cheaply for the Apple II (it was written in UCSD Pascal) and actually made some real money selling the source code.




I've had a design in the works for a few years that is a completely network-based Go AI. The idea is nothing new. It involves multiple computers, each assigned one aspect of a board position. One computer might be responsible for calculating ladders, one for solving tsumego problems, one for looking at influence, one for applying joseki, another for yose, and so forth. Each submits its top ten moves, within the allotted time, to a master engine. The master engine evaluates the moves (possibly using Monte Carlo simulations) and selects the best.
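
To make that concrete, here is a minimal sketch (Python, purely for illustration) of the contract each worker could honour; the names Solver, RankedMove and propose_moves are mine, not taken from any existing engine:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class RankedMove:
        coord: str  # board coordinate, e.g. "Q16"
        rank: int   # 1 = this solver's top choice

    class Solver(ABC):
        """One specialist: ladders, tsumego, influence, joseki, yose, ..."""

        @abstractmethod
        def propose_moves(self, board, to_play):
            """Return up to ten RankedMoves, best first, within the allotted time."""

    class LadderSolver(Solver):
        def propose_moves(self, board, to_play):
            # A real implementation would read out ladders here.
            return [RankedMove("Q16", 1), RankedMove("D4", 2)]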

The idea was more about the framework than the actual worker machines. The ability to add and remove different problem solvers on the fly appealed to me. One of the problem solvers, for example, could be your program, or GNU Go, or a program that runs ten Go playing AIs simultaneously, returning the "best" move from each.
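
A hypothetical master engine only needs a mutable registry to make that hot-plugging work; wrapping GNU Go would just mean writing one more adapter that speaks the same interface. A sketch with invented names, continuing the one above:

    class Master:
        def __init__(self):
            self.solvers = {}                    # name -> solver object

        def register(self, name, solver):
            self.solvers[name] = solver          # attach a new problem solver on the fly

        def unregister(self, name):
            self.solvers.pop(name, None)         # drop one without restarting anything

        def collect(self, board, to_play):
            # Gather every solver's top-ten list; move selection happens elsewhere.
            return {name: s.propose_moves(board, to_play)
                    for name, s in self.solvers.items()}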


You've described the architecture of a ~15-year-old engine called NeuroGo: http://webdocs.cs.ualberta.ca/~emarkus/neurogo/

All of the different solvers ran on the same machine, as far as I know.

It turns out that the Monte Carlo engines are doing far, far better than the grand hierarchical neural net designs of the past, though.


Wow. I really dig this idea of sharded / modular network problem solvers. It would be interesting to take it a step further and let the solvers themselves negotiate their own distribution over time, CPU use, etc. - e.g. replicate all the active solvers on all the clients, but allocate chunks of variable size to each depending on which evolves to be the "dominant" solver on any given CPU. You'd probably need an algorithm to stop Monte Carlo from becoming the dominant process everywhere. Monte Carlo is notorious for chewing up a processor, but in many applications it does not need to run constantly, so the free cycles could be used for something else.
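
Purely as an illustration of that negotiation (the 40% cap and the dominance scores below are made-up numbers), the allocator could be as simple as:

    def allocate_shares(scores, cap=0.40):
        """Split CPU time in proportion to each solver's recent 'dominance' score,
        while capping any single solver (e.g. the Monte Carlo engine) at `cap`."""
        total = sum(scores.values())
        shares = {name: s / total for name, s in scores.items()}
        over = {n for n, s in shares.items() if s > cap}
        if over and len(over) < len(shares):
            excess = sum(shares[n] - cap for n in over)
            for n in over:
                shares[n] = cap
            rest = [n for n in shares if n not in over]
            for n in rest:
                shares[n] += excess / len(rest)  # hand the freed cycles to the others
        return shares

    # allocate_shares({"monte_carlo": 8.0, "tsumego": 1.0, "joseki": 1.0})
    # -> {'monte_carlo': 0.4, 'tsumego': 0.3, 'joseki': 0.3}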

I especially like the idea that this could be applied to anything, not just Go. I did something in this direction a few years ago with an a-life data processing platform that used SOAP to shard problems to groups of machines.

At that point there are two issues: first, the protocol becomes a bottleneck if you're dealing in large datasets, and second, if you're dividing something up spatially you still have to aggregate your results. It would be the same if you divided it up by problem type for Go moves, unless each algorithm could give a "confidence" indicator reliable enough that the master engine would not have to Monte Carlo each result set. That could be done by moving the Monte Carlo step to the sub-servers, so they test their own moves before sending them back, complete with percentage wins, for direct comparison in the master engine. It would chew up a lot more cycles and mean more machines, but it would remove the post-processing bottleneck.
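
A rough sketch of that second arrangement, assuming each sub-server reports a win percentage from its own playouts (the message shape is my guess, not an existing protocol):

    def pick_move(solver_results):
        """solver_results: {solver_name: [(move, win_rate), ...]}, where win_rate is
        the percentage of that solver's own Monte Carlo playouts won from the move.
        The playouts already ran on the sub-servers, so the master only compares
        numbers instead of re-simulating every candidate."""
        best_move, best_rate = None, -1.0
        for moves in solver_results.values():
            for move, rate in moves:
                if rate > best_rate:
                    best_move, best_rate = move, rate
        return best_move, best_rate

    # pick_move({"tsumego": [("C3", 62.5)], "joseki": [("Q16", 48.0), ("R14", 51.2)]})
    # -> ("C3", 62.5)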


Architecture in a nutshell:

http://i.imgur.com/IZ52u.png

A list of 10 moves from 30 different machines would transfer nearly instantaneously, even using a heavy-weight protocol like SOAP. I would favour a light-weight format such as JSON, though. For other applications, I agree, network bandwidth would be a possible bottleneck.
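
For what it's worth, a single worker's reply could be as small as this (the field names are only a guess at what such a protocol would use):

    import json

    # Hypothetical reply from one worker engine: ten coordinates, best first.
    reply = json.dumps({
        "engine": "tsumego-solver-1",
        "moves": ["C3", "D3", "C4", "B3", "D2", "C2", "E3", "B4", "D4", "C5"],
    })
    print(len(reply))  # on the order of a hundred bytes; thirty of these is still tiny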

My initial idea was to have each engine return ten moves, in sequence, with the first move being the most important. The moves need not be delivered simultaneously. The Master selects the lowest-scoring move (i.e., the one ranked best overall) that was picked by multiple engines. Other ideas include weighting (the fuseki engine's input is not very important in chuban or yose) and imperative moves (the death engine forces a play to save a 30-point group).
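
One way to read "lowest-scoring move picked by multiple engines" is to sum each move's (weighted) list positions and require at least two nominations; an imperative move would simply bypass the vote. A sketch under those assumptions:

    from collections import defaultdict

    def choose_move(rankings, weights=None):
        """rankings: {engine: [move1, move2, ...]}, each list ordered best-first.
        A move's score is the sum of its (weighted) positions across engines; the
        lowest total among moves named by at least two engines wins. Weights let
        the fuseki engine count for less in chuban or yose."""
        weights = weights or {}
        totals, votes = defaultdict(float), defaultdict(int)
        for engine, moves in rankings.items():
            w = weights.get(engine, 1.0)
            for position, move in enumerate(moves, start=1):
                totals[move] += w * position
                votes[move] += 1
        shared = [m for m in totals if votes[m] >= 2]
        candidates = shared or list(totals)      # fall back if no move overlaps
        return min(candidates, key=lambda m: totals[m])

    # choose_move({"ladders": ["Q16", "D4"], "yose": ["D4", "S2"]})  -> "D4"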

The Master would send requests for moves by submitting a board position and whose turn it is to play.
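
And the request could mirror the reply above: just the position plus whose turn it is (field names likewise invented):

    import json

    # Hypothetical request broadcast by the Master to every registered engine.
    request = json.dumps({
        "board": "<some serialisation of the position, e.g. SGF>",
        "to_play": "black",
        "time_limit_ms": 5000,
    })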

What I really like about the idea, though, is that anyone could develop an engine (in any language) that adheres to the protocol, and add it to the AI network. At any time.

P.S.

http://www.whitemagicsoftware.com/software/java/jigo/


Cool stuff. Do you already have an API for people to write their own solvers -- something that passes a board state and an interface to return ranked choices? How do you stop people on the network from overranking their own AI's choices (assuming this were an open project)?


I do not have an API for solvers. There would have to be a registration system, possibly white-listed by IP address.



