
Arimaa – Intuitively simple, intellectually challenging - jonbaer
http://arimaa.com/arimaa/
======
oscilloscope
The ultimate intuitively simple and intellectually challenging game is Go.
Go's rules are far simpler than Arimaa's, and it's just as challenging for
computers.

[http://en.wikipedia.org/wiki/Go_(game)](http://en.wikipedia.org/wiki/Go_\(game\))

[http://en.wikipedia.org/wiki/Go_and_mathematics](http://en.wikipedia.org/wiki/Go_and_mathematics)

[http://en.wikipedia.org/wiki/Computer_Go](http://en.wikipedia.org/wiki/Computer_Go)

losethos mentioned this too, although he is hellbanned. His point was that
Go's advantage is that it engages the image-recognition abilities of our brains.

~~~
ig1
The scoring rules for Go are much more complex. I've seen plenty of cases
where novices have struggled with end-game scoring, especially under Japanese
rules.

~~~
JulianMorrison
So use Tromp-Taylor rules. "A player's score is the number of points of her
color, plus the number of empty points that reach only her color."
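
That one-sentence rule translates almost directly into code. Here's a minimal sketch; the board encoding ('B'/'W' stones, '.' for empty, list of equal-length strings) is my own assumption for illustration:

```python
# Minimal sketch of Tromp-Taylor scoring: a player's score is the number
# of points of her color, plus the number of empty points that reach
# only her color. Empty regions are found with a flood fill.

def tromp_taylor_score(board, color):
    size = len(board)
    # Points occupied by the player's stones...
    score = sum(row.count(color) for row in board)
    seen = set()
    for i in range(size):
        for j in range(size):
            if board[i][j] != '.' or (i, j) in seen:
                continue
            # Flood-fill the empty region containing (i, j) and record
            # which stone colors it reaches.
            region, reaches, stack = 0, set(), [(i, j)]
            seen.add((i, j))
            while stack:
                x, y = stack.pop()
                region += 1
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if 0 <= nx < size and 0 <= ny < size:
                        if board[nx][ny] == '.':
                            if (nx, ny) not in seen:
                                seen.add((nx, ny))
                                stack.append((nx, ny))
                        else:
                            reaches.add(board[nx][ny])
            # ...plus empty points that reach only the player's color.
            if reaches == {color}:
                score += region
    return score
```

Dead-stone disputes just get played out on the board, so no alive/dead judgment is needed anywhere in the scorer.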

Sorry Japan, but Japanese Go rules are a mess and need to be retired.

~~~
zokier
I'm a very novice Go player, but I've only ever played with Japanese rules.
Could you (and ig1) explain what's so wrong with them? I don't remember ever
thinking that the score counting would be somehow complex or unintuitive.

~~~
lambda
Under Japanese (and Korean) rules, situations arise in the endgame in which
it may be difficult to determine whether a particular group is alive or
dead, but the player who would need to play it out to settle the question
may not want to, as doing so would fill in their own territory and reduce
their score.

Japanese rules have a variety of special cases to deal with this problem, but
most beginners don't know them, and many beginners may not know exactly when
to stop (as they are unsure if a group is safe or not), and so may wind up
reducing their score in the endgame just because they are trying to make sure
a group is safe.

Under area scoring rules (Chinese, Tromp-Taylor, AGA, New Zealand, Ing, etc),
you count the sum of your territory and your stones on the board, avoiding
this problem. AGA has a hack that makes both scoring methods work the same;
whenever you pass, you give your opponent an extra prisoner.

Here's an overview of the different rulesets:
[http://www.britgo.org/rules/compare.html](http://www.britgo.org/rules/compare.html)

------
mushishi
If you'd like to get a quick grasp of the rules, here's a site I created for
just that: [http://personal.inet.fi/koti/egaga/arimaa-begin/tutorial.htm...](http://personal.inet.fi/koti/egaga/arimaa-begin/tutorial.html)

You can play against (a stupid) bot. Just click once on the golden piece to
select it, and then where to move.

I've also developed an Arimaa game viewer, where you can analyse your games.
The code is a mess but it might be useful for some.
[http://personal.inet.fi/koti/egaga/arimaa-viewer/arimaa.html](http://personal.inet.fi/koti/egaga/arimaa-viewer/arimaa.html)

I had a more ambitious goal but was distracted by other things. You can read
more about it here:
[http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=siteIssues...](http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=siteIssues;action=display;num=1284084084)
One of the few things I dislike about Arimaa is that it is patented. That is
probably the biggest reason I'm unlikely to commit any more time to developing for it.

------
radarsat1
> On average there are over 17,000 possible moves compared to about 30 for
> chess; this significantly limits how deep computers can think, _but does not
> seem to affect humans._

Interesting assertion that the branching factor doesn't seem to affect humans.
I wonder why they think a large branching factor poses no problem for humans;
is there any evidence to support this?

~~~
anExcitedBeast
On an only tangentially related note, read Zen and the Art of Motorcycle
Maintenance. It explores why human beings are able to make quality decisions
in the face of massive branching factors, such as choosing a chess move or
(more relevant to the focus of the book) developing scientific hypotheses.
It's also just a great read.

------
curtis
The Wikipedia article might be a better introduction:

[http://en.wikipedia.org/wiki/Arimaa](http://en.wikipedia.org/wiki/Arimaa)

------
TomAnthony
Arimaa is a fascinating game; you can learn it quickly and immediately play
against the best computer players and expect to win. The branching factor is
extremely high (~17,000 vs chess's ~30), but each move consists of several
steps in which the current player can move several different pieces one after
another, so I've always thought a modified minimax-type algorithm may be able
to 'deconstruct' each move into several nodes on a graph.
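
That decomposition idea might look something like the sketch below: recurse over individual steps and hand the turn to the opponent only when the step budget runs out, so each node has a handful of children instead of ~17,000 whole-turn children. Everything here is invented for illustration (the `RaceGame` toy is not Arimaa):

```python
# Hypothetical sketch of step-level minimax. A "turn" is a budget of
# STEPS_PER_TURN individual steps; the side to move only flips once the
# budget is exhausted, mirroring Arimaa's multi-step turns.

STEPS_PER_TURN = 4

class RaceGame:
    """Invented toy game: max adds +1/+2 per step, min adds -1/-2 per step."""
    def legal_steps(self, maximizing):
        return (1, 2) if maximizing else (-1, -2)
    def apply(self, state, step):
        return state + step
    def evaluate(self, state):
        return state

def step_minimax(game, state, depth, steps_left, maximizing):
    if depth == 0:
        return game.evaluate(state)
    best = float('-inf') if maximizing else float('inf')
    for step in game.legal_steps(maximizing):
        child = game.apply(state, step)
        if steps_left > 1:
            # Same player keeps moving within the current turn.
            val = step_minimax(game, child, depth - 1, steps_left - 1, maximizing)
        else:
            # Turn exhausted: the opponent starts a fresh turn.
            val = step_minimax(game, child, depth - 1, STEPS_PER_TURN,
                               not maximizing)
        best = max(best, val) if maximizing else min(best, val)
    return best
```

With depth counted in steps, a depth-8 search here covers one full turn per side, yet no node ever branches more than two ways.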

Whilst extremely interesting, the amount of research into Arimaa pales in
comparison with research into Go. Go has a branching factor of ~300, so it
sits far above chess but well below Arimaa. It is even easier to learn, but
it is harder for humans to develop an intuitive understanding of how strong
any position is. It is starting to succumb to Monte Carlo Tree Search [1],
at least in games played on the smaller 9x9 (vs the standard 19x19) board.
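
For a flavour of the playout idea underlying MCTS, here's a stripped-down "flat" Monte Carlo move chooser: no tree, no UCB selection, just random playouts per candidate move, so it's only the first ingredient of real MCTS. The Nim-like toy game is my own stand-in:

```python
import random

# "Flat" Monte Carlo on a toy Nim: take 1-3 sticks per move, taking the
# last stick wins. Each candidate move is scored by the win rate of
# purely random playouts from the resulting position.

def random_playout(sticks, my_turn, rng):
    """Play random moves to the end; True if 'our' side takes the last stick."""
    while sticks > 0:
        sticks -= rng.randint(1, min(3, sticks))
        if sticks == 0:
            return my_turn      # whoever just moved took the last stick
        my_turn = not my_turn
    return False                # defensive: playouts start with sticks > 0

def flat_monte_carlo(sticks, playouts_per_move=200, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, sticks) + 1):
        wins = 0
        for _ in range(playouts_per_move):
            remaining = sticks - take
            if remaining == 0:
                wins += 1       # taking the last stick wins outright
            elif random_playout(remaining, my_turn=False, rng=rng):
                wins += 1
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move
```

Full MCTS grows a tree over the most-visited lines and balances exploration against exploitation when descending it, which is what lets it scale to 9x9 Go.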

However, from my perspective, whilst MCTS is extremely interesting and has a
wide array of applications, I'd love to see approaches to these problems
that aren't based around an optimised 'brute force' algorithm.

When Deep Blue beat Kasparov, Douglas Hofstadter noted “It was a watershed
event, but it doesn’t have to do with computers becoming intelligent”, adding
“you can bypass deep thinking in playing chess, the way you can fly without
flapping your wings” [2]. I somewhat feel like this criticism could be applied
to MCTS and Go, and it'll be interesting to see whether the first algorithms
that conquer Arimaa come from a different perspective or not.

[1] [http://en.wikipedia.org/wiki/Monte_Carlo_method#Artificial_i...](http://en.wikipedia.org/wiki/Monte_Carlo_method#Artificial_intelligence_for_games)

[2] [http://www-rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/Che...](http://www-rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/ChessMatch/MeanChessPlaying.html)

~~~
Ziltoid
> you can learn it quickly and immediately play against the best computer
> players and expect to win

That is not true at all. The best available bots on the Arimaa server are
rated above 2000 Elo, far higher than any beginner can expect to be rated.

------
quantumpotato_
I've played a few rounds of Arimaa; it's very fun. For something even simpler
and more challenging, try 2v2 speed chess where you give captured pieces to
your teammate.

[0]
[http://en.wikipedia.org/wiki/Bughouse_chess](http://en.wikipedia.org/wiki/Bughouse_chess)

(It's one of the best games I've ever played, and a game only takes 90
seconds to play!)

