Yesterday I showed this game to a fellow graph theory buff, and we sat down to think about how an AI might solve it.
The most straightforward solution is expectiminimax, which I see has been implemented here quite nicely. For anyone unfamiliar with expectiminimax, the OP wrote some very elegant and well-commented code that would make a great primer.
A less computationally intensive approach we came up with was to model the game state as a graph G(V, E), where V is the set of active tiles and E is the set of edges connecting adjacent tiles, weighted by a cost function c(v1, v2) that returns the absolute difference between the values of the two tiles. For each decision, the AI picks the move that minimizes the sum of all edge weights in the resulting game state.
The reasoning behind this is that the only way to progress in the game is to have tiles of equal value adjacent to each other, and such a pair contributes an edge of weight 0 in G. Thus, the AI should try to minimize total weight. Eventually there will be large tiles on the board with large edge weights to their neighbours, so the AI will tend to keep these tiles next to other large tiles to minimize the difference.
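To make that concrete, here is a minimal sketch of the edge-weight heuristic. It assumes a board represented as a 2D array of tile values with 0 for empty cells; the function name and board layout are illustrative, not taken from the OP's ai.js.

```javascript
// Sum of |difference| over all edges between adjacent active tiles.
// Assumes `board` is a 2D array of numbers, with 0 meaning "empty".
function totalEdgeWeight(board) {
  let weight = 0;
  for (let r = 0; r < board.length; r++) {
    for (let c = 0; c < board[r].length; c++) {
      const v = board[r][c];
      if (v === 0) continue; // empty cells contribute no edges
      // Edge to the right neighbour, if it holds an active tile
      if (c + 1 < board[r].length && board[r][c + 1] !== 0) {
        weight += Math.abs(v - board[r][c + 1]);
      }
      // Edge to the neighbour below, if it holds an active tile
      if (r + 1 < board.length && board[r + 1][c] !== 0) {
        weight += Math.abs(v - board[r + 1][c]);
      }
    }
  }
  return weight;
}
```

The AI would then simulate each legal move and pick the one whose resulting board minimizes `totalEdgeWeight`; a board of equal adjacent tiles scores 0, which is exactly the "mergeable" state the heuristic rewards.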
Since the game is stochastic, the approach I described may not work in a worst-case scenario, but it could also be plugged into the existing expectiminimax solution as the evaluation function for each node in the tree.
Fantastic! Thanks for pointing that out. My cursory glance over the code in ai.js led me to believe you were weighting by score; now I see the full picture.
One more thing - have you looked into storing the game tree? I noticed the search starts from scratch on every move. With a branching factor of around 10, I'd expect this to only really make a difference at depths greater than 4.
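For what it's worth, a transposition table is one cheap way to reuse work between searches. This is only a sketch under my own assumptions: the `search(board, depth)` function stands in for the existing expectiminimax search, and the key scheme is hypothetical, not something from ai.js.

```javascript
// Hypothetical memoization layer over a search function.
// Results are cached by serialized board state plus remaining depth,
// so positions reached via different move orders are evaluated once.
const cache = new Map();

function boardKey(board, depth) {
  // Flatten the 2D board into a string; depth is part of the key
  // because a shallower cached result may be less accurate.
  return depth + ':' + board.flat().join(',');
}

function cachedSearch(board, depth, search) {
  const key = boardKey(board, depth);
  if (cache.has(key)) return cache.get(key);
  const value = search(board, depth); // delegate to the real search
  cache.set(key, value);
  return value;
}
```

Whether this pays off depends on how often the same position recurs in the tree; clearing the cache between moves (or bounding its size) would keep memory in check.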
You started an excellent and inspiring GitHub project - I feel like the AI research into this game has only just begun.