For anyone who is doing CS at Columbia University or considering CU: if you like these types of coding challenges, definitely look into Professor Ross's Programming and Problem Solving class. It was one of the best CS classes I took at Columbia.
Each semester consists of 4 different open-ended programming problems like these. You work as a team to compete against other members of the class. There are no tests, and each class is run as an open seminar where people talk about their strategy and implementation and consider the best approach to solving these problems.
You might want to check out Peter Norvig's CS212 on Udacity. From what I've seen over the past 6 weeks, it's an excellent course on problem solving and `good taste` in programming.
Yes, that is already on my list, after I complete the compilers class from Coursera. The downside of these kinds of courses, though, is that not all of the domains are interesting to everyone; e.g., I almost fell asleep when Norvig talked about poker rules :)
It's worse than that. This webpage presents the information in a clear, readable text format with no unnecessary clutter. Why, it's such an old design that it's similar to what Gutenberg was cranking out on his printing press and to what you can buy today on your Kindle. Imagine that!
I absolutely love it. I think it shows you have reached a certain level of academic distinction when you can get away with a CSS-free homepage, preferably hosted Unix-style at http://dept.yourinstitution.edu/~yourinitials
It's like when people get to a certain level of wealth they stop sending status signals.
You may need to consider a better way to rank players: percentage of games won is problematic (as was discovered during the Tron Google AI challenge), and something like TrueSkill or bayeselo would be more robust.
Yeah, we're seeing the first problems with that right now: newly uploaded bots progress faster than they should because they play against a more "diluted" pool of opponents than the old-timers did.
We'll think of something.
I also think your alpha-beta bot is objectively better than the (currently) #1 bot.
The easiest thing to throw together right now is a script that just feeds the whole game history into bayeselo as a bunch of fake chess matches, with the white move advantage set to 0. That's what was done for Tron.
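A minimal sketch of that kind of script, assuming the game history has been pulled into an array of match records (the record fields and bot names below are made-up examples, not the real data source). Each match becomes a zero-move PGN "game" whose only content is the result token, which is enough for a ratings tool to ingest:

```javascript
// Sketch only: `matches` stands in for whatever you scrape from the site's game history.
const fs = require('fs');

const matches = [
  { bot1: 'fruitwalk-ab', bot2: 'SoGreedy', outcome: 1 },   // outcome is bot1's score: 1, 0, or 0.5
  { bot1: 'SoGreedy', bot2: 'fruitwalk-ab', outcome: 0.5 },
];

function toPgn(ms) {
  return ms.map(m => {
    const result = m.outcome === 1 ? '1-0'
                 : m.outcome === 0 ? '0-1'
                 : '1/2-1/2';
    return [
      `[White "${m.bot1}"]`,
      `[Black "${m.bot2}"]`,
      `[Result "${result}"]`,
      '',
      result,                     // movetext can be just the result token
    ].join('\n');
  }).join('\n\n');
}

fs.writeFileSync('fake_matches.pgn', toPgn(matches));
```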
The more I think about it, the more I like it as a recruiting venue[1]:
* A good way to measure a potential recruit's skills
* Gives a fun experience to go over later in the interview, both verifying that the recruit actually coded it and establishing common background beforehand
* You reach people who have already shown some interest in your company by looking at the contest
* You get people who are interested in hacking around puzzles
Adding to what you said, someone who discovers, attempts, and solves this problem is already better than 70% of the programmers out there. They show promise of being motivated enough to crack a puzzle that robs them of their sleep, and they take the time to learn and code against the Scribd API.
Even if they don't "win", they are far superior to a typical resume pusher.
Smart of them to implement a 10-second time limit, because my first thought was to write a bot that would pass the current game state to Mechanical Turk and have a person solve it. It would be interesting to see how a human would fare against Scribd's developers' bots.
If we started asking this as an interview question, NOBODY would pass. Time after time we see "senior" developers who fail to write very simple functions, like counting the number of occurrences of each letter in a string.
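For reference, one straightforward answer to that letter-counting question, in plain JavaScript:

```javascript
// Count how many times each letter occurs in a string
// (case-insensitive, ignoring non-letter characters).
function letterCounts(str) {
  const counts = {};
  for (const ch of str.toLowerCase()) {
    if (ch >= 'a' && ch <= 'z') {
      counts[ch] = (counts[ch] || 0) + 1;
    }
  }
  return counts;
}

letterCounts('senior developer');
// => { s: 1, e: 4, n: 1, i: 1, o: 2, r: 2, d: 1, v: 1, l: 1, p: 1 }
```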
For me, personally, it sounds like an awesome question, but it's not clear how you would judge it. Let interviewees' bots fight?
Comparing yourself to others is a good way to gain perspective on how far you've come and, more importantly, how much further you have to go to attain some desired level of skill. Taken to extremes, though, it predictably ends up hurting more than helping.
You should look at the basics of AI and various search algorithms.
The first priority would be to quickly pick up the vocabulary: what is a state, what is a utility function and so on.
I recently took a class on AI and we used the (rather popular) Artificial Intelligence: A Modern Approach by Russell and Norvig [1]. I didn't actually read the book, but I've heard good things about it so it's definitely worth a look.
All the lectures for the course are available online as well[2]. The professor for my semester (Dan Klein) was a brilliant lecturer, so the lectures are worth watching if you have the time. The lecture notes[3] are also online.
Of course, if you want to participate in this game, you are doubtless in a hurry. So you might want to skim through lectures 2, 3, 6, 7, 10, and 11, in roughly that order; they seem to be the most pertinent.
Combinatorial game theory would be the wrong tool for this, I believe, since this isn't a strictly finite turn-based combinatorial game. (And in my opinion combinatorial game theory is more useful for analysis and proof techniques than for creating AI algorithms, but I've only had one class on it.) The "fruitwalk-ab" bot mentioned in the blog uses minimax (with the alpha-beta optimization, of course), which is the bread-and-butter algorithm for these kinds of games, and I expected it to be in 1st place. Sure enough, it is at the moment. (Edit2: No longer.)
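For anyone who hasn't seen it, here is a minimal depth-limited alpha-beta sketch. `legalMoves`, `applyMove`, and `evaluate` are placeholder names rather than anything from the game's API, and treating this game as strictly alternating-move is itself a simplification:

```javascript
// Depth-limited minimax with alpha-beta pruning (placeholder helpers throughout).
function alphaBeta(state, depth, alpha, beta, maximizing) {
  if (depth === 0 || state.gameOver) {
    return evaluate(state);                 // heuristic score from my point of view
  }
  if (maximizing) {
    let best = -Infinity;
    for (const move of legalMoves(state)) {
      const value = alphaBeta(applyMove(state, move), depth - 1, alpha, beta, false);
      best = Math.max(best, value);
      alpha = Math.max(alpha, best);
      if (beta <= alpha) break;             // prune: the opponent will never allow this line
    }
    return best;
  } else {
    let best = Infinity;
    for (const move of legalMoves(state)) {
      const value = alphaBeta(applyMove(state, move), depth - 1, alpha, beta, true);
      best = Math.min(best, value);
      beta = Math.min(beta, best);
      if (beta <= alpha) break;
    }
    return best;
  }
}
```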
In an intro machine learning course you'd learn about minimax and others, but skip paying any money and just read the basic algorithms here and look up the wiki pages for even more examples: http://www.stanford.edu/~msirota/soco/blind.html (The introduction has some term definitions.)
Thanks for the info, interesting stuff. I've got a first-gen minimax going right now; it sorta works but gets stuck in a repeating move pattern. Interesting stuff to work on, though, and I really appreciate the pointers. Hopefully this weekend I can work out a better heuristic valuation of a game state. For a first run I'm just counting my objects vs. theirs (very simple), but I need to somehow weight it so that it prefers picking objects up sooner rather than later. And of course it would be better if it also took groupings into account, but that's for after I get basic board-clearing working.
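One crude way to get that "sooner rather than later" preference (all names below are hypothetical, not the real game API): have the search record the ply at which each fruit was picked up, then discount later pickups slightly so that between two lines with equal material, the earlier grab wins:

```javascript
// Hypothetical evaluation: each pickup is recorded by the search as { type, ply }.
// A discount just below 1 means an earlier pickup of the same fruit is worth
// marginally more, so the search stops dawdling near fruit it could take now.
const DISCOUNT = 0.97;

function evaluate(state) {
  let score = 0;
  for (const p of state.myPickups) {
    score += Math.pow(DISCOUNT, p.ply);
  }
  for (const p of state.oppPickups) {
    score -= Math.pow(DISCOUNT, p.ply);
  }
  return score;
}
```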
Hey, honest question here: is it possible to brute-force this, considering (almost) every possible move, and reach a point from which you can't lose? Given that the board is restricted to 15x15 and the fruits to 81... I'm not saying this is easy, I'm asking whether the game AI has an upper limit, so to speak...
Ohh, I missed that; that makes more sense now. Though ten seconds is quite a lot, you will very probably not be able to brute-force every possibility before each move. Thanks!
Given that each move must be made in 10 seconds, here's my approach. I'm proposing that you use machine learning to evaluate moves, to approximate the exact (search-based) scoring. I'm not implementing this, but feel free to ask me for more details:
Training phase:
Create a random board configuration. Exhaustively explore the search space to find whether this is a win (+1), a loss (-1), or a draw (0) for Player 1. You now have a training example: (board configuration, game outcome).
Now, train a neural network (or other non-linear ML model, e.g. SVM) to predict the expected outcome based upon the board configuration.
Deployment phase:
Port the neural network to JavaScript. For each possible move, use the neural network to predict the outcome of that move. Pick the move with highest expected outcome. The neural network will run in constant time, most likely well under 10 seconds per move.
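A rough sketch of that deployment phase, with a tiny hand-rolled feed-forward pass. `boardToFeatures`, `legalMoves`, `applyMove`, and the `net` weight layout are all assumptions for illustration, not anything from the game's API:

```javascript
// net = { w1: hidden-by-input weight matrix, b1: hidden biases, w2: output weights, b2: output bias }
function predictOutcome(net, features) {
  const hidden = net.w1.map((row, i) =>
    Math.tanh(row.reduce((sum, w, j) => sum + w * features[j], net.b1[i])));
  // tanh output keeps the prediction in (-1, 1), matching the win/lose/draw targets
  return Math.tanh(hidden.reduce((sum, h, i) => sum + net.w2[i] * h, net.b2));
}

function pickMove(board, net) {
  let best = null;
  let bestValue = -Infinity;
  for (const move of legalMoves(board)) {
    const value = predictOutcome(net, boardToFeatures(applyMove(board, move)));
    if (value > bestValue) {
      bestValue = value;
      best = move;
    }
  }
  return best;   // one forward pass per legal move: effectively constant time
}
```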
> Create a random board configuration. Exhaustively explore the search space to find whether this is a win (+1), a loss (-1), or a draw (0) for Player 1
Given that the search space can grow as O(4^(mn)), this can be done only for endgame configurations. Further, not knowing any bounds on the board size makes the input representation difficult to define for such a machine learning approach. And your target should probably be the weights of an evaluation function, rather than the exact game outcome.
As for the learning algorithm, I know TD-learning was found to be a good approach in various chess programs.
> For each possible move, use the neural network to predict the outcome of that move. Pick the move with highest expected outcome.
You would likely still want to run an alpha-beta search to pick the move to minimize the prediction error.
One thing, though: I think resetting the board state should call new_game() again. This likely only matters when testing your bot, but it's nice for variables that need to be reinitialized for each game.
It's pretty fun! My current greedy solution is called: SoGreedy. :)
[Sorry for posting a bug report here; I didn't find another way to do it]
I think there is a bug with get_my_item_count() and get_opponent_item_count().
These two functions don't return decimals: when a player has 0.5, they return 0. This works correctly locally, though.
Example of a failed match:
http://www.scribd.com/job_game/match/295703
This line, for example:
1 / 1 / 0 / 0 / 1
- first number is the fruit type
- second number is the total fruits
- third number is my fruits
- fourth number is the opponent's fruits
- fifth number is irrelevant
(You'll never see 0.5 in the third or fourth number.)
I think there is some kind of bug where a bot just stops.
Example : http://www.scribd.com/job_game/match/275573
I simulated two boards where this bug happens online, and it doesn't happen locally.
Also, the API documentation is wrong here:
"return the index of that fruit (starting with 0)" => It starts at 1
About the bug:
Try trace()-ing out a few messages in your make_move() function; that might tell you whether something went wrong (the logs will appear at the bottom of the game replays).
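For example (just illustrating the idea; the exact signatures in the game API may differ, and pick_move() here is a stand-in for your bot's real decision code):

```javascript
function make_move() {
  trace('entering make_move');     // shows up at the bottom of the game replay
  var move = pick_move();          // hypothetical: your bot's actual decision logic
  trace('chose: ' + move);
  return move;
}
```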
I was going to post that they should do a clearer job of describing the challenge and the main rules (e.g. using bullet points and headers), but I guess there's a separate webpage devoted to the challenge:
I think they need to reevaluate how they score a tie: out of 58 games I have 4 losses, 30 wins, and the rest (24) are ties, but my percentage of games won is only ~52%. It seems like a tie shouldn't be put in the same category as a loss.
It should be mentioned that you need to register in order to submit. That was my issue.
However, while my bot was compiling, I navigated away from the page. Later, when trying to resubmit, I kept getting errors until I changed the name of my bot submission.
More helpful error messages would greatly improve the whole process. :)
I got redirected to this link: www.scribd.com/jobs/botrace_upload_bot
which then displays: "Oops! Something went wrong. We're working on a fix. Please visit our homepage for some recommended reading to take your mind off of it."
That doesn't really make any sense. Scribd is trying to trick you into solving an undergrad-level AI problem for them? Sounds unlikely. Designing and writing the game probably took more effort than writing a bot would!
Coincidentally, I really like the game design. I took an AI course recently and we had some really similar assignments using Pacman and Python, and that was fun too, but I would much rather use JavaScript and compete online than use Python and the annoying framework code they gave us.
It seems unlikely to me that Scribd is actually looking for a solution to this problem, or an isomorphic one. Have you any basis for thinking that?
There are many standard approaches to the general class of two-player board games (they are clearly aware of these, given their alpha-beta based bot).
I would guess that what is going to differentiate winners from losers in this specific game is the exact specifics of the setup. So if they are looking for a solution to an isomorphic problem, it must be a very similar problem, as the general type of challenge is well understood.
It just looks like a neat game to me; I doubt it's a cleverly disguised problem from 'social reading and publishing' that they are trying to get a cheap solution to.
I wouldn't be so quick to downvote his comment. I don't think Scribd is stealing code or that submissions will be directly applicable, but there's some crossover between a game like this and searching/crawling a DB/docs/network. It's a great way for them to find devs with the skills they need.
Finding good devs is essentially their stated intent, so they're not looking to use the results of the contest for free but rather to identify talented people.
The downvotes on the comment reflect the poster's unduly cynical view.
Here is his website: http://www.cs.columbia.edu/~kar/teaching.html
This was my favorite from my year: http://www.cs.columbia.edu/~kar/4444f02/node18.html