Google AI Challenge 2011 (aichallenge.org)
275 points by philf on Oct 19, 2011 | 68 comments



From the rules:

> Any attempt to disrupt the normal operation of the contest software or the contest servers will result in the immediate involvement of law enforcement officials. Our policy is to always prosecute.

I get that people shouldn't be intentionally disrupting the servers, but that sounds like an awful policy.


I believe they've had problems with this in the past.


Why? It seems reasonable if they don't want people trying to break into their servers. I'm honestly asking; I don't understand your argument.


For one, it exposes a heavy-handed predisposition that makes me worry that a bug in my program might result in a visit from the police.


See also this previous thread from 212 (!) days ago: http://news.ycombinator.com/item?id=2349826

There are two things worth pointing out:

1) This competition has previously been sponsored by Google but isn't run by them. It's hosted by the University of Waterloo Computer Science Club.

2) Some people have been working on bots for this challenge for months now, so they are likely to have a bit of a head start.


The contest is sponsored by Google, but is no longer associated with University of Waterloo CS Club (other than several current and former students still helping to organize it). Please don't contact them for anything regarding the contest. :}

Those who have been participating in the beta will certainly have a jump on those starting from scratch. But also it should be noted that some of the basic game mechanics were changed about 3 weeks ago, including the scoring and objective of the game. So it may not be quite as much of an advantage as it first appears.


About (2), judging by the few games I've seen, the current level of bots isn't very high yet, so the playing field is still open.

EDIT: I hadn't looked at the leader's games yet... They're already pretty good :)


That sucks (some people having a head start). What a silly way to run a competition.


There was a beta testing period to develop the game engine and ensure that the game was viable (i.e. there isn't a trivial solution, it's interesting to work on, etc.). There have been some fairly significant changes in the objectives from most of the beta, so that should help level the playing field somewhat.

It's regrettable that the beta lasted so long; this is mostly due to the up-and-down nature of the time volunteers had to work on the project (they have school and work).


I think the OP just meant that some people had seen this page before, not that there is any conspiracy behind that.


I didn't say it is a conspiracy, just silly.


The one thing that bugs me about the Google AI Challenges is that they do not really encourage using modern AI techniques. It's all about intelligent _developers_ rather than writing intelligent _software_. I really would like to see a challenge which is all about data: identifying patterns and learning to make predictions – rather than developing yet another heuristic for a minimax algorithm...


I like that these AI Challenges appeal to everyone and rope in large numbers by the reward factor of watching your code fight someone else's to the death.

You're right, though, it's definitely not about writing intelligent software. That type of competition would have a much smaller but probably more intensely academic audience.


You are free to try to identify patterns in the enemy ants' behaviour, make predictions, and win the game that way.


The challenge does not offer disk space for your bot to save data so it has to start from scratch every match (unless you supply it with a lot of offline training data).

There has been talk among the developers of the challenge of offering disk space in one of the future challenges.


An alternative would be to allow bots to connect to an external server maintained by the bot author to get the latest strategy, and dump match history for analysis.


Even just getting a second or two during startup to connect to external servers and again a second or two at the end would be enough.


This would be neat to do, but it introduces a number of additional complications. For instance, the scheduler must be considerably more fair in match allocation, as this then has a huge impact on bot performance. It might also mean that newbie bots get trounced by more experienced bots so quickly that they don't learn much from the experience.

Not an easy problem, but perhaps a worthwhile one.


If I remember correctly, in the last competition there was a team who wrote a bot using genetic programming techniques. So you are definitely not prevented from using those more advanced techniques if you can pull them off.

Food for thought: maybe you could develop an ML program to calculate the optimal heuristic for the basic bot program.


I think you would enjoy competing on a kaggle competition (google it).



Very cool game spec. One thought: your ants are basically telepathic, so they are able to share information instantly--e.g., the squares they see flow back to your "master bot" which can instantly use this information in the current turn for every other ant. I wonder how much more challenging it would be if you had to build not a "master bot" that controls all ants, but a "bot" that runs for each ant, and they have their own individual inputs and state and can perhaps transmit messages to other ants within a 3 block radius. Maybe they can even lay down pheromones to mark territory... (too much SimAnt as a kid, can you tell?)

Certainly it would be more realistic, and I think even more fun--but maybe a little taxing on the server. But this is run by Google, right? Maybe next year.
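A toy version of that local-communication constraint is easy to sketch. Everything here (the function name, the radius rule, the message format) is illustrative of the proposed variant, not anything the contest actually supports:

```python
def deliver_messages(ants, messages, radius=3):
    """Each ant only hears messages posted by other ants within `radius`
    squares (Manhattan distance); no telepathic master bot."""
    inbox = {ant: [] for ant in ants}
    for sender, text in messages:  # messages: [((row, col), "text"), ...]
        for ant in ants:
            dist = abs(ant[0] - sender[0]) + abs(ant[1] - sender[1])
            if ant != sender and dist <= radius:
                inbox[ant].append(text)
    return inbox
```

Pheromones could be modeled the same way, as messages attached to squares rather than to ants.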


The ICFP contest from 2004 [0] was a bit like what you're talking about. Rather than a bot, though, the submission was a description for a state machine that each ant would follow.

[0] https://alliance.seas.upenn.edu/~plclub/cgi-bin/contest/ants...


This is a hive mind ant game!


I wrote one in Python a while back. https://github.com/phreeza/cells

Contributions are always welcome.


From the Reddit post:

> The contest is not ready till tomorrow. Everything is still beta today, all accounts so far will be purged tomorrow. ~amstan Contest Organizer

Source: http://www.reddit.com/r/programming/comments/lhlt9/googles_a...


When is the contest over? Last time I tried to do one of these things I only had about a week, which was not enough time to come up with a quality program. Otherwise I would like to compete.


The last one started September 10th and ended November 27th.


Great to see a game similar to the one I developed some years ago. Mine was a multiplayer real-time programming game in which the ants were controlled by uploadable Lua code. If you are interested, feel free to visit http://infon.dividuum.de/ and download the source code.


Just so it is clear, we haven't actually launched yet. Although we should be live within the next 48 hours.


I still think a webservice-based version of these contests would be much more fun. The idea would be to only give the simulator a URL of where your AI is hosted. There would be an expected set of endpoints for the simulator to call, and it would invoke your service when it was your turn. Perhaps the AI could call services on the simulator as well.

So, instead of having to write your AI in the simulator's language you could choose whatever you want. Another advantage is that you could run your own database to store and query information so that your AI could become more intelligent.

I've been looking for a new side project. Perhaps I've found it.


Stdin/stdout is used to transfer information to and from the simulator, so any language is possible. The list of languages they support is here: http://aichallenge.org/starter_packages.php
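For anyone curious, here is a rough sketch of that stdin/stdout loop in Python. The line formats ("a row col owner", "f row col") and the "ready"/"go" handshake are from memory of the Ants protocol, so treat them as assumptions and check the starter packages for the actual spec:

```python
import sys

def parse_turn(lines):
    """Pull own ants and food out of one turn of engine input
    (assumed formats: 'a row col owner' and 'f row col')."""
    my_ants, food = [], []
    for line in lines:
        parts = line.split()
        if len(parts) == 4 and parts[0] == "a" and parts[3] == "0":
            my_ants.append((int(parts[1]), int(parts[2])))
        elif len(parts) == 3 and parts[0] == "f":
            food.append((int(parts[1]), int(parts[2])))
    return my_ants, food

def orders(my_ants):
    """Placeholder strategy: march every ant north."""
    return ["o %d %d N" % ant for ant in my_ants]

if __name__ == "__main__":
    buffered = []
    for raw in sys.stdin:
        token = raw.strip()
        if token == "ready":        # setup finished: signal readiness
            print("go")
            sys.stdout.flush()
        elif token == "go":         # turn input finished: emit orders
            for order in orders(parse_turn(buffered)[0]):
                print(order)
            print("go")
            sys.stdout.flush()
            buffered = []
        else:
            buffered.append(token)
```

The flush after each "go" matters: the engine blocks until it sees your output, so a buffered stdout will stall the match.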

You can also ask the contest organizers to add a compiler/interpreter if anything is missing.


Ah, thanks. I was glad to see a C# package there ;)


"I still think a webservice-based version of these contests would be much more fun."

The latency of this approach can start to hurt badly. Even a 100ms latency, multiplied across numerous turns, means that games can drag on for quite a while, reducing the number of games played, and reducing the amount of fun had. I recently ran a contest that did in fact call you as a web service, but we had to basically require that you were hosted in the same city to make it work in anything like a fun time scale.
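To put numbers on that (the figures below are illustrative assumptions, not measurements from this contest):

```python
def latency_overhead_minutes(turns, players, rtt_seconds):
    """Time a single game spends doing nothing but waiting on the network."""
    return turns * players * rtt_seconds / 60.0

# A long 4-player game with a 100 ms round trip per bot call:
overhead = latency_overhead_minutes(turns=1000, players=4, rtt_seconds=0.1)
print("%.1f minutes of pure latency" % overhead)  # 6.7 minutes
```

And that is before any bot spends a single millisecond actually thinking.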


While this is true, it would also mean that more games could be run concurrently as the total computing horsepower would be larger than what the contest can afford to fund. :)


Doing this has some really cool advantages, but also some significant disadvantages. Namely, it gives a large advantage to those with access to powerful computers. Ideally, the contest would be one of "Who can create the best bot?", as opposed to one of "Who can afford/borrow the most computing horsepower?"

But feedback is always welcome.


After watching a game or two, I can tell that ants should always travel two by two for warmongering. When two ants (walking together on the same front) encounter a single ant, the lone ant is always destroyed.


:) what happens when your pair encounters three? Or if your pair are walking horizontally next to each other in : formation and are attacked from above by a pair in .. formation? I'm very interested to see the strategies that emerge from this challenge.


Definitely; when they are paired they can just suck up lines of other ants... it's really cool to watch.


Jehovah's Witnesses FTW!


It looks like you can only upload code, and you aren't allowed to write any files, which means that your bot can't be self-improving between matches. Which IMHO is half the fun of an AI contest.

Or am I missing something?


You could run the contest locally and have the bot incrementally improve itself.


But then you're adapting against yourself, or some limited set of demo bots. It's not the same as out in the wild.


Has anyone found the contest end date? I have a project I can't put down (unfortunately), but would really like to have time to enter.


From a quick look, the success of a strategy is very much dependent on the characteristics of the map. It seems that there are maps where you cannot even reach your opponent (maze_2, if I'm not mistaken). Such maps punish defensive strategies. On the other hand, open maps with anthills not far from each other probably punish greedy strategies.


The maps are toroidal, i.e. the left and right sides are connected as are the top and bottom. So it is possible to reach the enemy in maze_2.
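The wrap-around is worth handling explicitly in any pathfinding code. A small sketch of Manhattan distance on a toroidal map:

```python
def torus_distance(a, b, rows, cols):
    """Manhattan distance on a map whose opposite edges are connected."""
    dr = abs(a[0] - b[0])
    dc = abs(a[1] - b[1])
    return min(dr, rows - dr) + min(dc, cols - dc)

# On a 10x10 map, opposite corners are only 2 steps apart, not 18:
print(torus_distance((0, 0), (9, 9), rows=10, cols=10))  # 2
```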


I stand corrected, I missed the connection wrapping around the top/bottom edge.


What? No Clojure starter package yet?! C'mon, Google!


Google isn't actually running this. They were only a sponsor in years past.

It's actually run by the University of Waterloo computer science club as an open-source project.

A Clojure package is in the works, I think.


A poster above makes the opposite claim, that the contest graduated from Waterloo to Google.


Hey Andrew(tectonic), this is right up your alley ;)

http://andrewcantino.com/ants.html



Back in the early 1980s I used to play a programming game on the Sharp MZ-80K which involved two programmers writing assembly language programs that would move through the operating system memory in an attempt to reach low memory (where the operating system was not stored) without crashing the machine.

Moving through the operating system meant actually overwriting portions of the operating system, as it was running, with the running code, and then relocating the code to a new location lower down in memory (I've forgotten the details now, but we had some restriction on how far we could jump). Clearly this often meant keeping a cached copy of the memory we were overwriting so we could put it back, but we could only store that cache in the operating system itself.


This seems like a true hacker's version of the Tron lightcycle game!


The ICFP 2004 contest was also about "ants" - https://alliance.seas.upenn.edu/~plclub/cgi-bin/contest/task...


Similar Java games have been around for a long time: http://robocode.sourceforge.net/

You write a robot tank which battles with other tanks.


If you can reproduce faster than anyone else you have a big edge. But I'm curious what the rules of reproduction are.


A bit simplified: when an ant moves to a food square, a new ant is produced from a hill.
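That simplification can be sketched directly. This is a toy model only; as I understand it, the real engine queues spawns across turns and only spawns onto an unoccupied hill square:

```python
def resolve_food(ants, food, hill):
    """Toy spawn rule: every ant ending its move on a food square eats it,
    and one new ant appears on the hill per food eaten."""
    eaten = [f for f in food if f in ants]
    new_ants = list(ants) + [hill] * len(eaten)
    remaining = [f for f in food if f not in eaten]
    return new_ants, remaining
```

Under this rule, reproduction rate is purely a function of food gathered, which is why differing spawn rates between teams would be surprising.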


But if you look at the games, some teams create new ants at a faster rate than others, regardless of food consumption. The red ants seem to have an edge in reproduction over the others, right from the very beginning of the game.


Account creation is closed.


The contest starts tomorrow: AI Challenge Fall 2011 - Ants Opens Oct 20 http://ai-contest.com/forum/viewtopic.php?f=3&t=1499


Third time for the challenge, nice going Waterloo!


http://antme.codeplex.com/

Not new; I did a similar thing in C# in high school. Definitely a good way to get into programming.


Soooooo Cool! Never participated in one of these before. Looks like tons of fun.


OK, well I can't; it says account creation is closed.


My challenge back to Google:

The setup:

* A certain person named Prasand is in my Google contacts, along with his phone number.

* Prasand has recently sent email to me.

* Prasand is a reasonably common name in a certain English-speaking country of well over a billion people.

* A common phrase when calling someone is to say "Hi this is <name>" or "Hey this is <name>"

* Usually this comes near the beginning of the call.

* Prasand probably also has a Google account, and Google probably associates his phone number with his account, and thus can look up interesting things about him, such as his name and various words he is likely to use, by using caller ID when he calls into a Google voice number.

* Google knows my first name. Let's call me Natch.

* Speech recognition can be made more accurate if large quantities of data are available with which to build models of how language is used in context.

* Large data sets are available on the internets. Something tells me Google may even have access to large quantities of data already.

* Google even has the capability, if it wants to, to build user-specific language models.

Google, here is your challenge:

Hire an engineering director for your Google Voice team who can manage to figure out how to do the correct transcription of the following five words at the beginning of a phone call: "Hi Natch, this is Prasand."

Hint 1: you should fire whoever did the one you have right now.

Hint 2: less AI, more common sense.


> Large data sets are available on the internets. Something tells me Google may even have access to large quantities of data already.

You don't just need audio data, you also need the correct transcriptions to learn anything from it. This reduces the amount of available data significantly. And producing correct transcriptions is time consuming and expensive.


You mean Prasad.



