Hacker News
AI Programmer’s Bookshelf (media.mit.edu)
310 points by geospeck on Dec 14, 2016 | hide | past | favorite | 26 comments

This is about game AIs, which are definitely interesting. But for the most part they overlap very little with what most people think of as AI.

In undergrad (mid 2000s) I partially specialized in both computer graphics and machine learning, and took a video game class to try combining these skills. Two big memories from that time stuck with me. The first is when a dev from Civilization 1 visited our class. He spoke about the "AI" of Civilization, which he revealed to be a simple random number generator. He told us people often thought it was far more complex, but that's really all there was to it.

The second memory is when we actually built a 3D engine + game from scratch. Every week we had to add a new feature, so one week I obviously took on the AI. My partner and I were doing a soccer-style game, and I had grand visions of implementing a sophisticated AI using what I'd been learning in my other classes, like SVMs and neural networks. I started doing research, and was shocked to learn that no video games at the time did any of this. I learned that computers at the time weren't really capable of running several versions of an ML algorithm simultaneously (one per agent) while dealing with everything else - and more importantly that there was little need. I ended up spending a couple days building out a basic state machine, and it worked. Even the professor thought we had added incredible intelligence to the players, who in reality were just following ~10 rules like "if player in defense mode: move towards the midpoint between the ball and the goal while keeping some distance from the other players".
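That last rule is simple enough to sketch. Here's a minimal, hypothetical version (the function and parameter names are mine, not from the original game) of what such a defender rule might look like:

```python
# Sketch of a rule-based defender: head for the midpoint between the
# ball and the goal, while nudging away from teammates that crowd you.
# All names and the spacing value are illustrative assumptions.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def defender_target(ball, goal, teammates, spacing=2.0):
    tx, ty = midpoint(ball, goal)
    # Push the target point away from any teammate closer than `spacing`.
    for px, py in teammates:
        dx, dy = tx - px, ty - py
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < spacing:
            tx += dx / dist * (spacing - dist)
            ty += dy / dist * (spacing - dist)
    return (tx, ty)
```

About ten rules of this shape, arbitrated by a state machine, is the whole "AI" being described.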

My main take-away from that class is that there was little need for actual AI in video games, and that I should pursue a different career path :/

I had the same outlook; however, that changed with a game I currently have in development.

And for those reading this who are interested in game AI and want to integrate true machine learning into their characters, I hope this will help keep you inspired.

So, like you, I had grandiose ideas and visions about an adaptive, machine-learning approach to controlling characters in my platform-based fighting game. They would learn from previous mistakes and improve themselves, creating a truly interactive AI, one that could challenge the player beyond just memorizing pre-programmed state machine patterns or being given inhuman reaction times. Then I ran into the same issues: the game must run at 60 fps, and constant learning can't be done within that budget. So I implemented a basic AI, but I knew every way it would act; nothing about it was amazing.

Until one day I went back and looked at my AI approach and realized I could still use machine learning, just with a smaller neural network and a different learning algorithm. So I implemented a combination of reinforcement learning and evolutionary learning and let the agents train for a day. Then something amazing happened, and it's what I imagine a parent feels when their kid learns to do something: it saved itself. Starting out, the AI would just spam buttons and usually end up jumping off the platform and killing itself, or stray away from the edge and never touch the control stick. But this time, it got knocked off and it saved itself.
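The "small network + evolutionary learning" idea can be sketched generically. This is not the commenter's actual code, just a minimal hill-climbing evolution loop over a tiny linear policy, with all names and the fitness function assumed:

```python
import random

# Evolutionary training sketch: mutate the best network's weights,
# keep the mutant only if it scores higher on some fitness function
# (e.g. time survived without falling off the stage).

def make_net(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def act(net, obs):
    # Single linear layer; a real agent would add hidden layers and a
    # nonlinearity, but the evolution loop below is unchanged either way.
    return [sum(w * x for w, x in zip(row, obs)) for row in net]

def mutate(net, rate=0.1):
    return [[w + random.gauss(0, rate) for w in row] for row in net]

def evolve(fitness, net, generations=50):
    best, best_score = net, fitness(net)
    for _ in range(generations):
        child = mutate(best)
        score = fitness(child)
        if score > best_score:
            best, best_score = child, score
    return best
```

Because only improvements are kept, the returned network never scores worse than the starting one, which is why a day of unattended training can only help.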

It was an amazing feeling, I never taught it to do that, but I gave it the ability to learn to do that, and that was extremely liberating.

So I encourage people not to give up on ML-based AI for video games. I know DeepMind recently teamed up with Blizzard to make a StarCraft II AI, and that looks awesome.

It's true that game AI is surprisingly underdeveloped compared to what you see in academia on other topics... but it is not always that basic.

What you describe in your football example is often called a "behavior tree", and there are ways to learn those using machine learning (usually by reinforcement learning, using neural networks and/or genetic programming). There was a popular video showing this applied to Mario [1]... and a paper published 7 years before that video doing the same thing [2] (more context about this in [3]). I remember seeing something on Gamasutra saying that similar methods are used in some AAA games.
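For readers who haven't met behavior trees: the structure is just composable success/failure nodes. A toy sketch (the node set and leaf functions here are illustrative, not from any engine):

```python
# Minimal behavior-tree nodes. A Selector tries children in order and
# succeeds on the first success; a Sequence requires every child to
# succeed. Leaves wrap plain functions returning True/False.

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        return any(child.tick(state) for child in self.children)

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        return all(child.tick(state) for child in self.children)

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)
```

A guard like `Selector(Sequence(can_attack, attack), patrol)` falls back to patrolling whenever the attack branch fails, and the learning approaches in [1] and [2] effectively search over trees of this shape.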

[1] https://www.youtube.com/watch?v=qv6UVOQ0F44
[2] http://julian.togelius.com/Togelius2009Super.pdf
[3] http://togelius.blogspot.co.uk/2016/04/the-differences-betwe...

Once you see the "man behind the curtain," game AI loses some of its magic. I'm sure a lot of those devs wanted to make something really sophisticated, but like you said there's a balance between function and performance and for the majority of players the faked version works just as well.

I made a euchre game several years ago and put in an AI as basic as the one you mention for Civ: all it did was pick a random card from its hand that followed suit. If it was leading the trick, the pick was completely random. I got praise from people about how the AI surprised them with feints etc. to beat them when they thought they were going to get points. Some I told; others I let believe the magic :)
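The whole strategy fits in a few lines. A sketch (hypothetical card representation, and ignoring euchre's left-bower wrinkle, where the off-color jack counts as trump):

```python
import random

# "Random legal card" bot: follow suit if you can, otherwise (or when
# leading the trick) play anything at all.

def pick_card(hand, led_suit=None):
    # hand is a list of (rank, suit) tuples, e.g. ("K", "S").
    if led_suit is not None:
        following = [card for card in hand if card[1] == led_suit]
        if following:
            return random.choice(following)
    return random.choice(hand)
```

Everything players read as feinting comes from the noise, not from any plan.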

The base AI that came with the card gaming engine that we developed agents for during university was called "thumb"...it always played the cards closest to an imaginary thumb of the hand holding the cards. Some people played 10+ games against it thinking it was really impressive :D

More sophisticated rules based card game agents can actually look very good unless you play a huge number of hands against them (depends on the game structure though, trick taking games work well, poker agents can usually be understood quickly even though an agent that simply plays very aggressively usually looks strong for a bit).

I don't work in game AI, but my hunch is that part of the reason for any dissonance between state-of-the-art 'AI proper' and state-of-the-art game AI would be the abstraction level of interface permitted by the respective environments each participates in.

A lot of the big, breathtaking, (recent[0]) breakthroughs in 'AI proper' have been at the really really high level of abstraction of "how do I parse sense data". Questions like "What kinds of things are in this picture?", "How similar is this sentence to this other sentence?", or "What kind of thing is this thing?" are really asking, "How do I take this untyped, unstructured information and turn it in to something in my ontology?". This is a problem in the real world because photons and soundwaves don't come with type signatures.

In a game engine, however, you have powerful hooks directly into the physics of the world you're operating in. Beyond that, you've got hooks into the other agents that you're operating alongside[1]. With a higher-resolution system like this, the work required for seemingly-intelligent behavior becomes less. A lot of that work is offloaded into the higher granularity of the abstraction you're working under.

To make a completely unfounded hypothetical and invert the trope: I'm not sure we'd be as smart as we are if we could always read each other's minds and read and write directly to reality.


[0] To be fair, a lot of the big push from the GOFAI work in the 60s+ was more in line with the kinds of things useful to game AIs; logic-oriented/planning systems.

[1] I know that one way to get more 'compelling' (and less uncanny) AIs in games and in academia is to limit their ability to transgress these boundaries that our intelligence operates under.

You're judging the state of the art in video game AI based on your undergraduate classes?

I used to work in video game AI having studied AI academically and was also disappointed that a) there isn't a lot of horse power to spend on AI so you keep it simple and make it fast and b) the game designers and producers don't want characters that learn, they want predictable well defined behaviours. They can then compose these to make fun gameplay for the player.

But that was in the 90s. At the point I left, we were using A* and various optimizations on it; real-time planning for multiple moving agents is non-trivial. People were using planning algorithms to make the AIs behave in a more goal-driven manner. I myself implemented a state machine based on Rodney Brooks's subsumption architecture. Low-level basic survival behaviours (run from grenade) would override the character's higher-level goal (patrol route).
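The subsumption-style arbitration described above, where low-level survival overrides high-level goals, can be sketched as a priority-ordered list of (condition, action) layers. The behaviors and world-state keys here are illustrative assumptions:

```python
# Subsumption-style arbitration sketch: scan layers from highest to
# lowest priority and run the first behavior whose condition fires.

def subsumption_step(world, behaviors):
    for condition, action in behaviors:  # highest priority first
        if condition(world):
            return action(world)
    return None

behaviors = [
    (lambda w: w.get("grenade_nearby"), lambda w: "flee"),    # survival
    (lambda w: w.get("enemy_visible"),  lambda w: "engage"),  # combat
    (lambda w: True,                    lambda w: "patrol"),  # default goal
]
```

With both a grenade and an enemy present, the survival layer wins, which is exactly the "run from grenade overrides patrol route" effect.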

Ultimately I got bored, but all the tech used in the 90s has continued to evolve, and I'm sure there's some pretty interesting AI going on in games like GTA 5 and Assassin's Creed, where you have large numbers of people and vehicles interacting.

No doubt machine learning as an offline process to teach characters to drive like players, and so on, will be a fun and productive area to work on soon.

> He told us people often thought it was far more complex, but that's really all there was to it.

You should check out Vehicles: Experiments in Synthetic Psychology, by Valentino Braitenberg. It describes an "ecosystem" of vehicles powered by solar collectors linked to their motors via simple neural networks. One of the big takeaways from it is that, even with incredibly simple networks, you can get complicated behavior that appears to demonstrate intention, when there's nothing of the sort.
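The book's simplest vehicles are easy to sketch: two light sensors wired to two motors. With the wires crossed, stronger light on one side drives the opposite motor harder, so the vehicle curves toward the light, which observers readily read as "seeking". (Sign conventions below are my assumptions for a differential-drive robot.)

```python
# Braitenberg-style vehicle sketch: sensor readings map directly to
# motor speeds. turn > 0 means the vehicle curves left (right motor
# running faster than the left).

def vehicle_step(left_sensor, right_sensor, crossed=True):
    if crossed:
        left_motor, right_motor = right_sensor, left_sensor
    else:
        left_motor, right_motor = left_sensor, right_sensor
    speed = (left_motor + right_motor) / 2
    turn = right_motor - left_motor
    return speed, turn
```

Crossed wiring turns the vehicle toward the light; uncrossed wiring makes it flee. Two wires, and people see fear and aggression.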

I had the same experience. I was creating a multi-user game-like experience and I had to come up with a client-side human "simulator" that was meant to be used if human players couldn't be found (what people would call a "bot" in most games). I tried several different iterations of what I perceived to be complex human-like decision-making rules. In the end what I ended up using was just very simple behavior mostly based on random number generation. Users could not distinguish that from real people. My previous complex rules-based solutions felt very robotic in comparison.

Still, I consider classic game AI to be a real type of AI. It's just a different thing from something like machine learning. Despite my example (and yours), most of the time when creating game AI you have to distill the rationale a player would have into a set of rules, and then build it from the ground up as a rudimentary intelligence that can make decisions. Sure, there are "shallower" games, but also games that distinguish themselves when good AI is in place. I keep thinking of the AI in Quake 1 bots, and of a game like S.T.A.L.K.E.R. with its good opponents. It was refreshing.

I'd even say there's some beauty in it, especially when you get to some emergent behavior that you did not expect.

I think Machine Learning is all the rage today mostly because you can attack a problem using brute force, without having to understand what drives a behavior. It may be more human-like, but to me it's just a separate branch.

I've been in the game industry for a long time and I've felt the same way. This had the opposite effect on me though, I wasn't very interested in AI because game AI wasn't very interesting. Now I'm getting more into AI largely due to reading HN for years, but also because of interesting applications in computer graphics.

There are games from around the turn of the millennium that used machine learning techniques, specifically Black & White and Republic: The Revolution, both of which were worked on by Demis Hassabis, who would later co-found DeepMind and work on AlphaGo (https://en.wikipedia.org/wiki/Demis_Hassabis). I also think some racing games have used neural networks, and some fighting games use hidden Markov models in modes where the AI adapts to player strategies.

I currently believe the game industry will adopt machine learning techniques in computer graphics, animation, procedural generation, game balance, simulation (vfx), and offline tools before agent behavior. Although, if NLP and speech recognition get good enough, I can see that stuff getting used pretty widely in certain types of video games.

I've always found the tension between ivory tower AI and real application in games fascinating.

To me, games seem like a great place to test theory in a mode that can be unsupervised and self-grading, while also representing real-world constraints.

I'm glad to see DeepMind and others make some advances.

> actual AI

Ex-game developer here. I was also surprised at what "AI" looked like when I entered the game industry. It's easy to dismiss what game devs do as not "actual" AI, when really it's just two definitions of the term.

In the academic community today, "AI" implies learning, often unassisted. That's a fine definition, but it's also a recent one.

When "AI" was originally coined, it simply referred to software that did things people used to think required some intelligence. Stuff like OCR and playing chess.

As our expectations of what computers can do grew, we kept redefining "intelligence" to mean fewer and fewer things. There's a weird feedback loop here, because our informal definition of "intelligence" is often "whatever only people can do". As soon as computers could beat us at chess, "beating a human at chess" got kicked out of the "intelligence" set. That meant chess software no longer gets called AI.

In games, "AI" just means "the code that controls what lifelike in-game entities do". In the fiction of the game world, these entities have real intelligence. The game's implementation is an artificial simulation of that intelligence. Thus -- "artificial intelligence".

In practice, it often lines up with the historical definition of what kind of code was called "artificial intelligence". Many early AI researchers were in fact using games as their testbed.

It also ends up being fairly simple. Humans are so eager to anthropomorphize that it doesn't take that much simulated intelligence to get us to see a simulated entity as acting "alive". Simpler AI is also easier to implement, easier to debug, faster to execute, and much simpler to tune.

Learning algorithms are rare in games because they more often than not implement an anti-goal. A game designer's job is to give the player a carefully balanced experience that rides the knife edge between too easy (boring) and too hard (frustrating).

It is not the game's goal to make the entities as smart as possible. They would just kick the player's ass and that's no fun.

An AI that learned on its own is very hard to tune and would likely make the game no fun.

I do think there's room in games for learning AIs. But what I think would make sense is:

1. The fitness function the AI should train for is fun, not beating the player. Instead of rewarding AIs that win, reward ones that the player says were fun to play against.

2. You let the developers train the AI, then you bake those parameters in and don't do learning on the end user's machine.
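The two points above can be sketched as a single selection step. The data shapes here are hypothetical (candidate parameter sets scored by playtesters' fun ratings, not win rates), with the best set frozen and shipped:

```python
# "Train for fun, then bake it in": rank candidate AI parameter sets
# by mean playtester fun rating and ship the winner, frozen, so no
# learning happens on the end user's machine.

def pick_shippable_params(candidates, fun_ratings):
    # candidates: {candidate_id: params}
    # fun_ratings: {candidate_id: mean playtester rating}
    best_id = max(fun_ratings, key=fun_ratings.get)
    return candidates[best_id]
```

The key design choice is the fitness signal: a rating collected from players, rather than anything the AI can optimize by simply winning.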

This seems really out of date. I put together a list of ML books not too long ago


I am half-joking here, but since they have a "related reading" section with more "philosophical" works, they really should include "Gödel, Escher, Bach"

The Mat Buckland books, especially Programming Game AI by Example, are very good for simple, practical techniques.

Missing: Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp, by Peter Norvig

Many of the books seem older than 10 years (as does the general exams list [1]). Are the seemingly vast advancements of recent years still based on the same principles? (Speaking as a complete ML noob.)

[1]: http://alumni.media.mit.edu/~jorkin//generals/general_exams....

As others have said, game AI doesn't draw from "real" or academic AI or Machine Learning.

That said, the references for AI and machine learning are quite old. Particularly the machine learning parts. The only ML texts on the list are Mitchell and Duda and Hart. The former is extremely outdated at this point. That's not Mitchell's fault -- it was a nice book for learning the basics of machine learning in 1997 when it was published, but all the developments that have made ML a hot subject have occurred since then and in areas that the book simply didn't predict coming. Duda and Hart, similarly, was the bible for certain subfields of ML for a long time, but it won't tell you what everyone's been doing in the past 15 years when ML exploded onto the wider scene.

If I were to add one book, it would be Kevin Murphy's excellent text (https://www.amazon.com/Machine-Learning-Probabilistic-Perspe...). There's no one book that will give you a complete picture of the field, but his is I think the closest available and does a solid job of preparing you with enough fundamentals that you can extend your knowledge from there on your own.

ML isn't really used in game programming. These books focus on more practical techniques: state machines, pathfinding, planning, scripting, etc.
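Of those, pathfinding is the one with a canonical algorithm, the A* search mentioned upthread. A minimal grid version (4-connected grid and Manhattan-distance heuristic are simplifying assumptions; real games run this on navmeshes):

```python
import heapq

# Minimal A* on a grid: expand nodes in order of (cost so far +
# heuristic estimate to goal), and rebuild the path once the goal pops.

def astar(start, goal, walls, width, height):
    def h(p):  # admissible heuristic: Manhattan distance
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in walls:
                continue
            new_cost = cost[cur] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = cur
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None  # goal unreachable
```

The optimizations the books cover (hierarchical pathfinding, path smoothing, navmesh clustering) all sit on top of this core loop.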

Exactly. Game AI is not so much about making AI smart, but more about making it fun.

I haven't played it, but there is a game called Smart Kobold that tries to up the AI difficulty as far as it can reasonably go. In some sense, I think it satisfies your criteria. The punishing difficulty is what makes it rewarding, and it accomplishes it through AI rather than making creatures harder to kill or do more damage.

I do agree with your statement in the vast majority of cases. Ultimately, fun trumps everything else when it comes to games (though what counts as fun is subjective). Even Dark Souls, which many (most) gamers consider difficult, is not difficult because of unbeatable AI. It's full of patterns (indeed, that's how you get better at the game: you recognize and respond correctly to those patterns).

This book list is about game AI programming. Game AI and ML don't overlap much.

In particular, I would like to see something on OpenAI Universe

Steve Grand's other book "Growing up with Lucy: How to Build an Android in Twenty Easy Steps" should also be included in this list.


When this was discussed previously, I made an in-depth comment on it there, which I'll reference again here: https://news.ycombinator.com/item?id=8851408

What's the state of the art in path planning? e.g. as used by Google Maps? Perhaps some subset of those techniques could be used in games.
