The AI Systems of "Left 4 Dead" [pdf] (valvesoftware.com)
277 points by StylifyYourBlog on Apr 24, 2013 | 82 comments



I love this game, one of the last FPSes I still play other than Team Fortress 2... unfortunately, like TF2, it's not very playable unless everyone in the round is amenable to being a team player. The multiplayer deathmatch allows one team to take the role of monsters, who have the backing of the zombie horde to make up for their lack of guns. Playing it provides some interesting insights into how the AI effectively functions: by slowly getting the human players to come unglued, either by targeting stragglers or by letting the best players get overconfident and move ahead of the team. Of course, all deathmatch games have this dynamic, but it's very explicit in L4D, mostly because it's impossible for one human or monster player to singlehandedly own the opposing team the way an ace player can miraculously wipe out a team in Counter-Strike.

edit: The AI-controlled human players are extremely competent, maybe the best bot players yet created. But they do show their limitations in multiplayer... among their weaknesses: they're too willing to sacrifice their healthpacks for an injured teammate, and will almost always try to help an incapacitated player no matter how hopeless his/her cause. Real humans can be total assholes, but in some situations it's best for the team to leave a straggler behind, especially if the enemy has set up an ambush for the rescuers.


I too have played a lot of L4D. What I find interesting about L4D is the way the game changes dramatically as you go up the difficulty levels. On normal difficulty you can just run'n'gun. On expert you'll be knocked down in five hits, so you need to be much more cautious (unless you have amazing reflexes). Manipulating your environment becomes much more important, particularly the use of throwable items to redirect the zombies or stop them from attacking from all directions.

Tying this back to the article, the L4D bot AI, in my opinion, is intended to be just good enough to give you enough experience to play online reasonably competently. Multiplayer is the real game, and perhaps PvP is the real real game (I haven't played it much). Bots are pretty poor at tactics, just bunching up by the player. They also don't use throwables at all.


> "The AI controlled human players is extremely competent... But they do show their limitations"

I have often wished that Valve would make the player-bots more explicitly customizable (particularly because I only play team vs environment modes, and occasional really bad bot decisions make the game harder in a not-fun way.)

There are several settings available through the console, but they can only be tweaked if the server has sv_cheats on. At one point I had a list of 4 or 5 minor changes that made the bots substantially more competent, the main one being reducing their following distance to about a third of what it previously was. My most wished-for change, which I have not been able to find a setting to control, is the health threshold at which AIs will use their healthpacks on themselves or on actual humans.


I haven't touched L4D since before L4D2 came out, and I hardly played L4D2, but what I found was that the further we got from the original launch, the fewer people there were on the servers, and the more dedicated those who remained were. It felt like you had to stay very active or get left behind, such was the tactical/practice element.

So while I liked the team-based nature, I found it actually didn't help longevity - for me, at least. Well, more relevantly, this is also the case for my friends - individually I'm a poor data point as L4D was the last game I took any real interest in at all.


The most inexplicable problem with the AI humans is that they absolutely guzzle adrenaline shots. Is there a needle anywhere around? Your adrenaline junkie teammate will grab it and shoot up immediately regardless of the situation. It's a huge waste of a great item. I initially thought it was just a bug, but as it was never fixed, I guess that's what they're meant to do. It's bizarre.


> like TF2, it's not very playable unless everyone in the round is amenable to being a team player

Amen. TF2 and L4D are two games that I consistently play years after release, which is unlike the other games I have.

Unless you're playing with people you know, both games are equally frustrating. Specifically in TF2, what makes it frustrating is people who go and play support classes when your team already has enough of them in play on the map.


That only matters if you care about winning.

TF2 also has a huge accessibility problem which contributes to an overabundance of support class players, but also makes it a brilliant game. As opposed to the Battlefield series, where seeing the enemy first is most of the battle, it's actually difficult to inflict any damage at all in TF2. For instance, I'm at about the 200 hour mark and when I play demo I can only hit ground-based targets with pipes maybe 20-30% of the time - and that's quite good! For beginners, the only playable frontline classes are pyro and heavy. Spy and sniper are also popular because it's easier to survive when playing them, and maybe sneak a few insta-kills. Medic is easy to play, but can be very boring.

tl;dr: Don't hate the player, hate the game. And don't hate the game, because you'd be hating what makes TF2 such a wonderful game to play. Even at professional levels getting an airshot is a remarkable feat.


I want to bring this up because it's an often-overlooked point about why TF2 is a fun game: it has a reliable and accessible in-game VOIP system.

I don't see this talked about much, but it really makes TF2 shine as a straight-up fun game. It allows players to communicate in any way they see fit, from the casual to the highly formal/competitive. That means players can organize themselves to pull weird shenanigans or play very competitively, using communication to gain an edge. Also, since voice is such a uniquely human attribute, it humanizes other players who would otherwise be anonymous participants.

The end result of this is that you get highly cohesive communities that make the game great!


Getting an airshot is harder at professional levels because by then they have an extremely good grasp of how to airstrafe - without that they'd be lit up instantly. It's actually easier to hit someone midair in general, due to easily predicted trajectories; an easy trick to kill newbie scouts is to wait until they've burned their jumps, then take them out on the downward arc.
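To put numbers on "easily predicted trajectories": leading an airborne target is basically a small ballistic calculation. A rough sketch of the idea (the gravity and projectile-speed numbers are made up for illustration, not actual TF2/Source constants):

    import math

    # Rough sketch of leading an airborne target with a projectile. Gravity,
    # projectile speed, and positions are made-up illustrative numbers, not
    # actual TF2/Source constants.

    GRAVITY = 800.0          # units/s^2, downward
    ROCKET_SPEED = 1100.0    # units/s

    def predict_position(pos, vel, t):
        """Ballistic position of the target after t seconds (no air-strafing)."""
        x, y, z = pos
        vx, vy, vz = vel
        return (x + vx * t, y + vy * t, z + vz * t - 0.5 * GRAVITY * t * t)

    def lead_target(shooter, target_pos, target_vel, steps=100, dt=0.01):
        """Scan forward in time for the moment the rocket and target roughly meet."""
        for i in range(1, steps + 1):
            t = i * dt
            future = predict_position(target_pos, target_vel, t)
            if abs(math.dist(shooter, future) - ROCKET_SPEED * t) < 10.0:
                return future, t    # aim here, impact in ~t seconds
        return None, None

    print(lead_target((0, 0, 0), (500, 0, 200), (-100, 0, 50)))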

Once you have a lot of practice, it's amazing what you can do in TF2 - e.g. chaining 3 rocket jumps then taking out a sniper while still midair.

Better yet, a lot of mechanics like pyro reflect get much more interesting when you're facing other skilled opponents because of how much prediction is involved in the timing.


Valve is (was) very open with their publications. There are more available on their website[1]. I definitely recommend taking a little time to look through them.

[1] http://www.valvesoftware.com/publications.html


Any idea why there hasn't been anything since 2009? Have they stopped publishing/presenting or just not updated the site?



too busy making money off steam.


Adjusting enemy spawns and behavior by a calculated 'emotional intensity' of the player - very nice.
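The loop at the heart of it is conceptually pretty simple. A toy sketch of an intensity-driven director (the phase names, thresholds, and decay rates below are my own guesses, not Valve's actual values):

    import random

    # Toy sketch of an "AI Director"-style intensity loop. Phase names and
    # thresholds are illustrative guesses, not Valve's actual values.

    class Director:
        def __init__(self):
            self.intensity = 0.0      # estimated player stress, 0..100
            self.phase = "BUILD_UP"
            self.relax_timer = 0.0

        def on_event(self, event):
            """Bump intensity when stressful things happen to the survivors."""
            bumps = {"took_damage": 10, "teammate_incapped": 25, "horde_nearby": 5}
            self.intensity = min(100.0, self.intensity + bumps.get(event, 0))

        def update(self, dt):
            self.intensity = max(0.0, self.intensity - 2.0 * dt)   # decay over time

            if self.phase == "BUILD_UP" and self.intensity >= 80:
                self.phase = "PEAK_FADE"            # stop spawning, let the fight resolve
            elif self.phase == "PEAK_FADE" and self.intensity <= 20:
                self.phase, self.relax_timer = "RELAX", 15.0   # quiet period
            elif self.phase == "RELAX":
                self.relax_timer -= dt
                if self.relax_timer <= 0:
                    self.phase = "BUILD_UP"

        def should_spawn_zombies(self):
            return self.phase == "BUILD_UP"

    # quick demo: simulate 30 seconds of play
    d = Director()
    for tick in range(300):
        if d.should_spawn_zombies() and random.random() < 0.2:
            d.on_event("took_damage")
        d.update(0.1)
    print(d.phase, round(d.intensity, 1))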


If a SurvivorBot ever gets far out of place for any reason, it is teleported near the team when no human is looking at it (AI cheating algorithm!!! Really enjoyed that)
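The check is presumably something like "bot is far behind the team AND outside every human's view". A toy version of that logic (distance threshold, FOV, and function names are mine; a real engine would also do a line-of-sight trace against world geometry):

    import math

    # Toy sketch of the "teleport a lagging bot when nobody is looking" trick.
    # The distance threshold and FOV are made-up values.

    MAX_LAG_DISTANCE = 1500.0
    HUMAN_FOV_DEG = 90.0

    def is_visible_to(human_pos, human_forward, point):
        """Crude visibility test: is the point inside the human's horizontal FOV?"""
        dx, dy = point[0] - human_pos[0], point[1] - human_pos[1]
        length = math.hypot(dx, dy) or 1e-6
        dot = (dx * human_forward[0] + dy * human_forward[1]) / length
        return dot >= math.cos(math.radians(HUMAN_FOV_DEG / 2))

    def maybe_teleport_bot(bot_pos, team_center, humans):
        """humans: list of (position, forward_unit_vector) pairs."""
        if math.dist(bot_pos, team_center) < MAX_LAG_DISTANCE:
            return bot_pos      # not lagging, leave it alone
        if any(is_visible_to(p, f, bot_pos) for p, f in humans):
            return bot_pos      # someone would notice the teleport
        return team_center      # nobody is looking: snap the bot back to the team

    # bot left far behind, the only human is facing away from it -> teleport
    print(maybe_teleport_bot((0, 0), (5000, 0), [((5000, 0), (1, 0))]))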


This works as long as the player doesn't notice. Counterexample: the Watson NPC in the Sherlock Holmes Nemesis game, http://youtu.be/13YlEPwOfmk


One of the reasons for this is that SurvivorBots, despite their relatively solid behavior overall, can occasionally get stuck on an obstacle or while trying to decide between divergent priorities (follow the others or grab a powerup? They can get stuck in a "dancing loop" where they shuffle a couple of steps one way and then the other.)

It's important in L4D to require players to pay attention to their teammates, but it's also important to maintain the "fun" aspects of the game. It's one thing if the AI gets behind because they're being attacked and the player is doing a bad job of protecting them; it's entirely different if the AI gets behind because its algorithm sometimes gets hung up (and it's not fun for the player to have to try to solve pathing glitches!)


There is a similar exploit people have used in survival maps. If you are able to get your player physically stuck (usually by moving up and down an angled area which goes below your player height, e.g. a stair rail with a ceiling above) you would teleport to the nearest teammate. http://www.youtube.com/watch?v=m_TzVMFRO-c


The climbing sequence is particularly impressive, but maybe a tad excessive in my opinion. I count six operations, two of which seem too computationally intensive due to their trial-and-error approach. I can see the reasoning behind it if the AI is climbing a physics prop, but surely map data could be retrieved relative to position and save all that ray tracing (if that's the correct term)?


Not sure what you mean by "map data". You can certainly have a list of props that are in the way and have them as a mesh. It's certainly easier if you have only one prop, but you have to account for the fact that they may stack together in any way - and suddenly the geometry configuration becomes non-trivial again.
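And once props can be knocked around and stacked arbitrarily, there's nothing to precompute, so the bot has to probe the world at runtime. A crude sketch of the kind of trial-and-error ledge search the slides describe (trace_ray here is a stand-in for an engine raycast; the step sizes are made up):

    # Crude sketch of runtime ledge detection by repeated traces, in the spirit
    # of the climb-up sequence in the slides. `trace_ray` stands in for an
    # engine raycast; step sizes are made up.

    def find_climbable_ledge(trace_ray, origin, forward, max_height=180, step=18):
        """Probe upward in increments until a forward trace no longer hits a wall,
        then trace down from there to find the ledge surface. Returns a point or None."""
        x, y, z = origin
        fx, fy = forward
        reach = 32  # how far ahead of the bot we probe

        for h in range(step, max_height + 1, step):
            ahead = (x + fx * reach, y + fy * reach, z + h)
            # Is the space at this height clear of the obstacle?
            if trace_ray(start=(x, y, z + h), end=ahead) is None:
                # Clear: trace straight down to find the top surface.
                hit = trace_ray(start=ahead, end=(ahead[0], ahead[1], z))
                if hit is not None:
                    return hit   # (x, y, z) of a standable ledge
        return None

    # Tiny fake world: a single box of height 54 directly in front of the bot.
    def fake_trace(start, end):
        box_top = 54
        if end[2] <= box_top and start[2] > box_top and start[0] == end[0]:
            return (end[0], end[1], box_top)   # downward ray hits the box top
        if end[2] <= box_top and start[2] <= box_top:
            return end                          # low forward ray hits the box side
        return None

    print(find_climbable_ledge(fake_trace, (0, 0, 0), (1, 0)))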


Google's PDF viewer works for reading in the browser (scribd isn't working on this one):

https://docs.google.com/viewer?embedded=true&url=http://...


pdf.js works as well.


scribd is "unable to display this document", and the download button doesn't even work. Just link the original.

http://www.valvesoftware.com/publications/2009/ai_systems_of...


The link is to the original. The scribd link is to the side and is automatically added by HN.

(I never really liked scribd and with pdf.js it's clearly obsolete)


For PDFs at least, scribd seems purely parasitic.


I believe the intention of having the scribd links is to mirror the content if the original source goes down under the HN reader load, which has happened at times.


Or, the intention of having Scribd links is to boost SEO for and awareness of Scribd, thus increasing the value of Y Combinator's equity stake.


This is news.ycombinator.com. I don't begrudge that.


It wouldn't bug me if it didn't decrease the utility of the PDF links.

EDIT: as pointed out below, it doesn't. Belay this comment.


It doesn't. If you don't want to see Scribd then don't click on the "[Scribd]" - the rest of the title takes you straight to the PDF itself.


Well, blow me over. How did I miss that?

Thanks.


IME, HN readers will almost always post either a Google cache link or cache the original content. The scribd thing just slows everybody down.


Scribd links are automatically added to PDF submissions.


I had wondered why everybody else at HN seemed to love that garbage. What a terrible misfeature.


Scribd is a Y Combinator investment.


Yeah, I figured.


Question about the navigation mesh.

I'm assuming that represents an entire open area (otherwise the path optimization makes no sense) and the dark stuff is the obstacles. In that case I'm wondering how they arrive at the actual grids and arrows (slide 12).


Yes, navigation meshes can be created automatically, by a designer, or through a mixture of the two (usually they are first computed and then adjusted by someone). Indeed, automatic and stable generation is essential in the case of big, procedurally generated scenes, and it would in general be a nice achievement for the field. gmaslov provided links that explain some common ways of generating navigation meshes, which on a quick look seemed to me like improved/refined applications of the "flood filling" method.

I just wanted to point out that there is a very interesting way (imho) of generating a 2D/multilayered navigation mesh: computing the Generalized Voronoi Diagram of the obstacles and then annotating the vertices (and thus edges) of the graph with information about the closest obstacles' points. This way, we have both a "backbone graph" and a subdivision (a mesh). Here is a link to the related research from my university: http://www.staff.science.uu.nl/~gerae101/motion_planning/ecm... - check it out if you are interested; there are also a couple of summarizing images and videos. The advantage is that such meshes are smoother and compact, yet they ensure very good coverage of the walkable space.

Disclaimer: I am an MSc student working on the above method for my final thesis (involving GPGPU steps and a true multi-tiled approach).


This link explains fairly well how a navigation mesh may be automatically generated from the level geometry: http://udn.epicgames.com/Three/NavigationMeshReference.html . I believe it's also a good practice for the level designer to examine it manually and make any necessary corrections for sharp corners, narrow corridors, etc.

As for the path, A* search is usually the name of the game for any kind of 2D pathfinding. With the usual Euclidean distance heuristic it always returns the shortest path, but it's possible to use an "inadmissible" heuristic to make it run faster (and produce sub-optimal paths). The arrows shown on the slides are a little baffling; I can't imagine why those four vertically stacked boxes on the right-hand side would create a jagged path, for instance. It may just be exaggerated for effect.
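For anyone who hasn't seen it, the whole algorithm fits in a screenful. A minimal A* sketch on a toy grid with the Euclidean heuristic (the generic textbook version, not L4D's nav-mesh implementation):

    import heapq, math

    # Minimal A* on a toy grid (0 = walkable, 1 = blocked); Euclidean heuristic.

    def astar(grid, start, goal):
        h = lambda a, b: math.dist(a, b)
        open_heap = [(h(start, goal), 0.0, start)]
        came_from, g = {}, {start: 0.0}

        while open_heap:
            _, cost, node = heapq.heappop(open_heap)
            if node == goal:
                path = [node]
                while node in came_from:
                    node = came_from[node]
                    path.append(node)
                return path[::-1]
            x, y = node
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                    new_g = cost + 1
                    if new_g < g.get((nx, ny), float("inf")):
                        g[(nx, ny)] = new_g
                        came_from[(nx, ny)] = node
                        heapq.heappush(open_heap, (new_g + h((nx, ny), goal), new_g, (nx, ny)))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))   # path routed around the wall of 1s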


Yeah, I couldn't make sense of the arrows, but I only know AI from 2D and have zero clue about 3D game development.

Most toy examples I know use an even square grid so I was wondering if I might be missing something there.

Either way thanks for the answer (same goes for the other posters who provided links)


You might be interested in checking out https://code.google.com/p/recastnavigation/ (which, coincidentally, is what UDK/UE3 also uses nowadays (as a reference to the sibling comment))


Navigation meshes are commonly generated from actual collision geometry, i.e. as created by the art and design team.


That's a very interesting read, even though in practice, the ingame AI still feels lackluster. Survivor bots often ignore nearby players attacked by special infected to move (slowly) towards some far away target (another attacked/incapacitated player) and special infected bots could really use some cooperation (as do many human players ...).


I always wished that I could tell the bot teammates "stick together", "fan out" or "help me" via the chat. That would give them an actual sense of social intelligence, and bring them closer to real players (to be 100% realistic they would then have to ask "where r u??" :)).
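Even a dumb keyword match over team chat would cover most of that. A toy sketch (the order names and the issue_order callback are hypothetical, not anything L4D actually exposes):

    # Toy sketch of mapping survivor chat to bot orders. The order names and
    # the `issue_order` callback are hypothetical.

    COMMANDS = {
        "stick together": "FOLLOW_CLOSE",
        "fan out":        "SPREAD_OUT",
        "help me":        "ASSIST_SPEAKER",
        "hold here":      "HOLD_POSITION",
    }

    def parse_chat(message, speaker, issue_order):
        """Scan a chat line for a known phrase and turn it into a bot order."""
        text = message.lower()
        for phrase, order in COMMANDS.items():
            if phrase in text:
                issue_order(order, speaker)
                return order
        return None

    # demo
    log = []
    parse_chat("guys HELP ME, hunter on the roof", "Zoey",
               lambda order, who: log.append((order, who)))
    print(log)   # [('ASSIST_SPEAKER', 'Zoey')]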


When Left 4 Dead first came out, one of the features they were proud of was a radial mouse menu of chat commands. You could make your character say "Heal me!" or "Tank!" or really any of about 27 things. I think it was intended so players could communicate without breaking character . . . which turned out not to be a high priority. I can't remember seeing anyone use it in a live game.

Anyway, I always thought the bots should respond to those commands, and was always surprised when they didn't.

Then again, the experience with the commandable AIs in Half Life was so profoundly miserable, I thought maybe they couldn't get it working reasonably. After all, when you give a command to a human, there are a lot of contextual clues involved in how to interpret it.


Oh man that would be so cool. I suppose if text based adventure games can "understand" meaning within text, there's no reason it couldn't be remodelled into a chat setting.


There are plenty of FPS games with AI that you can control with commands. The Rainbow Six games all have this, and if I'm not mistaken, I believe Counter Strike's bots can be controlled with commands as well.


SOCOM (going back to the PS2) had voice commands for your team via the headset.


Quake III's AI not only receives but also gives commands by chat. If you enter a team full of bots, you can see them talk to each other and "plan" to a certain extent.


"Daemia patrol from quad to bfg for 3 minutes" ;)

http://www.angelfire.com/linux/vmerchand/quake.htm


Hah! Quake 3 and UT99 both were helped immensely by having the ability to command bots--that often would be the difference between a stalemate in a CTF map and a flawless victory.


I never felt like playing this game until now. The AI sounds very interesting to try for myself! Game producers should release such documents when launching the game; it makes it that much more interesting.


It's hard to find a good match these days; people will either kill you or themselves at the beginning of a PvP match.


But then some people will look through the docs, find the exploits, and become the top players. Wouldn't really be fair from a gameplay standpoint, but I agree this was one of the most interesting things I have read in weeks.


I don't think it's all that hard for many students (or others with no life) to play the game enough to find flaws in the AI without releasing technical documentation. You're right though, this is a concern... Perhaps if they release the docs a few weeks delayed?


Great stuff, have to disagree on the "replayability" section though. There were only 2-3 tier 2 weapon spawns on the entire map, the crescendos were pretty much always in the same spot (this was a bit better in L4D2), and so on. After the first 10 times, every map felt exactly the same.


As someone with 3+ years of playing every night, I will have to disagree. The weapon spots are not important... The replayability comes from everything it provides. I have over 1 TB of dem files because I record every single match since the start, and I still enjoy it.


Oh, I don't think the game is not replayable; I had a lot of fun playing it and L4D2. I just don't think it's at all obvious that they accomplished the goals they set out to with the director AI. Hence "replayability section" (directly referring to the PDF).


This is exactly the reason why I don't like playing computer games. The games feel really scripted. Enemy characters that wait until I cross a line before they attack me, etc.

I wish that, instead of investing in graphics, the AI got more attention.


It is not fully scripted. The game has enough random elements that it might result in an unplayably hard scenario, e.g. a key portal blocked by dozens of mobs. Hence the AI must be smart enough to reduce the intensity of the generated content to a sufficiently playable level.

This is a challenge with any randomly generated game content, and the AI Director solution seems to be a fair one.
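And the check itself doesn't have to be sophisticated; something as simple as "cull excess mobs piled on top of a key objective" already avoids the worst cases. A toy sketch (thresholds and names are made up):

    # Toy sketch of sanity-checking randomly generated spawns so a key
    # objective never ends up hopelessly blocked. Thresholds are made up.

    MAX_MOBS_NEAR_OBJECTIVE = 8
    OBJECTIVE_RADIUS = 600.0

    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    def throttle_spawns(spawns, objective_pos):
        """Drop excess mobs clustered around the objective, keeping the rest."""
        near = [s for s in spawns if dist2(s, objective_pos) < OBJECTIVE_RADIUS ** 2]
        far = [s for s in spawns if dist2(s, objective_pos) >= OBJECTIVE_RADIUS ** 2]
        return far + near[:MAX_MOBS_NEAR_OBJECTIVE]

    # demo: 20 mobs randomly dumped right on top of the safe-room door
    spawns = [(i * 10.0, 0.0) for i in range(20)]
    print(len(throttle_spawns(spawns, (0.0, 0.0))))   # 8 - the rest get culled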


In light of the recent post on intelligence being analogous to the maximization of entropy available for future choices, I think things will get a lot more interesting in the future. Skynet may not be far off... (Even if computers can't feel, they might be able to put up a good fight.) That said, I don't know how I would feel about game characters stalking me on Twitter for a few days before showing up unannounced at my in-game job... Sometimes it's nice to know it's a game. (To say nothing of the autonomous military applications.)


play multiplayer?


I only ever play multiplayer (mostly Halo 4 these days) - any time I try the campaign I find the AI rather boring and predictable. I don't know if the Halo series is bad for this or not.


Yeah, it's not anything amazing from my experience. I've heard the FEAR games had decent AI, as well as Far Cry.

If we're not talking FPS then I've been impressed by the AI in XCOM. God damn those bastards are hard work.


I have heard that about Far Cry too, but even as a fan of all of the FC games, I really can't see why people say that. I'm not saying the AI is bad or subpar, just that I'm ignorant!!! To be fair though, I don't play that many games, so I don't really have much to compare to.


in the past few months i started to learn a little bit about AI, including algorithms in planning (like PDDL) and logic engines. the more i learned the more i saw what the gaming companies, and that entire community, are doing and how they are some of the biggest consumers of these technologies. i have to say that it's impressed me, and it seems like the gaming companies are on the forefront of multiple areas of technologies, not just graphics and specialized CPUs.

now if only i could hire from that talent pool :)


Woah, I've been doing the same thing for the past few months because I took an AI branch for my last year in college. Unfortunately I feel they've been teaching us mostly outdated and useless algorithms (with all due respect).

I wish I was taught more about neural networks, machine learning, data mining techniques... and less about classic (and failed) AI algorithms. The most I learnt about these techniques was from teaching myself, even for the last year.

Perhaps we attend the same classes?

[sorry for the OT guys, no PMs in Hacker News!]


no, i'm not a student.

presentations and papers like this are what i'm referring to:

https://skatgame.net/mburo/icaps2010-pg/ICAPS-PG.2010.1.bart...

http://www.plg.inf.uc3m.es/icaps-pg2007/papers/Symbolic%20Ex...

http://www.slideshare.net/StavrosVassos/the-simplefps-planni...

http://abotea.rsise.anu.edu.au/data/offline-htn-planning.pdf

http://www.guerrilla-games.com/presentations/VUA07_Verweij_H...

etc etc etc. turns out a lot of these ideas people thought were failures are getting a new lease on life in games, and they're working pretty well.

curious what classic AI algorithms you think are failing. from where i sit i'm seeing them get a new lease on life. maybe it's just the natural cycle of things: they get hot and invested in, they then fail to live up to their assumed potential, they fall out of favor for a while, and when everyone who was familiar with them - and their shortcomings - is gone a new generation finds it again and starts the cycle over.


That's hardly AI. It's just algorithms: parametrized, user-programmed algorithms that use a probability distribution to randomize the environment.

EDIT: Well, if this is AI in game development, then so be it. But it's still not AI in the generic sense, IMHO. To downvoters: I didn't contest the fact that they work. I do enjoy these games a lot and I praise the programmers who make such games work.


Ah, yes. The perennial curse of AI: as soon as it works, it's not called AI anymore ;-)


Reminds me of this passage from Douglas Hofstadter's GEB:

Loocas the Thinker comes across an unknown object--a woman. ... "Behold! I can look upon her face, which is something she cannot do--therefore women can never be like me!"

And thus he proves man's superiority over women, much to his relief ... The woman argues back: "Yes, you can see my face, which is something I can't do--but I can see your face, which is something you can't do! We're even."

"I'm sorry, you're deluded if you think you can see my face. What you women do is not the same as what we men do--it is, as I have already pointed out, of an inferior caliber, and does not deserve to be called by the same name. You may call it 'womanseeing'. Now the fact that you can womansee my face is of no import, because the situation is not symmetric. You see?"

"I woman-see," womanreplies the woman, and womanwalks away . . .


If I remember correctly, people used to criticize IBM's Deep Blue for not being "real" AI because it just basically brute-forced a ton of possible play paths in the chess game. Someone then said: "Saying that Deep Blue doesn't really think is like saying an airplane doesn't really fly because it doesn't flap its wings."


That's what "AI" usually means in the context of game development.


Exactly, AI in games solves a different problem than "traditional" AI. It's merely about creating the illusion of intelligence. Not unlike what happens in rigid body dynamics (approximated, not actual physics sim), graphics (plenty of tricks here, e.g. displacement maps vs actual geometry), animation (e.g. blending canned animations).


One could argue that "traditional" AI is also about creating the "illusion" of intelligence. That is what the Turing test is testing -- whether the illusion is sufficiently good to fool the tester into thinking that they're communicating with something intelligent.

The difference is in the degree to which the illusion has to hold up. Game "AI" exists in a limited, artificial world, so it can get away with "simple" algorithms without any learning elements. General AI needs to operate in the real world (even if constrained to a particular domain), where the "rules" are too numerous and complex to program in advance; thus learning is an essential part of "traditional" AI.


Thanks for the clarifications. I couldn't have said it better.

In the real world, you can't live on illusions; an AI really needs to perform (think Google's self-driving cars). Learning (and all the other useful ingredients of AI) is essential.


Thanks, I edited my comment. I am more in the field of general AI (and the rest of the pure definition of AI), so normally what I see is the intelligence of a programmer put into very clever algorithms. I don't see any learning effect, or persistence effect. In fact, once you shut down the game, it starts all over. I'd love a game that learns from the player (like how to climb over an obstacle). Or can anyone point me to such a game?


If a system can learn, isn't that Machine Learning (which itself is a subfield of AI)?

Searching algorithms have always been part of the AI field.


IIRC, there was a survival horror game that claimed to learn from and adapt to the player in order to provide a scarier experience (playing to the player's phobias and such). Not sure if that qualifies.


F.E.A.R.'s AI was supposed to work this way, I do believe.

http://web.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf


Search algorithms (especially ones using heuristics such as A*) are squarely in the AI category (planning a sequence of actions).

From a broader perspective, the entire question of "should these bots seem humanlike or act rationally" is a typical AI question (think of the four squares of AI definitions in R&N). In this example it is specifically of interest to find a path that is not the most rational but rather the most humanlike (well, technically I suppose you could define a performance measure for humanlikeness and optimize that, and it would be rational after all).


The intro of http://openlibrary.org/works/OL16095031W/Artificial_Intellig... discusses the variety of definitions.




