So I had set up a test scenario with an agent that could travel, buy stuff at shops, use axes, cut trees, and so forth. For this test run, I set the goal of procuring timber.
What I expected the agent to do:
1. Go to shop
2. Buy axe
3. Go to forest
4. Cut tree
5. Drop axe
6. Pickup timber.
So I ran my GOAP test, and saw what the agent came up with:
1. Go to forest.
When I saw that step 1, I was disappointed, because the agent clearly should not be going to the forest when he has no axe yet.
But then, as a proud parent of my AI, I saw what the agent had actually planned:
1. Go to forest
2. Pick up firewood
3. Go to store
4. Sell firewood
5. Buy axe
6. Go to forest
7. Cut tree
8. Drop axe
9. Pickup timber
Hooray! I had simply forgotten to give my agent money. But GOAP made him smart enough to figure out how to get some on his own.
I had of course equipped the agent with a 'sell' action, and I had placed firewood in the forest, which I had promptly forgotten about when writing the test scenario.
I was so proud of that smart little agent!
Cool stuff, that GOAP!
And this reminds me: I wrote that code 12 years ago, and I still want to incorporate it (with a switch) into the xscreensaver code, but never got around to it...
Plus, the ghosts are faster than you when you are eating dots.
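Incidentally, the timber scenario above maps nicely onto a tiny forward planner. Here is a minimal sketch, with made-up action names, facts, and costs (none of this is the original code), that recovers the nine-step firewood-selling plan with a uniform-cost search:

```python
import heapq

# Hypothetical reconstruction of the scenario above. Each action has
# preconditions that must hold, effects that overwrite facts, and a cost.
ACTIONS = {
    #                  preconditions                             effects                              cost
    "go_to_forest":    ({},                                      {"loc": "forest"},                    2),
    "go_to_store":     ({},                                      {"loc": "store"},                     2),
    "pickup_firewood": ({"loc": "forest", "firewood": False},    {"firewood": True},                   1),
    "sell_firewood":   ({"loc": "store", "firewood": True},      {"firewood": False, "money": True},   1),
    "buy_axe":         ({"loc": "store", "money": True},         {"money": False, "axe": True},        1),
    "cut_tree":        ({"loc": "forest", "axe": True},          {"tree_cut": True},                   3),
    "drop_axe":        ({"axe": True},                           {"axe": False},                       1),
    "pickup_timber":   ({"loc": "forest", "tree_cut": True, "axe": False}, {"timber": True},           1),
}

def plan(start, goal, actions):
    """Uniform-cost search over world states (real GOAP typically uses A*)."""
    frontier = [(0, 0, start, [])]          # (cost, tiebreak, state, steps)
    best = {frozenset(start.items()): 0}
    tie = 0
    while frontier:
        cost, _, state, steps = heapq.heappop(frontier)
        if all(state[k] == v for k, v in goal.items()):
            return steps, cost
        for name, (pre, eff, c) in actions.items():
            if all(state[k] == v for k, v in pre.items()):
                nxt = {**state, **eff}
                key = frozenset(nxt.items())
                if cost + c < best.get(key, float("inf")):
                    best[key] = cost + c
                    tie += 1
                    heapq.heappush(frontier, (cost + c, tie, nxt, steps + [name]))
    return None, None

START = {"loc": "start", "money": False, "axe": False,
         "firewood": False, "timber": False, "tree_cut": False}
steps, cost = plan(START, {"timber": True}, ACTIONS)
print(steps)  # the nine-step firewood-selling plan falls out of the search
```

Note that the agent is never told to gather firewood; it is just the only action chain that produces money, so the search finds it.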
I tried writing a text adventure game generator once that would combine puzzle templates you gave it like "object X you need is in a tree and you need a climbable thing Y to get it" and "object X is in a vending machine and you need a coin to get it" to create basic chains of puzzles. Without more constraints it would do silly things like create a vending machine containing ladders, vending machines in trees, vending machines containing other vending machines and trees, and vending machines that gave you a coin for a coin.
All sorts of amazing, um, features have been found in that game.
1. Obtain a vessel capable of holding liquids.
2. Drink from the vessel until it has one unit left.
3. Empty the vessel onto the ground.
4. Refill the vessel to its maximum capacity from the puddle you just made.
5. Drink from the puddle.
So you can also quench your thirst by swimming in a river, then drinking the coating of water stuck to your own ear.
So the cat thing was exactly the same problem. An infinite number of cats could walk through one puddle of spilled alcohol, and then ingest enough alcohol to get poisoned by licking each body part. Lick first toe of right foot? That's a shot. Lick second toe of right foot? Another shot. Lick third toe of right foot? We're having a party now!
On the one hand, it seems like there should have been some additional effort there. But on the other, the game actually simulates that when you dunk something in water, it gets wet, and then stays wet until the water evaporates. I don't know of any other game that not only does that, but also handles dusts and powders, and things falling from the sky due to weather, and even getting diseases by inhaling vaporized droplets of a contaminated liquid.
You would think that with all that incredible detail, the pathfinding AI would not kill the framerate just from two dozen pastured sheep all trying to find their next clump of grass to eat.
The pathfinding is pretty inefficient because it's entirely invented by the guy who wrote it, who doesn't know anything about pathfinding algorithms. Also, it's entirely single-threaded.
Dwarf Fortress is kind of the ur-example of passionately solving the wrong problem.
It's a fantastic game, and it's heartbreaking to me how much more fantastic it could be if the author adjusted his priorities a bit.
Make it procedurally generated and allow people to share the best ones they find.
I think so too, but couldn't work out how. It needs something extra on top to give you a reason to work through the puzzles, I think. Games like Sokoban, where you could generate random puzzles, don't need much of a reason behind them, but all good text adventures are tied together with a plot of some sort (unless you can think of counterexamples). I binge-read text adventure walkthroughs for puzzle ideas, and the same generic puzzles felt surprisingly common (e.g. grate + crowbar, key + door, vampire + garlic, coin + vending machine, tree + ladder, spade + grave), but a game composed only of these would probably be dull.
Actually "virtual reality world created by a computer that doesn't understand how our world works" is the plot of the VR game Job Simulator I think.
I've been thinking along these lines recently too, and maybe the game could be to allow others to experience the fun you're having in building these rules and watching the outcomes? Letting them author the rules and watch the outcomes. Or letting them manage and upgrade the bots, sorta like an overlord who doesn't actually chop wood etc.
This is kind of a nebulous idea (and definitely not new), but spawns from the fun I've been having the past few weeks in writing a bunch of simple rules for cities to attack, ally, trade with each other. Watching kingdoms evolve, alliances form and break, funny situations like a rock shortage eventually triggering a massive war among all kingdoms, was fun. It might not be as fun for a passive observer, but as the author of the rules, the feedback loop of making changes and watching the emergent outcomes was pretty neat (kinda like programming, heh). But it is a scary path to go down, if the intention is to market it as a game, because it doesn't look or sound like any "real" game.
On a different note, Dwarf Fortress is rife with such emergent outcomes. On another different note, Robocode is quite fun: you author bots and rules and watch how the bots fare against each other.
I love emergent AI game situations like that! Even that feeling of seeing enemies in Doom fighting each other hasn't been surpassed by that much in modern games I feel. Spelunky had a few moments like that though.
> Watching kingdoms evolve, alliances form and break, funny situations like a rock shortage eventually triggering a massive war among all kingdoms, was fun. It might not be as fun for a passive observer, but as the author of the rules, the feedback loop of making changes and watching the emergent outcomes was pretty neat (kinda like programming, heh).
Sounds fun! Any more situations like that?
I tried another prototype where it was a text adventure in a single room containing NPCs, where certain NPCs liked, loved, or hated each other (which limited their actions), there were objects characters could obtain (like a gun to threaten people into doing actions for you, or money to bribe people), characters had attributes (like intimidating, gullible, jealous), and as the player you could ask characters to do things for you, and they could do the same. It had the same issue of nonsensical things happening, which were unintentionally funny. Huge state-space explosion, though.
So I gave an NPC the goal to make a sandwich from bread and cheese that two of his friends were holding. Instead of just asking his friends for the ingredients, he obtains a gun, threatens an NPC he hates into stealing the ingredients from his friends, then threatens him into making the sandwich and handing it over. A valid but inefficient solution in the search tree, basically. You'd probably want some common-sense logic that most characters want to achieve their goal in a way that doesn't impact their social standing. Same thing as before, though: instead of making the NPCs normal humans, you could come up with a story about why they don't act normal, to sidestep the common-sense issue.
You kind of need to get a feel or knack for navigating the so-called "edge of chaos" between order -- too many restrictions/rules = predictable patterns = boring -- and disorder -- pick everything random and it becomes the conceptual equivalent of the "rainbow puke" mush of random RGB pixels that is always infinitely different but always looks like the exact same flavour of static. Similarly if you pick random notes from a few octaves to make a tune -- in theory you should come across all the famous earworms like Axel F, Smoke on the water, Final Countdown, etc etc, in practice you get a different slice of the same warbly nothing 99.9999% of the time.
That is, in some sense, where the Infinite Monkey theorem seemingly breaks down, while at the same time it is also exactly where it derives its poignancy. It's a very powerful pulling force between two opposites.
Either way, read all about procedural anything generation for games on this great website: http://www.gamesbyangelina.org/
One very interesting idea, that has become quite feasible with modern computing power, is to use constraint solvers. Sure they are NP-complete but in practice they do a pretty good job. Imagine if you have a loose framework for puzzles, or challenges (maybe rooms with monsters and items) and it generates a beautiful wide variety of novel game play. Except, just like in real life, you can't always win. But you want it to, just like in a "hero's story", be just winnable, all the time. And you want your player to be able to trust that it is (it doesn't sound like a lot of fun bashing your head against a random-generated puzzle that turns out to be logically impossible). Constraint solvers can actually do this! There's a few articles on the Games By Angelina website.
I think that a constraint solver could very easily generate random Sokoban-levels that are guaranteed to be solvable. Now you need some (relative) measure of "difficulty" to sort them in order to create that feeling of progression and challenge. Which I also think is doable (amount of backtracking, "mental stack" required, etc).
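As a sketch of the generate-and-test flavour of this (a plain brute-force search rather than a real constraint solver; the grid, names, and difficulty proxy here are all made up), here is a one-box Sokoban where nodes expanded serves as a crude difficulty measure:

```python
import random
from collections import deque

GRID = ["#####",
        "#...#",
        "#...#",
        "#...#",
        "#####"]

def solve(grid, player, box, target):
    """BFS over (player, box) states; returns (moves, nodes_expanded) or None."""
    def floor(p):
        r, c = p
        return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != "#"
    seen = {(player, box)}
    queue = deque([((player, box), 0)])
    nodes = 0
    while queue:
        (pl, bx), depth = queue.popleft()
        nodes += 1
        if bx == target:
            return depth, nodes
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            np = (pl[0] + dr, pl[1] + dc)
            if not floor(np):
                continue
            nb = bx
            if np == bx:                       # walking into the box pushes it
                nb = (bx[0] + dr, bx[1] + dc)
                if not floor(nb):
                    continue
            if (np, nb) not in seen:
                seen.add((np, nb))
                queue.append(((np, nb), depth + 1))
    return None                                # unsolvable, e.g. box stuck in a corner

# generate-and-test: keep only solvable random levels, rank by search effort
rng = random.Random(0)
cells = [(r, c) for r in range(1, 4) for c in range(1, 4)]
kept = []
while len(kept) < 5:
    player, box, target = rng.sample(cells, 3)
    result = solve(GRID, player, box, target)
    if result:
        kept.append((result[1], player, box, target))  # nodes expanded first
kept.sort()
```

A real constraint solver would construct solvable levels directly instead of rejection-sampling, but the "rank by search effort" idea carries over either way.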
Haha, exactly, it was a really funny prototype. It took you by surprise how much common-sense knowledge you kept having to feed into it to stop it doing bizarre things, e.g. knowledge like: vending machines only contain small things, vending machines don't dispense the thing you put into them, large things aren't found in trees, ladders + trees + vending machines are large, vending machines are found on the ground... Even with all that it would put vending machines in the middle of a forest, trees in an office, and stack vending machines on top of each other, so you would have to keep going if you wanted it to have some grounding in reality.
Also, if you're not careful, you'd need constraint solving to make sure adding a new object into the world didn't short-circuit a previous puzzle, e.g. if you had an elaborate puzzle to obtain a coin but then added a second coin for another puzzle, you could use the latter to skip the former.
> The kind of game like Bureaucracy that puts the player into some borderline absurdist rage that soon changes into laughter.
Yeah, I was wondering if you could treat the absurdity as a feature somehow, instead of doing the hard work of trying to fight it. The major roadblock I found is that it's hard to make the player care about what's going on if it's just a bunch of generic objects and puzzles. I've barely seen any advancement in AI in terms of interesting computer-generated plots and characters, which is what you could combine the above with.
Instead of better gun-fighting AI and pathfinding, I'm really looking forward to games with complex AI characters, where the gameplay involves you persuading them, befriending them, manipulating them, interrogating them, etc., and where the game has a plot that adapts to what's going on.
Games are all about shooting stuff and physical object puzzles right now. I'd like to see game equivalents of movies in genres like drama and thriller, revolving around complex character AI interactions.
For those unfamiliar: You're trying to impersonate someone, so you use a convoluted trap to get some cat fur, which you use as a mustache. But the person you're impersonating doesn't have a mustache, so you draw one onto his ID with a marker.
That's easily the funniest part about that puzzle!
I'm pretty sure you can find all of these in Japan.
I'm curious about "8. Drop axe". Did your system have the constraint that the agent could only hold one thing at a time?
Or was the goal to carry the maximum amount of timber, in which case it's completely sensible to leave the axe in the woods and pick it up again when you return (assuming no other constraints, like competing agents or weather).
pickup-item was defined as:
PRECONDITIONS: HANDS_EMPTY==1, NEAR_ITEM==1, HAVE_ITEM==0
POSTCONDITIONS: HAVE_ITEM:=1, HANDS_EMPTY:=0
Which makes it so that agents carry one thing at a time, indeed.
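A sketch (not the original code; the helper names are made up) of how such an action could be represented and checked:

```python
# pickup-item as precondition/postcondition dicts, as described above
PICKUP_ITEM = {
    "pre":  {"HANDS_EMPTY": 1, "NEAR_ITEM": 1, "HAVE_ITEM": 0},
    "post": {"HAVE_ITEM": 1, "HANDS_EMPTY": 0},
}

def applicable(state, action):
    # every precondition fact must match the current world state
    return all(state.get(k) == v for k, v in action["pre"].items())

def apply_action(state, action):
    # postconditions overwrite the relevant facts
    return {**state, **action["post"]}

s = apply_action({"HANDS_EMPTY": 1, "NEAR_ITEM": 1, "HAVE_ITEM": 0}, PICKUP_ITEM)
# after picking up, the hands are full, so a second pickup is not applicable
```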
You can use it in different ways. Like you describe: have some agents prefer certain actions, different from the preferences of other agents.
The way I use it is to adapt to the environment.
For instance, I make walking over to, let's say, an apple less costly if the closest apple is near.
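That kind of environment-dependent action cost can be sketched like so (the function name and the Manhattan-distance metric are my own illustration, not the original code):

```python
# Walking over to an apple costs roughly the distance to the nearest one,
# so the planner naturally prefers the action when an apple is close by.
def goto_apple_cost(agent_pos, apples):
    return min(abs(ar - agent_pos[0]) + abs(ac - agent_pos[1])
               for ar, ac in apples)
```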
Look up 'Children of Orc'.
The player can give assignments to the other Orcs, and the action plans for those assignments are displayed in-game.
The Orcs are smart enough to find cheap plans.
When tasked with procuring stone, for example, they will figure out the quickest way.
Sometimes scavenging for discarded stone may be cheaper, other times, with pickaxe in hand near a quarry, quarrying is the way to go of course. They adapt. But whatever plan they come up with, it is communicated to the player in an info-window.
The search space for the plans is ridiculously large, so testing pre-conditions needs to be insanely fast. With AVX, you get to test up to 256 boolean preconditions in one go!
To give you an idea about search space: the Orcs have a repertoire of 115 possible actions.
Stringing together a plan of, let's say, 12 actions (some plans are longer) gives 115^12 possible plans. That is a scary big number: 5,350,250,105,473,711,181,640,625.
A* helps guiding the search through this space, but can only do so much.
So every A* operation needs to be as fast as possible.
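The single-operation precondition test being described can be sketched with plain integers standing in for the wide SIMD registers (the flag names are borrowed from the pickup-item example earlier in the thread; the rest is my own illustration):

```python
# Pack each boolean fact into one bit; then one AND plus one compare tests
# all of them at once. With AVX you'd do the same over 256-bit registers.
HANDS_EMPTY, NEAR_ITEM, HAVE_ITEM = 1 << 0, 1 << 1, 1 << 2

def preconditions_met(state, care, values):
    # true iff every bit we care about matches the required value
    return (state & care) == (values & care)

# pickup-item: HANDS_EMPTY==1, NEAR_ITEM==1, HAVE_ITEM==0
care = HANDS_EMPTY | NEAR_ITEM | HAVE_ITEM
values = HANDS_EMPTY | NEAR_ITEM
```

With the world state stored this way, applying postconditions is also just bit operations (clear the changed bits, OR in the new values), which keeps each A* expansion cheap.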
Much of the automated planning literature is about finding the balance between computational tractability and the expressivity of the domain & goals. There have been many attempts to fuse logic and imperative programming, each with its own history: hierarchical task networks, for example, which are like a high-level procedural language with non-deterministic method dispatch. Games like Killzone 2 for the PS3 also had an interesting follow-up use of automated planning via hierarchical task networks, to do squad tactics: http://aigamedev.com/open/coverage/htn-planning-discussion/
Another thread is Golog as a variant of Prolog that uses Reiter's Situation Calculus to formulate complex goals and domain constraints in a relatively consistent syntax and applied towards "Cognitive Robotics": http://www.cs.toronto.edu/cogrobo/main/systems/ .. it's unfortunate a lot of that work seems stalled since Reiter passed away.
With all the recent hype around connectionist, deep learning AI, etc., there still is a lot of important work going on in the symbolic logic side of AI, particularly in this space of working in dynamical domains. Fusing the two will be really interesting.
If you read this article and want to learn more, click that link.
https://github.com/stolk/GPGOAP has an implementation of GOAP in C; the code is really clear and easy to understand.
First, remember the first principle of game AI: the goal is not to defeat the player but to entertain them. A simple way to say it is that an AI that is too good is not what you want.
Secondly, for most games, the fun is simply not in the AI component of the game.
If you take those two points together (1. good game AI is a tricky concept, 2. that's not where the fun is anyway), you can see why not a lot of people care about game AI.
Does the fact that people tend to spend more time on the multiplayer component of a game rather than the single player play any role in this 'not caring about AI'?
I mean, I really did not enjoy playing against AI, because it is quite stupid in most cases, and the only way to make singleplayer harder does not seem to be making the AI smarter, but rather applying some 'modifiers' against the player.
For example in L4D, the damage you take from a hit by a common zombie on normal difficulty is 10HP (out of 100HP), but on expert the same hit is worth 20HP.
Other things to make it harder would be to spawn in more NPCs and just keep the player busier, or to speed up the events in the game (like Tetris).
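The flat-modifier approach from the L4D example amounts to a one-line multiplier. A sketch (only the normal 10HP and expert 20HP values come from the comment above; the other tiers and the function name are made up):

```python
# Difficulty scales damage taken, not enemy intelligence.
def hit_damage(base=10, difficulty="normal"):
    multipliers = {"easy": 0.5, "normal": 1.0, "advanced": 1.5, "expert": 2.0}
    return base * multipliers[difficulty]
```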
But in all games with a decent multiplayer, it is just _way_ more fun to play against other human beings rather than shooting a bot. Because it actually poses a challenge.
EDIT: typo fix
There's more to it than that for me. No matter how good the AI is, shooting at AI feels empty and dull compared to shooting at actual people. Not only are you assured cunning, but also someone who hates your guts and lets you know in chat! I suspect many others feel that way too, deprecating singleplayer in favor of multiplayer from the get-go.
>Making the AI "way too fucking smart" at insane/hard difficulty and then dumbing it down seems way, way, way better than hitpoint modifiers.
It sure does. Have you ever played Civilization? The AI is always dumb; the only way to make the game harder is to make the game fundamentally unfairer. In this case, making the AI smarter is a much harder problem than in a shooter game though.
Wow, this is precisely what turns me off multiplayer. I don't want to chat with someone who hates my guts, or tells me I'm a newbie, or mocks me for not playing perfectly. I remember CounterStrike 1.6 used to be toxic like this: populated by teenagers with too much time on their hands, outplaying you at every turn, and constantly insulting you.
No, thanks. Singleplayer doesn't belittle me, and is more immersive.
Being punished for being new or not that good or not having as much time as the teen players (who are often the largest percentage of the game) is not my idea of fun. I remember I played an FPS a few years ago (won't say which one) and it felt like I was being punished for trying to get into the game. So I uninstalled it. As far as I can tell, the game is more or less dead now. By punishing new players, the fans killed the game. Yay for them. Instead, I moved on to some great single player games which treated me well and were much more fun for me to play.
On Left 4 Dead (1 and 2) I was one of those people "punishing" new players, in a way. There was this rule that we would kick anyone who had less than 500 hours in the game, or less than 1k if every one of the friends we played with had more than 1k hours.
We were not insulting to the new guys in any way, and usually just asked them to leave. But if they would not leave, we would kick them. To be honest, though, if they had stayed in the game, it would not have been fun for any of us: for us it would have made the game too easy, and they would just have quit after a few minutes of getting dominated anyway.
When you are new to a game, I think it makes more sense to play with friends who don't mind putting up with you being new, or to find a way to play with other new people. When I got kicked from a game like Payday, I did not mind, because I know that I am a noob, and if I end up playing against people who are far above my level, it is fun for neither of us.
Sorry that you had that experience though, if you stick with some of the multiplayer games - through the horrible community sometimes - it actually can be quite fun. But my advice would be to find some dedicated people to play with, they really make any game more fun.
I've also had games where I think the game's design actually greatly influenced how much new players were punished, for example by giving players who've played longer much better equipment. I've experienced that too, and it was extremely not fun. That particular game also isn't active anymore, as far as I know. So it's not just the players. My gaming time is too limited these days to play something toxic, be it the players or the game itself.
> But my advice would be to find some dedicated people to play with
I'd love to, but after work and other obligations, time is short and syncing up with my game-playing friends is hard. Most of them also play different games to what I enjoy so it's hard. The best I've managed is to get a friend to play dark souls 3 with me maybe one night a month... :(
Single-player and multiplayer online co-op for me.
The new Battlefield game was probably the first where I have only played the multiplayer (and lots of it).
It could be true. Making them people-like might get close to that mark, though. When I played multiplayer on COD Ghosts, I thought I was playing against actual people, since they were acting like players. Then I asked my brother which company he chose for Internet. He said he didn't have Internet, and that I was playing against bots. One of the few times I was speechless over an AI's performance.
Most pub games on CoD are too easy, and thus you might as well be shooting bots. I would be truly impressed if the bots could match games in clan matches etc.
Of course, the point would be to have AI good enough for the average player, so that is pretty neat!
You also did not have to suffer from kids shouting on the voicechat and all the bad community parts :-)
Lmao. You got me there. I didn't consider that. It does seem like more than that, though, as the bots did things like camping, soloing, taking revenge, or moving in careful groups. The CoD-player knockoff might include behaviors like teabagging; it did something goofy, but I can't remember if it was that or something else. Once he said it was bots, I quickly found their weaknesses and started killing them en masse.
"You also did not have to suffer from kids shouting on the voicechat and all the bad community parts :-)"
You might be semi-joking but this is a real market. Lots of people got off Live or at least its voice features because of this. We're even talking on an invite-only, low-noise forum for similar reasons. So, AI or game developers should keep that in mind when making tradeoffs in marketing.
Around the same time my friends and I started having families, most of us stopped gaming online because it was hard to coordinate all of us having time to play together, and playing with random people online was (often) a bad experience as we could not play 'on their level'.
A game with an actual good AI that feels like real players (without the annoyance of them) would be something I could see myself spend some time on.
In principle, sure. However, taking a "way too fucking smart" AI and dumbing it down believably isn't necessarily that much easier than going the other way, taking a dumb AI and smarting it up.
Problem is you can't just slap a "dumbassness" dial onto a smart AI in a way that appears natural.
Possibly the simplest attempt would be to take a "way too fucking smart" AI and represent the dial as the probability that the AI takes a random action instead of the way-too-fucking-smartest move. That gives you a very nice, smooth, continuous dial that correlates monotonically with difficulty level.
But if you were to play against it (by which I mean you, cause I suck at videogaming myself), you'd probably figure out soon enough that it's a strictly artificial handicap providing the difficulty, in a very similar manner to changing the monsters' HP or speed. Also, it's not a lot of fun if, close to the hardest difficulty, you only win because you luck out and the monster performs the most stupid move and trips itself or something.
Such a dial doesn't represent cleverness. I suppose you could attempt something a bit less artificial, like limiting the search-tree depth for the planning algorithm. The question becomes how granular that dial would be (search trees usually aren't that deep) and which settings correspond to which difficulty levels; you'd need to playtest this. Also, the question remains to what extent this artificial limitation would appear obvious.
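The random-action dial discussed above fits in a few lines; a sketch, with placeholder names that aren't from any real engine:

```python
import random

# With probability epsilon, play a random legal move instead of the best one.
# epsilon = 0.0 is the full-strength AI; epsilon = 1.0 is pure noise.
def choose_move(state, legal_moves, best_move, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.choice(legal_moves)
    return best_move(state, legal_moves)

smartest = lambda state, moves: max(moves)   # stand-in for the real evaluator
```

The criticism above still applies: the dial is smooth, but near the hard end the occasional blunder is glaring, which is exactly why it reads as an artificial handicap.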
2001 had this figured out. HAL purposely makes dud moves in chess so that the humans have a chance of winning.
Therefore, I've always enjoyed single-player games more. I fully agree that I do not want the AI to beat me every time; that's just frustrating. I want to be entertained with a good challenge that I can ultimately defeat. Surely there is an interesting game design challenge in this field?
Between those three and their proceedings or conference videos, it's really easy to stay on top of developments in the industry. For an example, check out the AI Summit videos on the GDC Vault - many of them are free.
Planning approaches were a pretty new thing in games 10-15 years ago, but since then there's been a lot of work on exploring them, and people understand their benefits and limitations much better these days. For an example, there's a great study of the use of planning across many games by Eric Jacopin at GDC 2015 AI summit.
But AI in games does have a bit of an identity issue at times, in that the obvious application of game AI (an AI opponent) doesn't represent the broader picture (FSMs appearing in every character controller and interactive element, planners employed in some forms of procedural content generation). With machine learning being the hot thing now, some AI devs are looking for angles to use it, which tends to push them towards backend data work that isn't too far removed from what other web and mobile companies are doing.
Also, nobody wants to run a google-sized deep learning network or what have you in order to train a mere game AI.
Things have gotten better over the last decade or so, with game studios letting developers publish research papers or slide decks or even entire software stacks as open source. Sometimes studios pay to send developers out to give talks and do recruiting. A decade ago, it would have been a hard sell to even get permission to buy your own ticket to a developer conference at most studios. The last AAA studio I worked for had similar policies - only a few people at the studio ever went to conferences, and we all had to pay out-of-pocket. That studio's policies have changed and they now let employees give talks. In comparison, the silicon valley firms I worked for had better wages and encouraged us to both attend and give talks. I think some of the improvements in games industry culture here have probably been triggered as a result of competition from SV startups and open source.
When it comes to AI there are specific challenges: It's very hard to repurpose one game's AI for solving another game's challenges. It's very difficult to build truly general AI frameworks. Good game AI depends on knowledge of game systems, content, and mechanics - how each ability works, how the environments are laid out, the structure of level objectives, etc. All of these things can end up changing during development, and in a multiplayer game they may vary from match to match. As a result, even if you have a bunch of knowledge from other titles and you can reuse source code from the best titles in the market, you may still fail to build good AI for your game on your first few tries. It's that difficult.
General purpose machine learning frameworks have helped here though, and people using off-the-shelf engines like Unreal can take advantage of all the existing tools there to start building AI quicker. So that's nice.
For example, I remember when you turned the difficulty up to 11 on the AI from the first Black Ops game, the enemies just turned into aimbots with a half second or so delay before firing (made watching the kill cam pretty funny). On the other hand, it's not feasible to hand your AI a depth field (which is already not super cheap to produce) and have them use real computer vision methods on it. Also it would probably be too strong a nerf, and now you'd have a very expensive and laughably inept AI.
The original Halo:CE had great enemy AI too, though I think it was more 'scenarios' being programmed in. What made the campaign fun, aside from the big levels, was that the enemy encounters were different every time.
Especially on the Legendary difficulty. To this day, the Halo series has the best set pieces and is overall the best single player experience of any FPS save for Half-Life
Unlike a lot of newer games, where it seems higher difficulty just means more enemy health and damage.
I do agree with the author about the lack of progress in FPS enemy AI. The recent releases from the Call of Duty franchise are nice games, but the AI enemy combatants look dumb next to their FEAR counterparts. Why hasn't FEAR's AI strategy been adopted by other games/studios? The article does not mention anything about this.
I believe online-play preempted advancements in game AI.
People will play competitively online if they want a challenge. Single player mode in FPS games these days seems less geared towards being a challenge and more towards being cinematic.
I would argue STALKER has the best AI in any FPS.
I really got the impression that a million little actions were happening around me even if I was not there to observe them.
It really keeps the game fresh and exciting.
Haven't thought about that in a long time.
The STALKER series was my all time favourite shooter series, and I'm sad that STALKER 2 never saw the light of day :(
PS, a friendly note in case you didn't know: people typically don't sign comments on HN (as the guidelines say, your comment is already "signed" with your username)
When asked about STALKER he said that it is a possibility, provided GSC can make a successful (and profitable) go of it again, giving them the resources to make a true STALKER sequel.
On the plus side Grigorovich resisted all attempts to take over the STALKER IP, I'm convinced he wants to do it right, if it is going to be done at all.
On the negative side, though, most of the original STALKER devs have joined other companies in the time between GSC first shutting down and reopening.
I follow AI much more than I follow gaming. I definitely find it curious that enemy AI hasn't made a lot of progress when game-playing in a different form is such a focus of deep learning.
2. Quality control: DL is a black box, which is bad for the playing experience for obvious reasons. And even if you're just using pre-trained models, the lack of control goes directly against what both designers and players typically want (predictable fun).
3. Finally, DL hasn't made THAT much progress here, actually. There have been serious efforts to apply DL to modern games (e.g. DeepMind with Starcraft), but what you've probably seen is "just" DL applied to some finite-state, 100% deterministic, complete-information, zero-sum games. A far cry from complex dynamic environments.
Which is not to say it cannot be done. Just that the incentives (both business and technological) are aligned differently -- too little bang for a lot of buck.
Come on, you could have decent AI back in the day that pinned you down and made you run for cover, then had you at that cling-gling oh-shit moment when a perfectly timed grenade landed at your feet just as you were starting to take a breath†. Today we're stuck with scripted sequences and playing whack-a-mole with sniper rifles.
† only HL1, Halo and FEAR seemed to manage that.
Especially back in the day!
It's more of an art. Applies to graphics, AI, audio...
I remember a discussion on gamedev.net once where exactly this was brought up. It's very hard, or even impossible, to tune an ANN to behave how the designers want it to. Simpler systems may not be as technically intelligent, but their behaviour can be tuned, encouraged, scripted, etc. to provide the experience that the designers wish.
Beyond that, I remember they also talked about how it simply wasn't worth it, because game AI isn't about being smart; it's about being fun, interesting, and "looking smart" (i.e. looking humanlike, human flaws and all). Machine learning is simply solving a different set of problems than what games have.
We did some of this for racing games, which really have pretty easy AI compared with character games or strategy. An example: take ghost runs from real players (essentially splines from real players, good and bad), but adjust the runtime play to the current vehicles while trying to run that line. That way, runs appear more human and probably can't be replicated by pure AI as well.
For racing there is also the typical rubber banding AI which tunes to the runtime play.
Most AI in games really comes down to just scripts and action recipes: zone-based events/actions, path-based actions/events, follow/ranged/staged reactions, global actions/events, AI directors that can tune the game to the player or difficulty dynamically, etc.
Left 4 Dead probably has the best AI director out there for tuning some great co-op games. It does med drops, weapons and enemy appearances all based on the current state and skill of the players. Music also changes per player based on their situation and the intensity of that situation.
The Director, sometimes referred to as the AI Director, or simply as AID is the artificial intelligence of Left 4 Dead that features a dynamic system for game dramatics, pacing, and difficulty.
Instead of set spawn points for enemies, the Director places enemies in varying positions and numbers based upon each player's current situation, status, skill, and location, creating a new experience for each play-through. The Director also creates mood and tension with emotional cues such as visual effects, dynamic music and character communication. Moreover, the Director is responsible for spawning additional health, ammo, weapons, and Special Infected, like the Witch or the Tank.
It should be noted that there is another Director in the game, which controls the music on a per-player scale, called the Music Director.
Most FPS games, with the exception of titles like Alien: Isolation, simply haven't needed anything complex. Essentially, more complex AI with the same freedom wouldn't be noticed by the player in most FPS games. Given more freedom, a more complex AI suffers from being less comprehensible, as the player is less aware or unaware of the actions it took to do something clever. For example, a clever enemy that flanks the player is experientially identical to a scripted event that spawns dumb enemies on the player's flank. Further, the advantage of the second technique is not only that the AI requires less work but that the entire scenario can be carefully designed.
Which is not to say a more intelligent AI wouldn't make for a fun FPS game it's just people haven't been making those games.
I suppose for some players and some games it makes sense to train for "good" - but even with chess, is it really fun to play an AI you cannot realistically beat, ever?
The same stuff that made critics laugh at enemies not seeing you when standing on a 0.5 square meter shadow in the middle of a well-lit room because you're technically "hidden in darkness" is what is now "solved" by removing the mechanics altogether in favor of safe and cool looking scripted sequences.
Basically, this: https://twitter.com/robotduck/status/759529875992698880
For skilled gamers (which I am most definitely not) I can see how this 'cheat' is annoying, but for me, who isn't the best at precision movement in 3D games, I found it helped me immensely during these types of sections to simply turn off the sound.
We might wish that the folks at EA or Activision make shiny cool AI (and I'm sure many of them there want to!) but in reality that doesn't really do a whole lot to benefit their paying customers.
Besides - "AI" in games is really about how to create stage pieces that simulate more complex behavior. Like all software development, it's mostly smoke and mirrors, and it's all about how smart you can set that up to give the user the impression of something magical.
E.g. an enemy coming out of cover at a different position than when they last popped out is cool, but working out that coming out of cover is just a bad idea and waiting in ambush would be more realistic and smarter, but lead to stalemates.
Heck, you hardly ever see even vaguely decent morale implementations ("hey you just effortlessly killed 80% of my unit, so I'll, um, charge you!!!").
Again, in EverQuest some enemies would run away and get help. In World of Warcraft they just mindlessly attack until dead. EverQuest's AIs (lame as they were) were simply too deadly for a truly mainstream MMO.
Charging cavalry would be stupid. Soldiers knew no horse would challenge a bristling shield wall.
The monster would run then find bigger monsters and all of them would come back to kill you. Often killing every low-level player in the area.
For a very incomplete example, you have sentry style mobs which run to activate an alarm and "bring help". You have mobs which run in fear and aggro any additional ones they path too near to.
In general most developers have noted that players hated this type of gameplay because it meant that one idiot could ruin an evening for 40+ other players in the dungeon because the 'train' would then clobber all the other players on the way back to their individual spawn points.
WoW also generally doesn't have zone lines that you can escape out of. The fleeing mechanic isn't something that's all that challenging to advanced players who know how to stun and slow. It actually makes encounters easier because the monsters stop doing damage at 20% health or whatever. Contemporary games have other ways of adding challenge.
That's not to say I don't enjoy playing them, but I do sort of miss the days when games actually had deep single player gameplay.
The AI was a joke, though, but that now seems to add to the charm. It goes really well with the campy voice acting and over-the-top characters.
It's a game I'll keep recommending over and over again. The sheer amount of player choice, customization, and environmental openness and interactivity is tough to beat, even today.
>The AI was a joke
I don't understand how I should take your comment here? Are you agreeing that other than half life, there are no good single player AIs?
Once CoD:UO (and 2, and so on) came out, they began killing off the great historical single player experience they had.
No idea why I'm getting downvoted. There are literally only a handful of first-person shooters that are story-driven, compared to the great majority that aren't: Quake, Doom, Duke Nukem, Sin, Wolfenstein, etc.
Saying that you miss the "old days" when first person shooters had stories makes no sense other than stirring some misguided Southpark-esque "'member when..." nostalgia.
I mean, if "FPS" means "a first-person game where you often shoot things", then Deus Ex, Bioshock, Half-Life, Thief, Fallout 3/4, Skyrim, etc., can all qualify. Hell, throw in Dungeon Master, and maybe even Portal and Minecraft. Of course one can argue where to draw the line based on how much "shooting" takes place.
But people often say "FPS" to mean specifically "games like Quake and whatnot". Under that definition, shallow gameplay is part of what makes a game an FPS.
That seems to be unfair to Quake - there's a fair amount of depth and strategy involved, mostly with the hogging of powerups. I thought Quake 3 was just a game about two guys bumblebee dancing while trying to shoot the other, until my mind was expanded by a top Quake 3 player explaining his thought processes in the below video - the vast bulk of the talk is about map control, not reflexes.
An FPS has to be a shooter that's in first-person view, but does that make every game where you shoot something with a first-person camera view an FPS? What if the game has other elements? Which definition wins out? Is Fallout more RPG, or is it more FPS? Or is it a turn-based strategy game? Most people would say it's an RPG that happens to have guns and is played in first-person mode and part of the game mechanics is reminiscent of turn-based games.
When you say "FPS", it's a pretty strict definition, because if it's an RPG that happens to be played in first-person with a gun, it's going to be called an RPG first. Maybe RPG-FPS. But FPS really means "run around with a gun and your main interaction with the world is shooting it". There is an entire world of games that do exactly that, and we call them FPS games. Borderlands is an FPS with hints of MMO, dungeon crawler, and RPG, but since the main interaction with the world is running around with a gun and shooting things, it's an FPS. I don't think that's a bad thing.
(The point being, it doesn't mean much to say that few FPSes have deep stories, because if a game has a deep story then people tend not to consider it an FPS.)
You can still have deep games that are Quake-like, Half-Life being the canonical example. Deus Ex can be played, like Fallout, in third-person so I'd say it's disqualified.
Why is HN turning into reddit with random downvotes and no discussion? Must be the weird gaming crowd.
Again, you're talking about the definition, not the thing. Half-life is very Quake-like in its mechanics (camera and controls), and not very Quake-like in its gameplay (exploration, puzzles, etc).
As such, saying something like "FPSes aren't deep" is more about terms than games. Whether you agree with it depends on whether you take "FPS" to mean Quake-like mechanics or Quake-like gameplay.
(Incidentally Deus Ex is first person. The newest ones have a hide-behind-cover feature that temporarily shows you in 3rd-person.)
We're talking about the original Deus Ex which is first-person only.
The most interesting part of that game is Rapture itself; the mechanics and general gunplay just sucked, honestly.
Look at the old Rainbow Six games against the new ones, same for Ghost Recon. Hidden and Dangerous was a great game too.
Those games are actually hard, and have gameplay that isn't just "watch explosions, shoot some enemies, get shot, sit behind wall to recover HP".
Just look at Rainbow Six 3 against Rainbow Six: Vegas 2, the second one was about as linear as you could make a game, the first one gave you a lot of options in how you would achieve a mission.
If you're saying that these games are shallow, I will fight you irl.
The whole genre of linear single-player FPSes is sort of dead. Lots of mixed-genre games that tie RPG elements in with big open worlds and character customization are doing quite well, though, and could benefit from better AI.
Even with a friend though we can't reliably beat the top level AI in Ghost Recon Wildlands so it can't be that bad.
I personally like the interactive-movie feeling, the scripted events and the scenes. I never enjoyed CoD in multiplayer, which I find too hectic. The Battlefield series, with its more slow-paced and strategic gameplay, suited me better in multiplayer. However, I generally don't enjoy competing in MP that much anymore - too little time and interest to train enough to be competitive with the younger players.
Linear single player FPS is definitely not dead, although multiplayer is the current focus. As a quick example, DOOM (2016) has both linear single player and multiplayer sides, but the single player side was much better received.
I suspect that there's an element of self-selection going on here: there are lots of people who play the single player, but an online player can't see them, because everyone you meet online is, by definition, someone who plays online.
There could be ten times that many playing it for the single player and they'd remain completely invisible.
I did, when I actually bothered to play CoD (the first Modern Warfare was probably the last one I enjoyed) and couldn't stand the multiplayer.
When I had been used to games like Quake, Unreal and Battlefield the multiplayer in CoD always seemed so half-baked.
The genre seems well and thriving.
AI is often at its best when it does something stupid that is understandable. For example, in Halo when you kill a bunch of the big guys, sometimes the little guys drop their guns and run away. This makes them very vulnerable, but it is fun. A smarter AI (one that would win more) would only pop out when you aren't facing them, or if cornered just shoot back very accurately from two points. But getting hit from behind isn't much fun, and getting shot at head on from two directions is also not much fun.
Good AI is usually a mix of good game design (and art) as well as good engineering.
For example, AI can micro manage way better than you can, it can do a billion APMs. It has complete situational awareness at all times and never gets distracted by real-life. In an RTS where scouting is key, it could fan out with as many drones as it can produce and gather way more information than a human player could.
You need to constrain your AI so that it's not impossible to defeat. For example, limit how many actions it can perform in a given situation, or make it stubbornly cling to some losing strategy for a while even when it could easily flip to the most optimal one in an instant.
Watch some StarCraft/StarCraft2 bots, it seems like an easy challenge, but they can't beat a good human player when they have to deal with the fog of war.
PS: Multi-drone scouting tanks your economy and becomes a really bad idea. You want to send out one scout that can gather useful information, not die, and not significantly hurt your economy. To the point where some very high-level players don't always scout early.
Tic-tac-toe, checkers, chess, even Go have all fallen to computers.
As for scouting, the ability of an AI to endlessly harass and endure minimal damage because of precise unit movement becomes a problem. A human can't finesse that as well and still have attention left over for other, more urgent priorities.
It's probably the case that at a strategic level the AIs generally suck, but on a highly situational level they have an advantage that needs to be kept in check.
Remember, at the meta level you need to have just enough army to survive an all-in, while maximizing resource collection and research. Further, if you don't use a range of strategies, people will just pick the counter to your game style. Do you build static defense vs harassment? What do you do for vision? And everything has a cost in time or resources.
Further, things like army composition, baiting, high ground, choke points etc. make combat very tactical which bots have issues with.
When they released an API for Brood War, an AI competition popped up shortly thereafter, and I expect that there will be one announced shortly for SC2.
Infinite APM AIs in starcraft still basically suck.
Their strategy/high-level planning is so terrible that perfect/unfair micro doesn't save them.
Honestly, the AI in most games is downright terrible to start with, but for those where it's actually competent, you need to nerf it a bit or it becomes overwhelming.
AIs in real-time strategy games all suck, because the problems there are very, very hard.
Even with blatantly unfair advantages and cheating the AIs still suck.
No such competent AI exists for real time strategy games. The "decision tree" is a billion times more complex than something like Go.
They can be given the same kind of information a player has, mimicking a player's knowledge and perceptions, rather than cheating by giving them perfect knowledge of the world state and then making them "forget" it or act deliberately stupid while having it (of course, you may still need to make them stupid sometimes for the sake of fun).
If you're trying to make a bot that plays the game with only the information the player has then that's cool and interesting. But if you are designing an AI as part of the game then you're making them not consider their perfect knowledge by adding restrictions.
If you're playing a poker game, then of course the AI knows what's in your hand. Telling it to purposely play suboptimally and telling it not to consider the player's hand are, to me, equally artificial, although one might lead to more enjoyment.
I felt like the tactics used by every enemy in the game basically involved flanking the player from two sides, and never approaching directly unless there were no other ways around (which was rare, as the levels were basically one combat arena connected to another in linear fashion).
It was pretty easy to exploit this by basically charging the enemy and aiming for the head as if you were playing an old-school strafe shooter, which didn't give them enough time to formulate their flanking plan.
I found by doing this the game became trivially easy, even without using the bullet-time thing.
I found the enemies to be less dumb than your usual cannon fodder, but you'd mulch through them pretty fast if that was your goal. They did try a lot harder than those in other games, which was nice, but they weren't that much of an obstacle. Their patterns are somewhat predictable, so you can often anticipate where they're going to go, or what they're going to do.
The problem with the F.E.A.R. AI is they don't necessarily adapt to the player like real people will. You can do the same thing a hundred times and they'll fall for the same trick.
Getting ambushed? Well, poke out, draw their attention, and fall back to a more advantageous position. Human players would just wait it out; they know you have to go through there. But the AI, ironically, gets impatient. Their goal is to kill you, not to defend things.
I'd like an AI that gets wise to my tricks, that starts to react differently. If you're using grenades frequently they might field more heavily armored troops. If you're sniping they might snipe back. That'd force you to adapt, to switch it up, to avoid becoming predictable.
The problem is that requires a pretty robust AI to manage troops and a deeper objective than "kill player".
If that was more unpredictable, if each death switched things up slightly, you'd have a far harder time gaming the system.
Like you observe, that might also make the AI appear smarter since the flaws are less obvious.
It's one of the few online games I've played recently; I've been sticking mostly to single-player RPGs and grand strategies.
Civ is the same: the AI has routines that try to win, but also a whole bunch that role play.
Interesting idea. Clever use of a classic algorithm in a novel way.
Whereas academic planners at the time were struggling with PDDL (a symbolic declarative language for describing planning domains, i.e. the problems and actions which can appear in plans), GOAP writes all the actions directly in C++. Procedural preconditions and postconditions are then no problem: just call C++ code to check whatever you need. For instance:
// ----------------------------------------------------------------------- //
// ROUTINE: CAIActionAttackFromNode::ValidateContextPreconditions
// PURPOSE: Return true if real-time preconditions are valid.
// ----------------------------------------------------------------------- //
bool CAIActionAttackFromNode::ValidateContextPreconditions( CAI* pAI, CAIWorldState& wsWorldStateGoal, bool bIsPlanning )
{
	// Intentionally do not call super::ValidateContextPreconditions().
	// Firing from a node ignores range.

	// AI does not have a weapon of the correct type.
	if( !AIWeaponUtils::HasWeaponType( pAI, GetWeaponType(), bIsPlanning ) )
		return false;

	// AI does not have any ammo required by this weapon type.
	if( !AIWeaponUtils::HasAmmo( pAI, GetWeaponType(), bIsPlanning ) )
		return false;

	// AI must already be at a node.
	SAIWORLDSTATE_PROP* pProp = pAI->GetAIWorldState()->GetWSProp( kWSK_AtNode, pAI->m_hObject );
	if( !( pProp && pProp->hWSValue ) )
		return false;
	HOBJECT hNode = pProp->hWSValue;

	// Node must be derived from SmartObject.
	HCLASS hTest = g_pLTServer->GetClass( "AINodeSmartObject" );
	HCLASS hClass = g_pLTServer->GetObjectClass( hNode );
	if( !g_pLTServer->IsKindOf( hClass, hTest ) )
		return false;

	// Node must be correct type.
	AINode* pNode = (AINode*)g_pLTServer->HandleToObject( pProp->hWSValue );
	if( !( pNode && ( pNode->GetType() == m_pActionRecord->eNodeType ) ) )
		return false;

	// The node must be valid in terms of FOV.
	if( pAI->HasTarget( kTarget_Character | kTarget_Object | kTarget_CombatOpportunity ) &&
		!pNode->IsNodeValid( pAI, pAI->GetPosition(), pAI->GetAIBlackBoard()->GetBBTargetObject(), kThreatPos_TargetPos, m_dwNodeStatus ) )
	{
		// Preconditions are not valid.
		return false;
	}

	return true;
}
It's an extension of Dijkstra's algorithm that adds a few heuristics.
> dates back ... extension of ... adds a few heuristics.
Fact is, Empire Earth was one of the very first computer games to use the A* algorithm. Age of Empires 1 used a different algorithm that was faster but worse.
Fact is, in the age before Wikipedia and Google, decade-old papers on obscure algorithms were more or less lost and forgotten. They were written on paper with no digital copy readily available. So game devs often reinvented things without knowing of previous work; it took until the early 2000s for this old material to get scanned and made publicly viewable.
Citation? The only mention I can find about the pathfinding algorithm used by Age of Empires is this:
A* is supposed to be an optimization of Dijkstra's. I don't know why you call it CPU intensive.