Ask HN: Game devs, what is the hardest part about designing an AI for your game?
89 points by sgillen on March 6, 2019 | 83 comments
I've been thinking a lot about different machine learning techniques applied to video game artificial intelligence. I'm wondering if any video game devs would like to share their experience about what parts of making a game AI are challenging, things that are difficult to get right, time consuming, or tedious.

Thanks!




Game AI is mostly pathfinding (which basically comes down to creating navigation meshes for levels) and very basic decision making, usually a weighted decision tree (e.g. if they're getting shot at, taking cover might be weighted at 80%, shooting back at 20%). You don't want to create something actually smart to fight; you want something with understandable, predictable logic for the player, something they feel clever outsmarting. Anything too complex and players will get discouraged by the difficulty.

Machine learning isn't really a great fit for this, as it is neither predictable nor understandable, and thus the player can't plan around it or try to outsmart it.
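The weighted decision tree described above can be sketched in a few lines. This is a minimal, hypothetical illustration; the action names and weights are invented to match the 80/20 example in the comment:

```python
import random

def choose_action(under_fire, rng=random):
    """Toy weighted decision: when under fire, a guard takes cover
    80% of the time and shoots back 20% of the time."""
    if under_fire:
        actions = ["take_cover", "shoot_back"]
        weights = [0.8, 0.2]
        return rng.choices(actions, weights=weights)[0]
    return "patrol"

# Sample the distribution to see the 4:1 split emerge.
counts = {"take_cover": 0, "shoot_back": 0}
for _ in range(10_000):
    counts[choose_action(True)] += 1
```

The appeal for designers is that the weights are directly tunable and the resulting behavior stays legible to the player.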


That reminds me of a lesson the developers learned when making a stealth game, Splinter Cell. Originally, when you were spotted the enemies would just try to shoot you, but tester feedback said that players didn't like getting shot out of nowhere. The devs added audible cues that you'd been found, like a guard yelling "Hey you!" before they started shooting. It turns out realistic AI behaviors aren't necessarily fun AI behaviors in a video game.


Gaming is full of tricks - e.g. I liked how the Bioshock developers said they made enemies deliberately miss their first shot so the player has time to react.

Or how many shooters lower incoming damage and increase outgoing damage as you come close to dying to create those "I survived with 1HP left!" moments.
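That second trick can be sketched as a simple scaling curve. This is a hypothetical illustration of the "comeback" mechanic described above; the linear ramp and the 50% floor are invented numbers:

```python
def effective_incoming_damage(base_damage, health_fraction, floor=0.5):
    """Scale incoming damage down as the player nears death.
    At full health the player takes full damage; at zero health,
    only `floor` (here 50%) of the base damage."""
    scale = floor + (1.0 - floor) * health_fraction
    return base_damage * scale
```

A matching curve in the other direction (boosting outgoing damage at low health) would produce the "I survived with 1HP left!" moments the commenter mentions.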


Splinter Cell also has cooldowns in some/most levels (like most stealth games), which is of course an absurd notion. A guard on a military base wouldn't back out of "DEFCON1" after two minutes just because the invader didn't show herself for that long.
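The cooldown mechanic being criticized is usually just a timer on the alert state. A minimal sketch (class name and two-minute window invented to match the example):

```python
class AlertState:
    """Toy guard alert state: alarmed for `cooldown` seconds after the
    last sighting of the intruder, then back to normal duties."""
    def __init__(self, cooldown=120.0):
        self.cooldown = cooldown
        self.time_since_sighting = None  # None = never spotted anyone

    def spotted(self):
        self.time_since_sighting = 0.0

    def tick(self, dt):
        if self.time_since_sighting is not None:
            self.time_since_sighting += dt

    @property
    def alarmed(self):
        return (self.time_since_sighting is not None
                and self.time_since_sighting < self.cooldown)
```

As the comment notes, the absurdity is baked into the design: nothing in this state machine remembers that an intruder was ever confirmed.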


There's a cool isometric stealth game called Robin of Sherwood, where you can follow what the guards are doing. A guard might accidentally stumble upon someone you killed, be alarmed, look around, run off to warn their commander, then the commander organises a search for the culprit, with lots of guards explicitly looking for you everywhere. Eventually if they can't find you, the whole thing dies down and everybody resumes their original duties, assuming the dead body was just a fluke or something.

Also, if they find someone you merely knocked unconscious, they assume they'd been sleeping on the job. If you tied them up, they free them, but I forgot if they search for you or if they just assume they got accidentally tied up somehow.

Also, you'd think they might raise the alarm if they find fewer and fewer guards present at their assigned posts.


Thanks for sharing this. It's a great illustration of why playtesting (and testing in general) what we think will work is always a good idea.


Depends on the game. The entire genre of strategy games could probably benefit from smarter AI, 4X-style games most of all. Mostly, the current AIs act like players but make terribly inefficient decisions, which get 'balanced' at various difficulty levels by throwing more resources or other advantages at them. So the games vary between being absurdly easy and being difficult simply because the AI cheats, and even then it doesn't usually leverage its advantage well. As an alternative, people play against other players, but with long-running strategy games this adds real-life logistical problems. An AI that plays like a human, and can be adjusted for stronger or weaker play, would be amazing for the 4X genre. Perhaps difficulty adjustments could even be reversed: give the player 'cheats' or free resources on lower difficulties, if that's easier than throttling the AI.

Outside of those games, I'd say any game with competitive multiplayer could benefit from smart human-like AI. Some people play games for a challenge. The unpredictability of other humans is most of the fun, whereas playing against a typical game AI is just a matter of figuring out how it works, where it fails, and then taking advantage of that failure repeatedly. Considering the market for competitive online games, there's likely a similar market for competitive AI games.

I suspect the real barrier is that it's a lot cheaper and easier to sic players on one another than to make a challenging human-like AI.


TA:Spring had these AIs that would exploit shit in a terribly annoying way. Like, you had ballistic projectiles that would take a while to arrive at the target, and your units would fire them. Well, the enemies had them too, except the AI would exploit its superior micro by firing off its projectiles at maximum range while moving inbound at maximum velocity, then immediately shifting outbound.

You can't move your bloody units as fast as them, so your units would fire either at the place where the enemy was 'going to be' within range, or at the place where they sat at the limit of the range.

Utterly infuriating as a player. I know it doesn't have to be that way, and the path could be closer to strategic decision making, but it's a funny example of an AI decision that wasn't constrained in any way.


Yeah, that sounds less like a 'smart' AI gone wrong and more like a typical cheating AI. In this case, instead of an actual cheat it's just able to act at superhuman speeds. Similar to the speculated issue with the Starcraft AI. Even though that one was constrained in actions per minute, it wasn't really constrained the same way a human is.

In general, the smarter you make the AI, the more you have to work to give it constraints to keep it from cheesing the player the same way players often cheese dumb AIs.

In the case of an RTS, you can force the AI to manipulate a cursor and view area the same way a human does. Only give it partial information about areas outside the current view. Put limits on cursor manipulation. Maybe add a fudge factor that makes its clicks land randomly within X distance of its target, where X is based on the average speed the cursor has moved in the last half a second. That alone would make it harder for it to give orders to a bunch of individual units fast enough to out-micro a player, and you're really just giving it constraints similar to a human operating a mouse.
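The "fudge factor" in that last suggestion could look something like this. A minimal sketch, assuming a made-up proportionality constant between recent cursor speed and click error:

```python
import math
import random

def fuzzed_click(target_x, target_y, recent_speed, k=0.05, rng=random):
    """Hypothetical click-error model: the AI's click lands somewhere
    within a radius proportional to how fast the cursor has been
    moving recently (fast flicks are less precise)."""
    radius = k * recent_speed
    angle = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.uniform(0.0, radius)
    return (target_x + r * math.cos(angle),
            target_y + r * math.sin(angle))
```

With `recent_speed` of zero (a resting cursor) the click is exact; the faster the AI tries to act, the sloppier it gets, which is roughly the constraint a mouse imposes on a human.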


Which Spring AI was this? I don't remember any of them being any good.

I made a Spring AI back in the day (it wasn't very good either).


I saw it do it with BA, but I don’t remember the name. It was many years ago. Perhaps RAI? Was that a name?


In clicky action games, the AI has a definite advantage over human players, because it's not encumbered by an interface. For turn-based strategy games, that advantage disappears. Outside of heavily researched abstract predictable games like chess and go, I haven't seen any games where AI can beat a skilled human player at strategy.


There are actually various anecdotes about game developers playtesting "smart AI" where playtesters came back with very negative impressions, because e.g. a couple NPCs giving suppressive fire while another group flanks you quietly felt to players like the NPCs were cheating.

I believe one of these anecdotes was about the AI in F.E.A.R. and their solution was to add very clear callouts to the NPCs whenever the NPC made a decision ("Flanking, cover me!"). I think FEAR also had the concept of a director AI which handed out "tickets" to NPCs for attacking the player; an NPC without a ticket couldn't attack the player, so this limited how many NPCs engaged simultaneously.
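The ticket idea is easy to sketch. This is a toy reconstruction of the concept as described above, not F.E.A.R.'s actual code; the class name and cap are invented:

```python
class AttackDirector:
    """Hands out a limited number of 'tickets'; only NPCs holding a
    ticket may attack the player, capping simultaneous attackers."""
    def __init__(self, max_attackers=2):
        self.max_attackers = max_attackers
        self.holders = set()

    def request_ticket(self, npc_id):
        if npc_id in self.holders or len(self.holders) < self.max_attackers:
            self.holders.add(npc_id)
            return True
        return False  # denied: reposition or give covering fire instead

    def release_ticket(self, npc_id):
        self.holders.discard(npc_id)
```

The nice side effect is that NPCs without a ticket still look busy (flanking, covering), so the player reads the fight as coordinated rather than throttled.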

I remember AI in Jagged Alliance 2 frequently flanking you and at first that was very frustrating; the stock (1999) JA2 AI is much better than every newbie to the game. You really had to think tactically and think about your moves and approach to be successful in that game. If you just go in "guns blazing" the AI will outmaneuver and outgun you. AI in that game was sometimes quite adept with grenades and such as well.

Edit: Talk about FEAR AI, might/might not be related https://www.youtube.com/watch?v=BmOOrh5lq7o#t=1m44s


Honestly, the main reason I enjoyed XCOM (the new one) over X-Com (the old one) was that the AI felt a lot smarter than me. I found it entertaining in how clever it was. You'd think you'd set up a clever perimeter and all, and oops, got flanked by a little Sectoid hiding behind a post box.


Please... head over to Paradox Interactive and seek to join their team if you're bummed out to a point of thinking that. (And you're welcome to link to this comment in your application.)

Their audience is basically desperate for a better AI that does better strategic and tactical thinking rather than pathfinding, weighted decision trees, and a couple of high level heuristics.

Here's what I and I'd gather many others would want instead: for the AI to go through the dev game logs, and perhaps even end user game logs, and then learn from what humans are doing, so they behave more like them. And then re-inject that into the game so it's something of a challenge to play without needing to give the AI some serious bonuses while playing.

As things stand the AI is so bad you can start as a purposely broken one province nation and still turn into a major power. [0]

And if you're thinking "no, the AI will trash them and they won't find it fun," here's a thought: right now, the AI is so bad that the only way to have some fun is to give it insanely large bonuses. But much like in a game of Go, as a player I'd much rather get those boosts myself to overcome a much stronger AI opponent.

The same holds for other 4X games, like Civ.

Playing Chess or Go against a computer used to suck. It no longer does thanks to how much better AI got.

[0]: https://www.youtube.com/watch?v=qJhyZ-9ktpo


My understanding of the latest-gen Pdox games is that parts of the AI are shared at the level of the Clausewitz engine, and that Clausewitz in turn was developed for EU3, but designed as an extremely general engine for 2D games, not even just GSGs. So in some ways the HOI4 AI, for instance, is using techniques that might have worked great for EU3 (and, in my experience, worked fairly well in Victoria II, if you treat the economy separately), but are now unsuitable.

I'm interested as to whether they've improved the AI in Imperator - not that I'm going to buy it - as the Jomini sub-engine was supposed to streamline a lot of the "cruft" that had built up between 2007 and now.

As to your point RE: OPMs going on world conquests, I think that's part of Paradox's cross-franchise decision to move away from historicality and towards more "casual" gameplay (i.e., map painting), not just between EU3 - 4 but HOI3 - 4 as well; and ultimately has more to do with their balance choices for the player than the strength of the AI.

I do think that, as it stands, HOI4 and EU4 are at the same level as certain competitive games, where instead of trying to play historically, many players just wait for the next patch's balancing to over-correct in favour of some set of poor AI choices, and possibly for Reman's Paradox to explain it to them, and then exploit that to their advantage. Making the AI more competent, and removing the need for balancing and overcorrecting, would remove that perceived ability to exploit a broken enemy, and thus some of the desire from the playerbase for map-painting over historical accuracy.


The only Paradox game I've played is Stellaris, which has god-awful AI. Looks like it uses the Clausewitz engine, so perhaps that's the reason. But the AI empires are terrible at balancing their economies, and it doesn't take long for the player to completely outstrip them, even when giving the AIs advanced starts and bonuses. And then the military AI isn't very good, to the point that when it is outmatched by a stronger opponent, it basically sits and waits to be slaughtered.

The only remote challenge comes in the form of the endgame 'crisis' which is basically an empire that appears late in the game with huge armies and has no economy, just scripted reinforcements and possibly some scripted counters from other special AI entities. None of the regular AI empires will have advanced enough to even think about fighting it, so it mostly serves as a check to see if the player has played efficiently enough to that point to stop the threat, and marks a good point to end your game once you have.

The scripted events make for a somewhat interesting game, but if they could provide an AI that actually made the universe feel like a competition in the early and mid game, and made the other empires a factor in the end game, it would bring the game to a whole other level for me. Of course, they'd probably have to fix their general game balance as well, which is a whole other issue.


> As to your point RE: OPMs going on world conquests, I think that's part of Paradox's cross-franchise decision to move away from historicality and towards more "casual" gameplay (i.e., map painting)

TBH I'm actually fine with that, so long as the AI can end up doing the same. I'm still waiting for an AI Ulm to vassalize large chunks of the HRE and become emperor.


I'm not much of a chess player, but one thing I've wondered is whether the games against Chess AIs are interesting for good players. Do they feel exciting as though you were playing against a similarly skilled human player? And can you distinguish between an AI and a similarly skilled player very easily?


The thing with Chess (or Go) is that unless you've a club in your area or a skilled relative, you're playing alone against an AI or playing online.

It's of course much more exciting to play against skilled human players. In particular your parents or your childhood friends while growing up. But unless you're living in some large city you might not find many other offline options after you start beating all of them.

That leaves you with a choice between going "pro" to some degree or another, playing online against other people, and playing against an AI (online or not). I'm honestly at a loss to give you an unbiased view of which is most interesting, since I've seldom found the time to play Chess in the past decade or two. What I do know though is that no bot gives you good social interactions.


Much of the time it's too predictable, to the point it's no longer a simulation.

For example, something like Battle Brothers, where the AI tends to attack the least armored targets. You can put a group of poorly armored people on one side of the map and your heavy damage dealers on the other. The poorly armored side draws the combat, which gives the damage dealers space to take out the dangerous targets. Humans would spot the tactic straight away. Setting up AI-exploitation tactics like this is not so fun for some people.

A worst-case scenario is the FIFA games. Every release has some untested exploit. In some, the defenders push too high and you can easily score goals from halfway across the field. In some, the goalkeeper gets confused when the ball is passed right in front of goal. It becomes less a football sim and more a question of which formations optimize the exploits.


Do you think there is a class of players who don't want to feel clever for outsmarting a predictable AI?


All players want to win. The level of difficulty is equal to the level of accomplishment, but games have to be made for the average rather than the exceptional, so many better than average players won't be challenged by the difficulty.

Nobody is going to invest the resources into a real AI just for the hardest difficulty modes. If it cannot be used on Easy/Normal, it simply isn't happening.


> Nobody is going to invest the resources into a real AI just for the hardest difficulty modes.

Which is really sad, because "hard" difficulties in most games just mean that you have less health and NPCs do more damage, or their "cheat factors" are turned up to 11. They're still the same stupid NPCs as on "easy", but now they just headshot you behind cover and have zero recoil / perfect accuracy. It's technically difficult but not really fun.

One of the most obvious examples of this is GTA 5 where all NPCs are equally dumb, but "harder NPCs" just mean that they tank huge amounts of damage, while they have very little aim delay and essentially perfect accuracy. Due to various bugs in the game they'll often shoot through walls or even entire buildings: technically hard, but not fun.


The "tank huge amounts of damage" thing has annoyed me going back to the old RPG games on NES and SNES. How do we make the game harder as the player progresses? Give enemies more HP and DMG values. OK, how does the player combat this? Have him acquire levels/skills/equipment that increase his own DMG and survivability to restore things to the same balance they had earlier in the game. Repeat ad nauseam. Some variations restrict the player enhancements to DMG, so they're forced to chip away at enemies in long dull fights. Others restrict survivability, which tends to result in shorter, more random fights, while others allow near-infinite enhancement but lock it behind boring, redundant grinding.


Vanilla Skyrim is an especially annoying specimen in this category, because it has this weird "S curve". With low levels most enemies are way stronger than you (ok, that's good, a level 5 player shouldn't be able to defeat city guards, for example), then there is a "growth phase" up to ~level 40-70ish (depending on play style) where the player is on par or better than NPCs, and after that the player's abilities level off and NPCs continue to grow stronger, so a high level player has a hard time again against NPCs of a class that she could previously defeat easily.

Which makes no sense at all.

Luckily for Skyrim the game thrives on mods anyway.


Considering how many successful games are released targeting a niche market, there's probably a pretty sizable niche market of 'better than average players' that some studio could take advantage of.

Market size is only half the equation, the other half is the cost to develop the feature. As methods and technology improve, the cost to make a smart AI for a complex game system should decrease. As the cost decreases, you need less of the overall gaming market to make a return. Might just be a matter of time before somebody takes a gamble on a tough AI, has a huge success, and then everyone starts to emulate it.


But smart AI isn't fun to play against. Take chess: AI is smart to the point of dominating human players. In an FPS setting you just have an aimbot. In an RTS game you'd have an AI with passable macro and insane micro. I think the only real chance of a smart AI being fun is in a pure strategy game (I believe it's GalCiv 2 that has the epic story about a backstabbing, scheming AI).

Smart AI is easier to make than a mediocre, fun AI. Optimality is a clear goal. Fun is less clear and requires a unique blend of technical and design expertise.


Seems like you're conflating smart AI with an optimal game-playing algorithm. An aimbot isn't hard to create, sure, but it's not exactly smart, either. Nor is it fun. I agree it's harder to make a mediocre but fun AI, and that's what current shooters do. The point is that there's theoretically a level above that, harder to achieve but not impossible, where the AI behaves like a human would: a reasonable balance of predictability and unpredictability, skill, strategy, but not perfection.

In real-time strategy, it seems we're approaching that level with the StarCraft AI. We'd perhaps need to restrain the AI's ability to micro by fuzzing its ability to control quickly and precisely in bursts, and then continue to advance its strategic ability, but I expect it'll get there eventually.

In the case of shooters, it might involve giving the AI a noisy view of the game world, to force it to approach the problem the way a human does. Humans don't just have an accuracy rate, or a timer between an event entering their screen and their ability to shoot at it. A human has to filter through visual noise to notice a target and decide how to engage. A human has to use an arm and hand and muscle memory to manipulate a mouse to get bullets on target. Those are all imperfect controls, with a certain amount of uncertainty between intent and action. All of that is further influenced by how prepared the human was, how much they have to adjust their previous plans, or recenter their attention.

It's a difficult task to identify all the things that actually make a game difficult for a human and make sure the AI faces a close approximation of the same problems. Then it's even more difficult to get an AI to a point where it can compete on a human level, not to mention various human levels.

That doesn't mean it's not possible or not worthwhile.


That's why smart AI must also have smart scaling factors. Maybe some randomness. Maybe some imperfections, like players have. I think we should design the best and smartest possible AI and scale it down for average players. Currently we have dumb AI boosted for advanced players, and boosted very naively: by giving enemies more HP or making them deal more damage.


How does that fit into games like Dwarf Fortress where "fun" is in losing? Does winning count as achieving whatever goals the players set for themselves before reaching game-over?


Some players play DF with specific goals in mind. Others play in order to have a story like experience. In the latter case, the "fun" in losing is only "fun" if it creates a kind of dramatic tension. DF is also unusual in that worlds are intended to be played over and over again. "Losing a game" only adds more interesting history to the world, so "winning" and "losing" loses a lot of meaning. What's "fun" is playing again and having that historical "loss" influence your next game.

Like I said, though, many players choose a goal oriented play through. Achieving that goal is a win for them (and often they abandon the fortress and start a new world afterwards). I remember one person bitterly complaining on the forums that the game was too easy because they had already won the game. After they succeeded in becoming the mountain home for their civ, they felt the game was over. Sometimes it's hard to explain other ways to play games ;-)


Not if you are playing Dwarf Fortress. Losing is fun :)


I'm curious about that. Keeping it dumb of course makes sense for adversarial uses - if the AI is your enemy. But I could imagine there are scenarios where the AI is on your side - e.g., as a support player, NPC or pet. Would a smarter approach be more useful in such a game?


Even if the AI is friendly being able to predict their behavior is still desirable. Nothing worse than friendly AI ruining your game because it went off and did something you couldn't anticipate.


What if the friendly AI called out its plan, and the player then had an agree/disagree option for the proposed strategy?

If done well, it could feel much more like you're playing with a human partner.


Then, sure, because it is now predictable. Alternatively, I don't think it would be an issue if the AI was good enough that it could do unpredictable things that were actually intelligent (which would pleasantly surprise the player) or if the unpredictability and occasional awfulness of the AI were somehow an intentional challenge of the game.


If it's too smart on your side, wouldn't it do all the playing for you? For example, you enter a room full of enemies and your AI partner kills them all the second they come into view. What's the point of playing, then?


True - but that would require that the AI partner is both overpowered (they can kill the whole room without risking taking damage) and completely ignores me.

Maybe an alternative approach could be to have an AI that tries to figure out what you're trying to do - and then helps you with it.

E.g., if the two of you have frequently witnessed a certain enemy launching an attack that can only be deflected by a particular shield spell, an AI healer could "learn" to cast that spell in advance when they see such an enemy.
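That kind of "learn to pre-cast" companion can be approximated with simple frequency counting. A toy sketch; all names, the attack string, and the threshold are invented for illustration:

```python
from collections import Counter

class LearningHealer:
    """Count how often each enemy type has used a dangerous attack;
    once the association is strong enough, pre-cast the counter-spell
    as soon as that enemy type is sighted."""
    def __init__(self, threshold=2):
        self.seen = Counter()
        self.threshold = threshold

    def observe_attack(self, enemy_type, attack):
        if attack == "piercing_bolt":
            self.seen[enemy_type] += 1

    def on_enemy_sighted(self, enemy_type):
        if self.seen[enemy_type] >= self.threshold:
            return "cast_shield_spell"
        return None  # no learned association yet
```

Even something this crude would feel "smart" to the player, because the companion's behavior visibly tracks shared experience rather than a fixed script.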


You should also prioritise credible behaviour that will keep a gamer immersed. If you can achieve this with simple rules / tables etc that might be better than a full-blown AI.


I've been designing/coding games since I was 11 and have a degree in game design & development. The hardest part isn't the technical side of the game's AI. It's finding the balance: an AI that makes the gamer feel powerful enough for the game to be fun, but not so powerful that they get bored, and not so weak against them that they become demotivated by the difficulty. Obviously every player is different, so finding a balance that can keep the gamer excited enough to continue is extremely hard. Getting the emotional-experience side of the player into the sweet zone is hard. One might say, 'why not make the AI adapt to the user?' Which is 50% of the answer. But the challenge then becomes that you don't want good players to be at the same level as bad players when the AI creates a similar experience for both. You want both players to enjoy the game, but not in the same way.


The solution here would be, rather than weighting the AI difficulty for good players and bad players, to have different sets of AIs.


Making an AI that isn't "perfect", but rather one that is fun and challenging to play against.

My friend was working on a chess variant for years, and it was very easy for him to make the AI very, very good, but that isn't fun at all! It would always win.

He then tried adding an element of randomness to it to dumb it down. But then the AI just looked perfect most of the time while occasionally making bizarrely dumb moves.

He later added "personalities" to the AI, where each one would try their own sets of strategies and attributes. He continued to make it better and better.
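One common alternative to uniform random blunders is to sample moves in proportion to how good they are, e.g. with a softmax over move evaluations. This is a generic sketch of that idea, not the friend's actual approach:

```python
import math
import random

def pick_move(scored_moves, temperature=1.0, rng=random):
    """Softmax-sample from [(move, score), ...]. Low temperature plays
    near-optimally; higher temperature produces plausible, human-ish
    second-best mistakes instead of bizarre random ones."""
    scores = [s for _, s in scored_moves]
    best = max(scores)  # subtract max for numerical stability
    weights = [math.exp((s - best) / temperature) for s in scores]
    moves = [mv for mv, _ in scored_moves]
    return rng.choices(moves, weights=weights)[0]
```

The temperature knob then becomes a difficulty slider, and the "personalities" idea can be layered on top by biasing the scores (e.g. an aggressive personality adding a bonus to capturing moves).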


> My friend was working on a chess variant for years, and it was very easy for him to make the AI be very, very good but that isn't fun at all! It would always win.

Oh yeah, I can relate to that experience. My chess app had an "undo turn" button, where you could jump back an arbitrary number of turns. The most frustrating thing wasn't the AI winning, but realising that the AI had already won several turns ago and you're just a dead man walking, with all remaining options leading to failure.

I never hated A* as much as during those games.


Fun and challenging is key.

One thing about predictable AIs that irks me: once an optimal strategy is discovered, variance from that strategy is no longer rewarded. This creates a tactical endgame I wish didn't exist.

For instance, I played a lot of Gears of War Horde mode and difficulty is increased by increasing health, accuracy, and damage of the enemies. And the same strategy will win over and over. That's great for a grind-y game like Gears 4 Horde, but beyond a certain level it becomes a race to build an automated killing machine that once solved is no longer engaging.

Once you build a wall of turrets, why in god's name would an enemy keep rushing? Just the smallest bit of learning would go so far. The game would be way harder, but also WAY more interesting. Playing on normal where you actually have to hunt the enemy instead of holing up and waiting for them to rush your lane would have been awesome!

That's just one example in one game that I'm very familiar with, but isn't a unique problem. Even in 2019 there is a lot of room to make a challenging and fun AI.

The trick is to make every match feel like you could lose, not like you can't win. AI/ML should figure out what makes close games close, and keep evolving the enemy's strategy to keep them close, not just their stats.


I'm curious if your friend's project is open-source at all, would love to see the implementation details of this?


It isn't open source, but you can still play it if you have Flash installed: http://www.kongregate.com/games/thegrandestine/chess-evolved...

Imagine chess but with 100+ different pieces and an element of collecting/upgrading pieces.


Cool, thanks!


The risk of overcomplicating. You can put a lot of effort into detailed game AI that just makes for a hugely unfun experience. Something simple or predictable often leads to better gameplay.


I think any game with strategy is going to have to use reinforcement learning to have any hope of being understood by its players. Concerns around cheesing strategies can be mitigated for real-time games using constraints, and cheesing in turn-based strategy games is a symptom that maybe the game you're working on isn't that strategic.

Ultimately Chess, Go, Dota, Starcraft are compelling games to watch and play because the skill cap is so high. No hand coded AI for such a game could ever prepare you to enjoy playing any of those games professionally or get to a point where you can understand professional play. High level strategy in those games would never even be discovered had we not had a large population of players playing those games for long hours.

Odds are most strategy games won't be as successful as any of the above, but newcomers could still have compelling strategic elements for gamers to explore, if only those gamers could find other players slightly better than themselves, so that they learn without being frustrated by too big a skill gap. A small multiplayer community is why the strategy game market is pretty bleak outside a few large players.

Reinforcement learning can provide a better experience to your players because you, the game developer, don't need to be the world expert at the strategies of your game to create a top-level AI. You just make sure the rules of the game are there, and self-play will take care of the rest. You can constrain the AI so that it's enjoyable for both your newest and most devoted players. You can discover OP strategies and better guide your design process. Excel is great for balancing, but if you want to make a game complex enough, balancing will be a tremendous challenge without a larger team.
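A toy illustration of the self-play idea (this has nothing to do with yuri.ai's actual implementation): supply only the rules of a trivial Nim-like game (take 1 or 2 stones from a pile of 10; taking the last stone wins) and let tabular self-play learning discover the strategy from Monte Carlo returns:

```python
import random
from collections import defaultdict

random.seed(0)
ALPHA, EPS, EPISODES = 0.5, 0.2, 20_000
Q = defaultdict(float)  # (pile, move) -> estimated value

def moves(pile):
    return [m for m in (1, 2) if m <= pile]

for _ in range(EPISODES):
    pile, history = 10, []
    while pile > 0:
        ms = moves(pile)
        if random.random() < EPS:          # explore
            m = random.choice(ms)
        else:                              # exploit shared policy
            m = max(ms, key=lambda a: Q[(pile, a)])
        history.append((pile, m))
        pile -= m
    # Whoever made the last move won; rewards alternate sign backward.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward
```

The developer wrote only `moves` and the win condition; the value table (and hence the "strategy") emerges from the games the AI plays against itself.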

If you're interested in leveraging reinforcement learning for your game, I run yuri.ai which helps game developers do exactly that. So reach out to me at mark@yuri.ai if I can help!


Thanks for your response, I had actually heard about Yuri before and was hoping you would respond to this post!

I have a lot of questions for you if you don't mind, maybe you don't want to answer any/all of them that's fine.

1. How do you handle constraining the AI so that it's enjoyable? If you read through the comments here, you'll notice just about everyone is saying the hard part is not making a good AI but making a fun AI. Do you know of any research in this area? Most examples I see are all about making the best AI possible. It seems like it's really hard to find a good reward signal to use for "fun". Have you seen this paper [0, 1] on learning from human preferences? Does Yuri attempt to do something similar? It seems like it would be labor intensive for the playtesters.

2. Does Yuri make use of imitation learning at all? I.e., learning from lots of human data, when it's available, to bootstrap the learning process?

3. Do you let the game designers impose any structure on the AI? E.g., "I want three phases for the boss where he progressively gets more aggressive", or using RL only for certain small parts of the behavior? A well-defined pathfinding algorithm with the RL deciding where the agent tries to go? Stuff like that?

I have a ton of questions about the technical side of Yuri too (what frameworks you're using, which RL algorithms, etc.), but I'm not sure if you're willing to share those.

[0] https://arxiv.org/abs/1706.03741 [1] https://blog.openai.com/deep-reinforcement-learning-from-hum...


1. We use a couple of tricks for this. There are ideas from human preferences, there's modeling a reward function to maximize fun instead of pure performance, there are handicaps, and a host of other tricks. Different tricks work better for different games, but game developers can easily configure them.

2. It can. Imitation learning turns RL into a supervised learning problem, though from my experience it's not really needed. It's more a case of "if you already have the data, you can make training a bit faster".

3. Yes. The whole RL part becomes tuning a config file where the game developer describes the task they want accomplished.

Regarding technical details, the infra is agnostic to the specific deep learning library you'd like to use, and the algorithms are chosen based on the type of inputs (continuous vs. discrete), the size of the state space (a sprawling strategy game vs. a casual puzzle game), and the type of game (full information vs. partial information).
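As a toy illustration of the "reward function for fun" idea from answer 1 (simplified, not our actual implementation; the Gaussian shape and constants are arbitrary):

```python
import math

def shaped_reward(win_margin: float, target_margin: float = 0.1) -> float:
    """Illustrative 'fun-shaped' reward: instead of rewarding the agent
    for winning by as much as possible, reward it for keeping the game
    close. win_margin is the agent's score lead, normalized to [-1, 1]."""
    # Peak reward when the agent stays only slightly ahead of the player;
    # falls off as the game turns into a blowout in either direction.
    return math.exp(-((win_margin - target_margin) ** 2) / 0.05)
```

An agent trained against this signal is pushed toward evenly matched games rather than crushing victories.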


GDC (Game Developers Conference) publishes a lot of great talks on YouTube about game design. Here are some of their popular AI talks:

https://www.youtube.com/results?search_query=GDC+AI


I programmed some new API functions for the AI scripting language in a popular RTS game once. I think the most tedious part is exposing the game data in an intuitive/helpful way for someone in a design role. It's a bit of an odd mix of optimization, computational geometry, and finding the right level of expressiveness.

For instance, a designer might say "It'd be great if I could put a tower between myself and the enemy town, but kind of at the border of mine, but only if I don't have enough money for a cool castle and I'm not overwhelmed." There's a lot of fuzzy logic in statements like those. The API between the game and the "AI" usually has to be general enough not to need spatial information, but also specific enough not to do things that feel unintuitive or silly to the player.
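A sketch of how that kind of fuzzy designer intent can be scored; every parameter, threshold, and weight here is invented for illustration:

```python
def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def tower_placement_score(dist_to_my_town: float, dist_to_enemy: float,
                          border_dist: float, gold: int, castle_cost: int,
                          threat: float) -> float:
    """Hypothetical fuzzy scoring for 'put a tower between me and the
    enemy, near my border, unless I can afford a castle or I'm
    overwhelmed'. All names and weights are made up."""
    if gold >= castle_cost or threat > 0.8:
        return 0.0  # hard preconditions: save for the castle / survive first
    # Each fuzzy criterion becomes a 0..1 membership score.
    between = clamp01(1.0 - abs(dist_to_my_town - dist_to_enemy) / 10.0)
    near_border = clamp01(1.0 - border_dist / 5.0)
    return between * near_border  # combine memberships; rank candidate spots
```

The AI would then evaluate candidate positions and pick the highest-scoring one, which is where the computational geometry and optimization come in.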

It's worth noting that machine learning helps avoid a lot of these issues, but I find game AI tends to be designed for a feeling or experience, so I'm not sure things like the StarCraft AI competition bots make for better products. Perhaps ML could make the building blocks that designers use to design AI?


F.E.A.R. remains at the top of its class for the feel of the NPC behaviour, and it's worth understanding why.

https://aiandgames.com/facing-your-fear/


I’d love to do this as a career, or sell premade AI modules for games. Any ideas?

Would game developers consider contracting out the AI to a freelancer?


No matter how dumb you make the A.I., you will have players complaining that it's too hard or unfair. I literally had the computer make a list of all possible moves and pick one out of the list at random for the default A.I. of one of my games and I got accusations that the computer was cheating on that difficulty level on a fairly regular basis.


Watch the talk about Civilization's AI. Seriously, game AI does not need to use advanced ML techniques, and probably shouldn't if you want players to have fun. The trick is coming up with fast algorithms, with a semblance of personality to make it interesting. Difficulty is not required for the AI to be fun in most cases.


What talk is that? Last time I tried a Civ game, I was underwhelmed. That AI just isn't fun to play against, to me. And the games run too long to play against humans in a reasonable timeframe on normal settings. I'd love to play Civ with competitive AIs.


"Playing to Lose" : https://www.youtube.com/watch?v=IJcuQQ1eWWI

Personal preferences aside, they specifically talk about how to make the game "fun". For example, they try to account for things like real-world leader's personalities. "Gandhi" shouldn't just attack his neighbors unprovoked, but maybe "Caesar" should.

And why do the AIs even bother to ally/trade/help the player at all, since the player is the only enemy that really matters anyway? Because the player is also the only person that matters in terms of having fun, and making the game too hard isn't any fun.


Interesting talk, but it really just confirmed a lot of my assumptions about Civ AI. He pretty much admits that they are constrained on developer resources and that AI code makes up only a small percentage of the game. Because of that, they have to make do with very naive assumptions about strategy. And because of that, they have to cheat to get the AI up to a level that's even considered fun.

At first I thought he seemed to be making a lot of false dichotomies, but really, he was saying, these are my only choices, given the overarching constraint on how much effort we can put into our AI. Basically, all of the AI decisions he talked about were small scale heuristics, with no ability to really look at the bigger picture, which is what the player is always doing when making small decisions.

And of course, personalities are a great idea, and not mutually exclusive with a strong AI. One thing I very much disagreed with was when he said an AI can roleplay, but a player wouldn't do that, the player just plays to win at all costs. But earlier on he even talked about one of the player types is people who play for a narrative. Those are roleplayers. So, especially at lower difficulty levels, it makes a lot of sense for the AI personalities to trump optimal play. But that doesn't mean you can't have a strong AI in general, that makes some limited sacrifices for the sake of fun.

Likewise with diplomacy. It isn't about making the game as difficult as possible. If that were the goal, you could just make the player lose when he loaded the game. It's about making it feel like you're on roughly even footing with your opponents, so that success has meaning. To that end, I would expect each empire to act in its own best interest, and therefore treat the player the same as it would any other opponent. It's also why diplomatic options need game mechanics to enforce them. As he pointed out, if there's no mechanic to stop a player from accepting money for peace and then declaring war again, then there is no reason to ever pay for peace. You should design these mechanics with player behavior in mind, and the AI should use them the same way players do. Think about how players behave with one another, make that fun, and then make the AI behave similarly. This wasn't possible given their constraints, but it's certainly possible in the more general sense.

As far as "making the game too hard isn't any fun" that's the problem. Too hard is subjective. What's too hard for one person is perfect for another, and way too easy for yet another. If anything, to please the maximum number of people, you'd have to write (or train) a very skilled AI that can compete with the brightest players. If you can do that, it should be much easier to tone it down or penalize it to make easier difficulties for the average and casual players.


Eh, I think Civilization is a great example of an awful AI. Specifically I mean Civ 5 and the way it does combat. It is absurdly easy to outmaneuver and defeat even vastly superior armies. It'd be much more fun to have an enemy who can put up a fight.


You say "awful AI", but it's clearly a AAA selling game, where 90%+ of players play on the single player setting.

What is your goal as a designer? To make a game with state of the art AI, or to make a game that has AI that keeps people buying and playing your game?

I know which I would choose.


> You say "awful AI", but it's clearly a AAA selling game, where 90%+ of players play on the single player setting.

Yes, 90%+ of players play on the single player setting - despite an awful AI. And I say that as one of that 90%. It is certainly not the AI that made me buy and play Civ. I don't expect nor want AI to play perfectly, but I do want AI to play good enough to not make ridiculously dumb decisions.

For example, notice how often the AI leaves embarked units unprotected, so they can be easily destroyed. Or how often, despite having long-range artillery, it will enter a town's firing range (even with that artillery!). Or how, when defending its own towns, it strongly prefers to fire at even slightly damaged units before attacking siege weapons (which deal much greater damage to towns).


I've never specialized in AI, but from my experience on game teams, most of the challenges revolve around making the AI 'fun' to play against, which is not generally very well defined. It's also about taking feedback from designers and playtesters and figuring out how to adjust the AI to address that feedback. Machine learning techniques aren't really used much in game AI, partly for these reasons: it's not obvious how to adjust some black-box ML model in response to feedback from playtesters on balance issues, or from designers asking for the AI to be more or less 'aggressive', or other qualitative feedback. Usually it's more a collection of rules and heuristics with various parameters that can be adjusted to tune the behavior, allow for different difficulty levels, and so on.
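For instance, a minimal sketch of what such a parameterized rule set can look like (the names and numbers are made up):

```python
from dataclasses import dataclass
import random

@dataclass
class AIProfile:
    """Tunable knobs a designer can adjust per difficulty level."""
    aggression: float      # 0..1: chance to push rather than take cover
    reaction_delay: float  # seconds before responding to the player
    accuracy: float        # 0..1: hit probability per shot

# Presets: playtester feedback like "too aggressive on easy" maps
# directly to a number a designer can tweak, unlike a black-box model.
PROFILES = {
    "easy":   AIProfile(aggression=0.2, reaction_delay=0.8,  accuracy=0.3),
    "normal": AIProfile(aggression=0.5, reaction_delay=0.4,  accuracy=0.5),
    "hard":   AIProfile(aggression=0.8, reaction_delay=0.15, accuracy=0.75),
}

def choose_action(profile: AIProfile, under_fire: bool) -> str:
    # One simple rule with one tunable parameter: at aggression 0.0 the
    # AI always takes cover under fire; at 1.0 it always presses the attack.
    if under_fire and random.random() >= profile.aggression:
        return "take_cover"
    return "attack"
```

When a designer says "make normal feel scarier", you bump a couple of numbers and re-test, which is exactly the feedback loop that's hard with a trained model.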


I always thought it would be interesting to write an AI for a shooter like Call of Duty or Halo, trained on real player data according to your skill level. Most FPS games already record match data, so the dataset is available. The AI might evolve as the player meta evolves; it would learn what types of maneuvers work in certain situations, know the average accuracy for a skill level and adjust itself accordingly, etc. It would also let the game fill in a low player population, e.g. during low-traffic times or for a player whose ping is too high, with bots that act human-like. Maybe wishful thinking, but I always thought it was an interesting idea.
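A rough sketch of the simplest version of this, assuming you have per-match accuracy logs for the lobby's skill tier (everything here is hypothetical):

```python
import random
import statistics

def bot_accuracy_for_lobby(recent_accuracies: list[float]) -> float:
    """Hypothetical: derive a humanlike bot hit rate from recorded
    per-match accuracies, so backfill bots match the lobby's skill."""
    mean = statistics.mean(recent_accuracies)
    spread = statistics.pstdev(recent_accuracies)
    # Sample around typical human performance rather than using a fixed
    # difficulty setting; clamp to a plausible range so bots never look
    # superhuman or hopeless.
    return min(0.95, max(0.05, random.gauss(mean, spread)))
```

The same idea extends beyond accuracy: movement patterns, engagement distances, and reaction times could all be sampled from distributions observed at the player's skill level.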


Like others have said, balancing playability is hard. Another problem is that users may perceive that the AI cheats, which hurts the fun.

But I'd say another hard thing is that every game needs a different approach to AI. Some need path finding, which is well documented. Others need resource balancing which is less so. The newer and rarer the game mechanic, the more novel the AI.

I would say Machine Learning is not useful except in a handful of games, and if yours is not in that category, you risk spending a lot of time for little payoff. And even in those categories, ML will be working to win, not to maximize fun, so it could end up being too good or too cheaty-looking.


One big issue I can imagine with applying ML is that most games are buggy and poorly balanced, but the majority of your players aren't optimizing machines (outside of speedrunners) and will follow "natural" paths, so it's fine.

The only kinds of games that get enough time and resources to fix the issues that would trap an AI in some optimal but horribly boring (or rather, "unfair") strategy are competitive multiplayer games, and even then only those that can qualify for e-sports. That is, the games that already treat players (or rather, the wetware community optimization machine) the same as they would treat your ML AI.


I've been thinking about this a lot, too!

Honestly, I think the most challenging factor is that when you are working on machine learning for your game.... you aren't really working on your game. i.e., you're not writing storylines, creating visuals, audio, programming general interaction, etc. Machine learning (if you don't have a ton of experience) can be super time consuming and unless your game is 100% based around the sole idea that it is ML.. all of that time is taken away from the rest of your game.


Yes I agree, I think the idea here is to provide applied machine learning as a service to game devs.


I actually think there would be value in having machine learning / AI in a game that dynamically tries to match the human player's skill without overwhelming them. The goal would be to maintain parity such that the player's response is always "I was this close, and I could have done X, Y, and Z" rather than "I had no chance".
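A minimal sketch of that kind of rubber-banding; the parameter name and step size are just illustrative:

```python
def adjust_difficulty(skill: float, player_won: bool,
                      step: float = 0.05) -> float:
    """Nudge a 0..1 AI skill parameter toward a ~50% player win rate,
    so the player always feels 'this close' rather than hopeless."""
    skill += step if player_won else -step
    # Clamp so the AI never becomes trivially weak or unbeatable.
    return min(1.0, max(0.0, skill))
```

You wouldn't even need ML for the controller itself; the learned part could just be the mapping from the skill parameter to concrete AI behavior.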


A friend is working on a solution for this. Check it out https://yuri.ai


Looks cool, if not overkill for the vast majority of games. I hope I remember this if I ever get to make a moba or a hardcore strategy game.


Why is it named Yuri, if I may ask?


After Yuri from Command & Conquer.


IIRC, the developers of Final Fantasy XII said in an interview that the Gambit system that the player uses to set the behavior of their computer controlled party members is basically just a GUI representing the same logical structure used behind the scenes for enemy behavior.


I love that Gambit system and wish more games would adopt similar systems. The Rules based approach allows complex behavior to emerge from a relatively simple set of rules. Although this is a per-unit approach, in combination with an overarching management AI, this could be very engaging.
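A sketch of what such a condition/action rule list can look like in code; the conditions and actions here are invented for illustration, not taken from FFXII:

```python
# Gambit-style priority rules: each entry is (condition, action), checked
# top to bottom every tick; the first matching rule fires.
def make_gambits():
    return [
        (lambda s: s["ally_hp"] < 0.3, "cast_cure_on_ally"),
        (lambda s: s["self_hp"] < 0.5, "drink_potion"),
        (lambda s: s["enemy_visible"], "attack_nearest_enemy"),
        (lambda s: True,               "idle"),  # fallback rule
    ]

def decide(state, gambits):
    for condition, action in gambits:
        if condition(state):
            return action
```

The elegance is that the same structure works for both player-configured party members and designer-authored enemies; only the rule lists differ.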


What I understand about machine learning is that the actual learning takes a lot of compute power and training data, so you can't have an AI that adapts during gameplay. The AI will just "know" the optimal move for any given play state.


If your game doesn't give competitors perfect knowledge of the game state, it would make sense for the AI to track previous activity of its current opponent as part of the 'play state' for making decisions. For instance, if it doesn't know where the enemy is, but the enemy has attacked from this direction every time so far, it would prepare for another attack from that direction. Or if the last time the AI made a given offensive move, the player countered effectively a certain way, it may modify its next offensive to overcome that counter. The longer the game, the more it could appear to be learning to counter its opponent, even though, as you say, it's just making the optimal move given the current state of play.
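As a sketch, even a simple frequency count over observed attacks gives this "appears to learn" effect; the example is illustrative, not from any particular game:

```python
from collections import Counter

class AttackPredictor:
    """Track where the opponent has attacked from so far and prepare
    defenses against the most frequent direction."""
    def __init__(self):
        self.history = Counter()

    def observe(self, direction: str) -> None:
        self.history[direction] += 1

    def predicted_direction(self, default: str = "north") -> str:
        # With no observations yet, fall back to a default guess.
        if not self.history:
            return default
        return self.history.most_common(1)[0][0]
```

No training happens at runtime; the "learning" is just this opponent history folded into the play state the fixed policy already consumes.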


Stopping it from making a base in Antarctica, which players can’t invade due to a bug


My guess is you want it challenging but not too hard.


Mixing Concepts

The biggest surprise I have found on this subject is this strange entanglement of ideas between "AI designed to kill the player" ("Military" AI) and "AI designed to make the game fun" ("Game" AI). This is like comparing an F-22 stealth fighter to a 747 commercial airliner -- sure, both are technically "jets", but they serve completely different functions.

The only AI we should be talking about here is AI designed to make a game more engaging and enjoyable for the player. That may include emulating a tiny bit of behavior that resembles military tactics, but only where games use that as a topical facade -- unless you play The Sims waaaay differently than I do.

Easy Mode

There are several posts here where people assert that "they don't want a game to be hard, that would be bad". As a counterexample, I would invite these individuals to view the awards, testimonials, and revenue from the ["Souls"](https://en.wikipedia.org/wiki/Souls_(series)) series (Demon's Souls / Dark Souls). I have spoken to psychology experts at game development conferences, as well as professors that explicitly study psychology in games -- and they are still struggling to define and quantify the exact elements that came together to make this series such a hit. But there is no denying that it is an absolute favorite, with a huge following, and tremendous profits to prove the efficacy of the model.

Obviously, I'm not advocating that "every game should have a Souls-style difficulty curve". That series caters to a specific audience -- and there are an ocean of subtle differences between fans of the series. The point is: broad statements like that go directly against a mountain of evidence. We need to think about this issue with its full (tremendous!) complexity, and how to leverage some immense computing power to address it. Speaking of which...

Based on your Purchase History...

Right now, companies like Amazon and Google are mashing up the entire history of everything you've purchased, every web page you've ever visited, along with demographic and social media data about you. They can then deliver a product recommendation / advertisement / etc. within milliseconds that targets your specific needs, even for things you didn't realize you wanted / needed yet.

If that can be done in milliseconds, then the same principles can be applied to tweaking the difficulty for players in real time. You don't even need to maintain a hot internet connection to constantly feed some huge cloud implementation at all times -- some local processing combined with daily or weekly micro-updates can do the trick. For a perfect example of local / cloud balance, check out SnapChat -- that thing uses the local computing power on your phone to identify your face, and then attach various 3D objects and effects to it in real time. That's done at 20-30 FPS on a smartphone without any special dots or markers to help "anchor" the models. That's amazing. And it's all possible because the [Convolutional Neural Network](https://en.wikipedia.org/wiki/Convolutional_neural_network) models that make that possible get trained on the cloud and downloaded every now and then.

So, yes -- you can code a game to dynamically customize itself for every single player. This is the [exact point of my video series](https://www.youtube.com/watch?v=hTDSGU2vAow) about this subject. Thus far I've made over a half-dozen Skyrim mods that demonstrate some of these concepts, and our team is expanding into other games that are finally opening up their engines to 3rd parties. If folks would like to talk about the nitty-gritty of all that, I'm happy to chat.


None of those (e.g. lighting, physics, graphics) are truly difficult today with the advent of modern game tools (e.g. Unity). Assuming you know basic vector math and can think in three dimensions, most problems related to sound, physics, graphics, etc. can be solved with some Googling.



