
Enemy AI: chasing a player without Navigation2D or A* pathfinding - atomlib
https://abitawake.com/news/articles/enemy-ai-chasing-a-player-without-navigation2d-or-a-star-pathfinding
======
b0rsuk
A fascinating AI technique is used in one of the best roguelikes, Brogue. The
author called it "Dijkstra Maps". Basically it's about generating a heatmap
for all the squares on the level. You can start with 0 at player position, and
from that point use a simple floodfill algorithm, putting down increasing
numbers with each step. Then a monster simply examines all adjacent squares
and selects the one with lowest number. This has at least two notable
advantages:

1. You only need to do the path generation once, and it scales very well with
the number of monsters.

2. It's really good at combining several concerns, because you can generate a
couple of heat maps for different purposes and add them up. For example, one
heat map is about proximity to the player. Another could be about proximity to
health pickups, or proximity to cover, or proximity to open space if a monster
likes to keep its distance and shoot. If a gas trap is triggered, you can use
a "danger" heat map. Then a monster can easily get closer to the player while
choosing the path with fewer harmful effects.

That's why centaur archers in Brogue are so annoying: monsters avoid traps
intelligently, and monster groups avoid wasting their numerical advantage by
chasing the player through a corridor.
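
For the curious, both halves of the trick (the flood fill and the downhill step) fit in a few lines. This is a hypothetical Python sketch of the idea, not Brogue's actual code; the grid representation and names are made up for illustration:

```python
from collections import deque

WALL = -1  # marker for impassable cells

def dijkstra_map(grid, sources):
    """Flood-fill step counts outward from the source cells (e.g. the player's tile).

    grid: 2D list where WALL marks impassable cells; anything else is open.
    sources: list of (row, col) goal cells that get value 0.
    Returns a 2D list of distances (None for walls/unreachable cells).
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in sources:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != WALL and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def monster_step(dist, pos):
    """A monster simply rolls downhill: move to the lowest adjacent value."""
    r, c = pos
    best, best_val = pos, dist[r][c]
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(dist) and 0 <= nc < len(dist[0]) \
                and dist[nr][nc] is not None and dist[nr][nc] < best_val:
            best, best_val = (nr, nc), dist[nr][nc]
    return best
```

Combining concerns is then just a weighted sum of several such maps into one grid, with monsters rolling downhill on the sum.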

He described it in detail in this article:
[http://www.roguebasin.com/index.php?title=The_Incredible_Pow...](http://www.roguebasin.com/index.php?title=The_Incredible_Power_of_Dijkstra_Maps)

And in case you're wondering, the Brogue source code is licensed under the AGPLv3.

~~~
60654
Sounds like a classic potential field implementation, and yeah, they're very
useful. There are two problems: scaling with map size (large maps or high
detail maps), and update cost in action-heavy games (if the field source moves
on every frame). But for small roguelikes it's a very good fit.

~~~
b0rsuk
Thanks for the alternative name! It helps with further research into the
technique. The core idea is relatively simple, so I'm not surprised it's
already known, but finding the name of something you already understand is not
easy with just a search engine.

~~~
60654
Sure thing! And as I mentioned in another comment, if you google for
"potential fields" and "flow fields" in games, there's a whole bunch of papers
and talks on this.

~~~
b0rsuk
Now that you mention "flow fields", I remember seeing the technique in a
Supreme Commander AI demonstration. Two columns of tanks can navigate through
each other seamlessly, without chaos, and in real time. Imagine seeing this in
the days of Warcraft 1, where it was common for a unit to follow the left-hand
rule across the entire map because a bridge was momentarily occupied.

[https://www.youtube.com/watch?v=iHuFCnYnP9A](https://www.youtube.com/watch?v=iHuFCnYnP9A)

------
simias
I'm always saddened that more work doesn't go into making fun and original
game AIs. Most AAA games released these days have utterly predictable AI;
modern shooters don't seem much more evolved than Quake 1.

It's too bad because games like F.E.A.R. have shown that even simple AI
heuristics can lead to very interesting emergent behavior. TFA demonstrates
that very simple AI tweaks can make the enemies feel more organic and
realistic.

I suppose that part of the problem is that good AI doesn't make for nice
trailers and ads (since you can just script those to do whatever you want
anyway).

~~~
hutzlibu
Yes, even a simple implementation of self-preservation would make AI so much
more realistic. Meaning, if an enemy has low health, it is likely to run
away. Or if they see all their allies die, etc.

This is really not so hard to implement; simple games have done it for ages.
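
Indeed, a toy version of such a rule is a few lines of Python. The thresholds, weights, and field names here are all made up for illustration:

```python
def choose_behavior(enemy, visible_allies):
    """Toy self-preservation rule: low health and fallen allies push toward fleeing.

    enemy: dict with 'health' and 'max_health'.
    visible_allies: list of dicts with an 'alive' flag.
    """
    health_frac = enemy["health"] / enemy["max_health"]
    allies_alive = sum(1 for a in visible_allies if a["alive"])
    # Fear grows as health drops; seeing all allies dead adds a flat bonus.
    fear = (1.0 - health_frac)
    if visible_allies and allies_alive == 0:
        fear += 0.5
    if fear > 0.8:
        return "flee"
    if fear > 0.5:
        return "take_cover"
    return "attack"
```

Even this crude state switch already produces enemies that retreat when wounded or demoralized instead of charging like zombies.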

Or at least basic tactics. I think in Far Cry 1 enemies did not run away, but
were smart enough to sneak and circle around you; later, in Crysis or the
other Far Cry titles, they were just cannon fodder. It makes everything more
Hollywood kaboom.

~~~
PeterisP
Does having enemies run away make the game more fun for the player, or does
the need to chase them become an annoyance?

If an enemy sneaks around the player, gets in cover and concealment, and
shoots them while the player does not understand where the fire is coming
from, is it an enjoyable experience?

Smarter and more realistic does not necessarily mean better. We don't write
enemy AI for the purpose of it being effective in fighting, it's there to
provide entertainment for the player.

It's also about creating the intended emotions. Games are intentionally
designed to create a particular set of experiences. There's a niche of games
that relies on gratification of overcoming some frustration (e.g. Dark Souls
series) but the majority of gamer market prefers a 'power fantasy' emotional
experience, so many games are intentionally targeting that. If we want the
player to feel powerful, then we design so that their character can defeat
many enemies; if we want the player to feel smart, then we design enemies so
that their behavior has exploitable weaknesses that the player can discover
and feel satisfied while 'outsmarting' or 'tricking' the opponents. In most
game genres we don't want the player to feel outsmarted by the computer unless
they have made a substantial mistake that the average player is able to notice
and correct.

~~~
hutzlibu
"Does having enemies run away make the game more fun for the player, or does
the need to chase them become an annoyance?"

A good game mode does not require finding and hunting down every enemy. It has
objectives, like "go there" (the story continues), "blow up X", or "clear area Y".

"If an enemy sneaks around the player, gets in cover and concealment, and
shoots them while the player does not understand where the fire is coming
from, is it an enjoyable experience?"

If the game gets the graphics and gameplay right, for sure! (I still have
memories from Vietcong, where you get ambushed in the jungle and don't see
anything and just die, until you learn to move in cover and watch the terrain)

And you have the muzzle flash, for example. And if you do not see it, the fun
is in getting scared and rushing to cover, where the enemy cannot see you
anymore (if the AI is not cheating), and then moving to a different position
to find him. Or you can have a kill cam, where upon death you see where the
enemy that killed you was.

"If we want the player to feel powerful, then we design so that their
character can defeat many enemies"

True, but there are various ways to implement this without zombie AI. Like in
Crysis, for example, where you have superior tech. Or more hitpoints, because
you play a badass. Or, in general, you as a player have quicksave and load;
the computer does not.

So yeah, I know that the target audience is dumb and wants fast food, so to
speak, but they also consume what is available. And the standard is mostly
zombie AI, or ultra-hardcore realism like in Arma, which is clearly not for
everyone. But I really don't see why "AAA" games could not invest a tiny bit
more in AI that does not break immersion.

------
DennisP
It might make sense to have the monsters repulse each other. By spreading out,
they'll end up taking multiple paths around obstacles, coming at you from
different directions, so it looks like more intelligent coordinated behavior.
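
A rough Python sketch of that chase-plus-repulsion steering, with arbitrary tuning constants (the radius and strength values are illustrative, not from any engine):

```python
import math

def steer(monster_pos, player_pos, other_monsters,
          repel_radius=3.0, repel_strength=1.5):
    """Head toward the player, but push away from nearby packmates.

    Positions are (x, y) tuples. Returns a unit direction vector.
    """
    # Attraction: straight toward the player.
    dx, dy = player_pos[0] - monster_pos[0], player_pos[1] - monster_pos[1]
    d = math.hypot(dx, dy) or 1.0
    vx, vy = dx / d, dy / d
    # Repulsion: an inverse-distance push from each monster inside the radius.
    for ox, oy in other_monsters:
        rx, ry = monster_pos[0] - ox, monster_pos[1] - oy
        r = math.hypot(rx, ry)
        if 0 < r < repel_radius:
            push = repel_strength * (repel_radius - r) / (repel_radius * r)
            vx += rx * push
            vy += ry * push
    n = math.hypot(vx, vy) or 1.0
    return (vx / n, vy / n)
```

With everyone running this locally, a clump of chasers fans out around obstacles on its own, which is what reads as "coordination" to the player.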

~~~
ghthor
I've used this pattern before in game AIs and it works well. If you combine
it with a somewhat random searching incentive, the "hive" will spread out and
search for the player(s). I also let the "hive" communicate: if one of them
finds a player, they let each other know and set a new waypoint for that
area. When they repulse each other they take different pathways around
objects, and it looks and feels great to play against.

------
mrspeaker
So simple and obvious as soon as you see it in action. I love "hacks" like
this: really easy to code, but with a big impact - and so many potential uses
too! I'm adding "scent trails" to my bag of tricks for sure.

------
DonHopkins
This is how The Mighty Slime Mold hunts.

[https://www.youtube.com/watch?v=7YWbY7kWesI](https://www.youtube.com/watch?v=7YWbY7kWesI)

>How This Blob Solves Mazes | WIRED

>Physarum polycephalum is a single-celled, brainless organism that can make
“decisions,” and solve mazes. Anne Pringle, who is a mycologist at the
University of Wisconsin-Madison, explains everything you need to know about
what these slime molds are and how they fit into our ecosystem.

------
dfgdghdf
This technique shows how game AI is different from academic AI. The goal is to
create interesting gameplay with minimal performance overhead. This system is
just as fun as a "correct" system, but far simpler to implement and cheap to
execute.

~~~
jfkebwjsbx
It is academic AI. A lot of papers cover how to implement algorithms fast, or
how to compute the best approximation within a given time budget, etc.

------
unnouinceput
This technique won't work if your player has teleport abilities, like the
Sorceress in Diablo 2 or the Assassin in Guild Wars. In those cases you can
teleport quite some distance, and enemies in a 3D environment will get stuck
on an upper slope, while a simple path algorithm would keep them chasing you.

~~~
rochak
Well, teleporting is a tough problem to solve to begin with. One way to solve
it is to have enemies distributed uniformly and restrict their movement to
subsections. Once the player teleports, only the enemies in the closest
subsections will use the algorithm to reach the player.

~~~
Skunkleton
You could use a traditional path finding algorithm, letting the mobs look
around confused while the path is computed.

------
60654
TLDR: instead of doing generic pathfinding, the player avatar drops decaying
"scent" into the world grid, and enemies follow the player by doing gradient
ascent.
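
That loop can be sketched in a few lines of Python; the decay factor and deposit value here are illustrative choices, not the article's numbers:

```python
def tick_scent(scent, player_pos, decay=0.9, deposit=100.0):
    """Decay every cell a little, then drop fresh scent at the player's tile."""
    for row in scent:
        for c in range(len(row)):
            row[c] *= decay
    r, c = player_pos
    scent[r][c] = deposit
    return scent

def follow_scent(scent, pos):
    """Gradient ascent: step to the neighboring cell with the strongest scent."""
    r, c = pos
    best, best_val = pos, scent[r][c]
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(scent) and 0 <= nc < len(scent[0]) \
                and scent[nr][nc] > best_val:
            best, best_val = (nr, nc), scent[nr][nc]
    return best
```

Because older deposits have decayed more, the trail itself encodes direction: climbing the gradient walks the enemy along the player's route, newest footprints first.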

The first time I saw this technique in use was in the classic SimAnt game by
Maxis, in the 90s; in research it was also explored in the ALife community.
It's a cool trick, but by itself it's not quite enough: it's good for insect
behavior but not much more.

But what _has_ been useful is combining standard pathfinding with this. For
example, imagine if one of your units dies and drops some "scent of death"
into the surrounding area - and that scent gets incorporated into A* as a
large cost value for traversing this terrain. Now all your other units will
"smartly" start avoiding the dangerous area for a while, without having to do
any expensive analysis of _why_ the unit died there, e.g. was there an ambush
there or some such.
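
One way to sketch that combination: a standard grid A* whose per-step cost is inflated by a per-cell "danger" value. All names and weights below are illustrative, under the assumption that some other system deposits and decays the danger field:

```python
import heapq
import itertools

def astar_with_danger(grid, start, goal, danger, danger_weight=10.0):
    """Grid A* where stepping into a cell costs 1 plus a penalty proportional
    to that cell's danger (e.g. a decaying "scent of death").

    grid: 2D list, truthy cells are walls. danger: 2D list of floats.
    Returns a list of (row, col) from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance; admissible since each step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares cells
    heap = [(h(start), next(tie), start)]
    parent = {start: None}
    g_cost = {start: 0.0}
    closed = set()
    while heap:
        _, _, cur = heapq.heappop(heap)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) \
                    or grid[nxt[0]][nxt[1]]:
                continue
            # Base step cost 1, plus a penalty for dangerous cells.
            ng = g_cost[cur] + 1.0 + danger_weight * danger[nxt[0]][nxt[1]]
            if nxt not in g_cost or ng < g_cost[nxt]:
                g_cost[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(heap, (ng + h(nxt), next(tie), nxt))
    return None
```

Units then route around the smelly area automatically: a dangerous cell isn't a wall, just expensive, so they'll still cross it if every detour costs more.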

(Google for "potential fields" and "flow fields" in games for more examples
from commercial games.)

~~~
anotheryou
"If no line of sight..." I might add

~~~
badloginagain
Makes me think you can add scent trails to many objects- like the path of a
fired arrow. Would help mitigate the "stealthy archer" problem Skyrim AI has.

~~~
enchiridion
I'm not familiar with that problem.

~~~
flqn
The "stealth archer" build in Skyrim is overpowered since the enemy AI usually
can't tell where an arrow came from; enemies tend to aggro, look around their
immediate area, then go back to the idle state, allowing the player to shoot
them again from a safe, hidden spot. Rinse and repeat, and almost any
encounter in the game with hiding spots becomes trivial.

------
Datenstrom
A former colleague of mine, Meghan Chandarana, was doing some really awesome
work that incorporated a similar "bread-crumb" algorithm for swarms:
dispatching groups for tasks and navigating back to the swarm. It wasn't the
primary focus, but it was really cool to see it work. If you want to see an
application of it to swarms, her paper is here:

[https://www.ri.cmu.edu/wp-
content/uploads/2018/08/SMC2018.pd...](https://www.ri.cmu.edu/wp-
content/uploads/2018/08/SMC2018.pdf)

------
dmos62
My first thought looking at the animation was Boids [0]. The scent trail
approach is interesting because it simulates/respects fog of war (though the
game doesn't seem to use it otherwise).

[0] [https://en.wikipedia.org/wiki/Boids](https://en.wikipedia.org/wiki/Boids)

------
carapace
Cool!

I was playing around with my lil asteroid sim [1] and I wanted to trace the
trajectories of the asteroids, so I put a particle generator in the asteroid
"base class" and set it to emit sixty particles with lifetime set to sixty
seconds, zero momentum and velocity, and unaffected by gravity. I bet you
could adapt that to make a "scent trail", eh?

[1]
[https://git.sr.ht/~sforman/SpaceGame](https://git.sr.ht/~sforman/SpaceGame)
but it seems I deleted the experiment. The commit that has it is
[https://git.sr.ht/~sforman/SpaceGame/commit/7cc3981631db22be...](https://git.sr.ht/~sforman/SpaceGame/commit/7cc3981631db22be3fa77dc479f111ff86f91a08)
FWIW.

------
JabavuAdams
All of the obstacles in that video are small and convex. This is the easy case
of obstacle avoidance. Basically use modified steering behaviours. If that's
all you'll ever have, then great -- don't build a system that you don't need.

If you ever move to large non-convex obstacles, like a maze, problems will
mount until it would have made more sense to use a navigation mesh or a
path-finding system.

------
x0re4x
Hmm... pretty sure I already saw something like this a long time ago:
[https://github.com/id-
Software/Quake-2/blob/master/game/p_tr...](https://github.com/id-
Software/Quake-2/blob/master/game/p_trail.c)

------
xg15
What I'm surprised not to have seen more often are attempts to preprocess a
map, group its points into larger sections, and then perform pathfinding on
those sections - e.g.:

\- split a map into convex polygons

\- use pathfinding to find out which polygons you have to traverse (either by
making each polygon a node in the pathfinding graph, or by selecting points on
the polygons' edges and using those as nodes)

\- move in a straight line _inside_ a polygon.

It seems, especially for "almost convex" maps, this could move a good deal of
pathfinding computation into the build phase.
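
The graph half of that idea can be sketched as Dijkstra over hand-built convex regions. The region ids, centers, and adjacency below are hypothetical stand-ins for what a real map-preprocessing step would produce:

```python
import heapq

def region_path(adjacency, centers, start, goal):
    """Dijkstra over a region graph: each convex region is a node, and the
    edge weight is the straight-line distance between region centers.
    Inside a single region, movement is assumed to be a straight line.
    """
    def dist(a, b):
        (ax, ay), (bx, by) = centers[a], centers[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    best = {start: 0.0}
    parent = {start: None}
    heap = [(0.0, start)]
    while heap:
        d, cur = heapq.heappop(heap)
        if d > best[cur]:
            continue  # stale heap entry
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt in adjacency[cur]:
            nd = d + dist(cur, nxt)
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                parent[nxt] = cur
                heapq.heappush(heap, (nd, nxt))
    return None
```

Since the region graph is usually orders of magnitude smaller than the tile grid, the per-query search becomes cheap; the hard part (the convex decomposition) happens once, at build time.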

------
timwaagh
I had this problem as well with a simple game I made. What I did instead was
just randomize the movement a little, and that was really enough. Making them
any smarter would have made the game really short.

------
Dotnaught
If you’re modeling an intelligent pursuer, the algorithm should anticipate
future travel rather than just following.

~~~
itdagusszous
In games the goal of the developer isn't necessarily to have intelligent
agents, but to have agents that are fun to play against. Sometimes the goal is
to have them be intelligent, but sometimes having them behave in "dumb" but
predictable ways makes the game overall more fun.

~~~
dkersten
This is the reason usually given for using relatively simple techniques like
behavior trees, but, anecdotally, I find that the biggest letdown in most
games is that the AI is so dumb that it 1) is immersion-breaking and 2) gets
same-y and boring very, very quickly.

~~~
eru
Yes. It depends on what the game is trying to achieve.

Subset Games, the makers of FTL and 'Into the Breach' have talked about this
extensively. 'Into the Breach' deliberately has the enemies telegraph their
plans one turn ahead of time, and the game is all about interfering with those
plans.

The rest of the game's design carefully reinforces the message that the enemy
units are not intelligent. The backstory has them as basically oversized
insects.

If you have a game that pretends to give you realistic human antagonists, but
they behave mechanically dumb and predictable, that breaks immersion like you
suggest.

One big problem is that a very smart AI that's purely there to oppose you in a
zero-sum game ain't fun to play against for most people. The handicaps a
modern chess or Go engine would have to give a normal human for a fair fight
are ludicrous. And people seldom want fair fights in their games. They
want a feeling of accomplishment, but without actually putting in all that
much work.

Even hardcore games like XCom cheat in your favour behind the scenes.

There are at least two ways out of this while still avoiding the boring
repetition:

\- carefully make the NPCs make believable human-like (or animal-like)
mistakes, instead of easily exploitable repetitive mistakes

\- give the NPCs goals that are in conflict with the player's, but not 100% so.

A silly example of the second option:

Take a game like Thief that's all about sneaking around stealthily and
stealing stuff. Now realistically, most of the guards are just hired goons.
They don't want to die, but they don't particularly care about protecting the
place. They do care about being seen doing their job, so they don't get fired.

So your job as a player could be, in addition to staying unseen, to provide
plausible distractions and reasons for the guards not to investigate too
closely.

Higher ranked guards, and owners, would be under higher pressure to perform
and won't get away with excuses. So they would be more alert.

Using the same trick over and over again would lower its effectiveness: guards
can't plausibly claim to their higher-ups to have been tricked again and
again.

If you start becoming aggressive toward a guard, or he learns that you killed
one of his friends, the guard's priorities will shift towards more
self-preservation.

A pacifist run might even earn you respect and admiration from the lower level
guards. Just like cat burglars are often admired in real life.

Seen a bit more abstractly, the game now becomes one of three factions: the
thief (that's you), the low-level guards, and their employers. All with
partially overlapping, partially conflicting goals.

You can throw insurance companies into the mix, if you want to make it even
more complicated.

Thanks to the non-zero sum nature of the partial conflict, you can crank up
how smart everyone acts, without overwhelming the player:

E.g. smarter guards might figure out a way to de-escalate that still looks
like a plausible and even courageous move to their employers.

~~~
dkersten
I think you've hit the nail on the head. I absolutely agree that "dumb" AI can
be explained away and integrated into a game's design and story in ways that
make it much more interesting and believable than typical bad AI on its own. I
also agree that by providing a flexible and interesting enough scenario, with
conflicting goals and motivations, you can improve the AI in ways that are
both noticeable to the player and add extra layers of interesting gameplay.

As an aside, unrelated to your reply, I just remembered another common excuse:
that enemies only exist for X seconds before the player shoots them, so effort
on smarts would be wasted. For some games I think that's certainly the case,
but I also feel that in many cases the enemy only exists for a few seconds
_because_ they are dumb and uninteresting.

~~~
eru
All agreed.

Both 'dumb' and 'interesting' AI can be useful in a game, it all depends on
your game's design.

To give another silly example: Tetris by default has very 'dumb' AI that just
gives you pieces at random. You could imagine variants of Tetris with more
interesting piece selection.

For example, an AI that makes pieces as unhelpful as possible while keeping
their distribution statistically indistinguishable from true random selection.

Or there was an inversion of the 2048 game, where an AI plays the normal game,
and your task is to give them unhelpful numbers.

------
atum47
Really great tutorial.

------
FZ1
Why are they calling it "AI", though? There isn't any AI or ML.

You leave a trail for the enemy to follow, and they follow it.

It's not even path-finding, it's path-following, which is pretty much an
if-then statement.

It's a neat, simple approach, and fun to watch. But there isn't any learning,
or knowledge, or other AI.

~~~
marcinzm
AI does not mean ML, it is a broad field that is a superset and not a subset
of ML. Or as Wikipedia describes it:

>In computer science, artificial intelligence (AI), sometimes called machine
intelligence, is intelligence demonstrated by machines, in contrast to the
natural intelligence displayed by humans and animals. Leading AI textbooks
define the field as the study of "intelligent agents": any device that
perceives its environment and takes actions that maximize its chance of
successfully achieving its goals.

~~~
FZ1
> AI does not mean ML

Hence the 'or' in my statement. Neither are present here.

~~~
marcinzm
>any device that perceives its environment and takes actions that maximize its
chance of successfully achieving its goals.

This does exactly that.

~~~
FZ1
Every program that has ever existed does this. So you're saying that all
programs that have ever existed are AI? You make no distinction whatsoever.

I would say that the more a program decides on its own which actions to take
to maximize its chances of success, the closer to AI it is.

If it's doing exactly what it's explicitly told, then it's not really
intelligent, is it?

~~~
serf
>>any device that perceives its environment and takes actions that maximize
its chance of successfully achieving its goals.

>Every program that has ever existed does this.

No, not every program is self-tuning, nor do they all take inputs.

