In undergrad (mid-2000s) I partially specialized in both computer graphics and machine learning, and took a video game class to try combining those skills. I have two big memories from that time that stuck with me. The first is when a dev from Civilization 1 visited our class. He spoke about the "AI" of Civilization, which he revealed to be a simple random number generator. He told us people often thought it was far more complex, but that's really all there was to it.
The second memory is of actually building a 3D engine and game from scratch. Every week we had to add a new feature, so one week I naturally took on the AI. My partner and I were making a soccer-style game, and I had grand visions of implementing a sophisticated AI using what I'd been learning in my other classes, like SVMs and neural networks. I started doing research and was shocked to learn that no video games at the time did any of this. Computers then weren't really capable of running several instances of an ML algorithm simultaneously (one per agent) while handling everything else, and, more importantly, there was little need. I ended up spending a couple of days building a basic state machine, and it worked. Even the professor thought we had added incredible intelligence to the players, who in reality were just following ~10 rules like "if player in defense mode: move toward the midpoint between the ball and the goal while keeping some distance from other players".
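A rule like the one quoted fits in a few lines. Here's a hypothetical minimal sketch of the "defense mode" rule (the function name, coordinate scheme, and `spacing` parameter are all invented for illustration, not the original code):

```python
import math

def defense_target(ball, goal, teammates, spacing=2.0):
    """Defense-mode rule: head for the midpoint between the ball and
    our goal, nudged away from any teammate crowding that spot."""
    tx = (ball[0] + goal[0]) / 2.0
    ty = (ball[1] + goal[1]) / 2.0
    for px, py in teammates:
        dx, dy = tx - px, ty - py
        dist = math.hypot(dx, dy)
        if 0 < dist < spacing:  # too close: push the target point away
            tx += dx / dist * (spacing - dist)
            ty += dy / dist * (spacing - dist)
    return tx, ty
```

A state machine is then just a dispatch on the current mode to one such target function per mode.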
My main takeaway from that class was that there was little need for actual AI in video games, and that I should pursue a different career path :/
And for those reading this who are interested in game AI and want to integrate true machine learning into their characters, I hope this will help keep you inspired.
So, like you, I had grandiose visions of an adaptive, machine-learning approach to controlling my characters in a platform-based fighting game. They would learn from previous mistakes and improve themselves, creating a truly interactive AI, one that could challenge the player beyond just letting him memorize its pre-programmed state machine patterns or giving it inhuman reaction times. Then I ran into the same issues: the game must run at 60 fps, and constant learning can't be done in that budget. So I implemented a basic AI, but I knew every way it would act; nothing about it was amazing.
Then one day I went back and looked at my AI approach and realized I could still use machine learning, just with a smaller neural network and a different learning algorithm. So I implemented a combination of reinforcement learning and evolutionary learning and let the agents train for a day. Then something amazing happened, and it's what I imagine a parent feels when their kid learns to do something: it saved itself. Starting out, the AI would just spam buttons and usually end up jumping off the platform and killing itself, or stray from the edge and never touch the control stick. But this time, it got knocked off and it saved itself.
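That kind of hybrid can be approximated with a simple neuroevolution loop: keep a population of network weights, score each set by letting it play, keep the fittest, and mutate them to refill the population. A minimal sketch under invented names and parameters (not the poster's actual code):

```python
import random

def policy(weights, obs):
    """Tiny linear 'network': one score per action, pick the argmax."""
    n_obs = len(obs)
    scores = []
    for a in range(len(weights) // n_obs):
        w = weights[a * n_obs:(a + 1) * n_obs]
        scores.append(sum(wi * oi for wi, oi in zip(w, obs)))
    return scores.index(max(scores))

def evolve(fitness, n_obs, n_act, pop=20, gens=30, sigma=0.1, seed=0):
    """Evolve weight vectors: keep the top quarter each generation,
    refill by mutating random elites with Gaussian noise."""
    rng = random.Random(seed)
    population = [[rng.gauss(0, 1) for _ in range(n_obs * n_act)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop // 4]
        population = elite + [
            [w + rng.gauss(0, sigma) for w in rng.choice(elite)]
            for _ in range(pop - len(elite))
        ]
    return max(population, key=fitness)
```

The fitness function would be something like "seconds survived on the platform", evaluated by running the game headless; the recovery behavior then falls out of the reward rather than being scripted.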
It was an amazing feeling, I never taught it to do that, but I gave it the ability to learn to do that, and that was extremely liberating.
So I encourage people not to give up on ML AI for video games. I know DeepMind recently teamed up with Blizzard to make a StarCraft II AI, and that looks awesome.
What you describe in your football example is often called a "behavior tree", and there are ways to learn those using machine learning (usually reinforcement learning, with neural networks and/or genetic programming). There was a popular video showing this applied to Mario ... and a paper published 7 years before that video doing the same thing (more context about this in ). I remember seeing something on Gamasutra saying that similar methods are used in some AAA games.
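For reference, a behavior tree is just a tree of composite nodes (selectors, sequences) over conditions and actions, ticked every frame. A minimal hypothetical sketch (class names and the example tree are invented, not from any particular engine):

```python
class Selector:
    """Ticks children in order; succeeds at the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        return any(c.tick(world) for c in self.children)

class Sequence:
    """Ticks children in order; fails at the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        return all(c.tick(world) for c in self.children)

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, world):
        return self.predicate(world)

class Action:
    def __init__(self, effect):
        self.effect = effect
    def tick(self, world):
        self.effect(world)
        return True

# "If in defense mode, move to the midpoint; otherwise chase the ball."
tree = Selector(
    Sequence(Condition(lambda w: w["defending"]),
             Action(lambda w: w.update(move="midpoint"))),
    Action(lambda w: w.update(move="chase_ball")),
)
```

Learning such a tree then means searching over its structure or the thresholds inside its conditions, which is where genetic programming comes in.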
I made a euchre game several years ago and put in an AI as basic as the one mentioned for Civ: all it did was pick a random card from its hand that followed suit. If it was placing the first card, the pick was completely random. I got praise from people about how the AI surprised them with feints etc. to beat them when they thought they were going to get points. Some I told; others I let believe the magic :)
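That whole agent fits in a handful of lines. A hypothetical sketch (the card representation and names are invented, and it ignores euchre's left-bower rule for simplicity):

```python
import random

def play_card(hand, led_suit, rng=random):
    """Pick a random legal card: follow suit if we can; otherwise
    (or when leading, led_suit=None) play anything."""
    if led_suit is not None:
        following = [card for card in hand if card[1] == led_suit]
        if following:
            return rng.choice(following)
    return rng.choice(hand)
```

All the apparent "feints" come from players reading intention into the uniform randomness.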
More sophisticated rule-based card game agents can actually look very good unless you play a huge number of hands against them (it depends on the game structure, though: trick-taking games work well, while poker agents can usually be figured out quickly, even though an agent that simply plays very aggressively looks strong for a while).
A lot of the big, breathtaking, (recent) breakthroughs in 'AI proper' have been at the really, really high level of abstraction of "how do I parse sense data". Questions like "What kinds of things are in this picture?", "How similar is this sentence to this other sentence?", or "What kind of thing is this thing?" are really asking, "How do I take this untyped, unstructured information and turn it into something in my ontology?". This is a problem in the real world because photons and soundwaves don't come with type signatures.
In a game engine, however, you have powerful hooks directly into the physics of the world you're operating in. Beyond that, you've got hooks into the other agents you're operating alongside. With a higher-resolution system like this, less work is required for seemingly intelligent behavior. A lot of that work is offloaded onto the higher granularity of the abstraction you're working under.
To make a completely unfounded hypothetical and invert the trope: I'm not sure we'd be as smart as we are if we could always read each other's minds and read and write directly to reality.
To be fair, a lot of the big push from the GOFAI work in the '60s onward was more in line with the kinds of things useful to game AIs: logic-oriented/planning systems.
I know that one way to get more 'compelling' (and less uncanny) AIs, in games and in academia, is to limit their ability to transgress the boundaries that our intelligence operates under.
I used to work in video game AI, having studied AI academically, and was also disappointed that a) there isn't a lot of horsepower to spend on AI, so you keep it simple and make it fast, and b) game designers and producers don't want characters that learn; they want predictable, well-defined behaviours. They can then compose these to make fun gameplay for the player.
But that was in the '90s. At the point I left, we were using A* and various optimizations on it; real-time planning for multiple moving agents is non-trivial. People were using planning algorithms to make the AIs behave in a more goal-driven manner. I myself implemented a state machine based on Rodney Brooks's subsumption architecture. Low-level basic survival behaviours (run from grenade) would override the character's higher-level goal (patrol route).
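That layering can be expressed as a priority-ordered list of behaviors, where the first layer that wants control suppresses everything below it. A minimal hypothetical sketch (names and actions invented for illustration):

```python
def subsumption_step(layers, world):
    """Layers are ordered highest priority first; each returns an
    action or None. The first layer that fires suppresses the rest."""
    for layer in layers:
        action = layer(world)
        if action is not None:
            return action
    return "idle"

def flee_grenade(world):  # low-level survival layer
    return "run_from_grenade" if world.get("grenade_nearby") else None

def patrol(world):        # high-level goal layer
    return "follow_patrol_route"
```

Here `subsumption_step([flee_grenade, patrol], world)` returns the survival action whenever a grenade is nearby, and falls through to the patrol goal otherwise.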
Ultimately I got bored, but all the tech used in the '90s has continued to evolve, and I'm sure there's some pretty interesting AI going on in games like GTA 5 and Assassin's Creed, where you have large numbers of people and vehicles interacting.
No doubt machine learning as an offline process to teach characters to drive like players, and so on, will be a fun and productive area to work on soon.
You should check out Vehicles: Experiments in Synthetic Psychology. It describes an "ecosystem" of vehicles powered by solar collectors linked to their motors via simple neural networks. One of the big takeaways from it is that, even with incredibly simple networks, you can get complicated behavior that appears to demonstrate intention, when there's nothing of the sort.
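The simplest of those vehicles needs hardly any network at all: each light sensor drives one motor, and whether the wiring is crossed or direct determines whether the vehicle appears to "love" or "fear" the light. A hypothetical sketch (names and the turn convention are invented):

```python
def vehicle_step(left_light, right_light, crossed=True):
    """One Braitenberg-style vehicle update. Returns (forward, turn),
    where positive turn means turning left."""
    if crossed:  # right sensor drives left motor and vice versa
        left_motor, right_motor = right_light, left_light
    else:        # each sensor drives the motor on its own side
        left_motor, right_motor = left_light, right_light
    forward = (left_motor + right_motor) / 2.0
    turn = right_motor - left_motor
    return forward, turn
```

With crossed wiring, a light on the right spins the left motor faster, turning the vehicle toward the light; an observer reads this as "aggression" even though there is no internal state at all.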
Still, I consider classic game AI to be a real type of AI; it's just a different thing from machine learning. Despite my (and your) example, most of the time when creating game AI you have to distill the rationale a player would have into a set of rules, and then build it from the ground up as a rudimentary intelligence that can make decisions. Sure, there are "shallower" games, but also games that distinguish themselves when good AI is in place. I keep thinking of the AI in Quake 1 bots, and of when a game like S.T.A.L.K.E.R. had good opponents. It was refreshing.
I'd even say there's some beauty in it, especially when you get to some emergent behavior that you did not expect.
I think Machine Learning is all the rage today mostly because you can attack a problem using brute force, without having to understand what drives a behavior. It may be more human-like, but to me it's just a separate branch.
There are games from around the turn of the millennium that used machine learning techniques, specifically Black & White and Republic: The Revolution, both of which Demis Hassabis worked on before going on to co-found DeepMind, the company behind AlphaGo (https://en.wikipedia.org/wiki/Demis_Hassabis). I also think some racing games have used neural networks, and some fighting games use hidden Markov models in modes where the AI adapts to player strategies.
I currently believe the game industry will adopt machine learning techniques in computer graphics, animation, procedural generation, game balance, simulation (VFX), and offline tools before agent behavior. Although, if NLP and speech recognition get good enough, I can see that stuff getting used pretty widely in certain types of video games.
To me, games seem like a great place to test theory in a mode that can be unsupervised and self-grading, while also representing real-world constraints.
I'm glad to see DeepMind and others make some advances.
Ex-game developer here. I was also surprised at what "AI" looked like when I entered the game industry. It's easy to dismiss what game devs do as not "actual" AI, when really it's just two definitions of the term.
In the academic community today, "AI" implies learning, often unassisted. That's a fine definition, but it's also a recent one.
When "AI" was originally coined, it simply referred to software that did things people used to think required some intelligence. Stuff like OCR and playing chess.
As our expectations of what computers can do grew, we kept redefining "intelligence" to mean fewer and fewer things. There's a weird feedback loop here, because our informal definition of "intelligence" is often "whatever only people can do". As soon as computers could beat us at chess, "beating a human at chess" got kicked out of the "intelligence" set. That meant chess software no longer gets called AI.
In games, "AI" just means "the code that controls what lifelike in-game entities do". In the fiction of the game world, these entities have real intelligence. The game's implementation is an artificial simulation of that intelligence. Thus -- "artificial intelligence".
In practice, it often lines up with the historical definition of what kinds of code were called "artificial intelligence". Many early AI researchers were in fact using games as their testbed.
It also ends up being fairly simple. Humans are so eager to anthropomorphize that it doesn't take that much simulated intelligence to get us to see a simulated entity as acting "alive". Simpler AI is also easier to implement, easier to debug, faster to execute, and much simpler to tune.
Learning algorithms are rare in games because they more often than not implement an anti-goal. A game designer's job is to give the player a carefully balanced experience that rides the knife edge between too easy (boring) and too hard (frustrating).
It is not the game's goal to make the entities as smart as possible. They would just kick the player's ass and that's no fun.
An AI that learns on its own is very hard to tune and would likely make the game no fun.
I do think there's room in games for learning AIs. But what I think would make sense is:
1. The fitness function the AI should train for is fun, not beating the player. Instead of rewarding AIs that win, reward ones that the player says were fun to play against.
2. You let the developers train the AI, then you bake those parameters in and don't do learning on the end user's machine.
That said, the references for AI and machine learning are quite old. Particularly the machine learning parts. The only ML texts on the list are Mitchell and Duda and Hart. The former is extremely outdated at this point. That's not Mitchell's fault -- it was a nice book for learning the basics of machine learning in 1997 when it was published, but all the developments that have made ML a hot subject have occurred since then and in areas that the book simply didn't predict coming. Duda and Hart, similarly, was the bible for certain subfields of ML for a long time, but it won't tell you what everyone's been doing in the past 15 years when ML exploded onto the wider scene.
If I were to add one book, it would be Kevin Murphy's excellent text (https://www.amazon.com/Machine-Learning-Probabilistic-Perspe...). There's no one book that will give you a complete picture of the field, but his is I think the closest available and does a solid job of preparing you with enough fundamentals that you can extend your knowledge from there on your own.
I do agree with your statement in the vast majority of cases. Ultimately, fun trumps everything else when it comes to games (though what counts as fun is subjective). Even Dark Souls, which many (most) gamers consider difficult, is not difficult because of unbeatable AI. It's full of patterns (indeed, that's how you get better at the game: you recognize and respond correctly to those patterns).