Good discussions on Reddit: https://www.reddit.com/r/MachineLearning/comments/5vh4ae/r_a... https://www.reddit.com/r/smashbros/comments/5vin8x/beating_t...
( http://boards.na.leagueoflegends.com/en/c/gameplay-balance/b... )
They all pretty much just walked counter-clockwise and used auto-attacks or tried to capture points.
In PvE they became so common that the game economy became based entirely on them (without the majority's knowledge).
The intelligence of other bots/programs that give a player an unfair advantage of some sort has also come an awfully long way. One example is the leaps the aimbot took in Halo PC, from being easy to detect even when used by top players to being nearly undetectable (except when priority issues or other bugs/glitches appear).
With Marines usually.
Unlike a human, a bot will always "click" exactly where it intends to.
The competitions that involved humans showed that humans destroyed the bots by spotting their patterns and exploiting them, and also by bluffing and distracting them, such as having one unit do weird things around the bot's base while the human player built up an army. Bots that beat humans will have to learn to spot bluffs and the other weird things humans do to screw with them, on top of everything prior AI did at human-level talent. My money is on humans for DeepMind vs StarCraft, although I'm happy to be proven wrong.
Further advancement in this area will require huge leaps in hardware performance. Luckily in the next few years I expect that the pace of improvement in specialized hardware for neural nets will far outpace Moore's Law.
I believe they've actually handicapped themselves with their shortcuts. The agents' performance is crippled by the inability to see projectiles, due to the choice to avoid learning from pixels (which I bet would actually be quite fast, as learning from pixels is not the bottleneck in ALE). Likewise, using the other RAM features is the path of the Dark Side: it allows immediate quick learning through huge dimensionality reduction, seductively simple, yes, yet poison in the end, because the agent can't learn all the other things it would have learned (such as projectiles). I suspect this is why their current implementation can't learn to play multiple characters: it can't see which character it is or which play style it should use.
So I would not be surprised at all to hear in a year or two that a human-delay-equivalent agent using raw pixels can beat human champs routinely.
In fact, RAM features are likely to be much more useful for model-based approaches, which may be important for solving the action-delay problems.
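To make the action-delay idea concrete, here's a minimal sketch: wrap the environment so that every chosen action is queued for a fixed number of frames before it takes effect, crudely mimicking human reaction time. (The `DelayedActionEnv` name, the `step` interface, and the delay length are all illustrative, not the paper's actual code.)

```python
from collections import deque

class DelayedActionEnv:
    """Wraps an environment so each chosen action only takes effect
    `delay` frames later, crudely mimicking human reaction time.
    (Illustrative sketch; a real agent/environment API will differ.)"""

    def __init__(self, env, delay=6, noop=0):
        self.env = env
        # Pre-fill the queue so the first `delay` frames send no-ops.
        self.queue = deque([noop] * delay)

    def step(self, action):
        self.queue.append(action)       # newest intention goes in...
        delayed = self.queue.popleft()  # ...oldest one is executed now
        return self.env.step(delayed)
```

At 60 fps, a delay of 12-15 frames corresponds to roughly the 200-250 ms reaction time usually quoted for humans.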
As for multiple characters, the character ID is available to the network. I doubt pixels would help there either.
Me neither. Bots for fighting games have always been easier to write, or even to fake for button-mashers. This proves nothing. It's just fun. Let's see them get top 2-4% skill & kills in Battlefield 4 with shots hardwired to miss 30-50% of the time, playing with the weakest carbines with suppressors, and on weak teams. If these AIs are so amazing, let's see them use good tactics in open-ended battles to win the way I do with a brain injury. I'll even give them training data in the form of virtual lead. :)
Thinking further afield, future models could learn to adapt their expectations to fit the behavior of a particular opponent. This kind of metalearning is pretty much a wide open problem, though a pair of (roughly equivalent) papers in this direction recently came out from DeepMind: https://arxiv.org/abs/1611.05763 and OpenAI: https://arxiv.org/abs/1611.02779 . It's going to be really exciting to see how these techniques scale.
So it's cheating, presumably knowing the opponent's action before the animation even starts to play.
But this is what a top player (who regularly beats both of the players tested in the study) looks like playing against a hand-coded bot:
and this is what the humans eventually learned to do:
Even if you add reaction time, a big part of Smash skill for humans comprises accurately manipulating the analog stick. The computer can just declare any angle it wants; you're not having a fair competition until you build a robot thumb that manipulates a joystick the way humans do, IMO. Otherwise a character like Pikachu can recover perfectly every time.
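A toy sketch of that quantization point, assuming (purely for illustration) signed 8-bit stick axes; Melee's actual controller polling, gate shape, and dead zones differ. A human can only land on a finite grid of stick positions, while a bot writing inputs directly can request any angle exactly:

```python
import math

def reachable_angles(radius=127):
    """Angles (degrees) of all integer stick positions at or near full
    deflection on a hypothetical signed 8-bit grid: the finite set of
    directions a physical stick input can actually express."""
    angles = set()
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            if radius - 1 <= math.hypot(x, y) <= radius:  # gate edge
                angles.add(round(math.degrees(math.atan2(y, x)), 3))
    return sorted(angles)

angles = reachable_angles()
target = 52.7  # an exact angle a bot could simply declare
closest = min(angles, key=lambda a: abs(a - target))  # best a stick can do
```

Even on this generous grid the human is picking from a few hundred discrete directions at full deflection, and the nearest one is only approximately the angle they wanted; the bot's input is exact every time.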
Most mid-level players already have a good grasp of prediction, which is arguably along the same lines as knowing with certainty what action your opponent will take a few frames before they do it.
Coupling that with pretty obscene frame-lag for Smash, it's not really that much of an advantage.
Also, competitive play isn't really that impressive considering how limited the action space is once items and more dynamic stages are banned (see: restricting RNG). In that sense, it's not much more than a simple chess bot. Now, if it could actually take in complex environments and multiple tools, that'd be pretty next level.
No, this is just playing games. The ground rules must be clear: you get the screenshots and keyboard input in every frame, as a normal player. If the resulting AI sucks, who cares? Failure is part of doing science.
> The ground rules must be clear: you get the screenshots and keyboard input in every frame, as a normal player.
Perhaps, if you want to start from flawed assumptions or want to create an AI that's tweakable to appear human. That would be pretty useful and practical for other applications, but not for competitive play.
We could go on and on about digital vs. analog, but digital is good enough for your argument and doesn't require you to spend enormous resources on a trivial pursuit.
This is going in the direction of nonsensical handicaps. You don't give AlphaGo stamina parameters that artificially slow down its processing speed. You give it all the tools it needs to beat a human player.
Okay, why not allow an NPC to just mess with the human player's actions then (blocking or delaying button clicks, for instance)? Surely, that falls into "all the tools", no?
IMO, the way you went about things isn't particularly compelling—your human opponents don't have white-box access to game internals, and if they did, guess what? They'd play better too.
So I agree with the GP: this is just playing games.
Are you talking about physically delaying their inputs? As in from the controller to the main board? This would fall under the same category as a player hitting the controller out of his opponent's hand -- foul play.
> IMO, the way you went about things isn't particularly compelling—your human opponents don't have white-box access to game internals, and if they did, guess what? They'd play better too.
I'm not sure what exactly you're referring to here, but I'll respond to how I think you're trying to take this.
Source-code-wise: yes. If the players had access to the source code, the learning curve would be significantly shortened. Though, given enough time, most would have figured out the mechanics fully, or nearly so, even closed-source. A part of competitive play is exactly this aspect: players experimenting, sharing, and building up their understanding of the game. If the source were freely available to explore, most players would stick to the "show" part of the process, i.e., working reflexes and learning combat -- what most elite players focus on (since they've mastered the science of the game already).
Plus, our bot doesn't have any clue about projectiles. We don't know where they live in memory, so the network doesn't get to know about them at all.
My favorite example is Ms. Pac-Man, because it seems so old and simplistic. It's been tried by a dozen teams and no one can beat a decent human.