Hacker News
OpenAI Five at Dota 2 – The International [video] (youtube.com)
209 points by nerform 8 months ago | 168 comments



There are a lot of areas that OpenAI needs to improve on. The bots generally:

1. terrible at warding and dewarding

2. are terrible at using spell spam to farm. While their AoE spell usage is great in team fights, they suck at using spells to farm when they are alone. Gyro used his ult to farm a normal-sized lane and still missed half the creeps.

3. don't understand bouncing spells like the Lich nuke. In fact, many times Lich used that spell as a poking spell.

4. Bad at non-nuky/long-running ultimates. DP made really bad use of her ult several times outside of team fights.

5. Bad at judging Roshan respawn.

6. bad at using ability runes.

7. Are bad at juking. They have the right idea to start the juke when the chasing enemy is on a high ground ramp, but they can't time the highground fog jukes properly.

8. not good at prioritizing specific heroes in team fights (cores over supports, if both look equally chasable)

9. terrible at dealing with split pushes. They prioritize defending their towers over everything. Also bad at split pushing, in general.

10. bad at properly utilizing buybacks. If some heroes buy back during a defense, the remaining heroes are expected to make sure the enemies can't escape; otherwise the buybacks would be pretty useless.

11. when they are behind, they simply don't have any coherent way of catching up other than taking huge team fights, which a smart enemy team will deny them.

For now, winning lanes and having overwhelming team-fighting ability is the only meta the AI seems to be executing.


If a team of humans made such horrible mistakes, they would never be able to compete anywhere near this level.

Is OpenAI's team-fighting so good that it compensates for these huge mistakes and allows them to compete with pro teams?

For me as a player, the team fights are quite confusing and overwhelming; they're the most complex part of the game.

If the OpenAI team has the team fights covered that well, I'm pretty sure they can improve their farming/warding techniques as well.


Warding is a tough one. Outside of hard-coding it, how would you suggest the AI learn the best locations for warding and de-warding? Can it surmise that an opponent's gank or quick reaction to its movement means there may be an obs ward? If so, where precisely is it?

I don't know if the OpenAI supports can play the warding mind-games, which for me personally draw on 1000+ games of experience. I see interesting wards from teammates occasionally and file them away in the memory bank. Especially what I like to call "hipster wards": wards that "see" the opponents but are not in the highest-probability spots like pedestals or ramp edges. Just throw an obs ward near a couple of medium camps, for instance, and you can gain some really important intel without a sentry ruining your ward. Can the OpenAI team learn this sort of behavior?


> Can the OpenAI team learn to do this sort of behavior?

This is the core of what I want from OpenAI and feel like I'm not seeing. I want the AI to _reason_ that blocking the creep wave brings the equilibrium back to your tower and results in an easier lane. Not just to say "this thing works so I'll do it a lot", but to say "if I do this then it will have this impact on the game state".

What we got was the devs specifically training models for goals like creep blocking. Which, you know, just seems a bit meh


I assume that the bots are god-level last-hitters and farmers. Perfect farming efficiency is enough to beat 99% of human teams. Most players struggle to get 50 last hits in the first phase of the game, never mind actually getting every single piece of eligible farm.

edit: I would also expect the bots to be ruthless deniers, who severely crimp the farming of anyone who opposes them in the lane.


OpenAI is significantly better than most pro players in the solo middle lane early on, when playing Shadow Fiend vs Shadow Fiend:

https://www.youtube.com/watch?v=5zaJ58q9vuI&vl=en


That's not really the same version of the AI that's playing 5v5 now, though. The Shadow Fiend 1v1 bot was near-perfect at all the skills needed for that match-up, while the current bots are far worse at even some mechanical skills like last-hitting.


Yes, along with the fact that they are not really playing the same game that humans do:

- 5 invulnerable couriers per team allow for constant poking and pushing, as regeneration items can be ferried back and forth from the base. This favours team-fighting, deathball-oriented strategies of non-stop aggression.

- A very limited hero pool. This further compounds the earlier issue, as many good deathball team-fighting heroes are in the available pool, but some very common counters to this strategy are not.

Also, humans lack experience in this version of the modified "metagame" or they would likely be able to adapt. This is IMO the biggest hurdle for humans, they haven't really played this version of the game and they probably weren't exactly expecting it, the AI has trained over many "computational years", the humans have a couple of games to catch up.


They aren't playing with the courier change anymore.


Do you have a link? I would like to read the update


Unfortunately my source for this is just watching the game, I haven't seen any technical updates.


Some of these run on the assumption that "we know best."

1. May be an unimportant waste of time and money given some unknown superior strategy.

2/3/10. May be similar to AlphaGo/AlphaZero. A win by 1% is still a win.

11. Is a known issue.


That's what you would tell human players, yes. That's what a trainer would tell you after looking at your gameplay. That's what you would then try to improve.

BUT!!

This is AI, so what OpenAI needs to improve on, if anything, may be the ability to tell this stuff directly to the AI so it could skip hours of play.

Also, maybe our view is too limited and the AI actually learns even better strategies from competitive self-play. Strategies our human way of improving would miss.


> Also, maybe our view is too limited and the AI actually learns even better strategies from competitive self-play. Strategies our human way of improving would miss.

This is generally a good point, but in this specific case we can see that no such strategies showed up. The bots did terrible, obviously bad things (like stacking wards in bad places on top of each other or having a weak support hero take the aegis). More importantly, they didn't win.


You may be missing the point here: it's AI, so it doesn't "think" in terms of strategies or tactics like you do, it just runs countless simulations, figures out what works in those, and then blindly applies that. So if the human opponents do things it has not been trained to cope with, they'll have a significant edge.

Incidentally, this is pretty much what Kasparov did when playing Deep Blue, with some success: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparo...

Aside: As a non-Dota-playing HNer, the only one of those points I found comprehensible was #11. Gaming jargon can be pretty impenetrable to the uninitiated...


Not sure if it helps much (I'm sure you'll find plenty of better definitions online):

Purpose: 5 vs 5 heroes over 3 lanes and a jungle; destroy the enemy team's towers in order to push towards the enemy's Ancient (base) and destroy it.

> warding and dewarding

Wards serve to create visibility (through the fog of war). There are 2 types of wards: one for general visibility (Observer), and one to detect invisible heroes but with no vision over the FoW (Sentry).

> spell spam to farm

having a hero use multiple spells to rapidly kill a bunch of Creeps sitting somewhere in the jungle

> bouncing spells like the lich nuke

Poke spells target and damage a single enemy hero; bouncing spells target one enemy hero but can jump to nearby enemy heroes as well.

> Roshan respawn

Roshan is a large creep in the jungle. It drops a precious item (the Aegis) that resurrects its holder when they die.

> ability runes

Runes are temporary enhancers (movement speed, attack damage, ..) that spawn at fixed spots on the map.

> bad at juking

trying to evade a chasing enemy by playing movement mind-games with them (usually using the fog of war)

> buybacks

when a player's hero dies, they're allowed to pay a fee to revive it immediately instead of waiting (with a long cooldown between uses)


I'll try to give another translation. Overall, OpenAI excelled at short-term thinking, but showed weaknesses in some areas of the game that demand long-term thinking. This is not to say that it was bad at all kinds of long-term thinking; it seemed to do a good job of deciding which areas of the map to prioritize or deprioritize.

For the jargon, the Liquipedia glossary might be helpful: https://liquipedia.net/dota2/Glossary

Dota is a 5v5 game where each player controls a single hero unit. The playing field has three paths that connect the opposing bases, and an endless source of dumb computer-controlled units march along these paths to try to destroy the opposing base (in a perpetual stalemate). The objective of the game is to power up your hero units (by defeating enemy heroes and computer-controlled units) until you have an overwhelming advantage, by which point you can mount an attack on the enemy base.

> 1. terrible at warding and dewarding

Dota has a fog-of-war system, inherited from its real-time-strategy roots. There are two main game mechanics that interact with the fog of war: wards and smokes. Wards are stationary units that can be placed at a spot on the map to provide vision around them. Smokes are a consumable resource that can temporarily make your team invisible, so you can sneak through your opponent's wards undetected.

In regular Dota play, map vision is highly valued, to the point that there is a hardcoded limit on how many wards and smokes each team is allowed to use.

For all intents and purposes, OpenAI seemed to have no idea what it was doing with these mechanics. It placed wards at low-value places (instead of high value map intersections), and often wasted them by placing multiple wards on the same spot.

> are terrible at using spell spam to farm.

Farm in Dota means collecting resources to power up your hero by defeating dumb computer-controlled units (as opposed to defeating the enemy heroes). The slang comes from it being a relatively peaceful and patient way of accumulating resources. Farming is all about establishing control of a region of the map so that your team can defeat the computer-controlled units there but your opponents can't.

OpenAI showed that it was very good at using their heroes' abilities to fight enemy heroes head on, but it wasn't as good at playing the "economic game" and using their spells to accumulate resources when there was no fighting going on.

> don't understand bouncing spells like the lich nuke

A handful of spells in Dota "jump" to nearby enemies after hitting the first one, like a hot potato. Usually you want to use them when multiple enemies are next to each other, but OpenAI was more willing to use these spells even when they didn't get an extra bounce.

I think that this might be a side effect of the AI's self-play. OpenAI has superhuman reaction times and is very good at positioning and moving around to avoid getting hit by those additional "jumps", so it wouldn't value them very highly.

> Bad at non-nuky/long running ultimates

Some hero abilities in Dota can be used very often (once every few seconds) while others have a longer cooldown time (once every 2 or 3 minutes).

In regular Dota play, you only want to use these major spells when contesting an important objective against your enemy, because if you "waste" them, the enemy team can take advantage of the time when your heroes aren't at their full strength.

OpenAI seemed to do a bad job at this kind of long term thinking. It would use some of its powerful abilities against computer-controlled units and then not have them available for an important fight vs the humans.

> Bad at judging Roshan respawn.

Roshan is a powerful computer-controlled monster that appears at the middle of the map. It takes a team effort to defeat, but the team that does so receives a very high reward.

After Roshan is defeated, he shows up again after 8 to 11 minutes. Good players know that they can ignore Roshan for those first 8 minutes, but after that they must pay close attention to the area around his pit to prevent the enemy team from sneaking in undetected and defeating Roshan uncontested. OpenAI didn't seem to have learned this timing behavior. The bots would constantly check whether Roshan was present, even at times when it was mathematically impossible.
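That timing rule is simple enough to state as code. A minimal sketch, using the 8-11 minute window from the paragraph above (the function and constant names are my own):

```python
# What a team can deduce about Roshan without vision of the pit,
# given only the time since he was last killed.
ROSHAN_MIN_RESPAWN = 8 * 60   # seconds: earliest possible respawn
ROSHAN_MAX_RESPAWN = 11 * 60  # seconds: latest possible respawn

def roshan_status(seconds_since_kill: float) -> str:
    """Classify the pit state knowable from timing alone."""
    if seconds_since_kill < ROSHAN_MIN_RESPAWN:
        return "dead"     # mathematically impossible for him to be back
    if seconds_since_kill > ROSHAN_MAX_RESPAWN:
        return "alive"    # guaranteed to have respawned
    return "unknown"      # the only window where scouting the pit pays off
```

A player calling `roshan_status(5 * 60)` gets "dead" and knows checking the pit is wasted time; the bots appeared to scout even in that window.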

> Are bad at juking.

Juking is trying to "break your opponent's ankles" when they are chasing you around the map, by exploiting the fog of war. Enemies can't see up higher ground or behind trees, so you can quickly change the direction you are running in an attempt to evade whoever is chasing you.

OpenAI seemed to do a bad job of timing these direction changes. Ideally you turn to move in a different direction as soon as you enter the fog of war, no sooner and no later.

> not good at prioritizing cores over supports

In the later portions of a typical Dota game, three of the five heroes on each team will be very strong at fighting (the cores) and two will be weaker (the supports). The usual strategy is to prioritize defeating the strongest heroes on the enemy team. OpenAI was more willing to go for the weaker enemies first, but I am not sure that is actually a bad thing.

> terrible at dealing with split pushes.

The most direct way to attack the enemy base in Dota is to defeat the enemy team in a head on fight and use the window of time when they are incapacitated to attack the undefended base.

Split pushing is a strategy where you try to attack the base when you think the enemies are grouped up far away from it and can't defend it immediately. The humans successfully used this strategy to delay the game, by forcing the bots to abort their frontal assaults and retreat to defend their base.

> bad at properly utilizing buybacks.

In Dota, heroes are incapacitated for a while after they are defeated. In the later parts of the game it is around 1 or 2 minutes on the sidelines.

There is an option to spend some resources (sacrificing future strength) to immediately return your hero to action. This is a very high-risk high-reward play that needs to be timed appropriately to be worth the investment. The bots made some questionable plays around this game feature.


I think while most of your points are valid, some of them miss the mark.

> 3. don't understand bouncing spells like the lich nuke. In fact many times lich used that spell as a poking spell.

I'm not convinced that their use of Chain Frost is necessarily bad. They are likely still used to their five-man deathball strategy, where a single kill is often enough to secure a tower. In that case, using Chain Frost on a single hero may well have better expected value, with the guaranteed kill, than waiting for an opportunity and potentially getting no kills, or even worse, pushing the game late where they cannot play effectively.

> 5. Bad at judging Roshan respawn.

This is technically true, but misleading. Humans determine this more accurately simply because we have been told the actual time range; if you couldn't look it up, you would be bad at it too. You could try it in game, but imagine you weren't allowed to start a demo match and were only allowed to do it in a live game. Furthermore, you have no access to patch notes, so you need to keep checking the Roshan pit in case they changed the respawn time. In fact, you don't even know that it respawns and then stays there. Maybe it only shows up for 30 seconds on every even game-time minute, and disappears if it isn't attacked. Given the huge advantage that taking Roshan gives, maybe it is actually worth it to stick around there if you have no idea what the respawn rules are.

> 8. not good at prioritizing specific heroes in team fights (cores over supports, if both look equally chasable)

I didn't see this in the game.

> 9. terrible at dealing with split pushes. They prioritize defending their towers over everything. Also bad at split pushing, in general.

I didn't really see this in the game either. Axe was pushing, but what could they have really done about it? If they tried to wrap around and gank him, that would just split them up and leave them open to counter attack by the much stronger human team, particularly the Sniper.

Regarding the second point: in one of the caster games, the AI sacrificed Sven in order to secure the bottom T2 tower. Sounds like split pushing to me.

> 11. when they are behind, they simply don't have any coherent way of catching up other than taking huge team fights.

What would you suggest they should have done in this game then? It seemed to me like they didn't have very many options left at the end.


Yeah, but it's already sufficient to beat the crap out of anyone under 3k MMR. It's quite remarkable, as most players are in this low tier.


They can beat the crap out of 6-7k players (99.95th percentile); they beat Team Human - https://blog.openai.com/openai-five-benchmark-results/


Isn't it impressive that they're able to compete at such a high level with these knowledge gaps?


Reaction time seems to be a lot more complicated than just 200ms flat. There were a lot of superhuman reactions that game which really made it difficult for the humans to start a fight. At the same time, though, increasing it much beyond 200ms might give a big advantage to humans in other situations. It will be interesting to see if they can find a better way to model it.

All in all, I am very impressed by the AI. So many complex strategies that it is employing (grouping for towers, lane swaps, etc.). But at the same time, it really does look at the moment like this is only close because the AI has vast advantages in mechanics and teamfighting.


The human would probably be able to blink/cancel if they were expecting it and therefore able to focus on a single thing prior to an event. The bot can focus on everything simultaneously and doesn't really need to expect anything. It gets a signal, it responds within 200ms, no problem. You could program that analytically.

So I would say the superhuman-ness isn't in the number of actions taken, or in the response delay, but in the massive attention bandwidth. I believe they've attempted to even the playing field in the first two, which are easily quantifiable, but I don't know about the latter.


To be fair, that's a little like saying Deep Blue had an advantage because it could try out thousands of possible plays simultaneously. That's true, but what makes humans good at Chess for example is that we have a really good "intuition" at which moves are good, and therefore we can prune non-promising branches in the decision tree better.

Similarly here, the AI can definitely do a lot more things at once, but each individual thing they do isn't very smart. For example, they waste money on useless wards or waste time sitting in front of Roshan. We can of course keep pushing the goal post, but I think if the AI can win with the given constraints, it's still a huge accomplishment.

Even more importantly though, it would be interesting to see if the AI is able to come up with new strategies and techniques that weren't known before.


The main situation where this was apparent is Eul's on Axe when he blinks onto a target.

Axe's Berserker's Call has a 500ms cast time. Eul's is instant. Bots have a 200ms reaction time. Humans have 200-300ms.

The problem is that the human doesn't just have to react to Axe blinking on top of them and decide to target him with Eul's. The human also has to move their mouse cursor onto the Axe, which is hard to do in 200-300ms (the time they have left after reacting to the blink).

A comparable situation that happens a lot is using BKB or Manta to react to a similar initiation. Pros can hit this counterplay much easier because they only have to press a keyboard hotkey, rather than move their mouse to a target first.
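A back-of-envelope sketch of the timing argument above. The 500ms cast time and 200-300ms reaction figures come from the comment; the ~300ms mouse-travel figure for a point-target cast is my own assumption:

```python
AXE_CALL_CAST_TIME = 0.5  # seconds from the start of the cast until Call lands

def counter_lands(reaction: float, targeting: float = 0.0) -> bool:
    """Can an instant counter (e.g. Eul's) go off before the Call resolves?

    reaction:  time to perceive the blink and decide to act.
    targeting: extra time to move the cursor onto Axe
               (0 for a self-cast hotkey like BKB or Phase Shift).
    """
    return reaction + targeting < AXE_CALL_CAST_TIME

# The bot: flat 200ms reaction, no cursor to move -> counter lands.
bot_dodges = counter_lands(0.2)
# A human with a point-target Eul's: ~250ms reaction + ~300ms mouse move
# (assumed) -> counter misses.
human_dodges = counter_lands(0.25, 0.3)
```

This is why the hotkey-only counters (BKB, Manta) are the ones pros can still hit reliably: the `targeting` term drops to zero.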


One thing to keep in mind is that humans have to process the game from the image on the screen and input through a mouse and keyboard. We have to move the mouse to react to things. The computer is super-human in part because it doesn't have to do these things. It will be interesting to see if they can translate their learnings to bots that react from the image on the screen rather than the API.


"attention bandwidth" is a great term that really solidifies my fuzzy thoughts on how to characterize the bots, thank you!


Can't they "just" add a cost to each API call? Wanna know the position of an enemy on the map: 50ms. Target a hero/creep/tower: 75ms.


Just make them drop the API and require use of computer vision on the same UI human players have to use.


Yeah, this is the thing that I wasn't expecting when I first saw this reporting. Preferably with some lag to cursor moves etc.


Saying a 200ms flat reaction time is "human level" is really unfair. That is raw reaction time when a human is paying attention for a stimulus, as in a reaction time test.

Let's look at casting a spell in Dota. First I need to perceive my target. Then I need to move my mouse to the target, which is error-prone. Then I need to press a button and click. Even for a human with extremely fast reaction times, this targeting phase will easily put you above 200ms, even if your perceptual reaction time was only 200ms.

The OpenAI bots don't have to target. They process their input, and take an action in just 200ms.

There may be situations where a human can make predictions, pre-target something, and get a true 200ms reaction time, but in the general case 200ms is superhuman by a significant margin.


Hm. I used to run an experiment at an engineering fair at school. Oscilloscope, random pulse, trigger button, measure the reaction time. Some fast kids could get it down to double digits, even 20-30ms, pressing the button after seeing the pulse. If I remember right.


That sounds like an interesting experiment. Is it possible they were predicting the pulse? I could imagine a scenario where they would predict and at times get it wrong and press before the pulse, but at times the timing could work out just right such that after accounting for all human latency, the press landed just after the pulse coincidentally. What other factors did your experiment account for?


Hmm. Using the site that was the first Google result[0], I tested myself and could never get anything below 220ms. Could you be misremembering by a factor of 10?

[0] https://www.humanbenchmark.com/tests/reactiontime/


Those were some exceptional children.


Yeah, it can't have been that low, can it? But it was double digits. Anyway, the fastest ones would put their eye right up to the scope, so the pulse would deliver the strongest signal right into their eye. I thought that was clever.


My guess is there was some tell that they were using to predict the timing.


The humans also have the hardware input lag, possibly tens of ms.


So three major differences between this and the last big match for OpenAI Five which explain the different result in favour of the Humans:

1) The humans, Pain Gaming, are a professional team whose players regularly play and practice together and attend major tournaments. While they are the weakest team at this event, and are arguably there only because of a regional qualifier system (they are from South America, the strongest team in an up-and-coming region), they are capable of taking games off some of the strongest teams due to their unpredictable and dynamic play. They are substantially better than the team of ex-pros and commentators that last played against the AI.

2) They removed an important limitation in the game, the 5 invincible couriers, and replaced it with a single killable courier that the five agents have to share (just like in a normal game of Dota). The AI was able to use the single courier effectively, but did let it die a bit too often. What is important about this change is that it invalidated a strategy of the AI, which was to continuously ferry consumable healing items to enable relentless aggression after gaining a slight edge in the early game. The AI adapted to this change by playing more cautiously in light of this standard resource/logistics constraint. It adds strength to the argument that as the OpenAI team introduces more complexity and resource constraints back into the game, the AI's weaknesses start to become more apparent and exploitable by better players.

3) The hero compositions were drafted in advance to be as even a match as possible, and a coin flip decided which team got which draft. This is important, as this version of the game is a subset of the full game, and the humans evidently didn't understand the metagame of this weird subset during the last event. From the AI's perspective this was as even a draft as possible given the very small hero pool. In the last event the AI predicted a 70-95% win probability before the game even started, due to the humans not understanding the drafting meta with a pool of 18/115 heroes and no bans.

Still, really impressive performance by OpenAI as at least for the early game and some of the mid game it was a close game with lots of good plays by both sides. The fact that OpenAI can be comparable to a pro team in what is nearly a full game of dota is really impressive.


> 1) The Humans, Pain Gaming, [...] are the weakest team at this event and are arguably there only because of a regional qualifier system...

I think it's totally disingenuous to call them the weakest team at the event. Two other teams performed worse in the group stage and one tied Pain's score, but still made it into the main event through sheer luck. Pain has won games against every single one of the strongest teams there.

If you follow pro Dota, you know a lot of these games hinge on how a team happens to be performing on a certain day. Just this past May, Pain played in one of the top Dota tournaments of the year with 9 other teams who also happen to be at The International with them now -- and Pain came in 3rd place, ahead of Fnatic, OG, and Mineski.


And you selectively quoted me, removing all of the qualifications and praise I included, to make it appear that I'm unduly harsh.

In the paragraph you selectively quoted, you removed the clause where I said: "...they are capable of taking games off some of the strongest teams due to their unpredictable and dynamic play", which is no different from the point you are trying to make.

Fact is, they came last; they were lucky to attend, given that it was a surprise their region was given a slot for the first time ever; but they proved themselves to be a capable team from a region that is starting to get international exposure. And you are deliberately misrepresenting me.


For context: OpenAI is playing against a professional team this time, unlike the event a few weeks ago where the humans were skilled but not active pros or used to playing together.

This team isn't the strongest (they placed 17-18th during this TI) but OpenAI will play again tomorrow and the day after, presumably against progressively better teams.


> This team isn't the strongest (they placed 17-18th during this TI)

Oh, is that all :P. Assuming a 2x buffer for players who are good but not competitive, there are maybe 200 humans in the world who are better at DotA. If mastery is 1 in 10,000, these people are masters.


No, there are way more than 200 players better than Pain, all due respect to them. That’s clear by looking at leaderboards. But team dynamics and synergy in this game matter so much that they are definitely in the top 20 of teams regardless of their individual skill.

Seeing OpenAI do so well against this team impressed the hell out of me.


A really big factor in your ranking on the leaderboard is your ability to play in a chaotic game where your team and yourself aren't on the same page. Just taking the top players from the leaderboard (who aren't already on a top team) and putting them together makes a team that is good enough to win against other similar random teams (hence their high rank) but can't win against actual teams, which is evidenced all the time in the open qualifiers, where actual teams consisting of lower-ranked players beat them. TI open qualifier / regional qualifier / actual event placements are far better indicators of how good a team is than pub leaderboards.


The way TI is structured they invite one team from South America. So Pain is likely not in top 20, but still a very good team.


I know how TI is structured, but I’m confident they’re top 20 anyway. Which team not at TI is better, do you think? Complexity? Navi? I don’t think so.


Many believe that it’s an exponential curve so there’s a substantial difference between the first and twentieth place teams.


Even if this is true, Pain is one of the unpredictable teams that can take a game off anyone. They went 3-1 against Liquid at the Birmingham Major.

Actually, "weaker" (top 20) teams regularly do take games off better teams so I don't think there's much evidence for an exponential skill increase between teams.


Looking historically at team ELO and how squads have gone on 27-series win streaks, this is probably right. Though I think the gap between "low-tier" TI teams and TI winners is getting much tighter. Group B at TI8 was so close the difference between upper-bracket and lower-bracket was 1 game for most of the teams.


*Elo, not ELO :)


lol, good catch. Gotta give Arpad his credit :)


Seems like playing one of the best 200 players in the world is a reasonable proxy for a top tier player.

I'm unfamiliar with Dota's rules, but does anyone know if there are any limitations on the OpenAI team? E.g. things like keystrokes per minute, scroll speed, etc.


Yes, as the sibling poster said, there is a 200ms reaction speed restriction on OpenAI. Also note that OpenAI is using the Dota 2 bot API (its input is not pixels, and its outputs are not mouse actions and keystrokes), so it has more precise information than the human team on things like unit coordinates and can target its actions more exactly.


This 200ms restriction seems really slow, but it isn’t. It feels extremely fast, to the point where some heroes and tactics used by humans are useless or impossible. For example, one of the humans played a hero called Axe. It was literally impossible to land one of his skills because it takes 400ms to use it. I’ve seen a lot of professional Dota but I’ve never seen Axe calls being dodged so perfectly and consistently.

If this game was balanced around AI rather than humans, this game would look very different.


200ms is a good measure of how fast a skilled human can react with a single keypress, like activating BKB or Phase Shift. Players like VP.Noone can be seen making such reactionary moves even faster than 200ms. Double clicks already take longer due to slow fingers, which would be the self-cast Eul's to avoid Axe's call. The moves where Eul's was used on Axe, or Blink Dagger to escape, also require the player to move the mouse accurately, which takes even more time.

If OpenAI wishes to have human-like mechanical limitations to make things more about strategy, then they should definitely start adding some sort of action performing delays to actions based on roughly how long a skilled player would take to do them.


This made me think that, on top of the 200ms reaction time, perhaps the AI could use an action queue where each action adds a delay of 50ms. This would be a good way to simulate the human latency of using fingers to send the keystrokes. So if the DP wants to self-cast Eul's, she would end up taking 200ms (base latency) + 50ms (first click) + 50ms (second click) = 300ms. She would still be able to dodge, but now it is much more competitive.
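A minimal sketch of that action-queue idea. The 200ms base and 50ms-per-input figures are the hypothetical numbers from the comment above, not anything OpenAI has published:

```python
BASE_REACTION = 0.200  # seconds: the existing flat reaction-time delay
PER_ACTION = 0.050     # seconds: hypothetical cost per keypress/click

def total_delay(num_actions: int) -> float:
    """Total latency for a reaction requiring num_actions discrete inputs."""
    return BASE_REACTION + PER_ACTION * num_actions

# Self-cast Eul's is two inputs: 200 + 50 + 50 = ~300 ms.
eul_delay = total_delay(2)
# A single-hotkey reaction like BKB stays at 200 + 50 = ~250 ms.
bkb_delay = total_delay(1)
```

A refinement would be to charge point-target actions more than hotkey actions, approximating mouse travel, as suggested in the parent comment.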


The consistency is definitely a factor, but reactions that fast are quite common even among players just in the 80th percentile.


These were the limitations for the previous OpenAI game:

- Random Draft using a pool of 18 heroes (Crystal Maiden, Death Prophet, Earthshaker, Gyrocopter, Lich, Lion, Necrophos, Queen of Pain, Razor, Riki, Shadow Fiend, Slark, Sniper, Sven, Tidehunter, Viper, or Witch Doctor)

- No summons/illusions

- 5 invulnerable couriers, no exploiting them by scouting or tanking

- No Scan

Not sure if this match has the same limitations, but every hero in the current game except one (Axe) is in the list above, so maybe they added a few new heroes to the allowed pool.


They didn't have the invulnerable couriers and had more heroes this time around.


> had more heroes this time around

They had a fixed draft from the above pool with 1 new hero:

Crystal Maiden, Death Prophet, Tidehunter, Gyrocopter and Lich

vs

Lion, Necrophos, Sniper, Witch Doctor and Axe (the new one).


One thing to note is that there is a significant difference between five top-200 players and a team composed of the same, in terms of coordination and familiarity with playstyle.


During the last OpenAI showmatch the bots had an artificial 200ms reaction time imposed on them, I assume it's the same here but I'm not sure.


Thanks.


I don't believe this is the case, as OpenAI was able to hex (an immediate disable) Earthshaker before he got off Echoslam (a spell with no cast delay except the time to click the key).


The ES player didn't queue his abilities so there was delay between blinking and casting echo. Someone counted the frames and it was well over the 200ms minimum.


Here is a post from the Dota 2 subreddit discussing the timing with proof that OpenAI's reaction was over the 200ms minimum:

https://www.reddit.com/r/DotA2/comments/94vdpm/openai_hex_wa...


Yeah, I agree. It was over the 200ms minimum, but that minimum is artificial. No human could reasonably perform that type of action as reliably. And that has nothing to do with learning performance.


You can queue abilities to instantly cast them after the previous one finishes, so it is actually quite reasonable for a human to perform that type of action quite reliably.


They probably meant the reaction, not the blink and echoslam.


Did he blink in? Is there a delay there?


There's no delay, but when people looked at the replay, Earth Shaker was actually visible when he thought he wasn't. The AI that disabled him had precast his spell on the out of range Earth Shaker, so when Earth Shaker blinked in to range, the precast spell went off in the following server tick.


That's not what happened. The 200ms delay worked as intended. The reddit thread counting the frames can be seen here: https://www.reddit.com/r/DotA2/comments/94vdpm/openai_hex_wa...


That makes sense, thanks.


In general, the differences among a top echelon of human expertise are vastly smaller than those between humans and other kinds of agents, animals or AIs. We all share similar cognitive architecture for handling complex domains after all.

Once OpenAI Five defeat 3-4 pro teams convincingly, it could be presumed from the observation above that they will defeat any other top teams within a year at most, if sufficient resources are used for improving the AI. An exception is when a fundamental hidden weakness is found in the AI.

Another confounding factor is major rule changes where the game essentially becomes a different one.


> Once OpenAI Five defeat 3-4 pro teams convincingly, it could be presumed from the observation above that they will defeat any other top teams within a year at most, if sufficient resources are used for improving the AI. An exception is when a fundamental hidden weakness is found in the AI.

Not sure how you intended this, but the hero pool constraint is actually an enormous advantage for the AI. The current hero pool isn't, by any means, random. I don't know how OpenAI selected it, but it's highly conducive to OpenAI Five's strengths--deathballing, mechanics, teamfight coordination, etc. It lacks essentially all of the heroes that humans would normally pick to counter that strategy.

Playing on an unrestricted hero set would test OpenAI Five in entirely different ways than playing on the current hero pool would. I would not be comfortable betting on your statement at all.


> I don't know how OpenAI selected it, but it's highly conducive to OpenAI Five's strengths--deathballing, mechanics, teamfight coordination, etc. It lacks essentially all of the heroes that humans would normally pick to counter that strategy.

Isn't it the other way around: OpenAI adjusted to the hero pool to come up with its strategy, because it was trained on this hero pool.

If hero pool had only late-game melee carries - I assume openai would come up with a strategy that works with that.


I don't think so. It's quite easy to imagine that a computer would outperform humans in strategies that require precise execution and coordination in teamfights. In fact, it's not specific to OpenAI; even the built-in Dota bots' strengths are generally teamfight execution.

My guess is that if OpenAI 5 trained under a different hero pool, it would be weaker than it is now. It's very hard to predict how much weaker though.


> The current hero pool isn't, by any means, random.

Some of the restrictions come from things they've discussed; initially the bots couldn't handle illusions, summons, or invisibility at all. They obviously dropped the invis restriction.

My feeling is that the AI just learned that of the given pool, 'deathball' was the best (and specific heroes like Gyrocopter/Sniper were the most effective carries).


Overall a very good game. Definitely smart on PAIN gaming's part to delay the game and wait until the mid game (where OpenAI traditionally excels the most) is over in order to rely on their stronger carry. I think the lack of 5 couriers really showed where OpenAI had a crutch the last game it played. Excited to see the improvements as this goes on, and I suspect that next year we'll see the OpenAI team beat the #1 ranked human team.


Loved the strategy used by Pain. They recognised that fighting OpenAI wasn’t working so they actively avoided fights. They recognised strategic shortcomings, like the AI’s reluctance to pressure lanes and exploited it. What disappointed me a bit is that the AI didn’t make a concerted effort to win the game when it was strongest.


This type of playstyle adjustment by Pain is just standard Dota strategy tbh. When your team is losing teamfights, you avoid teamfights unless you know you have an imminent advantage (terrain, towers, initiation, numbers, etc.). A big part of avoiding teamfights is splitting up and putting pressure on lanes so that the other team has to split up to defend their towers. If they are split up and defending their towers, they can't group up and force fights at objectives (Roshan, towers, w/e) that your team would have to group up to defend.

Not to take anything away from Pain--just making it to TI is enough to prove that they're excellent Dota players.


The bots were using their ultimates pretty liberally. Maybe their "bot meta" is very group-intensive, so if they see one hero there's a good chance there are other heroes nearby? Other possible reasons: Opportunity cost of not using ultimates, getting a single kill late game can mean victory, bots penalized for games going later or their games just not running long in general?


I think it’s becoming clear that the network is overfitting to teamfights (and other short-duration action). OpenAI's use of cooldowns in the last 20 minutes was inexcusable. It became obvious OpenAI had no working strategy.


Is OpenAI's expected win chance available anywhere? Only the bot reaction time against initiations was keeping them in the game for the last 20 minutes, so it might have been the desperate flailing typical of bots when they are losing.


The bots are actually calculating their expected win chance continuously throughout the match, even while the heroes are still being picked. During the OpenAI Five benchmark, the AI estimated its win chance at over 90 percent when it got to pick its own heroes, but estimated a mid 20% chance of victory when it was deliberately given bad heroes.
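To illustrate the kind of continuous win-chance readout described above, here is a toy logistic model over a couple of hand-picked game-state features. This is a made-up stand-in with arbitrary weights, not OpenAI Five's actual estimator, which is a learned value function trained through self-play:

```python
import math

def win_probability(net_worth_diff: float, tower_diff: int) -> float:
    """Toy logistic win-chance estimate from two hand-picked features.
    Weights are invented for illustration; OpenAI Five learns its
    estimate end-to-end rather than using a fixed formula like this."""
    score = 0.0004 * net_worth_diff + 0.3 * tower_diff
    return 1.0 / (1.0 + math.exp(-score))

print(round(win_probability(5000, 2), 2))    # well ahead  -> 0.93
print(round(win_probability(-3000, -1), 2))  # behind      -> 0.18
print(round(win_probability(0, 0), 2))       # even game   -> 0.5
```

The real estimate is richer (it updates even during the draft, as the comment notes), but the shape is similar: a scalar "how likely are we to win" squashed into [0, 1].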


DotA has come such a long way from being just a Warcraft 3 mod. All those random pub matches I used to play with w33, analyzing all the patch notes and .w3g replays before YouTube was a thing, to shitty Garena pub matches, to a full-fledged esports remake, to going to my first International (TI2) 2 years back, and now AI is being thrown in the mix. I wonder what the future of DotA will look like now.


Yeah. Dota's the one game I've consistently played my entire life. Seeing it bloom and blossom from just this thing I knew & enjoyed into the whole primetime, then merging into broader tech...this is awesome!

Crappy garena, banlists, vs 5 ai in wc3 when no internet... :')


But the popularity is a bit declining, isn't it?


Going to go off on a tangent here.

Yes and no. By raw numbers it certainly is in a state of decline. But put in context, it may have been inevitable. It has everything going against it.

DotA is a team game.

It's incredibly complex. Not easy to pick up. Mastery requires 1000's of hours. And it's miserable to play at the start.

The community is toxic, so playing with random teams in a public match when you are new to the game compounds the problem. Even if you aren't new, random teams can quite often be a frustrating experience that you just don't feel you have control over.

Compounding even further is the fact that once you start a game you are locked in for 35-55+ minutes. That can be 35 painful minutes of feeling like crap.

Compare that to the rising battle royale games, and you get solo games you are allowed to be bad at and slowly get better in. The time you spend in them is directly related to how much fun you have, the games have a definite time cap, and the mechanics are simple. This holds true for a lot of other up-and-coming online casual multiplayer games.

Point is, Dota is destined to be niche as a game that is played.

What i would love to see is how the game can change externally to become more accessible to viewers. From client changes to community changes to changes in presenters, valve needs to step up to help people participate as spectators. The potential for the game to generate significant money lies in their pro circuit and it looks like valve is really stepping up their involvement here. Hopefully this continues.

The (somewhat poor) parallel is something like rugby or squash. Fairly arcane rules. A lot of people don't remember when they last played the game. But they can still be heavily invested in watching.


^ This

DotA is incredibly complicated, and leans much harder on tactical strategy, gameplay, teamwork, and skill execution than battle royale does.

In battle royale, if you play awfully you just die and start a new game. In DotA that death impacts your game in the long run, putting your team at a disadvantage, since you just fed their carry. Comebacks do happen, though.

Going to The International, where all the major esports teams compete, is unlike anything else you've ever experienced.

It's like going to the Super Bowl, except everyone in the crowd knows how to play football, usually at an average or above-average level. And instead of a few hours, it's 8+ hours a day for 7+ days. It gets EXTREMELY intense, because you can relate to how difficult the maneuvers are, and it isn't dictated by things like how much weight you can squat (for faster sprinting, agility, etc.). There are many little mental and physical gymnastics that DotA 2 pros do 100x better than your average player.

Comparing this to Fortnite competitive mode, it's not the same. It's closer to watching something with less strategic depth, such as tennis. Tennis is still exciting for its own reasons, but it's mostly skill-based execution and mostly a solo game, even if you play 2v2.

I watch both Fortnite and DotA 2 competitive every so often.


Wait… the big news here is steam.tv. I heard the rumors but didn't know it was up?!

(it seems to work only for their own event at this time tho.)


They announced it a couple of days ago: http://blog.dota2.com/2018/08/the-main-event-with-new-steam-...

It still has limitations and bugs; yesterday it just wouldn't load for me, but it works now, and you are unable to watch yesterday's content.


The UI is very wonky, definitely looks like it was rushed last minute to release for TI. That "Watch with friends" button is pretty messed up if you're logged in. Also if you're logged in and "close" the stream tab, I don't see any way to get back to it. Definitely a work in progress.


A mod changed the URL from https://steam.tv/dota2/ after this comment was posted, presumably because the live stream was over (https://news.ycombinator.com/item?id=17824373).


I didn't even look at the URL, I assumed this was a twitch link. Makes sense they would showcase it on their flagship game though


Looks like it's built with React


We are playing with regular couriers for the first time! And against a pro team.


Pain isn't the toughest of opponents at TI. Will they be playing vs a team that is currently still alive in the bracket? So far it seems Pain is kind of controlling the game, at least much better than the caster team (99th percentile) did.


When OpenAI played the caster team, there were 5 invulnerable couriers, which further increased the strength of their death ball strategy.


They're still the ~19th best team in the world.


No. TI is not a meritocracy. Pain came from the weakest region in the world. Without the SA slot, they wouldn't even be in TI.


Based on the name, you are replying to Noxville, who runs DatDota and knows these statistics in detail. Pain is 20th in the world according to their Glicko 2 rating (https://www.datdota.com/ratings) right now, he's not pulling it out of his ass because they were at TI.


And you know that, how? Have you seen them fight against other region teams that failed to qualify?


Maybe, like me, he follows the pro scene.

People complain a lot that Europe is stacked (meaning stronger teams) compared to regions like NA, yet they get the same number of slots at TI. South America is probably the weakest region.

I think a lot of people following the competitive scene would agree with what he was saying - that Pain is not top 20 in the world.


I follow the pro scene too; nobody sensible would say they are not top 20. Can you name a team from any region that deserved to be at TI more than Pain?

> People complain a lot that Europe is stacked (meaning stronger teams) compared to regions like NA

Really? Go ahead and name an EU team that is not playing at TI that should be here. EU is top-heavy and its top teams get to TI easily; other than that, not much. SA is a weaker region for sure, and that's why they have 1 team from their region, and they deserve to be there.


How can I tell looking at the screen who is winning? Is there a score somewhere? Maybe the left side sliders?


The best things to look at are the gold difference (below the top bar) and the towers remaining (on either side of the top bar). But neither fully captures who's winning. Dota is remarkable for how possible comebacks are, so it's often hard to say with certainty who's winning at a certain moment.


Net worth difference is the best single indicator, but there are additional factors like tower difference, item purchasing decisions, hero composition, hero aliveness and buyback status, Aegis, etc. Some specific heroes also wildly differ from the norm on each of these factors. The game is very complex :)


Real noxville?


It's not always exactly obvious, but generally speaking, the number of kills plus towers destroyed is a good indicator. Another good indicator is overall net worth; the winning team usually has 3 of its 5 players in the top 5 ranks by money value.

One thing you can watch is which team is constantly defending and which is constantly attacking. That's a pretty good indicator too.

No guarantees though; games have been known to flip and be unpredictable, mostly due to respawn timers and buybacks late game. In the late game, who gets the first kill is generally super critical.


As others have said, there is no single indicator that tells you who is winning a game of Dota. Net worth difference is usually, but not always, the best default indicator of who has an advantage. Kills and towers are a good secondary indicator. However, there are times when all three of those usually reliable indicators aren't relevant.

You can see the net worth difference by looking at the text that appears in the top center, which normally says something like ">1k" for one of the two teams. It's something they recently added to the spectator UI.

The sliders you mentioned on the left usually show individual player net worth, so you can see where resources are being allocated on the team. The "observer" (a camera operator, I guess) picks which stats to overlay in the top left over the course of the game, depending on which single indicator is most relevant.

In the early game those sliders normally show "last hits/denies" per player, which is about collecting early-game resources. In the mid/late game it's usually net worth per player. In the late game it starts showing "buyback status": all players have the ability to spend a lot of their resources to rejoin the game early after getting killed. Using buyback is very expensive, scales with net worth, and has a 5 minute cooldown.


A rule of thumb for determining who is winning is the delta net worth: the money icon that appears under the kill counter at the top for the team who's ahead. (as of posting, OpenAI is ahead)


In general its the number of towers they've destroyed + number of kills.

Not always a great indicator but you can tell who's behind.


Outfought the humans in teamfights, other than that not sure how much long term strategic thought was shown.


It was frustrating to see it do so many things well but make strange decisions otherwise

- poor warding. Some wards were simply wasted. (Wards are small, invisible, immovable units that grant vision)

- using powerful long-cooldown spells to earn gold instead of keeping them in reserve for fights

- not recognising that one of the lanes needed to be pushed out. Eventually this bit them in the ass.

- using buybacks where none were required


I don't know if it's still the case, but the OpenAI developers said that the reason why it sometimes wastes wards and smokes is because it doesn't know how to drop items, so it'll just use them to free up inventory space if it wants to buy something new.


I played dota for half a year not knowing you can doubleclick to switch which ward is on the top.

I always disabled joining wards, but when I forgot, I warded like OpenAI, because I had 3 observer wards on top of a sentry and needed to put a sentry somewhere :)


The warding was interesting. It had 3 wards basically stacked on top of each other near the Roshan pit at one point. On the one hand, it apparently learned to ward areas that were of high strategic importance. On the other hand… it had 3 wards stacked on top of each other.


The poor warding isn't as big of a handicap for bots as it would be for humans.

For humans, wards give you extra time to react to enemy movements and to arrange ganks, check runes, etc. But the bots don't really need the reaction-time bonus, and the information bonus may not be very valuable if they aren't proactive about pushing and ganking.


OpenAI looked bad in the mid-game and non-competitive in the late-game. The team fight stuff is frustrating since it’s just learning perfect rotations where computers should obviously excel. Their continuing use of cool downs on creeps in the late game was baffling.


Totally agree. I was very hopeful after seeing the benchmark match, but OpenAI definitely looked a lot more naked today. Long-term play (5-10 minutes) looked really weak when it really mattered. It just wanted to get into teamfights and capitalize on them to take down objectives afterwards.

I think I saw somewhere that they train with a time horizon of about a minute, so it makes sense. But it's also slightly disappointing, since these games come down to long-term strategy when teams are evenly matched.

Mechanics and positioning look really strong still. Decisions less so.


Looks like OpenAI just lost Match #1.


Hooray, humans not obsolete quite yet!

Very good play from OpenAI though, even though it ultimately lost. It looked very scary how fast it could switch between dominating 5-man teamfights to splitting up and ganking or pushing down towers.


Updated the ruleset with regular couriers! This will be a lot different from the earlier matches. I'm guessing humans can win if they find "cheese" plays that the bot has not encountered during its practice.


In its current state, OpenAI Five stands no chance against the stronger teams (e.g. Liquid, LGD) of TI8.


Their reaction time seemed way too fast. It was entertaining to watch though.


It's actually intentionally delayed reactions to make it more fair (200ms of delay added).


Watching the match there seemed to be a crazy amount of instant Euls and what not.

I doubt a player is about to react to a blink initiation and click Euls/Hex in the same amount of time. It'd be a lot more fair for them to calibrate against the reaction time of pro-players across the same scenarios. (I doubt pros can hit 200ms consistently)


That or there's something like 200ms "windows" in which API sync occurs - so if someone Blinks next to you you can react within 1 tick (33ms) if the timing is right.


200 ms is much better than 99.99% of humans though. One specific example - one of the humans playing Axe tried to cast a spell that would taunt an enemy into attacking him. It takes 400ms to cast this. That’s not enough time for all but a handful of humans to react, but OpenAI managed it with ease. He couldn’t land the spell until he purchased an item that granted him invisibility.


>200 ms is much better than 99.99% of humans though

I think this is a bit of a red herring: while that's true, OpenAI isn't competing against 99.99% of humans -- they're competing against the top 0.05% of Dota players, who very likely have a much, much lower average reaction time.

To throw numbers at it, https://www.humanbenchmark.com/tests/reactiontime/ has the top 10 people at ~110ms [1], while I (at a measly 1K MMR in Dota) can pretty easily average ~220ms, and they report a human average notably higher than that. I imagine pros in the scene have honed these reflexes to be far superior to the average human.

[1] Also worth noting that they discard all reactions <100ms, so we could have some sub-100ms prodigies who actually have faster reaction times than the 100ms reported on the scoreboard for them. (These numbers are spread over 5+ trials, so "getting lucky" is a lot less likely.)


That's what OpenAI claims, but there have been too many insta-hex/Eul's moments for me to believe that. Players are shift-queueing blink + AoE spells and getting hexed in between the two abilities. That's way faster than a 200ms reaction time.

I think OpenAI might have a bug with how they're adding reaction time.


Someone did an analysis on this because it seemed like insta hex the last time they played, but they were all pre-queued and the 200ms turned out to be accurate.


Yeah, my guess is that perfect timing-awareness plus a 200ms reaction time looks a lot like 0ms reaction time. Just start the action 200ms earlier; that'll work in enough cases that you'll look like you have perfect timing.
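A minimal sketch of that idea: under a fixed enforced reaction delay, an agent that correctly predicts an event and issues its action early lands exactly at the event, which is indistinguishable from zero-latency reaction. (Hypothetical timestamps, just to illustrate the point.)

```python
REACTION_DELAY_MS = 200  # enforced gap between deciding and the action landing

def action_land_time(decision_time_ms: int) -> int:
    """An action decided at time t lands at t + the enforced reaction delay."""
    return decision_time_ms + REACTION_DELAY_MS

event_time = 10_000  # e.g. the predicted moment an enemy blinks in

# Purely reactive agent: only decides once the event has happened.
reactive = action_land_time(event_time)
print(reactive - event_time)  # -> 200  (lands 200ms late)

# Anticipating agent: predicts the event and decides 200ms early.
anticipating = action_land_time(event_time - REACTION_DELAY_MS)
print(anticipating - event_time)  # -> 0  (lands exactly on time)
```

So the enforced delay only slows the bot down in situations it fails to anticipate; whenever its prediction is right, it looks instantaneous.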


Interesting that the draft was predetermined (unlike the previous exhibition, which was live), allowing OpenAI to train ahead of time on a restricted set of matches. I wonder how long that was?


Since Saturday they said.



> OpenAI is a non-profit artificial intelligence research company that aims to promote and develop _FRIENDLY AI_ in such a way as to benefit humanity as a whole.

I find it amusing & ironic that they're pursuing games of efficiently & strategically killing enemies as examples of their successful progress. ;-)


> I find it amusing & ironic that they're pursuing games of efficiently & strategically killing enemies as examples of their successful progress. ;-)

Except that the amusement and irony would cease to exist once you have some idea of what's happening behind the scenes.

The AI doesn't know it's "killing enemies". For it, it's just something that results in the increase of a numerical reward signal.


Is that supposed to remove the amusement and irony? "We're creating friendly AI! Well, it thinks it's friendly, since to it, everything is just a numerical reward signal."


Obviously. I understand the accomplishments and applicability of the tech.

Stepping back a bit and simply looking at the context, though, it is an amusing contrast.


Is this five different OpenAI bots cooperating in the game, or a single engine controlling five bots?


It's five different instances:

https://openai.com/five/#how-openai-five-works


5 separate bots.


rigged IMO


well, there's constantly empty chairs on the face cam whenever it's on the OpenAI hero, so something must be fishy ;)


Citation needed?


Sam Altman is a jokester.


Look at the poster name.


This time, OpenAI is playing a normal game without restrictions. Team Human got thoroughly trashed last time vs OpenAI in a game with hero restrictions.

Go OpenAI! I For One welcome our new robot overlords.


Not quite a normal game with no restrictions because it's a pre-selected hero pick from a limited pool instead of a full draft, but it's still very cool.

Interesting that OpenAI seems to prefer deathballing, but it makes sense: its main advantage over humans is probably tactical and in teamfights, and 5-man maximizes your options. The human strategy should probably be to split push, but one of the commentators (who is also a pro who played against OpenAI earlier) says that is very difficult because OpenAI can apply pressure everywhere.


I recall reading somewhere that the AI's real advantage was its ability to come up with bizarre strategies that confuse human opponents.


I'm skeptical that you read this somewhere. AI doesn't "come up" with strategies. It's likely something that's been discovered in training and then mechanically repeated. But it doesn't "come up" with strategies out of nowhere.


"Come up" is my short form for the process in which AIs are built.

The actual point is that the AI's advantage was using tactics and strategies that human opponents would find unintuitive, or even counter-intuitive.

How AI comes up with those tactics is not at all relevant to this thread.


Why doesn't it come up with strategies? The program is doing a massive search over an action space, of course it will find things there.


It's a dumb semantics argument about using the phrase "come up."


Dropped the first game. Pretty funny that the last OpenAI bot attempted to feed mid.


Not familiar with dota but I watched the tail end of the stream, what does this mean?


Since players get gold and experience points for killing an enemy hero, bad mannered players who want to ruin a game for their own team will repeatedly run down the middle lane to feed gold and experience to the enemy team, allowing them to win faster. It looked like the AI was doing that, but other people who played against the AI said it's more like a last ditch effort to try and keep the enemies out of their base when the AI doesn't know what to do when it's on the verge of losing.


dx87 gave a good explanation.

"Feed" is a term in Dota when players (aka Heroes) die without any benefit to your team--such as destroying an enemy spawn (racks/Barracks), killing another team Hero (hopefully more one). It's "feeding" as the other team members nearby will get gold and experience for each kill. As there is a respawn timer, the character who is dead will not get Exp nor Gold during that count-down resulting in a character that is disadvantaged as it will be under leveled compared to the rest of the heroes.

In the early and mid games of Dota, not gaining experience and gold is a major setback. Dying has a huge penalty.

The term "feed" is used with players who are learning the game, or those who are low skilled, and are haven't yet mastered some of the main aspects of playing. They're dying with no benefit, boosting the other team.

"Mid" simply means 'middle lane.' DOTA has 3 lanes, top, middle, and bottom, and short hand refers to them as top, mid, bot, respectively.

If your team is losing, boosting the other team to get the game over with is poor sportsmanship. A single team battle could easily shift the game into the losing teams favor.

In sports like Football, you may see teams pull their starters to 1. prevent injury and 2. give other teammates experience, but you'd never see someone purposefully help the opposing team win.


Can the bots learn from the previously played international matches?


For bots, they sure are doing pretty well versus the top 5% of skilled Dota players, assuming that only the top players around the world can enter the main International event.


It’s 0.05%. But also, the game OpenAI plays is very limited; it's a very small game compared to the real Dota. They just got rid of one small limitation (the 5 invulnerable couriers) and are already being punished hard for it, since their push timing is delayed. And Dota is much more complicated than just teamfighting and death ball.


Top 1% or less, sorry for being pedantic


But this bot can defeat a bunch of Herald and Guardian players.


If you want better quality, watch the video by Dota 2 Rapier.


Since maybe others will be looking for it too: the link above is for the live show - for the recording of the match (which is over) there's a link on Youtube: https://www.youtube.com/watch?v=Y2EQCE9LRXE



