The thing to realize about academia is that the vast majority of people in it have never experienced the real world. They went straight from grade school to college to working in academia, and they stay there until they retire or die.
Academia is literally all these people know, and they've been sheltered their entire lives. No one should be surprised that this produces strange results.
Obviously there are exceptions, but there's no denying it produces some interesting world views.
On 1) photo manipulation. If "dangerous" means that random, relatively harmless fake facts spread easily, I'm inclined to agree. I'm sure I've been taken in by a whole host of "facts" from random stuff I've read online that are probably actually BS.
On the other hand, if "dangerous" means that a malicious agent can systematically manipulate the public via propaganda à la 20th-century Stalin, I think this is basically impossible. Given widespread knowledge of Photoshop, changing someone's mind about something significant with a doctored photo seems difficult (it is hard enough to change someone's mind with the truth!). Doctored photographs aren't completely harmless, in that they can reinforce existing beliefs (confirmation bias), and random fake facts aren't totally harmless either; I just think the danger here is far below what has been implied elsewhere. In a nutshell, the effects are indeed widespread, but perhaps not so insidious.
On 2) there being no fire alarm, I'm actually very thankful to OpenAI for raising this discussion. While I disagree with their decision not to open source, this discussion was certainly worth having.
> On 1) photo manipulation. If "dangerous" means that random, relatively harmless fake facts easily spread, I'm inclined to agree.
No, I think what they were referring to is even more insidious. I think they meant that Photoshop has enabled, and continues to enable, unrealistic physical standards of beauty and BMI (mostly for women) that are transmitted in advertisements and news articles.
In other words, Photoshop has enabled an alternative visual reality that is not realistic.
But still much less dangerous than things like nuclear fallout or Orwellian propaganda? This isn't to say that unrealistic beauty standards aren't a problem, just that they aren't one worth trying to stop the march of technology for. Also, I think advertisements are more to blame here than Photoshop. Do you think the ancient Greeks and Romans actually looked like their idealized statues (or like Michelangelo's David)? They weren't bombarded by ads, though.
Can't disinformation campaigns lead to instability and increase the risk of nuclear weapon use?
I think the damage that disinformation can cause to democracy is real and we've already seen some of its effects without the ability to automate these things.
> "dangerous" means that a malicious agent can systematically manipulate the public via propaganda a la 20th century Stalin
You don't need total control to succeed.
The same way that early VoIP calls didn't need perfect sound quality to be better than a normal phone call.
All you need is an economic, efficiency, quality or quantity advantage to create a new weapon/tool.
India is facing a spate of lynchings over rumored child kidnappings, driven by WhatsApp forwards that splice together videos of brutal Mexican gang killings of children, Pakistani safety warnings, and other footage. I've seen them; they are being disseminated to people who have never had to harden themselves to the internet or to behavior on the net. (There are many people who have no knowledge of Photoshop, and getting that knowledge to them is Hard. Even getting the knowledge to them can be used to weaken facts instead of propaganda.)
As a result, villagers have lynched strangers, mentally handicapped people, widows, and anyone they suspect of kidnapping or potentially harming children.
And I have NO idea who is cutting up and making these videos to share on WhatsApp, or why. Evil manipulators? People doing it for laughs? Well-intentioned but misguided good samaritans? As a result of such forwards, WhatsApp rolled out a limit on how many times a message can be forwarded, a feature they have since pushed worldwide.
We don't need perfection to do damage, we need efficiency. We already have other systems which can make up the difference in results.
While you are right that the market only has room for one president every four years, this isn't true in economics. Startups create wealth (at least they are supposed to), so while there probably is a limit to how fast the economy can grow, we are certainly not hitting that limit anytime soon.
StarCraft II pros are probably much further from perfect play than Dota 2 pros (simply because the game is harder). As a result, having perfect micro is a much larger advantage in StarCraft than it is in Dota 2.
While that would be amazing if true, I'm pretty sure that if you take away the stalker blink micro, AlphaStar loses hands down to humans. This isn't taking anything away from DeepMind's victory, but I think micro was what made the AI come out ahead in this one. In many of the games, MaNa had much better macro, only to lose to blink stalkers.
You play the game as it's written. Come back with another version of StarCraft that isn't so micro-intensive and we can see how the AI does on that.
Chess and Go don't have any form of micro and AIs are nevertheless dominant there.
I'd say, give AI development another year and I wouldn't expect there to be any kind of game, in any genre, that humans can beat AIs at. Whether it's Chess, Go, other classical board games, Civilization, MOBAs, RTSes, FPSs, etc.
> Chess and Go don't have any form of micro and AIs are nevertheless dominant there.
Yes, but chess and go have a tiny problem space compared to something like Starcraft. People want to see an AI win because it’s smart, not because it’s a computer capable of things impossible for humans. If the goal was perfect micro they could write computer programs to do that 10 years ago.
Then maybe we need a better game than StarCraft to test this on? Some kind of RTS that's less micro-heavy, perhaps? Maybe even an RTS where you can't give orders to individual units at all, like the Total War series? You can't fault the AI for winning at the game because of the way the game itself works.
Even if you limit the AI to max human APM, it's still going to dominate in these micro-heavy battles because it's going to make every one of its actions count.
> Even if you limit the AI to max human APM, it's still going to dominate in these micro-heavy battles because it's going to make every one of its actions count.
Right, and we saw that with the incredible precision of the stalker blink micro. There are many ways you could make it more comparable to humans; they have already tried that by giving it an APM limit.
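For a sense of what "giving it an APM limit" could mean mechanically, here's a toy sliding-window limiter. This is purely illustrative; the interface and numbers are my assumptions, not DeepMind's actual setup:

```python
from collections import deque

class APMLimiter:
    """Toy limiter: reject actions once the agent has already issued
    max_apm actions within the trailing 60 seconds."""

    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.timestamps = deque()

    def try_act(self, now):
        # Evict actions that fell out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False  # over budget: action is dropped
        self.timestamps.append(now)
        return True

limiter = APMLimiter(max_apm=300)
# A 10-second burst of 600 attempted actions (a 3600 APM pace):
allowed = sum(limiter.try_act(t * 10 / 600) for t in range(600))
print(allowed)  # 300: everything past the budget is dropped
```

Even under a cap like this, the point stands: the AI can spend every one of its 300 allowed actions with perfect precision, while a human wastes many of theirs.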
> You can't fault the AI for winning at the game because of the way the game itself works.
But it does make the victory feel hollow when it wins using a "skill" that is unrelated to AI (having crazy high APM with perfect precision because it's a computer). Micro-bots have been around for decades, and they are really good. The whole point of this exercise is to build better AI, not to prove that computers are faster than humans.
It would be like if they wanted robots to try to beat humans at soccer, and the robots won because they shoot the ball out of a cannon at 1000 km/h. They win, but not really by having the skills we are trying to develop.
I just can't help but feel that nothing AI does will ever be good enough according to this mindset, i.e. true "intelligence" is by definition things that computers cannot do.
Beating the world champion in Chess was, at one point, considered an impossible achievement for computers. Now it's considered so routine it doesn't even count as AI according to many. And in a few months when AlphaStar is beating top human players without having to use APM or viewport advantages, what will the next goalposts be?
The point is, it's like being impressed by a calculator because it can multiply two massive numbers faster than we can... no shit, that's the whole reason we use computers, because they calculate faster than we can...
There's nothing impressive about coding something that can execute far faster than a human, or so accurately that it beats a human. There were Quake 3 bots ten years ago that could wreck any human alive because they react in milliseconds and shoot you in the head perfectly. So what? It's obvious a computer can do that. It's like being surprised that a bullet beats a human in a fight; that's by design.
I would be impressed if a computer learned from scratch, without knowing anything about the game beforehand, about the controls, or anything else, under ordinary human limitations: using vision processors to look at a screen for its inputs and controlling a physical mouse and keyboard. That would be impressive. But watching a computer do perfect blink micro at 1,500 APM is just underwhelming, since that isn't new tech; you could hand-code it without deep nets.
> The point is, it's like being impressed by a calculator because it can multiply two massive numbers faster than we can
Yeah, exactly. And when calculators first came out, people were very impressed by them. They upended entire industries and made new things possible that had simply never been possible before with manual calculation. When you're pooh-poohing the entire computational revolution you might want to take a step back and reconsider your viewpoint. It only seems not impressive now because we were born in a world where electronic calculation is commonplace and thus taken for granted.
If you don't find this achievement impressive, then go look at some turn-based game where reaction time is eliminated entirely that computers still dominate at, like Chess or Go. The AIs are coming. Or give it a few months and they'll come back with a version hard-limited to half the APM of the human players and it'll still dominate. It's clear which way the winds are blowing on this. People who bet against the continued progress of game-playing AIs invariably lose.
> Or give it a few months and they'll come back with a version hard-limited to half the APM of the human players and it'll still dominate.
And this is exactly what is being argued here. Let's see that in particular, not a demonstration that computers are faster than humans. Of course they are. Who ever argued otherwise? This has been known and envisioned since before calculators were invented.
What people here are arguing with you for is that we want human-level limitations of the controls for the AI so it can clearly win by better strategy.
> I just can't help but feel that nothing AI does will ever be good enough
It can be good enough in a certain problem space, such as chess. But unlike chess or go, which are purely mental games, StarCraft has a large physical component (vision, APM, reaction time). That can make it hard to determine when an AI has “mastered” this RTS. Like you said, it may be a few more months (years?) before AlphaStar can master StarCraft at the “mental” level. The physical component is trivial for a computer, so mastering that is not much of a milestone.
Depending on how you define Chess, seeing the pieces and physically moving them is part of it as well. Chess-playing AIs haven't been required to have robot components because that's not the interesting part of the challenge of Chess. I'd argue the same is true of StarCraft, even more so, given that it's an innately computer-based game in a way that Chess is not. It seems arbitrary to require the presence of an electronic-to-physical bridge in the form of a robot only to then operate physical-to-electronic bridges in the form of a keyboard and mouse. Just let it run via the input devices directly. Give it some years and humans will be able to do this too.
In other words, this isn't an interesting handicap to apply.
> It seems arbitrary to require the presence of an electronic-to-physical bridge in the form of a robot only to then operate physical-to-electronic bridges in the form of a keyboard and mouse.
It's not at all arbitrary. An SC2 match is won by a combination of strategy and the reflexes and physical quickness with which actions are executed.
The whole point is to even the playing field in the area of the physical limitations so that only the strategy part is the difference. You know, the "Artificial INTELLIGENCE" part?
Is an AI that wins at StarCraft only because it has crazy high APM really going to help get to the next X? We could have built that 10 years ago. All it proves is that computers have faster reflexes than humans. That won't help them become problem solvers for the future.
You seem to forget the way it learned to play every part of the game (not just micro fights). That is, not by having any developer code any rules, but simply by "looking" and "playing".
That's the great accomplishment and nothing like that could have been done 10 years ago.
What makes this interesting is whether they can make a computer program better at StarCraft strategy than a human. How they did that is irrelevant. If having developers code rules makes a better AI than deep learning, then the former is the more impressive solution. What they did is a great accomplishment and the AI they created was amazing, but I feel like the faster-than-humanly-possible micro makes any accomplishment hollow, because that is really nothing new.
If they beat human performance in this (non-AI-building) field by humans painstakingly coding rules for specific situations, then that's cool I guess but not groundbreaking, because the solution doesn't generalise.
If they beat human performance in a field heretofore intractable by software by throwing the basic rules and a ton of compute at an algorithm and then waiting for six weeks while the algorithm figures the rest out by itself, then that absolutely is qualitatively different.
The reason being, of course, that if they can find an algorithm that works like this across a wide enough problem space then eventually they'll find an algorithm which will work on the question of "build a better algorithm." After which, as we know, all bets are off.
If you think the how is irrelevant, you are completely missing the point of this exercise. Maybe to you only the result matters, but for nearly every other task, and for humanity, the how matters.
Simply imagine next taking on a different game, like one of the Anno series.
If developers did it by hand, you'd need 50 devs sitting there for probably a couple of months, figuring out the best rules and their sequence and putting them in. That is about $20 million just to get a similar AI for the next game.
Compare that to downloading all available replays, requiring maybe 2-3 data scientists to get the data into shape, renting some compute in the Google Cloud, and getting the same or a better result for probably half a million dollars.
Watching and learning from data alone is why modern machine learning is considered a revolution and a novelty. Buying compute time in the cloud is dirt cheap in comparison (to devs hand-coding rules), and the results are often better.
DeepMind is not working on this problem for the benefit of gamers or the StarCraft community. Making the perfect bot is not the aim; tackling the next hurdle, the next hardest problem in machine learning, is, on the way to learning algorithms that generalize better.
Speed of play is a fundamentally important gameplay mechanic in any real-time game. One of the main reasons the pros are better than amateurs at these types of games is that they play and react faster.
And yes, of course computers are much better at doing things more quickly than humans. It's not even remotely close for us. The AIs are clearly better. It's not cheating either; they are legitimately better at it than us.
It sounds like you're simply objecting to pitting people up against computers in real-time games entirely.
So all they really proved is that computers are faster than humans. I knew that before this started.
The DeepMind team knows the challenge isn't to beat humans at StarCraft. That is trivially easy with the advantages you mentioned. The challenge is to be better at strategy than a human. That is why they tried to add artificial rules to give the AI physical limitations similar to a human's (emulated mouse, rate-limited actions, emulated screen and visibility). There have been micro AI bots for years that could outperform any human. They knew they weren't just trying to build another micro bot, because if they were, it wouldn't be much of an accomplishment.
> The DeepMind team knows the challenge isn’t to beat humans at StarCraft. That is trivially easy with the advantages you mentioned.
It's not trivially easy at all. No one had come close before. It took an entire team of ML experts at Google to pull it off. These hard-coded micro bots you're referring to didn't holistically play the entire game and win at it. They're more akin to an aimbot in FPSes, not a self-learning general game-playing AI.
This is yet another in a long string of impressive AI achievements being minimized through moving the goalposts. It's facile and it's boring.
> It's not cheating either; they are legitimately better at it than us.
This is not 100% true; the AI still skips the mechanical part (it doesn't have a mouse, keyboard, and hands) in this particular case. That alone can introduce insane amounts of additional complexity and would keep the AI from being pixel-precise.
Yup. You could have 200 APM, but as long as your clicks and button presses are perfect, you are going to win against someone with 800 who is super imprecise.
Blink stalkers are basically perfect for an AI because of the precision with which it can blink them around.
I assume you’re joking, but just in case you aren’t, Scrabble bots have outperformed top humans for 20 years with little more than a basic Monte Carlo tree search.
In the TLO matchup, the AI wins with an army of disruptors and unupgraded stalkers; of course, TLO wasn't playing his best (in terms of micro, or on his main race), but it was still doing well with a micro-lacking unit (outside of blowing up its own army repeatedly).
Dealing with perfect blinking is basically impossible, since you can blink back your units right before they die. Stalkers are balanced around the fact that HUMANS have limits to how well they can micro.
While the "skill cap" on blink stalkers is extremely high, there are many hard counters that can stop even perfect blink micro. MaNa won because he went for one of these. Immortals are the perfect hard counter to stalkers because:
- cost-for-cost, they are more efficient in a faceoff (resources)
- immortals are space-efficient DPS (damage per second) in a battle. In a given battle, an army of 4 immortals is far more likely to all be in range of an enemy and dealing damage than an army of 8 stalkers bumping against each other trying to reach the priority target
- immortal shots do not have projectiles, but are instant. No matter how perfect your stalker control, once an immortal targets a stalker, it is guaranteed to take 30+% of its hitpoints in damage.
The last point is very important. Once MaNa had 3+ immortals, even with perfect blink micro, a little bit of target fire and timing micro on MaNa's part allowed him to slaughter the stalker army one stalker per volley, while it takes them longer to clean up the immortals (especially with shield battery support).
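The instant-hit point can be made concrete with a toy model. All numbers below are invented purely for illustration, not actual SC2 stats: against a projectile, perfect micro can blink a weakened stalker out before the shot lands; against an instant attack, every targeted volley is guaranteed damage.

```python
# Toy numbers, assumed for illustration only (not real SC2 values).
STALKER_HP = 160
SHOT_DAMAGE = 50       # one focused volley against one stalker
BLINK_THRESHOLD = 60   # perfect micro blinks a stalker out below this HP

def survives(instant_hit):
    """Does one stalker under perfect blink micro escape focus fire?
    With projectile shots, the stalker can blink away while the shot is
    in flight once its health is low; instant shots always land."""
    hp = STALKER_HP
    while hp > 0:
        if not instant_hit and hp <= BLINK_THRESHOLD:
            return True  # blinks out before the projectile connects
        hp -= SHOT_DAMAGE  # instant hit: damage is guaranteed
    return False

print(survives(instant_hit=False))  # True: projectiles can be dodged
print(survives(instant_hit=True))   # False: instant shots always land
```

The exact thresholds don't matter; the structural point is that projectile travel time creates a dodge window that perfect micro exploits, and instant attacks remove that window entirely.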
Another thing glossed over in this discussion -- AlphaStar did more than classic blink micro. It did a very technical maneuver (the casters briefly allude to it) of triggering the barrier on one immortal with a single laser, then focusing all fire on an immortal whose barrier was already down from a previous iteration of this tactic, and then walking away until the barrier has worn off (while blink-microing weakened stalkers). Repeat. This is a detail of increasing the efficiency of trading stalkers with immortals that humans don't often even think about, let alone execute (because good blink control is often more impactful). That AlphaStar came up with this shows that it's not just about perfect execution of micro, but also perfect understanding of micro.
There was a "perfect zergling micro vs siege tanks" bot some time ago that would micro lings away from the one that was being fired at by the tanks, thereby negating all the splash damage. The effect was insanely powerful.
But as you say, showing that a bot can have perfect micro is not very interesting. Of course a computer can have better control of well defined tasks like moving a unit away just before it dies, especially doing so for many different units concurrently. What is interesting is the wider strategy and how the computer deals with imperfect information.
The interesting part to me is that, as far as I understand, the AI figured out this strategy by itself, basically deciding that it would be a good way for it to win games, rather than being specifically programmed to do it. That's actually pretty cool!
Other than that, I agree, and am also much more interested in what happens when you have a more level playing field (using camera movement rather than API, limiting reaction times and CPM, etc). I look forward to future matches where this happens.
I think there is some debate about what the neural net did and what was hardcoded. So far, all StarCraft AIs have consisted of hardcoded intelligent micro ruled by a neural net that picks one out of fewer than 100 possible hardcoded choices, and things like "expand", "scout", "group units", and "micro" are hardcoded outside of the neural net (part of the API, in fact). When the researchers said they only used 15 TPUs for 14 days on an LSTM, it makes me think they really narrowed down the search space of the neural net and hardcoded a lot of the micro, or at least trained separate micro nets.
Not really. The version that learned from scratch was scrapped, as it didn't work at all. This version learned by observing pros, so it didn't learn by itself; it imitated and perfected pro players.
It wasn't explicitly programmed to do it, but all these tactics were in the seed replays from which the agent started its learning. So it didn't actually figure out the move _by itself_; it just found it useful.
This is really impressive. Even though the camera hack and the uncapped APM (only the average was capped) made this version slightly unfair, since an AI's ability to micro stalkers insanely well is basically unbeatable, I feel confident based on this performance that DeepMind will release a superhuman AI with lower EPM than humans very soon.
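To see why capping only the *average* APM still leaves room for superhuman play, here's a bit of toy arithmetic (all numbers are made up, not AlphaStar's actual settings): an agent can stay under the average cap by idling for most of the game, then spend its entire action budget in one decisive fight.

```python
# Toy arithmetic: an average APM cap does not bound peak APM.
GAME_MINUTES = 10
AVG_CAP = 300                    # allowed average actions per minute

budget = AVG_CAP * GAME_MINUTES  # 3000 actions over the whole game

# Idle for nine minutes, then dump the whole budget into one battle:
burst_actions = budget
average_apm = burst_actions / GAME_MINUTES
peak_apm = burst_actions         # all spent within a single minute

print(average_apm)  # 300.0 -> within the average cap
print(peak_apm)     # 3000  -> a 10x superhuman burst in the key fight
```

This is why a hard per-window limit (or an EPM limit, as suggested above) is a much stronger constraint than an average cap.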