Lots of people contribute what I imagine are significant amounts of CPU power/money to the Leela Chess Zero project.
Would love to see Alpha Chess vs Leela Chess.
 I've caused terrible confusion by melding Leela Go and Leela Chess. Leela Chess was originally forked from Leela Go, but that's basically where the similarities end.
Edited for a bit more clarity.
This is also how Stockfish got to be the #1 engine. By being open source, and having the testing framework (https://tests.stockfishchess.org) use donated computer time from volunteers, it was able to make fast, continuous progress. It flipped what was previously a disadvantage (if you are open source, everyone can copy your ideas) into an advantage, as you can't easily set up a fishtest-like system with an engine that isn't developed in public.
With a tool like this, instead of waiting for a Go pro to visit our local Go club to review our kifu, I can have my game reviewed move by move until the end (not only the first n moves).
I can still have questions for pros, but they would be more specific.
Now, playing a superhuman intelligence bot can be unfun. No matter how much effort you put into a move, your position just keeps getting worse with every move.
Another important use-case is that the AI can also tell you if a joseki is actually joseki, and how to refute a bad joseki move.
Perhaps you were confused because Leela Chess Zero was forked from Leela Zero (neural network Go engine by Pascutto) but it includes Stockfish's move generation logic.
Glaurung was pretty innovative at the time.
But Kasparov and others have given up on the idea that a human provides any unique insight into chess anymore. Computers are just better.
In 2014 a heavily handicapped Stockfish beat the 5th-ranked player in the world (Nakamura) under tournament conditions, despite having no access to its opening book or endgame tablebases and playing down a pawn.
What he has given up on is a single human beating a computer.
When he recently came out of chess retirement he didn't talk about it at all in either 2017 or 2019:
There's nothing recent about him on https://en.wikipedia.org/wiki/Advanced_chess
I can't imagine a human doing anything besides making things worse or even.
Right now we have essentially two top tier engines -- traditional brute force with alpha beta pruning (stockfish), and ML (leela). Both alone are incredibly strong, but they are strongest and weakest in different types of positions. A computer chess expert, who knows what kind of positions favor stockfish and what kind favor leela, could act as a "referee" between the two engines when they disagree, and when they are unanimous, simply accept the move.
Ten years ago, a grandmaster driving a single engine could typically beat an equal strength engine. I don't think that's the case anymore.
But I think if you had someone who is an expert at computer chess -- not so much a chess grandmaster -- and you gave them Leela AND SF, and let them pick which one to use in case of conflicts, they would score positive against either Leela or Stockfish in isolation.
Larry Kaufman designed his new opening repertoire book by doing exactly this -- running Leela on 2 cores + GPU, and stockfish on 6 cores, and doing the conflict resolution with his own judgement.
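The two-engine workflow described above can be sketched as a toy arbiter loop. This is purely illustrative: the engine interfaces and the referee here are stand-ins, not a real UCI implementation or Kaufman's actual process.

```python
# Hypothetical sketch of the "human as referee" workflow: play the move
# when the engines agree, otherwise defer to a human (or heuristic) referee.

def pick_move(leela_move, stockfish_move, human_referee):
    """Return the unanimous move, or the referee's choice on conflict."""
    if leela_move == stockfish_move:
        return leela_move
    return human_referee(leela_move, stockfish_move)

def referee(leela_move, stockfish_move):
    # In practice this is human judgement about which engine the position
    # favors; as a placeholder we simply take Leela's suggestion.
    return leela_move

print(pick_move("e2e4", "e2e4", referee))  # unanimous -> e2e4
print(pick_move("d2d4", "c2c4", referee))  # conflict  -> referee decides
```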
The human can certainly no longer pull his own moves out of thin air, though.
It is unlikely that chess is any different. Any superficial human judgment of which move is 'better' is just ignorance of the issues around evaluating a position. If you have statistical evidence, that is something; 'but I think' is not evidence.
It might be entertaining to have a human involved. It isn't going to help with winning games.
I can absolutely guarantee you that a human (who is an expert in computer chess, someone like Larry Kaufman) + engines will beat a single engine over the long run. With current tech and computing power, this is ONLY because we have brute-force (alpha-beta) and ML engines at near-equal strength, with strengths and weaknesses in different types of positions, and those strengths and weaknesses are understandable.
If we did not have AlphaZero, I don't think the human would be able to add anything at all currently.
Source: I’m a correspondence chess international master
The few and rare times an engine gets funky are usually endgame positions where it can't seem to find a sacrifice to win the game and will evaluate the current position as drawn. These cases are rare, and I very much doubt a human would be able to find these moves in an actual match.
Now, if you’re talking about the way a chess engine learns, it can learn in two different ways: without human help (learning completely on its own, given nothing but the rules, which is how AlphaGo works), or with human aid (through chess theory accumulated over centuries of human matches that these engines have built in as part of their evaluations). Things get very interesting.
I’d recommend looking up a few games between AlphaGo and Stockfish, which embody these two different philosophies and battle it out tooth and nail. The matches are brilliant. I would say, though, that AlphaGo (learning the game entirely from scratch, without human help) seems to have triumphed more often than Stockfish, and given the nature of these systems, I’d expect that trend to continue.
However I agree that the games between AlphaGo and Stockfish are really interesting. It strikes me that the AlphaGo version of chess looks a lot more human; it seems to place value on strategic ideas (activity, tempo, freedom of movement) that any human player would recognise.
It's kind of crazy how AlphaZero has managed the success it has. Stockfish calculates roughly 60 million moves per second and AlphaZero calculates at only 60 thousand per second. Three orders of magnitude less yet its brilliance is mesmerizing, tearing Stockfish apart in certain matches.
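The "three orders of magnitude" claim checks out on the thread's own approximate figures:

```python
# Rough ratio of the search speeds quoted above (approximate figures
# from the comment, not measured benchmarks).
stockfish_nps = 60_000_000   # ~60 million nodes/second
alphazero_nps = 60_000       # ~60 thousand evaluations/second
ratio = stockfish_nps / alphazero_nps
print(ratio)   # 1000.0 -> three orders of magnitude
```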
Not to be too picky, but it was AlphaGo _Zero_ that learned from the rules alone. AlphaGo learned from a large database of human played games: "...trained by a novel combination of supervised learning from human expert games". 
AlphaGo Zero, derived from AlphaGo, was "an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules". 
Agadmator's youtube channel covers a bunch of those. https://www.youtube.com/watch?v=1yM0D1iZLrg
Pawn structure? BAH! King safety? CHARGE!
And then 75 moves later Stockfish is in zugzwang.
And still he lost against Kasparov, which doesn't happen now: top engines haven't been beaten by humans since ~2006.
Source: I’m a correspondence chess international master
I might be misunderstanding your claim, but how can humans playing correspondence chess beat Stockfish or Lc0?
Relatedly, can you give some examples of novel non-engine lines that turned out to be better than engine lines?
- Fortresses. One side has more material, but the position can't be won by the superior side. Since the rules declare a draw only after 50 moves without a capture or a pawn push, current engines can't search that far ahead and keep maneuvering without realizing the blocked nature of the position. Some engines have been programmed to handle this, but their overall strength decreases significantly.
- Threefold repetitions. The engine believes the position is equal and shuffles the pieces in a, let's say, pseudorandom way; only at some point does it realize the repetition can be avoided favourably by one side. This topic is also frequently discussed on programming forums, but no clear-cut solution has yet emerged.
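For illustration, the two draw-detection mechanisms above can be sketched independently of any real engine. This is a minimal model of the rules themselves (a halfmove clock and a position-occurrence counter), not how any particular engine implements them:

```python
# Minimal draw-detection sketch: the 50-move rule counts plies since the
# last capture or pawn push; threefold repetition counts how often an
# identical position has occurred.
from collections import Counter

class DrawTracker:
    def __init__(self):
        self.halfmove_clock = 0   # plies since last capture/pawn push
        self.seen = Counter()     # position key -> occurrence count

    def record(self, position_key, was_capture_or_pawn_move):
        if was_capture_or_pawn_move:
            self.halfmove_clock = 0
            self.seen.clear()     # repetitions can't span irreversible moves
        else:
            self.halfmove_clock += 1
        self.seen[position_key] += 1

    def fifty_move_draw(self):
        return self.halfmove_clock >= 100   # 100 plies = 50 full moves

    def threefold(self):
        return any(n >= 3 for n in self.seen.values())

t = DrawTracker()
for _ in range(3):
    t.record("pos-A", False)   # shuffling back to the same position
    t.record("pos-B", False)
print(t.threefold())           # True
```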
If you are looking for positions where human play is still better than an engine's, the opening phase is the most fruitful. Most theoretical lines were born of human creativity, and I doubt a chess engine will ever be able to navigate the intricacies of the Poisoned Pawn Variation of the Sicilian Najdorf or the Marshall Attack of the Ruy Lopez. Neural network engines are strategically stronger than classical AB programs in the opening phase, but they suffer from occasional tactical blindness. Engine-engine competitions often use opening books to force the engines to play a prearranged variation, to increase variability and reduce the draw percentage.
What is the evidence that it isn't a hardware or software differential between the players? I can't think of an easy way to ensure that both players started with computer-suggested moves of the same quality.
For example in a given position you could have 3 moves
M1 - a calm continuation with a good advantage
M2 - an exchange sacrifice (a rook for a bishop or a knight) for an attack
M3 - a massive exchange of pieces entering into a favorable endgame.
If the three choices are so different, the computer usually can't search long enough to settle on a clear best move. The human, instead, can keep evaluating the choices until one of them emerges as clearly best (for example, the endgame can be forcibly won). In these cases the computer's suggestion becomes almost irrelevant, and only a naive player would make the choice on some minimal score difference (which can vary unpredictably with hardware, software version, or duration of analysis). So the quality of the starting suggestion is somewhat irrelevant if you plan to make a thoughtful choice.
Computers are now as much better than Magnus Carlsen as he is better than a moderate amateur.
If even the best player overrides a move he's much more likely to be reducing the strength of the move than increasing it.
Source: I’m a correspondence international chess master
I wasn't thinking about correspondence but what was the latest large cyborg correspondence tournament?
Just realized that correspondence chess is cyborg chess. I didn't know computers were legal in correspondence chess, but it makes sense now. Reading about it, it sounds like it's less about knowing chess and more about understanding the applications you're using.
Regarding the argument about "knowing chess", it depends on your definition. I often use this analogy: correspondence chess is to tournament chess what the marathon is to track running. They require different skills and training, but I guarantee you that a lot of understanding is involved in correspondence chess, possibly more than in tournament chess.
"In terms of actual cost to DeepMind (a subsidiary of Google’s parent company) to run the experiment, there are other factors that need to be taken into account, such as researcher salaries, or that the quoted TPU rate probably includes a healthy amount of margin. But for someone outside Google, this number is a good ballpark estimate of how much it would cost to replicate this experiment."
And you base that claim on what exactly? Leela Go is trained by the community, which donates self-play resources. Just because you outsource your cost to volunteers doesn't mean it's free!
In order to get a realistic estimate, you'd need the average cost of electricity, hardware cost (proportionate to use), and of course opportunity costs.
Since you cannot do that, I'd argue that you have no clue what the true training cost of these projects compared to on-demand/cloud costs really are.
To provide an order of magnitude: there are about 20M training games. IIRC a V100 could complete one game in something like a minute, so that's 300k-ish hours of compute. Obviously, while V100s were the fastest, other GPUs were more cost-efficient.
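Writing that back-of-envelope out explicitly (all inputs are this comment's assumptions; the $/hour rate is purely hypothetical, for scale):

```python
# Rough sanity check of the self-play compute estimate above.
games = 20_000_000           # total self-play games (thread's figure)
minutes_per_game = 1         # assumed V100 throughput
gpu_hours = games * minutes_per_game / 60
print(f"{gpu_hours:,.0f} GPU-hours")   # ~333,333 -> the "300k-ish" figure

price_per_hour = 1.0         # hypothetical discounted GPU $/hour
print(f"~${gpu_hours * price_per_hour:,.0f}")
```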
Just replace the self-play TPU resources with commodity hardware or even just cheaper GPU compute providers and you'd reduce cost 10-fold by just not using TPUs. Same goes for the number of self-play games.
That still doesn't change the estimate itself. IF the other projects had used Google TPUs, they could well have cost around the same as the estimate.
I really don't understand what you're trying to argue against here.
How is Leela Zero stronger if it's the same calculations and less compute time?
Minor improvements to Google Books OCR might not be worth much, whereas better search result scoring would be worth lots. An automated system would decide where it was most efficient to spend the TPU's. Management would set how many dollars a 10% performance improvement was worth.
I'm sure the reality is a bunch of middle managers arguing over why their team deserves them more than another.
That's a short sighted, immediate benefit or bust mentality. Not to mention that projects have a ramp-up time where they are not profitable yet, but still very valuable strategically.
I don't know how much less, but if you were to do fully pre-emptible at this scale I wouldn't be surprised if you could get it down to one-tenth the price. I wouldn't suspect the same of other more generic resources like CPUs that have a much lower price point to begin with, but the TPU sticker price seems very high with lots of headroom.
And in comparison to large tech company R&D budgets, the amount cited in the article is a drop in the bucket. Consider that Google spent $26 billion on R&D in 2019 alone; Microsoft spent almost $17 billion.
A Go champion might have trained for 8 hours a day, for 15 years (age 5 to 20). That is about 40 000 hours.
In other words, machines required 137 times longer to learn the game, and at twice the power consumption! There is still a lot of room for improvement.
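Spelling the arithmetic out (the "137 times" multiplier is the parent comment's claim, taken as given here):

```python
# Sanity-checking the human training figure and the implied machine total.
hours_per_day = 8
years = 15                       # age 5 to 20
human_hours = hours_per_day * 365 * years
print(human_hours)               # 43800 -> roughly the 40,000 quoted

machine_hours = 40_000 * 137     # implied aggregate machine learning time
print(f"{machine_hours:,} hours (~{machine_hours / 8760:.0f} years)")
```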
"AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves".
AlphaGo was followed by AlphaGo Zero (which is the topic of this article) which did not use the process that you describe, it used only the rules of the game and the winning condition.
There are, for example, other NNs also being trained to play Go; should all the unsuccessful attempts be counted into the machine total? The comparison becomes almost impossible then.
The space of possible games is huge (infinite?), but only a tiny subset of these games could reasonably become a popular game for humans.
E.g. it's not an arbitrary random coincidence that the scoring rules for each grid intersection in go are the same (I mean, it could vary in an arbitrary pattern), it ensures that the ruleset is small enough so that humans can learn it.
It's not an arbitrary random coincidence that the playing of go involves pattern recognition on some level, since that's what we're good at and find interesting in many games.
It's not an arbitrary random coincidence that in Mario game after jumping the sprite falls back down eventually; that's reusing the priors from real world physics.
I know Josh Tenenbaum at MIT works on this; see for example:
- How to Grow a Mind: Statistics, Structure and Abstraction 
- Steps towards more human-like learning in machines 
Wondering if there are other researchers exploring similar questions.
And we're starting to bump against fundamental limits of these apparati. Most modern neurobiology uses genetically encoded fluorescent sensors read out by rather expensive 2-photon microscopes. The sensors aren't as crisp as one wishes - there is a huge subfield dedicated just to deconvolving these fluorescent sensor readings into what the neurons are actually doing. And there's only so much further the 'scopes can be pushed.
The point being: it's really quite difficult to overstate just how overwhelmingly complex the brain is and how far we are from understanding even little really specific bits of it, let alone the whole thing.
That being said, the redwood center for theoretical neuroscience does some excellent work bridging the cutting edge of theory neuro and machine learning - towards the larger picture of how the brain works. You might be surprised at how 'rudimentary' the questions we're trying to solve in that domain are. Most work focuses on the visual system - far easier to study something when you have a good idea of what it's supposed to do (as opposed to, say, cortex).
I am not aware of anything resembling a grand theory that makes experimentally verifiable predictions. I am pretty sure I would have heard of such a thing if it existed.
But humans aren't spiders. We've got the big brain; it's kind of our thing.
>KataGo's latest run used about 29 GPUs, rather than thousands (like AlphaZero and ELF), first reached superhuman levels on that hardware in perhaps just three to six days, and reached strength similar to ELF in about 14 days. With minor adjustments and a few more GPUs, starting around 40 days it roughly began to match or surpass Leela Zero in some tests with different configurations, time controls, and hardware. And finally after about four months of training time, the current run may be wrapping up fairly soon, but we hope to be able to continue it or begin another run in the future.
This comparison is a bit unfair. Humans are the result of evolution on a grand scale. Human Go is the result of millennia of gameplay. A human does not become grand master in isolation.
AlphaGo is the result of an evolutionary tournament style competition of a much smaller duration and breadth. AG is also a population, not just one agent, and it would be silly to take just one agent and evaluate it on its own as if it could be created without the others.
Should we include the human costs in AG as well? Why just the electricity and CPU?
For example, I expect that the training required to go from 7-year-old child to Go grand master requires a completely different number of bits of information than the training required to go from blank-slate NN to NN Go grand master. I also suspect that the difference in what is being learned may well dominate the difference in training efficiency. Both the prior knowledge and the mechanism of learning are so different that I doubt you could get a meaningful comparison based on current understanding.
You should remember that we basically have no idea how human beings actually learn things, and no idea how much prior knowledge we have encoded. Just as an example, I once saw a documentary claiming that chess grandmasters seem to recognize valid chess positions using the parts of the brain that usually recognize faces. Assuming that was true (I'm not claiming it is), perhaps part of their chess learning consisted of taking a built-in face-recognizing NN and training it to recognize chess boards. How much did the built-in face recognition help? I don't think it would be possible to calculate.
A huge point I hadn't even considered: bits don't relate very directly to an NN's ability to perform a task.
Now, the bot has many advantages. It never sleeps, never gets distracted, never dies and can be copied to another system to obtain a copy of the bot with the same playing performance.
The bot is also more accessible. Any player can now train with a bot, all day if they want, almost for free. You can't do that with a professional.
If you asked someone to learn Go but presented them with only the rules of the game, they'd likely be a weak player (although probably one with some original strategies).
Their running cost estimate of a single TPU in a machine with 4 "TPUs" is based on the price of a Cloud TPU v2-8, but a v2-8 is actually 4 ASICs on one board.
Also, because of the date of publication being around the time v2s were announced, and the fact that the TPU is only used for inference and GPU is used for training, I think self play was likely done on TPU v1s, which use 5x less power per ASIC and so are likely much cheaper
I also think the way they calculated the number of TPUs required is wrong: they seem to assume one machine with 4 TPUs makes one move in 0.4 seconds, but since making a move only requires a forward pass through a moderately sized CNN with a 19x19 (tiny) input, one TPU should be able to make thousands of moves per second in parallel.
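A rough back-of-envelope supports the "thousands of moves" intuition. The network shape below matches AlphaGo Zero's smaller 20-block configuration, the 92 TOPS peak is the published TPU v1 figure, and the utilization factor is a pure guess:

```python
# Back-of-envelope: forward passes per second on one accelerator.
# All numbers are rough assumptions, not measurements.
blocks  = 20                     # residual blocks (AlphaGo Zero's small net)
filters = 256
board   = 19 * 19
# One 3x3 conv: 2 * k*k * C_in * C_out * positions FLOPs (multiply + add)
flops_per_conv = 2 * 3 * 3 * filters * filters * board
flops_per_pass = flops_per_conv * (2 * blocks)   # two convs per block
peak_ops       = 92e12           # TPU v1 peak int8 ops/sec (published spec)
utilization    = 0.25            # assumed effective utilization with batching

passes_per_sec = peak_ops * utilization / flops_per_pass
print(f"~{passes_per_sec:,.0f} forward passes/sec per chip")
```

Even with pessimistic utilization this lands on the order of a thousand evaluations per second per chip, versus the 2.5 moves/sec the article's math implies.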
I've also heard rumors that AlphaStar (https://deepmind.com/blog/article/alphastar-mastering-real-t...) was essentially put on hold because it was too expensive to improve/train. The bot wasn't able to beat StarCraft champions and _only_ got to a grandmaster level.
(Sure, you need to somehow roll the StarCraft world forward and back, but for Atari, using MCTS was shown to be an order of magnitude more efficient.)
I have also seen comments that the search width is too large, or maybe academic purity consideration?
They had it around at the last BlizzCon. The setup wasn't ideal, so Serral (who won the world finals in 2018 and reached the semifinals in 2019) wasn't really happy with how he played, but it won.
This was also a version where they'd worked on preventing its ability to micro at quadruple-digit APM.
I'd even argue that they missed their goal by a long shot if their system isn't able to play arbitrary maps - every human player can do that no problem.
Still what they have accomplished is a miracle.
For others: It's $36M.
Also, nobody mentioned that the title is inaccurate, so I guess it's just a pedantic "thou shalt not change the title" rather than "the title was misleading/clickbait"...
"AlphaGo Zero showed the world that it is possible to build systems to teach themselves to do complicated tasks."
It didn't do any such thing. The game of go has a huge number of potential moves and outcomes, but the rules themselves are trivial, the board position can be measured in a handful of bytes and gameplay always and only progresses in one direction. And judging a good vs bad outcome is just a matter of comparing two numbers.
Go is challenging and interesting for humans, but it's not remotely as "complicated" as driving a car or translating a language.
Google isn't operating with that cost, unless we assume that they are prioritizing AlphaGo to the point where they lose such customers 100% of the time.
It's way more likely that AlphaGo is trained on spare time, the cost for the hardware is sunk anyway, so only the cost for upkeep is real.
Not quite, power is quite expensive and basically all modern computers use far less power at idle than going full bore saturated with multiply-add instructions and perfect memory streaming.
Having said that, I agree that there is a substantial cost efficiency gain if they can schedule it during periods of inactivity.
I've been interested in the application of AlphaZero to chess. It's sad that this many resources were devoted to something we can't even use to play chess as of now. Leela (the open-source reimplementation) is really strong, but the crushing results presented in the AlphaZero paper never materialized, and this article just shows how hard they are to replicate.
It seems to me that, if you only take it as a marketing operation, it has been already very valuable.
> Over 72 hours, 4.9 million matches were played.
One of these claims must be incorrect or misinterpreted; I highly doubt they used as many TPUs as the article claims. That would not only be impractical but would also raise a lot of other issues: networking, disk speed, etc.
My statement is not against this article, if anyone can confirm they used so many TPUs in parallel feel free to post it
Playing 4.9 million matches of ~100 plies each at 0.4 seconds per ply is 196,000,000 seconds.
That's < 1,000 TPUs. Sounds big, but not too-large-for-Google big. But other comments here say that the 0.4-second number is also wrong (and in fact significantly lower).
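Reproducing that arithmetic (the plies-per-game and seconds-per-ply figures are the thread's assumptions):

```python
# TPU-count back-of-envelope from the self-play numbers above.
matches     = 4_900_000
plies       = 100            # assumed average game length
sec_per_ply = 0.4            # the article's per-move figure
total_sec   = matches * plies * sec_per_ply   # ~196 million TPU-seconds

wall_clock = 72 * 3600       # the 72-hour training window, in seconds
tpus = total_sec / wall_clock
print(round(tpus))           # ~756, i.e. "< 1000 TPUs"
```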
We don't run on those fancy V100 cards though; regular old gaming cards suffice. I suppose if we bought the "industrial" Nvidia versions it would take a bit longer to recoup, but still definitely within the year.
Anyway, what I'm saying is that it's probably possible to do this a lot cheaper than $36M, though maybe not in such a short time. Our startup is extremely cash-intensive, and I bet machine learning companies are as well (I suppose machine learning experts aren't cheap ;)), so if we can put in some work and shave a big portion off our hardware costs, that really goes the distance.
How modern startups start out: Spend $50,000/month to run hundreds of microservices on a managed Kubernetes cluster
I remember my last chat with one of those guys. They insisted the company wasn't up to date because we didn't run our app inside containers and didn't develop our own AI/ML systems...
"non-trivial" is a bit of a red herring here. Playing go is pretty trivial compared to something like walking or scratching your face. Winning go may be non-trivial compared to those in some ways but it is very trivial in comparison in other ways.
Scratching a face is a matter of fine motor control; there's an example from 2011 that did this, as well as face shaving.
Walking is slightly tricky because it's such a dynamic system, but it's now at human level, and there was never really any question that it would be possible.
On the other hand, the state of the art in Go systems before AlphaGo (the one trained on human games, not AlphaZero) couldn't beat competent amateurs. No one had really considered the learn-from-zero-knowledge approach of AlphaZero, even for easier games like chess.
The main problem making bipedal robots still impractical is hardware expense (wheels are simpler and cheaper) and the required power supply. For most use cases it's more efficient to use something other than a bipedal robot, and there's limited business application and future revenue in scaling research demos of bipedal walking up to practicality. So most people working on walking algorithms do so in simulated virtual environments (where we have algorithms that can learn walking and running "from scratch" through experimentation) rather than building very expensive hardware.
Current self driving car technology is sufficient for most purposes, except to actually drive on roads. So for those walking robots, can they run or even walk through a crowd without hitting people? A normal 15 year old human can do it, and that is the level you need to be to release it among people.
Here's Boston Dynamic's robot doing Parkour and gymnastics: https://www.youtube.com/watch?v=_sBBaNYex3E
It's not national level gymnastics but it's better co-ordinated than most humans.
Boston Dynamics use control-systems style robotic control. This is different to ML-style control where the system learns to perform tasks.
But that's different to "pre-programmed sequence". They don't program the individual servo movements for each movement - instead they give it the motions to perform and the control-systems balance the robot automatically.
(This is what the OP implied by the word "algorithms" anyway right?)
Given the experiment lasts for just days, this actually sounds pretty impressive I think.
Many humans studied the game for a big portion of their lives in order to get Go knowledge where it is.
However, if you want to reliably make an AI the best in the world at a range of complicated tasks, can you reasonably expect this to be cheap?
I know some companies are doing that, but I think looking at AlphaGo or AGZ and making it go faster should be an interesting problem in itself.
>The power consumption of the experiment is equivalent to 12,760 human brains running continuously.
But the problem is that this "brains" unit for AlphaZero doesn't seem to take into account the GPU, CPU, and memory involved. It only counts the TPUs.
Then there is another problem.
> a TPU consumes about 40 watts,
The TPU referred to was a first-gen TPU built on 28nm running at 40W, more like a proof of concept. Google is currently on Cloud TPU v3; the latest-generation Cloud TPU v3 Pods are liquid-cooled for maximum performance, and each TPU v3 is actually a four-chip module. If a single chip is 100W, that's 400W per TPU.
Edit: Turns out Wikipedia lists the TPU v3 as 250W. Not sure if that's 250W per chip or 250W for 4 chips.
That's on the assumption that they're very high-powered and hence require liquid cooling, although that might not always be the case.
So, adding the CPU, GPU, memory, and TPU figures, that original estimate of 12,760 human brains may be off by a factor of 10, if not more.
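A rough recomputation of the "brains" unit under different power assumptions (the ~20W brain and 40W TPU figures are the article's; the v3 per-chip wattage is the uncertain Wikipedia number discussed above):

```python
# Re-deriving the "human brains" power unit under different TPU assumptions.
brain_watts    = 20
article_brains = 12_760
tpu_watts_v1   = 40                # first-gen figure the article used
n_tpus = article_brains * brain_watts / tpu_watts_v1
print(round(n_tpus))               # implied TPU count: 6380

# If each "TPU" were really a 4-chip v3 module at ~250 W per chip:
tpu_watts_v3_module = 4 * 250
brains_v3 = n_tpus * tpu_watts_v3_module / brain_watts
print(f"{brains_v3:,.0f} brains")  # ~25x the article's figure
```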
Still pretty impressive. Considering we now only get about a 1.8x improvement with each process node, we'd get about 19x by 2030 (assuming the same algorithm). Which means AI is good, but the human brain on its own is still very much magical in its efficiency :)
Correct me If I am wrong on the numbers.
My other question is: that was how much energy it used to learn Go, but how much energy does it use during a game?
How would AlphaGo Zero perform if it was limited to 20W?
I bet the much higher cost was the PR side, including the film team, press support, TV team, travel, inviting the expert Go players, building the stage, and such. Estimated $100,000.
Not counting the man hours, they were just doing their normal job.
Because renting them out generates no revenue, right?
"Maybe around 20,000"
At least the article used a formula for the calculation. You just picked a number at random.
And the title was "How much did it cost", not how much it would cost.