In two moves, AlphaGo and Lee Sedol redefined the future (2016) (wired.com)
92 points by kaycebasques on Nov 17, 2023 | 83 comments



For those who missed what happened back then, the AlphaGo Movie is really worth watching: https://www.youtube.com/watch?v=WXuK6gekU1Y

Not very technical, not about Go tactics either, but it's just a very well-done movie about the people involved.


I enjoyed it too. Surprisingly good: considering it was basically about a computer program, they managed to make it a human story.

(I wouldn't worry about the criticism from the know-nothings below; I doubt a single one has ever had the slightest involvement in making a film so they're just ignorant loudmouths)


It's a surprisingly engaging movie even if you don't care about Go or AlphaGo, which is kind of impressive if you ask me. A movie that's worth watching even when you don't care about the subject matter.


I found it not worth watching at all. It seemed like just DeepMind PR: very little substance, soap-opera-grade material.

If a person likes soap operas, they could enjoy the movie.


Only on HN will you see a recounting of a massive achievement of humankind dismissed offhandedly like this.


I'd say Zero Dark Thirty was a bad movie about a major and important event.

Calling a movie bad doesn't diminish the original event. It just criticizes the movie itself.


Eh, the barb about "maybe you like soap operas" wasn't necessary and doesn't do the comment any favors.


In that phrase, when I said "you" I didn't mean the person I was responding to. I meant the "general you" - a hypothetical person. It didn't occur to me that it could be interpreted differently. I've edited it.


Zero Dark Thirty was pretty good, what was wrong with it?


In the long run the development of AI is far more significant than relatively minor skirmishes of American Imperialism.



> Only on HN will you see a recounting of a massive achievement of humankind dismissed offhandedly like this.

I'm not dismissing an achievement of humankind. I'm dismissing the PR piece they put out about it.

Do you struggle to see the distinction?


I’ve seen it twice. It was great. It is a documentary.

Sure, I’d like it if they discussed the algorithm and the code but you need to entertain a regular audience.


Same, I've seen it twice. It's all about that moment when they realize that the "mistake" is actually a "God" move, and they can't believe it. History was made in that moment. They realized computers can have intuition and think like they do.

I started showing people ChatGPT when it first came out, they shrugged, they didn't get it. Most people still don't get how important generative AI is and will be. Eventually, they'll have that moment too.


> I started showing people ChatGPT when it first came out, they shrugged, they didn't get it. Most people still don't get how important generative AI is and will be. Eventually, they'll have that moment too.

I have seen ChatGPT. Until they fix the error rate, I see it as a novelty. A toy you can’t count on nor offload responsibility to.


I use it constantly throughout the day for my work. The error rate is fine; just like talking to a person. You have to assume that they are wrong sometimes.


One difference between ChatGPT and people though is that when they don’t know something, the latter usually just tell you they don’t know while the former makes up BS.


I think you have a circle of highly intelligent people around you. I live in Austin and I have all kinds of folks around me and there are plenty of people that are willing to spew bullshit and back it up. I think it's good to have a barometer for bullshit.


Lmao, humans literally spew bullshit on an hourly basis. People constantly make up or parrot false information whether we realise it or not.

And technically an opinion is not a hard fact and us humans have plenty of opinions.


I'm extremely glad they didn't, it was perfect the way it was. That topic absolutely demanded a humanistic presentation. It's a tragedy in subject matter and tone, not a film meant to educate the viewers about the ins and outs of how AI works. There are plenty of resources for that elsewhere, the team behind this movie did the right thing to treat it with appropriate weight and not try to drown that all out with technobabble.


The commenters here saying "humans cannot win anymore", "there is no chance that humans can beat the best Go AI anymore" are apparently unaware that this is no longer true.

See https://arxiv.org/abs/2211.00241 and https://goattack.far.ai/

The best Go programs have a flaw that allows a good, but not championship-level, human to defeat them by creating a group that encircles another group, which apparently confuses the AI's method of counting "liberties", which determine whether a group lives or dies.

Some appear to dismiss this as just a "trick", but it seems to me to point to a more fundamental deficiency in the architecture or training method.
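
(For anyone unfamiliar: a group's "liberties" are the empty points adjacent to any of its stones, and a group with no liberties is captured. A minimal flood-fill sketch of that counting rule, with a made-up board representation; real engines track liberties incrementally and far more efficiently:)

  # Sketch only: board is a dict mapping (row, col) -> 'B' or 'W'; empty points are absent.
  def group_and_liberties(board, start, size=19):
      color = board[start]
      group, liberties, stack = {start}, set(), [start]
      while stack:
          r, c = stack.pop()
          for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
              if not (0 <= nr < size and 0 <= nc < size):
                  continue
              if (nr, nc) not in board:
                  liberties.add((nr, nc))                 # empty neighbour = one liberty
              elif board[(nr, nc)] == color and (nr, nc) not in group:
                  group.add((nr, nc))                     # same-colour stone joins the group
                  stack.append((nr, nc))
      return group, liberties

  # A black stone in the corner, surrounded by white: zero liberties, i.e. captured.
  board = {(0, 0): 'B', (0, 1): 'W', (1, 0): 'W'}
  print(len(group_and_liberties(board, (0, 0))[1]))       # -> 0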


Afaik this is already mitigated in the newest KataGo networks, and Chinese engines are much stronger and plausibly don't have this issue. Also, if I remember correctly, that attack only works on KataGo because the weights are public, so it would not work against the strongest closed-source engines.

They achieved a <10% win rate against other engines https://goattack.far.ai/transfer#contents, so the strategy is not that generic. Edit: actually it was 66% against another bot, https://goattack.far.ai/human-evaluation#human_vs_lz4096 but they had to bring the visits down to 4096, which I assume means that at a "normal" visit count the bot would still win.

Still, that paper is extremely interesting, consistently triggering suicidal behaviour in a super-human bot.


Although technically correct (the best kind), that is a weak point. In a more formal sense, "playing" a 2 player game means something like approximating the move in a game between 2 min-maxing oracles. Humans can't do that anywhere near as well as KataGo.

A human can, technically, beat a superhuman Go AI. But the process is clearly playing the opponent rather than the game; the moves are obviously weak. Humans aren't winning by playing good moves, the challenge being posed to the AIs isn't intimidating at all, and they will defend against it sooner or later.
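
To be concrete about "min-maxing oracles": exact play means evaluating the entire game tree, something like the sketch below (the `game` interface here is hypothetical; with Go's roughly 10^170 legal positions this is hopeless for humans and engines alike, which is why both can only approximate it).

  # Minimal minimax sketch for a two-player zero-sum game. `game` is a hypothetical
  # object with is_over(), score(), legal_moves() and play(move) -> new state.
  def minimax(game, maximizing=True):
      if game.is_over():
          return game.score(), None                    # score from the maximizing player's view
      choose = max if maximizing else min
      results = [(minimax(game.play(move), not maximizing)[0], move)
                 for move in game.legal_moves()]
      return choose(results, key=lambda r: r[0])       # best (score, move) for the side to move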


It's hard to describe the insanity that took place in Korea during this game.

The beauty in a computer saying "fuck you, I'm going to win, this isn't a poetry slam, all I need to do is beat you by a single point" and demolishing the best opponent humanity had to offer.


See, that's the other essential aspect of this moment that most people miss, or dismiss as a weird quirk or whatever. AlphaGo's utility function was to maximize its chance at winning Go, NOT to win quickly. So it got into a position where it knew it would win, and then dicked around indefinitely because why not. Honestly, more than a few people here probably find this too relatable.


My favorite part of watching the match was how the computer tripped everybody up with its endgame. It would have a large lead, but slowly it would concede one point after another, which no human would do. At some point someone explained that it does this because its goal is not to win by a large margin but instead to win by any margin no matter how low but with maximum probability.
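
A toy illustration of the difference between the two objectives, with made-up numbers (AlphaGo's actual estimates come out of its search, not a table like this):

  # Two hypothetical candidate moves with invented win probabilities and margins.
  candidates = {
      "aggressive": {"win_prob": 0.93, "expected_margin": 15.0},
      "safe":       {"win_prob": 0.98, "expected_margin": 0.5},
  }

  by_margin  = max(candidates, key=lambda m: candidates[m]["expected_margin"])
  by_winrate = max(candidates, key=lambda m: candidates[m]["win_prob"])

  print(by_margin)   # "aggressive" -- keeps the big lead, the move a human would usually play
  print(by_winrate)  # "safe"       -- what a pure win-probability objective prefers

Maximizing win probability happily trades away points for certainty, which is exactly the endgame behaviour that looked so strange to human observers.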


"In the last days of humanity we came to the bitter realization that there had never been any chance of winning against the machines. We thought we were just one breakthrough away from winning at any time, one brilliant inspiration and we would win. And that's when we realized how truly evil the machines were, they let us have hope where there was none"


I went to a talk by Rob van Zeijst about the match of Lee Sedol and Alphago and he was saying that after move 37, the push along the fourth line would have been better for Lee Sedol. And also, move 78 should not have worked, or at least not that well. This was also noticed by the real time commentary at the time, IIRC.


"I thought DeepMind was just prediction engine; at Move 37 I realized that Machine is creative, at least within Go."

—Lee Sedol (9d)


A winning strategy against the AI that Ender Wiggin could have thought of and executed would be:

1. The human has to play perfectly.
2. The human has to play perfectly and quickly.

The premise being that an AI with less time to calculate its moves could give the human an advantage.

https://en.wikipedia.org/wiki/Time_control#Byo-yomi
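
For anyone unfamiliar with byo-yomi, the clock logic is roughly this (a simplified sketch; real rule sets and implementations vary):

  # Simplified byo-yomi: once main time is gone, each move must fit in a fixed
  # period; using up a whole period consumes it permanently.
  class ByoYomiClock:
      def __init__(self, main_time, period_length, periods):
          self.main_time = main_time            # seconds of main time left
          self.period_length = period_length    # length of one byo-yomi period
          self.periods = periods                # periods remaining

      def spend(self, seconds):
          """Charge thinking time for one move; returns False if the player has timed out."""
          if self.main_time > 0:
              used = min(seconds, self.main_time)
              self.main_time -= used
              seconds -= used
          while seconds > 0 and self.periods > 0:
              if seconds >= self.period_length:
                  self.periods -= 1             # whole period burned
                  seconds -= self.period_length
              else:
                  seconds = 0                   # move made inside the period; period is kept
          return seconds <= 0 and (self.main_time > 0 or self.periods > 0)

  # Roughly the shape of the 2016 match settings: two hours plus three 60-second periods.
  clock = ByoYomiClock(main_time=2 * 60 * 60, period_length=60, periods=3)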


It’s just a matter of upping its compute resources. Not some major gotcha.


Maybe we should replace chess clocks with a power use counter, and you get a certain amount to consume through the game.


KataGo (and Leela) and the "blue dot" are changing human-vs-human games too. Interestingly enough, I will say this about KataGo: you can consistently beat it with a +6 handicap. I don't think you can say that about pro players.


Yeah this speaks to the fragility of current machine learning techniques. It wasn't trained to play handicap go, so it's quite bad at it.

Similar to what is being called hallucination in the LLM area.


Nit: KataGo does train on handicap games, nowadays with up to 5 handicap stones. In the KataGo write-up "Accelerating Self-Play Learning in Go" from 2020, Appendix D says it trains on handicap games with up to 3 handicap stones; looking at the current version of the KataGo code, that has since been raised to 5.


How would a pro who never played handicap go fare?


Probably the same way top 10 GMs do giving Queen odds. They’ll still stomp most lesser players.


If you're 2d or so you can beat pros at 6 stones. Is katago more beatable than that? I know it's not amazing at high handicap, but I wouldn't think so.


I recently watched a video where chess grandmaster Magnus Carlsen seemed to be extremely adept at recalling previous games played by past masters. Prima facie, he appears to be the most advanced at recalling and computing potential future moves (in parallel). That seems to be something computers will definitely beat you at, especially given that chess (and most rule-based, move-based games) are path dependent, i.e. the space closes down quickly and moves towards the end are more critical than those at the start of the game.


Can someone explain the move it made and why it was such a great move?


The AlphaGo move seemed like a bad move to humans but then turned out to be amazing many steps later.

The human move was one AlphaGo had judged incredibly improbable, and it ended up giving the human the upper hand.


Move 37 from AlphaGo is very interesting. I was watching the game live and was stunned to see it. It is the kind of move you might see from people who have just barely learned the rules of the game and learned about a "shoulder hit" and tried to implement it improperly. These beginners would naturally be told "do not shoulder hit on the 5th line". A little bit about these terms:

Lines in Go are counted from the edge of the board. Here's a visual of the 3rd line for example: https://senseis.xmp.net/?ThirdLine

The 1st line is uninteresting. The point of Go is to surround territory. You cannot surround any territory on the 1st line. Players try to avoid playing on the first line until the end game.

The 2nd line is called the "line of defeat". It really only "catches" 1 point of territory (the point on the first line). If players take turns playing on the 2nd and 3rd lines next to each other, with the 2nd line player taking 1 point of territory, and the 3rd line player taking no territory but outward influence, it is considered a great victory for the 3rd line player because center influence is generally counted as worth 2 points per stone of influence. This is a loose count, because it's not actually any real points, but generally accepted as reasonable. Here's a visual: https://senseis.xmp.net/?TheSecondLineIsTheRouteToDefeat

Side note: "Influence" is the term used to describe how stones facing toward the center affect the flow of the game. They don't give direct points, but a skilled player can use their influence throughout the game to control the direction of the game and thus gain points in the future.

The 3rd line is the "line of territory". Each stone here gets about 2 points of territory. Players are usually happy to be able to make moves along the 3rd line, especially if they can do so while doing something else, or while maintaining control of play.

The 4th line is the "line of influence". Similarly to the 3rd line, players are often happy to be able to play moves along the 4th line because stones on the 4th line will be advantageous throughout the game. While plays on the 3rd line often don't give influence (because their influence is easily countered by 4th line plays by the opponent), players are happy with the territory they provide. Similarly plays on the 4th line don't give territory (the territory can somewhat easily be scooped up by plays along the 3rd and 2nd line), but players are usually happy with the influence the 4th line provides.

Thus the 3rd and 4th lines are the most common lines for play. 3rd line for when a player wants territory, 4th line for when a player wants influence.

The 5th line is very much approaching the center of the board. https://senseis.xmp.net/?FifthLine . While it gives similarish influence to the 4th line, it is even easier to scoop out territory from under it. Usually players avoid playing on the 5th line unless there's a specific reason such as strengthening a position or pressuring an opponent. It's not an unplayable move to play on the 5th line in general, and some players experimented with playing more on the 5th line, but it's not considered as valuable as the 3rd and 4th lines.

A shoulder hit is a tactical move where a player pushes their opponent from behind. Usually it turns into a move where both players end up trading moves along 2 different lines. https://senseis.xmp.net/?ShoulderHit

As such, shoulder hits have historically been very common on the 4th line. This happens when a player has a stone on the 3rd line and their opponent plays an attack move on the 4th line diagonal to it. Often both players will take turns from there strengthening their position along the 3rd and 4th lines. The 3rd line player takes territory and the 4th line player takes influence. This is often considered a fair trade.

But AlphaGo played a shoulder hit on the 5th line. This looks like a rookie mistake because it forces the opponent to take territory on the 4th line. If both players take turns building from there, the 5th line player gets "2 points" of influence while the 4th line player gets "3 points" of territory... for every stone played on these lines! This is the kind of move beginners are commonly warned against: "do not shoulder hit on the 5th line". It is a mistake. Most people just learn not to consider it.
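
To make "lines" concrete: the line of a point is just its distance from the nearest edge, counted from 1. A tiny sketch (the per-stone values are the loose heuristics described above, not exact rules):

  # Which "line" a point sits on: distance from the nearest edge, counted from 1.
  def line(row, col, size=19):        # row, col are 0-indexed board coordinates
      return min(row, col, size - 1 - row, size - 1 - col) + 1

  # Loose values from the explanation above: 2nd line ~1 point of territory per stone,
  # 3rd line ~2 points of territory, 4th line ~2 points of influence, 5th line similar
  # influence but easier to undercut for territory.
  print(line(1, 5))    # 2 -> "line of defeat"
  print(line(2, 5))    # 3 -> "line of territory"
  print(line(3, 5))    # 4 -> "line of influence"
  print(line(4, 5))    # 5 -> the line of AlphaGo's move 37 shoulder hit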

I hope this helps :)


Very interesting! Thank you!


Would be nice to have a diagram of the moves in question in the article lol


> Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But at the same time, it's flawed. It can't do everything we humans can do. In fact, it can't even come close. It can't carry on a conversation. It can't play charades. It can't pass an eighth grade science test. It can't account for God's Touch.

That didn't age well


I think this still holds some water. The go bot was excellent at beating really good go players because it had a ton of data on high-level go games. When it encountered someone intentionally playing a moronic strategy, it was trounced. It just didn't have enough data on bad players, so it lost to an obviously flawed strategy.

I think that's the huge flaw in all of these ml systems. They don't build fundamental understanding. We're brute forcing it in a way, but perhaps we're losing something in the long tail.

E: https://arstechnica.com/information-technology/2022/11/new-g...


This was a flaw in the original AlphaGo, but the subsequent AlphaZero (https://en.wikipedia.org/wiki/AlphaZero) trained entirely from self play with no prior information. So essentially it _does_ build fundamental understanding.

I think the ability to learn by self play (essentially in a closed room without external training data) is where the line between "fundamental understanding" and "regurgitating information" from these AIs lies.
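
In outline, the self-play recipe looks something like the sketch below (heavily simplified; every helper is passed in as a placeholder, and none of this is DeepMind's actual code):

  # Sketch of an AlphaZero-style self-play loop. `net` maps a position to
  # (move probabilities, value); `mcts` searches using the net; `train` does
  # gradient updates. All callables are supplied by the caller.
  def self_play_training(net, new_game, mcts, sample_move, train,
                         num_iterations, games_per_iteration):
      for _ in range(num_iterations):
          examples = []
          for _ in range(games_per_iteration):
              game, history = new_game(), []
              while not game.is_over():
                  visit_counts = mcts(net, game)         # search guided by the current net
                  history.append((game.state(), visit_counts))
                  game.play(sample_move(visit_counts))
              z = game.outcome()                         # final result, e.g. +1 / -1
              # Each position is labelled with the search's move distribution and the
              # eventual result -- no human games appear anywhere in the data.
              examples += [(s, counts, z) for s, counts in history]
          net = train(net, examples)                     # fit policy to counts, value to z
      return net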


There isn't really any difference between self play and no self play in terms of "fundamental understanding" and "regurgitation". It's the same training scheme just with different data.



I understand that the next test is about passing an elementary school science test? It is good to set concrete goals. Not so long ago, computers couldn't beat amateur Go players.

I'm not saying whether computers will think or not, just that we have new challenges before the Turing test.


But humans failed to find an algorithmic solution for Go. All they could do was throw a lot of data at it and get a bunch of coefficients, without discovering the underlying rules.

Same with drawing images and understanding language: this is not solved yet.

This is like showing an answer on an exam but failing to explain how you got it. I doubt you could get away with that.


Well, we humans also failed to find an algorithmic description of how our own brains play Go. I mean, the ways AI and our brains work are both mysterious. Surely at different levels/layers, but both share the mystery.


I'm not sure I understand you there.

> But humans failed to find an algorithmic solution for Go.

Sure we have algorithmic solutions for Go, they're just not very good.

> All they could do is to throw a lot of data and get a bunch of coefficients without discovering underlying rules.

That's not completely true either. The special thing about ~AlphaGo~ AlphaZero* was that it learned by playing itself instead of learning from a pre-recorded catalog of human games (which is the reason for its - for humans - peculiar playstyle).

Now I'm not sure how you're arguing that a neural network trained to play Go doesn't understand the "underlying rules" of the game. To the contrary, it doesn't understand ANYTHING BUT the underlying rules.

Explaining why you did something isn't always easy for a human either. Most of the time they couldn't say anything more concrete than "well, it's obviously the best move according to my experience" without just making stuff up.

*Edit: mixed up AlphaGo and AlphaZero


By "underlying rules" I meant not rules of Go, but a detailed, commented algorithm that can win against human. Not a bunch of weights without any explanation.


It is possible that there is no algorithm understandable by normal humans, or by humans at all, in the sense of a typical textbook algorithm like quicksort.

In other words, the algorithm may simply be very long when written in a relatively small programming language.


Wait, we know the underlying rules, we have an explanation. You can read all the coefficients.

We don't understand the explanation, though it is correct. Not sure if the problem here is with the capabilities of the examinee or with the examiner.


Imagine you go to your job at the bank tomorrow and, instead of well-documented, maintainable, formatted code, you see gibberish. And your neural coworker tells you that if you cannot understand it, that's just a problem with your capabilities; he only refactored it to improve performance. That's the situation with machine learning today.


The thing is, it's not gibberish. A sufficiently small language model can be understood by humans: https://twitter.com/karpathy/status/1645115622517542913

The explanation is perfectly sensical, just too complex for humans to understand as the model scales up.

The thing you're looking for - a reductive explanation of the weights of an ANN that's easy to fit in your head - does not exist. If it were simple enough to satisfy your demands, it wouldn't work at all.


Yet, when a master player makes a decision what move to play, they often have concrete reasons for it, that they discuss in after game analysis. They evaluate some advantage or chances higher than others or some risks greater than others and calculate specific sequences ahead to be sure to solve a subproblem correctly and base their decision on that.


Banks don't typically attempt to solve NP-hard problems.

Meanwhile, things like stock markets do, with things like partial future prediction, where all possible outcomes are not calculable in finite time; hence they use things like ML/AI.


Not only has GPT-4 passed an elementary school science test, it is outperforming 95% of human test-takers on a range of exams. https://twitter.com/emollick/status/1635700173946105856?lang...


Will Smith's memorable character in I, Robot: Can you compose a symphony? Can you turn a blank canvas into a masterpiece?

Sonny: Uh... yes?


Sonny: Yes! Rapidly and repeatedly. [then add the actual quote:] Can you?

Human: ...

Oh how the tables have turned.


"Cool. Create a 4x4 matrix of symphonies, with 'vivaciousness' on the X axis and 'dramatic themes' on the Y axis, then listen to them all and tell me which one's best."

Sonny: "...oh god"


2/4 (generously) is not bad for 7 years! And it's not like we are ever going to get to 4/4... I give it a C-minus on the Evergreen Scale.



Honestly that's pretty bad.


In one release, OpenAI made all the people at DeepMind feel forgotten about.


I'm sure the folks working on protein folding are losing sleep over role-playing chatbots.


For me, that move in game 2 marked the official start of the Singularity, although in one sense it's all the same exponential curve we've been riding since, well...

That's the neat thing about exponential curves, you always feel like you're at the fun part of them.


Maybe for you it did, but not for the rest of the world.

The singularity is defined as a moment in time when the A.I. improves itself to such a degree that humanity can no longer keep up.

Throwing more compute power against a single opponent is not the same thing. How would this computer fare against the top 10 best players collaborating (or even the top 90-100)? I would bet it would lose big time.


There is no chance that humans can beat the best Go AI anymore, not since the arrival of the AlphaZero paradigm (which was trained in the absence of human game records, and beat the version that beat Lee Sedol essentially 100-0).

It is unlikely, also, that a committee of players would be significantly better than a single master, due to lack of coherence -- but that's an interesting idea! I wonder if a committee of the top 100 go players playing a game by vote could beat someone in the top 10 more than 20-0 or something; I doubt it -- it might even go the other way (that the single player would win the series).

I don't think this counts as the real "start of the singularity" because AlphaZero was not able to alter its own algorithm, but rather just adjust its weights.

Something more akin to being in the long march toward general AI.

As a personal note, the whole issue of large LLMs' capacity for intelligence, beauty, humanity, morality, logic, etc. etc. was softened in my mind and heart by witnessing with rapt attention this epochal shift in computing.

I had held Go up as a paragon of human brilliance and beauty -- to see that standard fall was a complex process of grief and discovery for me, which I feel has better prepared me for understanding and appreciating the emergence of LLMs.


It has been tried in chess at least. https://en.wikipedia.org/wiki/Kasparov_versus_the_World


This is a different kind of setup. I'm not sure if the idea of 2-3 super GMs able to consult with one another has been tried but given the estimated rating difference I doubt it would matter. The difference is estimated around 800 points, or the difference between a strong untitled player and Magnus.


Like many other crafts and arts where machines can do better, Go has a deeper role in being transformative for the human learning it -- in the case of Go, developing strategic thinking, being able to make decisions balancing long term and short term gain, uniting reasoning and intuition, an arena for exercising emotional equanimity.

Winning, I think, is secondary to this. It's a useful measure of how one has progressed in that transformation, but I think the lessons and principles from Go that I can apply in guiding my day-to-day life are more valuable.


4-vs-1 games have been played by the strongest players in my club. They say it added around one stone of strength.


> How would this computer fare against the top 10 best players collaborating (or even the top 90-100)? I would bet it would lose big time.

It would destroy them in the same way. The marginal value of each additional human brain quickly approaches zero (or perhaps even negative as the team tries to communicate/collaborate).

The top 10 chess grandmasters in the world working together could not beat the best (or even "mediocre") chess engine. Not even close. They're practically playing different games.


> How would this computer fare against the top 10 best players collaborating (or even the top 90-100)?

Against the AlphaGo of 2015? They might win, but probably not (I think you're overestimating how much collaboration would help). Against today's AlphaGo/KataGo/FineArt/etc there's literally zero chance, even with a two stone handicap. Same goes for 100 GMs playing collaborative chess against Stockfish.

(that said, I agree calling this the singularity is overkill)


DeepMind's big thing with the game of go, and in general to be honest, is Reinforcement Learning (RL), a branch of ML that until very recently was mostly ignored by industry, and only now gets love because of its perceived utility in some parts of the GenAI tooling chain.

I think even with that taken into account, RL has only reached a tiny fraction of its potential. We have focused so much on supervised and unsupervised learning for so many years, and then been wowed by LLMs, that we have only seen RL start to impact industries in self-driving/flying vehicles, and we forget about all the other potential.

The thing about RL that people don't seem to understand is that it is mathematically proven to find the optimal control policy.

In the context of go, that means the only way it can be beaten is through variance (or "luck", if you prefer). As there are no dice or random elements in go, the top players in the world basically have to be optimal in every move to get a draw. And then again, and again, and again.

And that's the best they can do if the RL algorithm has stopped learning - it's found an optimal strategy, and it can't be beaten, only matched.
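
To be fair, the classic "provably finds the optimal policy" guarantees are for the tabular/dynamic-programming setting; deep RL as used for Go approximates this and gives up the exact proof. A toy value-iteration sketch on a made-up 3-state MDP shows what the tabular guarantee means:

  # Toy value iteration on an invented 3-state MDP, illustrating the kind of
  # convergence guarantee tabular RL / dynamic programming enjoys.
  # transitions[state][action] = list of (probability, next_state, reward)
  transitions = {
      0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
      1: {"stay": [(1.0, 1, 0.5)], "go": [(1.0, 2, 2.0)]},
      2: {"stay": [(1.0, 2, 0.0)]},   # absorbing state
  }
  gamma = 0.9

  V = {s: 0.0 for s in transitions}
  for _ in range(1000):               # repeated Bellman backups converge to the optimal values
      V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                  for outcomes in actions.values())
           for s, actions in transitions.items()}

  policy = {s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
            for s, actions in transitions.items()}
  print(V)        # optimal state values for this toy problem
  print(policy)   # the greedy policy w.r.t. V is optimal here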

Think about all the optimisation and control problems out there that could benefit from this. And yet still we seem to think it's like supervised/unsupervised learning and only "accurate" to ~90+%, and so it doesn't get the attention it deserves.

Or perhaps I'm a dreamer and an optimist and you're right.

I would happily take the other side of that bet though. At even money, I have all the EV, I'm confident of it.


I agree with taneq actually. Just after the match I attended an impromptu lecture by a professor who also had some go knowledge, and at the time I really felt like I was witnessing something new and important. In retrospect I still think of that as the kickoff of the current AI wave.


At Go? Humans cannot win anymore, the AIs are _very_ good.

Katago can give handicap stones to pros and win. It's as much better than pros as pros are better than unserious amateurs.

It's not even a matter of compute power; KataGo is very good even with 0 playouts (i.e. the raw network, with no search at all).


I don't know much about go, but AI is so much better at chess than humans that there is no number of humans you could throw at the problem to beat the engine.


You are correct. The closest thing is called "centaur chess", where a human and a computer work together. At least as of 2021, a human and computer combination could outperform just a computer, indicating that a human still provides value to a chess engine.

I believe in Go that is no longer the case, however. A human provides no additional value to a Go engine.


> How would this computer fare against the top 10 best players collaborating(or even top 90-100)? I would bet it would lose big time.

That's why you shouldn't bet money on things you don't know about...



