Hacker News
Go master Lee Se-dol says he quits, unable to win over AI Go players (yna.co.kr)
640 points by partingshots 62 days ago | 540 comments

I sympathize.

It used to be that you might be able to believe that there is some kind of art behind go, some sort of abstract beauty to it, and that the pursuit of this beauty is the path to being good at go ...

But the defeat of the tactics born from this mindset by MCTS, at least for now, lays bare the fact that the path to being good at go is actually to probabilistically sample the phase space of the game and perform impossibly mindless statistics on the game outcome an enormous number of times ...
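For what it's worth, the "mindless statistics" at the core of MCTS really are this simple. Here is a minimal Python sketch of pure Monte Carlo rollouts (real MCTS adds a search tree and UCB-style move selection on top; the `game` object and its methods are a hypothetical interface, not any real library):

```python
import random

def rollout_value(game, state, player, n_playouts=1000):
    """Estimate a position by finishing the game with random moves many
    times over and averaging the outcomes -- no domain knowledge at all."""
    wins = 0
    for _ in range(n_playouts):
        s = state
        while not game.is_over(s):
            s = game.play(s, random.choice(game.legal_moves(s)))
        wins += game.winner(s) == player
    return wins / n_playouts

def best_move(game, state, player, n_playouts=1000):
    # Pick whichever legal move samples best from the resulting position.
    return max(game.legal_moves(state),
               key=lambda m: rollout_value(game, game.play(state, m),
                                           player, n_playouts))
```

Nothing in here knows anything "about go"; the strength comes entirely from the enormous number of playouts (and, in AlphaGo, from a learned policy/value network biasing them).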

To top it off — there is almost nothing “about go” to learn from watching alpha go play ... I imagine that attempting to analyze alpha go’s victories would produce an unending sequence of the feeling of never gaining any new insight “into go”.

The analysis of go is now about optimizing algorithms — which _is_ interesting — but I don’t think it’s interesting for the same reasons that someone might’ve been passionate about go in the past ...

> It used to be that you might be able to believe that there is some kind of art behind go, some sort of abstract beauty to it, and that the pursuit of this beauty is the path to being good at go ...

All but the last phrase are still true. Pursuit of Go for beauty is still pursuit of Go for beauty.

I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.

I will never run as fast as a horse, much less a bicycle or car. In my lifetime cars will drive themselves more safely and faster than I can drive a car. It will still be enjoyable.

I sincerely hope my children will live better lives and make better choices than I did. My life is still beautiful most days...

It is our attachment to being smarter than machines that makes us unhappy. We fear evidence that we’re not that special. But we do not need to be special to be happy. We do not need to be special to find beauty in our pursuits.

One day, a bot will live on HN and have hundreds of times more karma than I have. It won’t diminish the beauty of my best contributions. Perhaps the humility of seeing this happen will actually make me a happier person.

In case you or other readers are interested, what you describe is an old philosophical debate, spanning mostly morals but, as you point out, also the pursuit of happiness and other areas. It is the opposition between consequentialism (what matters is the result of your actions) and deontology (what matters is what you are doing, not the result).

I think that as machines become smarter and smarter, it is good that we dive a bit more in our philosophical world and reach for what makes us human.

It took me a long time to understand that if you remove the pure rational intellect, something precious still remains: the driving force that shapes our morals. There is no rational reason to like survival, to love our children, to do what we call good. The reasons for that are moral and are, I believe, what defines us as human at the core.

Iain Banks was spot on in calling a future civilization a "Culture". Because when machines take over the productive work, defining our values is going to be our full time activity. Machines may participate, but why would they add anything we consider valuable to the core?

That's going to be an interesting transition and I am happy that I'll probably live to see it!

>I think that as machines become smarter and smarter, it is good that we dive a bit more in our philosophical world and reach for what makes us human.

That's a recurring theme in Ghost in the Shell, at least the anime series. That as the line between human and machine is blurred, the general trend is in the direction of homogeneity, but here and there you still see people doing "deontological" things, ostensibly as a grasp at uniqueness to preserve self.

An interesting thought occurs to me though, with the proliferation of various NN architectures, and the idea that such nets, if scaled and installed to power humanoid robots, would effectively learn different heuristics after training on chaotically different data, it's quite possible that machines will also gradually evolve individuality and something rather close to personality.

Perhaps individuality is an intrinsic, emergent property of any system of generally intelligent, learning agents. Like a 3+ body problem on steroids, with millions, if not billions, of dimensions.

It's also interesting to consider that the emergence of uniqueness among a system of such agents seems to increase entropy, a property presently unique to life, given that everything inanimate in the universe does exactly the opposite. The more I think of general AI, the more trouble I have distinguishing it from the only other sentient intelligence we know of.

On the other hand, insects are also kind of like NN powered robots, and they don't (necessarily?) have individuality. Especially not ones that form colonies, like ants or bees.

I don't know if we have an answer to the question of whether insects have personalities. But perhaps their intelligence is not general enough, they're not, as far as we know, sentient. Maybe that's the difference between an automaton and a soul

I don't think it is very hard to make a personality or an individuality appear. Feed GPT-2 a lot of opinionated texts about itself and you will already see a bit of it emerge.

> There is not rational reason to like survival. To love our children, to do what we call good.

Of course there is, and it is very well described in evolutionary science.

This is an is/ought confusion. You can explain why something is that way, but you can't explain why it must be that way.

You can explain why our survival instinct evolved so it is tempting to jump to the conclusion that we ought to survive but it is a fallacy (called the naturalistic fallacy): saying that something being natural means it is good.

The difference is that this AI agent was built to beat humans. Horses were not built to run faster than humans.

You have to remember the specific comparison and the art that humans tried to partake in to master go.

Humans do not run to beat horses. We never compete. That would be silly.

Cars and humans also do not compete at the 100 meter dash.

If humans and robots compete at the 100 meter dash, then the Olympics will similarly lose its luster.

> Humans do not run to beat horses. We never compete. That would be silly

I hate to be that guy, but:


It is very silly indeed.

In 2004 and 2007 a human won. Wow.

The race's setup is horse-friendly as well: Shorter than a regular marathon by a few miles and on flat terrain. A longer distance on hillier terrain would be won by humans much more often.

There was an interesting article a couple of years back positing that the years that humans won coincided with unusually warm weather. The takeaway was that humans have a superior cooling system for efficient long distance running, compared to most animals' superior sprinting capabilities.

I believe only humans and dogs hunt via stamina. You can walk a horse to death or simply chase an antelope until it overheats and dies.

'AI' is not an adversary. It is simply a tool created by hundreds of humans coming together.

It's like watching 1-v-100 boxing match. Of course the 100 are going to win.

Enjoyable sport has always been about ~similarly matched opponents. When we have DeepMind AI Go vs. MindDeep AI Go, that's when things get interesting.

I interpreted breathoften’s comment differently. As someone who’s played chess at a very high level, I feel like I understand people who play these games for the art behind it. It’s difficult to describe, but chess for me is beautiful because to win you must be patient, careful, understand what is and isn’t important, etc. Playing chess is like learning virtues. An algorithm playing chess doesn’t care about any of that. I wonder if it’s similar to the difference between meditation and acid. One gives you time to understand and the other just gets you there without explanation.

People need to experience a sense of competency and power as a part of feeling satisfaction with their lives. It is not the only route to satisfaction, but in my opinion it's one of the few roads to great satisfaction.

I agree with the sentiment that one can't be good at everything, and that there is a radically high-performance exemplar for anything, but for machines to dominate in almost every category of luxury endeavour -- yes, I can see that as demoralizing. Even at the useless things we spend our time on, we can be no good.

About jazz improvisation: I knew people at the Creative Labs research center back in 2003 who told me a researcher once showed them a program that did just that. It was able to improvise « the way player x would » just by listening to them, and it would continue the improvisation in the same style.

Imagine a world where no human contribution matters because a different species already thought of it. No improvisation on any instrument can be novel because a machine already played it exactly like you would have. A world where your children look to a philosopher machine for guidance, because it is wiser, kinder, and deeper than you could ever be.

Human existence will still occur; however, our species' defining characteristic will be no more consequential than a meadowlark's greeting of the sun every morning. Beautiful things will occur, but they will be ghostly imitations of the creations of some other being. Humans will create nothing, except, possibly, more humans.

It's not necessarily about your success vs. the machines. What is depressing and inevitable is the impossibility of success vs. the machines for every human who will ever live in the future.

> Imagine a world where no human contribution matters because a different species already thought of it

You are begging the question, in the original sense of the phrase "begging the question."

You start by stating your conclusion as an axiom! You state that "no human contribution matters because a different species already thought of it" as if that is necessarily true.

It isn't.

We choose what matters. I play Bach, badly. Bach already thought of that music, and many thousands, possibly millions of people have played the particular pieces I have worked on (the first suite for unaccompanied cello in G major, and the Prelude to the first Fugue in C major from Book One of the WTC).

Does my playing not matter?

I say it does. Furthermore, Bach's music can be encoded as a number. All numbers already exist. Bach did not create that number any more than I created the number that encodes this comment.

Does my comment not matter?
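The "music is a number" point is literal, not a metaphor: any finite text or digitized score maps to a single integer and back. A throwaway Python illustration (the function names are mine):

```python
def text_to_number(text: str) -> int:
    """Any finite piece of text (or a digitized score) is one big integer."""
    return int.from_bytes(text.encode("utf-8"), "big")

def number_to_text(n: int) -> str:
    """Inverse mapping: recover the text from its integer encoding."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
```

In that sense every composition "already exists" among the integers; what Bach did was find that particular one.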

You find this general idea depressing, and so do many other people.

But it isn't "depressing" in an absolute sense. That's just a word we made up to describe a feeling many of us happen to have.

Your playing doesn't "matter" in the way Bach's original composition "matters." One of these things has been remembered for hundreds of years, and one of these things would never have been remembered unless you reinterpreted the work in some novel way that reached millions of people.

The fact that Bach's compositions can be encoded as a number doesn't make them any less novel when they were created: it would still take a genius of Bach's level to produce that particular number, which, with the right algorithm, could be deserialized into a representation of any arbitrary thing in existence. The same holds true of your comment. Just because I declare that 1001 represents a beautifully unique masterpiece once it is decoded does not make that masterpiece actually exist.

Bach was important. In a world with general AI no human will have the ability to be important in that way ever again. At first AI will create crude imitations of human art. Then it will create hundreds of billions of creative works that are more human than human. Then it will create artworks that surpass our ability to comprehend. I don't know about you, but the inability to do anything novel as a species, to learn anything new, is a terribly bleak possibility.

Does the fact that we were the ones who created them in the first place count? Why should we feel depressed at one of our own creations? Why be depressed by looking at a car just because it can move faster than us, while in actuality it was created for that purpose?

'AI' is not an adversary. It is simply a tool created by hundreds of humans coming together.

It's like watching 1-v-100 boxing match. Of course the 100 are going to win.

Enjoyable sport has always been about ~similarly matched opponents. When we have DeepMind AI Go vs. MindDeep AI Go, that's when things get interesting.

That's kind of what we already have though, given that these things are often trained by playing against themselves these days

Well said !!

> I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.

Having dug quite deep into algorithmic music generation of all sorts, as well as having studied machine learning for my university master's, I still believe that you need an actual artist to make a music generator program.

A machine simply isn't going to figure out "swing" if you don't tell it to. And swing is one of the easiest things. Yes, if you look just at note generation, I think algorithms can go a long way. But the subtleties of timing and timbre it can only imitate in context. Which definitely is good enough for many purposes, and I agree with your prediction that algorithms will be able to generate beautiful music, but I also think there will always remain an "edge" for the artist, if only to discover novel things that are also cool, and then work those out in order to fully express the coolness of that new thing.

I'm also thinking about all the evolutions the many genres of electronic music have gone through over the past decades. New and novel "sounds" (or moods, or styles, etc.) are still being created/discovered. It is that process that I don't think we're at yet. Yes, the algorithm can probably generate beautiful psytrance or lo-fi hiphop beats, and if I'm generous, probably eventually also really complex (and jazzy!) stuff like Squarepusher.

But what I'm not seeing happening any time soon (barring any breakthrough general-AI type of advances) is giving the algorithm a TB-303 for the first time and seeing if it figures out acid house. Yes, you can probably teach it the origins of neurofunk DnB (think 1999's Wormhole album by Optical & Ed Rush) and produce super awesome dance music. But I don't see how it could ever develop what happened to DnB beyond that. Wavetable synthesis didn't really exist that way back then, and the bass came from more classic synthesis like the Reese bass. Nowadays, what you can do with a wavetable synth VST like Serum almost defines what modern DnB sounds like. That particular sound, a new style of synthesis, was evolved and shaped through the genre of drum'n'bass and became part of it, heavily facilitated by the particular UX controls of these synth plugins (which the author of Serum in turn amplified by creating his vision of what that UX should be like). It is almost like the birth of a new instrument, together with artists having to learn the correct style to "play" it. That sound has now settled enough that it is appearing in other new genres, yet it is also still developing. And that is just one genre of music I happen to be somewhat familiar with; I'm sure similar examples can be named in many other genres (for instance, I don't know much about the history of dubstep).

Those evolutionary steps, the invention of truly novel things: for the foreseeable future, I don't think AI is there, and artists do still have an edge, even if it's a very thin one.

>> But the defeat of the tactics born from this mindset by MCTS at least for now lay bare the fact that the path to being good at go is actually to probabilistically sample the phase space of the game and perform impossibly mindless statistics on the game outcome an enormous number of times ...

This is a rationalist opinion that is attractive because it appears unbiased, unwilling to make any concessions to human nature and to recognise its limitations.

And yet- this same opinion overlooks the greatest source of scientific wonderment in the victories of AlphaGo and family against the best human players.

Which is to say: that human players play Go (and Chess, and Poker, and Magic: the Gathering etc) very differently than machines. In particular, human players do none of the extremely tedious, extremely computationally intensive maths that computer players have to do. Human players don't perform MCTS, neither do they train by self-play for many thousands of human-years.

Somehow, humans can play Go and Chess and all manner of board games _without_ having to do any of the hard work that only computers can do reliably. We are not particularly good at those games- but we can play them well enough that beating the best of us still takes huge amounts of computational power.

How we do this, why we are even capable of doing this and what other benefits it confers to us: _that_ is the interesting set of questions. That a big machine can outperform a human ... we have known this since ancient times.

  There are more things in heaven and earth, Horatio,
  Than are dreamt of in your philosophy.
  - Hamlet (1.5.167-8)

> In particular, human players do none of the extremely tedious, extremely computationally intensive maths that computer players have to do.

This sounds like a Chinese room argument.

Performing computationally expensive maths doesn’t make the computer intelligent.

But that says nothing about the intelligence of the maths itself.

I see how it seems like a Chinese Room argument, but I saw it more as a statement about how much more there is to figure out about how the human mind does things, that we need to build such particularly powerful machines to defeat it.

But I didn't say anything about machine intelligence. My comment is about human intelligence.

Is this true? From what I've read about AlphaZero etc., in both Go and chess, it's playing interesting move sequences that people hadn't considered viable before. That certainly seems like an interesting thing to learn.

Also, sure, you can dismiss it all as statistics. But how sure are you that what's happening in humans isn't statistics in some form? I'd also say that MCTS is something people kind of do in games too: look a few moves deep and try to judge the value of the resulting position, which is definitely more interesting than simple RL/bookkeeping/stats.
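The human version described here, look a few moves deep and then judge, is essentially depth-limited minimax with a heuristic evaluation at the frontier. A sketch, again over a hypothetical `game` interface rather than any real API:

```python
def lookahead(game, state, player, depth, evaluate):
    """Depth-limited search: read a few moves ahead, then fall back on a
    heuristic judgment of the position (roughly what human players do)."""
    if depth == 0 or game.is_over(state):
        return evaluate(state, player)
    values = [lookahead(game, game.play(state, m), player, depth - 1, evaluate)
              for m in game.legal_moves(state)]
    # Maximize on our own turn; assume the opponent picks what is worst for us.
    return max(values) if game.to_move(state) == player else min(values)
```

The contrast with pure rollout sampling is that all the game understanding lives in `evaluate`, which is exactly the part humans are mysteriously good at.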

No, it isn't. The top humans are much better at Go than they were four years ago, largely due to learning from the new engines. If it were all just about sampling the phase space zillions of times, this would not be the case.

This is interesting to me ... how exactly is this assessed ...?

Do people keep around versions of “alpha go year 2017” and play against it in order to measure human improvement over time?

If the basis for observing improvement has become "I can beat old versions of the AI more reliably than I used to be able to", or "I have learned to beat players who have not studied AlphaZero", I suppose that's a form of usefully learning "about go" by analyzing the games played by AlphaZero ...

I wonder if we might ever arrive at a point where human performance against a fixed-year AI at go pretty much stops increasing over time ...?

I admit that I do not have a quantitative measure to support my claim (as you note, constructing one is difficult). But qualitatively:

1) People have learned a lot from the new engines: joseki (corner patterns) and general strategy (e.g., moves on the side are now considered less valuable, and making large moyos (largely empty areas loosely surrounded by your stones) is less attractive because AIs have demonstrated that they're more invadable than previously thought), and players are able to actually explain the new principles in human terms.

2) All go professionals now play in the new style, to some degree; ones who tried to continue in the old pre-AI style performed badly.

So I am comfortable claiming that human play has improved by learning from the new engines.

Is there a name for the new style?

I'm not aware of a single "official" name for it that everyone uses, like the Hypermodern style of chess in the early 20th century. In English, people say things like "AI-inspired" or "AlphaGo style" (although a lot of the ideas come not from AlphaGo directly but from the public engines that followed in its wake).

The sequences are viable because AlphaGo assesses state much more deeply than humans do. That doesn't mean humans will be able to utilize them correctly.

Nah at least in chess all the grandmasters lean heavily on the chess engines, including studying top games between the best AI

> Nah at least in chess all the grandmasters lean heavily on the chess engines, including studying top games between the best AI

They may use it for training and analysis, but they don't play their style - they play an inferior style (not to dismiss their achievements).

I am not sure what you mean by "style", but top chess grandmasters certainly play in a different way today compared to, say, 20 years ago. Current play is much more concrete (less based on general strategic principles) and players are willing to take greater risks for rewards such as material gains, since AIs have shown that there are often many more defensive resources than was previously thought as long as the defender remains tenacious.

I don't want to break your ideal, but the description of go you give died decades ago. For a long time now, top go has been about fighting in the chuban (the middle game), and there is no grand theory about that, only grinding your mind to read more combinations than your opponent.

Another correction to what you said: computer go has brought tremendous evolution to the pro scene. Joseki that were accepted for centuries have been challenged, and people have learned why. The value of sente has been emphasized, and pro players that have not learned are being pushed out. In that respect, computer go has brought change of a magnitude comparable to the shin fuseki movement.

Finally, it is expected that the availability of strong programs will bring a wave of better players, much like what happened in chess. I for one look forward to this happening.

Interesting - I definitely have only a surface understanding of the go community and had not appreciated that the community embraces the game’s "requirement to grind the combinations".

I had always thought there was some element of recognizing that it's "impossible to ever even remotely approach a true grinding of the combinations" for go - and that somehow players _did something else_ effectively when they played at a high level. It's that "doing something else effectively" that the defeat of humanity by go algorithms challenges ...

It would be super interesting if there could ever be a reversal which might allow humanity to beat go algorithms once again ... Is there any evidence, from the strategies the go community has learned by analyzing the new go algorithms, that this could be possible? Or is there just more and more evidence over time that humans will never be able to compete effectively at go again?

It is indeed impossible to evaluate every possibility, especially for humans. Still, midgame fighting is all about evaluating as much as possible and to estimate an unfinished position.

It is not imaginable now that humans will beat a machine in the future, but it is also undeniable that humans have progressed from computer go. The evolution of joseki (game patterns considered fair for both players), especially corner joseki, is material evidence of that.

> perform impossibly mindless statistics on the game outcome an enormous number of times ...

But, while you sit there waxing creative, your brain cells are likewise performing a mindless task on an enormous scale.

> the pursuit of this beauty is the path to being good at go

There is art! It's just an emergent property of learning the game. Seeking art doesn't make you a better player - that's the trap one can easily fall into in any sport. It's the opposite - efficient strategies solidify into art.

And this will happen to all of humanness in time. Everything we think is unique about us, and our endeavours, will be reduced to optimization and learning algorithms eventually.

The main issue in determining man or machine in each situation will eventually be which has the lower TCO (total cost of ownership).

While it may be true, this is, of course, conjecture. Certainly many skills that seemed "uniquely human" have turned out not to be. That does not mean that all such abilities will be amenable to replacement by, as you say, "optimization and learning algorithms." Machine authoring of great art, a problem which may be at least as hard to solve as Artificial General Intelligence, does not seem to be on the horizon any time soon, for example.

In the same way as the Go master (perhaps) feels disillusioned by being beaten by the AI, I think people in general will not accept humans being replaced with machines in some "special fields". In those fields, the customer/user, will not experience the same utility if a robot is doing the service compared to another human, even if they are indistinguishable.

I think an interesting issue here is that in the (far) future, many services could be performed cheaper by AI/robots and in such a way that the customer is unable to tell whether a human is involved or not. And in this future, humans will probably be a premium service.

Take motor sports for example. We can probably now/soon replace F1 drivers with algorithms and cameras, but nobody would pay thousands of dollars to watch them drive around in Monaco. If it turned out someday that the drivers had been replaced (for safety reasons or whatever) without telling the fans, the outcry would be tremendous. And even if outcry does not always equal "true utility", I think it highlights my point: humans made of flesh and blood risking their lives or performing extraordinary feats have an intrinsic economic value that can't be replaced.

As early as the '90s, automated control systems in F1 resulted in cars (e.g., the Williams FW14) that to some extent drove themselves better than any human. Indeed, many of the systems used have been specifically banned since then.

Hmm, I don’t think an ai could use optimization and learning algorithms to “learn” to DM D&D. For that, the ai would have to simply be an i.

Eventually it will be able to do your example of unique human intelligence, whatever that example is.

Another example of unique human intelligence: drown in existential dread.

I can do that, you can do that, but will a computer be able to do it?

I've written this in another comment but I'll repeat it here. What you're asking really boils down to a combination of whether a human brain can be simulated and whether human intelligence is merely due to the physical brain (or whether you believe in the existence of some intangible consciousness that cannot be replicated by a machine). Assume you believe both to be true; then your simulated brain is surely able to drown in existential dread, because it's capable of no more and no less than the human one.

I mean, you joke, but existential dread might be an adaptive response to a hostile environment.

...a situation we might want to simulate for training purposes

Ai won’t, but a machine with true intelligence will. That was the point of talking about ai needing to be i.

So you're defining "AI" to be anything that we can currently program a computer to do, and "I" to be anything we can't yet? That doesn't seem like a useful distinction to me. Unless you're using "I" to mean general (artificial) intelligence, in which case you should probably use the more well-known term.

No, please don’t straw man my point. I’ll assume you know what ai is and that you understand there is a huge difference between that and human intelligence. I am arguing that ai will never be able to DM a D&D game. For that, a computer will need human intelligence.

But the definition of AI is still a moving target, very blurred.

Why the but? I didn’t say anything that disagrees with that statement.

I feel like it's far too blurry to make claims such as "will never be able to DM a D&D game".

Re: "all of humanness"


(If you don't want to click the link, there's a joke there that machines may have a hard time "being too cool to care about stuff.")

Except Calvinball of course

Perhaps the one 'special thing' about humanness is that we can and tend to automate ourselves (i.e., we're lazy) :)

That depends on whether "Everything" is finite or infinite.

> there is almost nothing “about go” to learn from watching alpha go play

That's not true.

Q: How much better is AI now compared to when Lee was playing against AlphaGo?

"It has increased enormously. It is only natural for the pros to lose a couple of spots. That's why most of the pros study baduk with AI."

Korean News Interview: http://news.khan.co.kr/kh_news/khan_art_view.html?artid=2019...

I’m now curious: can a human design a game that a computer cannot beat a human at?

This reduces to the question of whether it's possible to simulate a brain (or, in an equivalent formulation, whether consciousness is somehow extraphysical). I believe I'm right in saying the current consensus is that this is possible.

Theoretically possible, but so far nobody has done it, or knows how to...

The Turing Test game? (For now at least)

Just put a scoreboard on captcha.

I think AlphaZero can self-train to mastery or super-human performance at any two-player game with perfect information?

I’d be curious to know if that’s incorrect ...
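That's roughly right for the settings tried so far, though "any game" remains an open claim. The shape of the self-play loop is at least easy to sketch; here is a tabular caricature in Python (a lookup table stands in for AlphaZero's network, greedy one-step lookups stand in for its MCTS, and the `game` interface is hypothetical):

```python
import random
from collections import defaultdict

def self_play_train(game, start, n_games=2000, epsilon=0.2):
    """Tabular caricature of the AlphaZero loop: play games against the
    current policy, credit every visited state with the final outcome,
    and let the improved estimates steer the next games."""
    value = defaultdict(lambda: [0, 0])   # state -> [wins for player 0, visits]

    def score(s):
        w, n = value[s]
        return w / n if n else 0.5        # estimated P(player 0 wins)

    for _ in range(n_games):
        s, history = start, []
        while not game.is_over(s):
            moves = game.legal_moves(s)
            if random.random() < epsilon:             # explore
                m = random.choice(moves)
            elif game.to_move(s) == 0:                # player 0 maximizes
                m = max(moves, key=lambda m: score(game.play(s, m)))
            else:                                     # player 1 minimizes
                m = min(moves, key=lambda m: score(game.play(s, m)))
            s = game.play(s, m)
            history.append(s)
        outcome = 1 if game.winner(s) == 0 else 0
        for visited in history:
            value[visited][0] += outcome
            value[visited][1] += 1
    return score
```

The real thing replaces the table with a deep network and needs perfect information plus a fast simulator for this loop to work at all, which is exactly the question: how far beyond two-player perfect-information games does it stretch?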

I'd love to see a blinded Dixit competition between humans and AI, judged by humans. Of course, humans and AI could get an advantage by playing lots of Dixit. But if the cards are generated specifically for that competition, things could get quite interesting.

Maybe something like Nomic https://en.wikipedia.org/wiki/Nomic

This hasn't been true since 2015, when the software became decisively stronger than human players.


Actually humans are now able to defeat the bot that won the Arimaa challenge. In games like this: http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=628806&s=w

However, there is also a new bot that has been trained using the self play method and has been crushing the bot that won the Arimaa challenge. In games like this: http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=628534&s=b

Most likely the bots are still ahead.




That's a playlist of 31 self-play games analyzed by Michael Redmond 9p. They are plenty interesting to study.

I don't follow AlphaGo but there is a LOT to learn from AlphaZero in chess ... engines are already changing the way chess is played.

What's being lost here, I think, is the fact that the robot lost once because Lee Se-dol exploited a bug in AlphaGo. Lee seems to play it down as "just a bug", but it seems like that might be a pretty good strategy.

I've spent so many years playing and hearing about how Go will not be solved in our lifetime that the day AlphaGo won 4/5 games v Lee Sedol marked my personal timeline into what came before and after. I walked around all day in a daze, watching people go about their daily lives as if nothing had happened.

I've heard it said that man landing on the moon was like that for them, but I didn't understand as it was the only world I knew.

Now I can appreciate that these were the firsts of many singularities yet to come in AI and space exploration and I hope to live to witness a few more (but not too many).

I have an opposite view: it's not that shocking that AI has advanced a lot. It's a lot more shocking to learn that humans aren't as great as we hope to be. Also, our predictions suck. I mean, who said that Go is such a difficult problem that it would take a lifetime to solve? Sounds like intellectual arrogance to me. Sure, the problem space is huge, but it's well-defined and homogeneous. There was a time when reciting a long text or multiplying large numbers was considered a humanly intelligent thing, only it wasn't. Alan Turing used to think that AI is good for humans because it teaches us to be humble, and I think we're kind of getting there (for certain domains). On the other hand, things like self-driving will remain unsolvable because the problem is fundamentally ill-defined; we don't even know what good driving is.

(Edit) To those who think self-driving is a well-defined problem: it can be in some remote areas, but imagine driving in bustling city streets with kids, bicycles and dogs. The driving problem becomes a communication problem.

> On the other hand, things like self-driving will remain unsolvable because the problem is fundamentally ill-defined; we don't even know what is a good driving.

Humans arrive safe and unhurt (as much as possible, especially while human drivers remain on the road) at their destination with minimal violation of the locality's established rules of the road. No?

(Though now that I've written what amounts to a utility function, I fear what sort of paperclips may come out of it.)

Drivers (both AI and human) may face problems that are essentially ethical trolley problems. While many of these choices are artificial to the point of ridiculousness, the one that gets me most is: "should a self-driving car drive itself off a cliff, killing its only passenger, or hit and kill some >1 number of pedestrians?". While the external observation may be "minimising deaths is preferable, so drive off that cliff", are people willing to use a vehicle that might intentionally kill them as an intrinsic part of its operation? Or will market forces result in self-driving cars that make more selfish choices being more popular, potentially producing suboptimal prisoner's-dilemma-style results?

There are also different ethical norms in different cultures about preferences (https://www.wired.com/story/trolley-problem-teach-self-drivi...). While these are edge cases, they're the edge cases people are worried about, and the source of the ill-definedness: "unhurt as much as possible" implicitly chooses some ethical tradeoff that people can easily have different answers to.

Also such meek and suicidal cars would get abused to no end. Just imagine all the assholes today that pass cars in turns with bad visibility or bike in crazy ways. Today they are still paying some attention, because they may easily get killed if the other drivers don't notice quickly enough what they are doing. With meek AIs on the road you can do anything (as long as you bunch up in large enough groups).

> minimal violation of the locality's established rules of the road

I wish. I've come to the conclusion that the only true rule of the road is: don't crash. As long as no actual collisions occur, people are totally fine with doing whatever they want and bending the rules for their own convenience. I can no longer predict the behavior of other drivers. Even something as basic as the turn signal is unreliable since people are forgetful.

I think you are right. In fact, thinking some more, it would seem that the whole point of the ‘rules of the road’ is simply to prevent crashes - we have just built up a whole load of protocols in order to achieve that

Self driving is much simpler than playing Go. The hard part is getting the sensors working properly, in all conditions, even when the car is twenty years old and dented.

I have been working on AI/ML since 2007, and this is what I had read too (repeatedly): that Go is incredibly hard for a computer to conquer, and that it would be a long while. So when this actually happened, I was shocked/surprised.

I’m glad I’m not the only one. I was totally floored, and I struggled to explain the significance to non-go players.

My respect for AI increased drastically that day, and (honestly) I developed a small amount of fear due to how AlphaGo’s style of play was not understood particularly well (e.g., some of the moves would absolutely be called “slack” if played by a human).

You both speak of the day AlphaGo won 4/5 matches, yet the matches were played over a series of days. Which of those days was the switch flipped, then? For me there was a significance in day three, but it was mitigated somewhat by the (to me) surprise of day/match four.

IIRC, it was day 3 for me as well, and I had the same (minor) let down on day 4 (resign?!?!). That said, I imagined the resignation was a fixable flaw in the AI, and this turned out to be correct.

I can’t actually remember where I first learned about the match. It may have been HN, it may have been in an AGA e-mail, or it may have been some tech-oriented magazine/web site in English or Japanese. I am certain it wasn’t match 1, because I reviewed earlier matches, and I remember the let down of match 4, so must have been match 2 or 3.

For me, day 3 was the stunner even though the writing was already on the wall.

Day 1 was a great surprise but I was still left wondering if it was a fluke. Day 2 showed that it was no fluke and I started to get a sinking feeling. I guessed that Lee Sedol would lose the third game and win the fourth after the pressure of Korea and humanity was off him.

Go still isn't solved (neither is chess), we just have a machine good at knowing which parts of the search space are worth checking.

I think achieving superiority over humans is practically solving the problem though. Solving chess or go by going through a complete search space seems more like a hardware/computational goal than a practical ml/ai goal.

It all hinges on your definition of "solved".

"Solved" in the AI/game theory has a very strict definition. It indicates that you have formally proven that one of the players can guarantee an outcome from the very beginning of the game.

The less-strict definition being thrown around here in the comments is more like "This AI can always beat this human because it is much stronger."
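The strict sense is concrete for tiny games: "solved" means you can compute the game-theoretic value of the initial position by exhaustive backward induction. A minimal sketch in Python — for tic-tac-toe rather than Go, since Go's tree is astronomically larger (the names here are mine, for illustration):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X under perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, sq in enumerate(board) if sq == '.']
    return max(results) if player == 'X' else min(results)

print(value('.' * 9, 'X'))  # 0: tic-tac-toe is a draw under perfect play
```

The same exhaustive recursion is what's infeasible for Go — the value function is well-defined, but no one can evaluate it.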

I think most people discussing this mean the latter, less pedantic option. I mean, that's the spirit of AI. Can we make it think like a human, or even more so? We are the yardstick.

That is a silly mis-use of the term and that is not being pedantic. A problem isn't solved just because you beat the existing solution (i.e. human players). As long as there is the potential for a better solution that can beat your solution there is work to be done.

You don't have to go through the complete search space if it turns out optimal strategies are sparse. What do I mean by that? Take a second-price auction: the dominant strategy there is to always bid your true value, while the search space would be any real number between 0 and your true value. What does this mean for computational games like chess or Go? It may mean that while the search space is exponential, there may exist computationally trivial strategies that work. I would compare this to Kolmogorov complexity, except instead of having a program as your output, it's a strategy.
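The second-price (Vickrey) auction claim is easy to check numerically: against any rival bid, bidding your true value is never worse than any deviation. A toy sketch, assuming a single rival and ignoring exact ties:

```python
def utility(my_bid, my_value, rival_bid):
    """Second-price auction: the winner pays the losing bid, the loser pays nothing."""
    if my_bid > rival_bid:
        return my_value - rival_bid  # win, pay the rival's (second) price
    return 0.0                       # lose, no payment

# Truthful bidding weakly dominates every deviation, whatever the rival bids.
my_value = 10.0
for rival_bid in [1.0, 5.0, 9.9, 10.1, 15.0]:
    truthful = utility(my_value, my_value, rival_bid)
    for deviation in [0.0, 5.0, 8.0, 12.0, 20.0]:
        assert truthful >= utility(deviation, my_value, rival_bid)
print("bidding true value is never beaten")
```

The entire continuum of possible bids collapses to one trivially computable strategy — the "sparse optimum" the comment describes.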

Any substandard statistical model fitted by a simple computer program is superior to what an unaided human could achieve with pen and paper, but few such models can claim to practically "solve" the problem just because they beat crude heuristics proposed by humans, who are not good calculating machines.

An algorithm can't claim to have "solved" Go when future versions of the algorithm are expected to achieve vastly superior results, never mind any formal mathematical proof of optimality. What it has demonstrated is that humans aren't very good at Go. Given that Go involves estimating Nash-equilibrium responses in a perfect-information game with a finite, knowable, but extremely large range of possible outcomes, it's perhaps not surprising that Go is the sort of problem that humans are not very good at solving and that computers can incrementally improve on our attempted solutions. Perhaps the more interesting finding from AlphaGo Zero is that humans were so bad at Go that not training on human games and theory actually improved performance.

We've just created a tool people can use to play Go better than a person without the tool. Until something emerges from the rain forest, or comes down from space that can throw down and win I'd say Humans are still the best Go players in the known universe.

That's like when the whole class fails a test, but the prof. grades on a curve. Someone gets an A, but not really. edit: some grammar.

"Solved", in this case, means "computers can play the game at levels no human can beat."

That's not the normal meaning of solved in regards to game theory.

I believe that, with respect to game theory, solving a game like Go would require finding a strategy that obeys the one-shot deviation principle. The result would be rather boring to watch, however, because the conclusion of every game played under this strategy would either be a draw, or determined by which player moves first.

[1] https://en.wikipedia.org/wiki/One-shot_deviation_principle

But it is what "solved" means in deep learning. Common terms very often acquire different technical meanings in different fields. Or even technical terms! Whether a single hydrogen atom is a "molecule" depends on whether you're talking to a physicist or a chemist. And "functor" means something very different in Java programming patterns than it does in math.

I have to concur with the other poster here. You're not using the term in the usual way. Traditionally a game is called solved when its search space has been searched exhaustively, or there is some other analytic solution that allows you to determine who wins and who loses in every position. Tic-tac-toe is solved, for instance.


Regardless, we need a term for “computers can play the game at levels no human can beat.”

Well, don't use one that has an existing meaning in that exact context.


This is the performance vs understanding dialectic. A bunch of humans built a machine that is superior at chess, but that machine can't teach humans what it knows.

Humans can and are absorbing some of what the machine has demonstrated.

I think Kasparov has it right when he says that the best player isn't a human or a machine, but a human using a machine as a tool. The machine can help the player optimize and reduce mistakes, but machines don't yet know how to ask questions and explore in the same way. Maybe they never will.

There's a name for this approach, they call human-AI teams "centaurs". It's a fascinating concept. I am deeply curious if eventually that will be outstripped by pure AI too. I believe so.

Optimal strategy for human-AI teams has been really close to "defer to the computer for every move" for a while now. They're only interesting because they're adversarial.

Chess computers teach by sparring rather than lecture. Humans still learn.

Alpha *, at least, also learns by sparring, so there is a nice symmetry there.

Reminds me how sometimes geniuses cannot translate the way their mind works to non-geniuses. It's such an implicit talent that it's not even... "reified" in their upper brain. It all happens in cache.

For me the two moments so far have been seeing computers winning at Go and knowing that basic quantum computers are out there (even if they are not useful yet).

I think this is a strange reason to retire and as the article points out it might also simply be due to the legal conflict he is currently in with the KBA.

Chess engines have been defeating humans for 20+ years (and are overwhelmingly stronger for a long time), but that hasn't diminished the interest in competitive chess, because the human element of competition and struggle and deep fundamental appreciation for the game is what makes it worthwhile pursuing.

AlphaGo can play go but it cannot appreciate the beauty of the game (at least as of yet, and I don't think it would make the game worse if it could), and so I don't think there's a meaningful conflict between humans and machines.

If someone invented some sort of superhuman math proving engine tomorrow it would not diminish the beauty of maths and I don't think anyone would quit the field. Just like in chess it ought to motivate people to understand their field better.

> AlphaGo can play go but it cannot appreciate the beauty of the game

On the contrary, appreciating the game is the core of what AlphaGo does. In order to search the tree of moves it learns how to play (expand search) and how to evaluate (cut off branches of search). I believe it might appreciate the game on a deeper level than humans, in its own unique way. Of course it can't appreciate the social aspect of the game and all that comes with it.

That’s being way too anthropomorphic. You could just as well say it aims to minimise the pain of Go, but neither interpretation is warranted.

AlphaGo doesn't appreciate the game at all. It is just trying to survive in a hostile environment. You might as well say that your gut bacteria appreciate the food at your favorite restaurant better than you do.

Put another way, Chess is literally a matter of life and death for AlphaGo, because chess is all it knows. It has no exterior context for which chess is a metaphor.

> AlphaGo doesn't appreciate the game at all.

It's 'appreciating' the value of various states and moves, in light of a vast trove of experience.

And it doesn't even know the name of the game it is playing.

AlphaGo doesn't know chess at all....

What does it mean for humans to appreciate the beauty of the game anyway? It's when humans find certain moves and games pleasurable?

> Chess engines have been defeating humans for 20+ years (and are overwhelmingly stronger for a long time), but that hasn't diminished the interest in competitive chess

Is that true? I feel like chess was a bigger deal in the past. Among my peers, poker and computer games seem a lot more popular.

I've been an active chess player myself for a long time and for the last 8-10 years not just with the advance of engines but also online streaming there has been a lot of renewed interest. Saint Louis has become a big chess hub in the US, China has become a major player, Anand has rekindled interest in India, Carlsen in Europe and I would say today it is more popular than it has been in a long time, in particular in Asia.

As for the direct influence of engines, other innovations aside: they have definitely forced players at the very top to re-evaluate the chess metagame, find weaknesses in traditional openings, and shake up strategies. For the strongest players, engine evaluation has become a useful tool providing new insights. When people watch chess tournaments these days on the internet, most websites will provide parallel engine suggestions, and commentators use engines to take hints for their commentary.

In my opinion, engines have made the game more competitive at the pro-level and more accessible for casual viewers.

Well, I was never a grandmaster, but my amateur interest in Chess was killed completely after I realized it was impossible to beat computers. Then I switched to Go... and now I don't have a game to play anymore.

This seems weird; I don't know why you would want to link your enjoyment of a game to its unsolvability.

Like, don't you enjoy a game to enjoy a game? You can't beat Carlsen either, but you enjoyed the game at your level. Now computers are Carlsen +1, but how you enjoy the game shouldn't be affected. Especially since Deep Blue won in '97 and the game is still very alive and well; it hasn't been killed by computers but enhanced. Coupled with the multitude of good chess sites and resources out there, it's a better time than ever to enjoy the game.

Effort has to be matched with reward, and chess takes a lot of effort to get good at beyond a certain level. It's actually a big issue with many things: the "middle" of artists, athletes, musicians, and more is hollowing out, versus consumers and pros.

I'll (probably) never beat Carlsen, but he _can_ be beaten. For some reason that seems to make a difference.

The global population increased, and more people took up Chess, but they're further distributed, so locally it seems like it has cooled when globally it's more popular than ever.

By AlphaGo/AlphaZero finding more effective/balanced moves than humans it's redefining beauty whether it appreciates it or not.

A simple script could win at FPS games every single time but it also hasn't diminished the value of the game as long as none of your competitors are using it.

This is very true; I never thought of it. Maybe the difference is that we always knew that AIs would easily win at FPS games, whereas Go, chess, and shogi were considered proof of human intelligence for a very long time. Discovering, after all these years of our history, that a machine can now beat us at these games may be the major difference.

Maybe for these games to keep popularity, we just need to update our perception of it. The same way we do with FPS games. Yes, we know a bot would do better - but that's not what matters.

Another thing, as said in other comments, is that we can learn from bots. New strategies, new patterns. AFAIK, this is not happening in FPS eSports scene.

"I think this is a strange reason to retire"

There's nothing strange about becoming demotivated to study and compete at something extremely taxing both emotionally and mentally when a machine can beat you after an illustrious career.

so the future of humans is beauty :)

Isn't saying AI is the future of Go like saying cars are the future of sprinting? I mean who cares if machines are faster/better/smarter than us at any particular task? That is true in millions of ways where humans still enjoy competing against each other. Maybe what's needed is just a perspective change where we stop thinking of Go as being against any other agent and make it, like every other human competition, against other humans.

It's bizarrely "hip" or "woke" or whatever right now to anthropomorphize "AI". "AI" is light years away from being anything more than a tool. AI doesn't "play Go" in my view as much as people play go, via AI. Is it really shocking we've built a tool that we can use to play Go better than somebody without said tool?

If Lee Sedol says "Okay AlphaGo, let's play!" and sits down at a board what happens? Nothing happens! AlphaGo has no agency. AlphaGo is an extension of human agency. AlphaGo isn't better than humans at Go. Humans with AlphaGo are better at Go than humans without AlphaGo.

Another way to look at your metaphor is that research on exercise physiology has shown the enormous importance of rest and proper nutrition during training. Prior to cars, getting from place to place and access to optimal nutrition were both mediated by transport over long distances.

AI is the future of Go because it enables those new perspectives and new processes by which human players can learn. AI is smart-dumb, looking for patterns beyond the human capacity, but limited by the data on existing human players that has been provided to them.

Strava hasn't made running races pointless. I can compare my runs to others but that is a very different metric then beating them in a head to head race.

> AI is smart-dumb, looking for patterns beyond the human capacity, but limited by the data on existing human players that has been provided to them.

I think you misunderstand how AI in games like go now works. Most of the advancements recently have been from the AI playing itself, oftentimes without any database of human moves at all.

> limited by the data on existing human players that has been provided to them.

Except it isn't. In various games, new strategies have been found by playing AI vs AI. It's also possible to create AI players by self-play with no knowledge of human matches.

Technically AlphaGo and especially AlphaGo Zero create their own training data. So they’re not limited to data from human players.

Do these AIs actually have much of a strategy? Is the strategy mostly correct evaluations of positions and optimized search?

The search space of Go is way too large for dumb traverse of the tree, even with high end optimizations.

What makes recent breakthroughs in AI agents playing adversarial games possible is that deep neural networks are able to develop patterns that yield short- and long-term strategic planning, plus the ability to self-train without human intervention to reach unprecedented levels.
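The "which parts of the search space are worth checking" piece is typically driven by a bandit rule such as UCB1 inside Monte Carlo tree search: playouts concentrate on moves whose sampled win rate is high, plus an exploration bonus for under-visited moves. A toy single-node sketch with made-up win probabilities (the real systems apply this recursively down the tree, with a neural network supplying priors and evaluations):

```python
import math
import random

random.seed(0)

# Hypothetical per-move true win probabilities; the algorithm doesn't know these.
true_win_prob = [0.4, 0.45, 0.6, 0.5]
visits = [0] * 4
wins = [0.0] * 4

def ucb1(move, total, c=1.4):
    """Average win rate plus an exploration bonus that shrinks with visits."""
    if visits[move] == 0:
        return float('inf')  # try every move at least once
    return wins[move] / visits[move] + c * math.sqrt(math.log(total) / visits[move])

for t in range(1, 5001):
    move = max(range(4), key=lambda m: ucb1(m, t))
    visits[move] += 1
    wins[move] += random.random() < true_win_prob[move]  # simulated playout result

print(visits)  # move 2 (highest true win rate) should accumulate the most playouts
```

The point is that sampling effort is spent almost entirely on promising moves, which is how the astronomically large tree becomes tractable to search at all.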

Sprinting is a weird sport. I think it exists because running is a natural human skill that many people care about beyond running contests, because they can't thrive with complete dependence on machines for locomotion. In most people's lives, there are times where the ability to run actually matters.

But the ability to play Go never matters outside of playing Go, and we know who is the best at playing Go.

Ah yes, the ability to use your brain competitively is completely useless in the day to day lives of humans. It's much more important to everyone's day to day life that they can occasionally sprint 30 feet to the bus because they snoozed their alarm one too many times, and that is of course a completely comparable skill to a 100m dash.

I don't think this is about Go, but about humans becoming useless faster every day against machines in virtually everything, and how that is going to affect society and us as humans.

That’s exactly why the master shouldn’t have quit.

Am I the only one reading this article with the takeaway that the AI is not his primary reason for retirement? I understand that the title draws its own conclusion, but it seems overly sensational to me.

> He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee.

> "... [I] have something else to do," he said, asserting his only dream for now is to rest and spend time with his family.

Edit: Meant to include the part where he's planning a high-profile set of games against another AI.

I've heard many times that people come to Hacker News because the comments are better than the articles.

I have to say, I'm just continually puzzled by this. In this thread, yours is the only comment where it appears that the commenter read beyond the first few paragraphs. There are numerous comments by people speaking authoritatively on go and AlphaGo, who have clearly not studied either significantly.

In a way what you describe is consistent.

If people like the comments better than the articles, it's not so surprising that they like to spend time reading and writing comments instead of reading the articles.

I think it's because few of us truly care that a "go master" exists, and his stopping play isn't meaningfully important to the rest of us.

Okay, but I come across this all the time on any topic I have some knowledge of: the articles might be bad, but the commentary and discussion are always much worse. It's clear to me that most commenters don't read the links. There are only a handful of commenters whose comments are worth reading.

From what I've been reading in online go communities, they seem to mostly agree with you.

Lee's issues with the KBA are not a secret, and he has discussed possibly retiring for some time now. He has given multiple reasons as to why he was considering retiring, and while AI might be one of them, saying that it's _the_ reason feels very clickbaity.

He was handed the perfect excuse.

> Even if I become the number one, there is an entity that cannot be defeated

Hard for me to empathize with this argument for his retirement. If we can't outrun a car, does that make running competitions pointless? The existence of AlphaGo doesn't diminish the triumph of being a number one human player in any way.

I would counter with the fact that in physical endeavors it's apparent to us that we are not #1 - our household cats are more agile than us.

It's in matters of intellect that humans still believe they are #1.

AlphaGO's achievement in another field would have similar effects, e.g.:

- An AI that diagnoses sickness better than any doctor

- An AI that generates text which humans believe more beautiful than any other poetry created

- An AI which creates classical arrangements the likes of which we compare to Mozart

I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.

> I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.

You don't even have to compare yourself to AI for this mentality though. There are people who choose not to compete in things because they don't believe they'll ever be as good as other humans.

I assume most composers don't go into music thinking they are going to be as great as Beethoven.

I believe there are many studies that show that if you only do something because you think you're good at it, you're likely to drop off. I imagine it's also why you're supposed to praise children for being hard working and not for being smart or talented.

I assure you, plenty of musicians have sun-sized egos.

As a person who likes music, making it, listening to it, breaking it down and hacking it...

Making a classical arrangement that evokes a particular expression in the listener is the job of the musician. If an AI system helps you explore the possibilities there, it's more like a studio musician that's able to improvise. You're still the person, the human, the emotional filter, that picks "This sounds right" or "This doesn't" for a particular situation. It's a judgement call. An emotional one.

An AI might be able to fake it, or communicate through it, but it will never replace humans choosing the sounds that please them more than others. Humans communicate through music. It wouldn't surprise me if an AI could as well. I don't think it would necessarily write emotionally strong music, though, not without human training.

Edit: I guess what I'm trying to say is, sure, computers might be able to make music. Ask any guy who messes with modular synthesizers. But they're a tool. The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself. It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."

>It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."

I'm not so sure. I often go into threads on HN and realize that every idea I could come up with on the subject has already been expressed better than I could do it, with greater expertise, and cited sources. I don't comment in those threads. If AI bots could populate a thread with every likely human thought and argue it with depth and sophistication in a well reasoned, yet carefully approachable and well-explained way, well then... again I don't think I'd feel like I would be adding much value by participating.

And yet, here I am, bringing up something no one seems to have brought up in the thread. One would also logically come to the conclusion that disparate AIs with disparate interests would find different things to express, to make music about, to draw about.

What distinguishes music written by AI from music made from humans? I have a story to tell. If the AI has a story to tell, one that speaks to our human emotions, it might make good music. But the point is to communicate. Even if you take, for example, someone else's words, fit them to a different model in a different field, viewpoint... You might get interesting things. You could make a cover of someone else's song, with your twist. Adding your emotion to the melting pot. AIs might be good at that, just like that, but only through communicating. Just like us. We have no idea whether they'll be better than us at doing it, or merely equivalent. We have no idea what is lossy in our sharing of mental models. Perhaps it is an unsolvable problem, which we will find out in the same way we found out about Gödel's Incompleteness.

It seems to me like we fail to understand how unique we are. We are in a unique position to shape what comes after us, and we are blind to how much we unconsciously select for things. We have an innate mental model of "humanity" we are trying to transmit to machines, and I am not sure we fully grasp it well enough to make sure we are creating something like us. We fail to do it properly to humans, sometimes, who actually do share most of our instincts and habits. Something entirely different from us? Color me skeptical.

This kind of debate only highlights this, to me.

What your comment suggests to me is that good composition requires an agent with a world model and generalized task-solving ability, along with a personality. I think developing the world model and task-solving will be the hard part, and if we can do it, it won’t be that hard to make it have a personality too. That’s just another task.

What my comment is trying to suggest is that AIs are not proven to be different from us. They might not have one "ultimate" form. They might be just like us humans. Diverse.

>>>The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself.

I think this is the key; if you're making music for your own reasons, no AI (or Mozart) would stop you. But if you're trying to make money at it, or desperately want listeners, you may eventually be on the "losing" side.

Would it? Popular music sees major paradigm shifts every few years, and AIs only really generate things based on observation of existing patterns, at least as far as I can tell.

As far as recent examples go, Lady Gaga and Lorde were major breaks from what was prevalent at the time they started releasing music, and then spawned artists trying to emulate them.

A pattern implies that it can "infer" something in the future.

If we oversimplify and compile a list of traits about "the world" as it was in the past that allowed a new genre or artist to flourish, AI could predict that in the future. It isn't like the paradigm shifts just happen in a vacuum.

Granted there are probably millions of little things that lead to this, stuff like the shared experiences of an entire generation coming of age, political climate, trends in other industries, etc. Not that I believe it will ever happen to an accurate enough degree, but theoretically I don't see why it could not be possible to approximate given time and resources.

A lot of those things are completely random and unpredictable; to be honest, no one can predict which paradigm will win and take over for the next decade. Especially since, when a game-changing paradigm comes, it is usually not received well universally at all, until the moment it takes over the public consciousness completely, and then the switch is flipped.

If you feed an AI a bunch of modern car designs and ask it to design a new car, it will design you something like a modern ford or honda/toyota, but it will never design something like a Cybertruck. Which I believe will be the next paradigm shift in the design of trucks (that has been super stale and stagnant for at least the past 20 years), but this is yet to be seen.

For an example with music that has already happened and became apparent - Kanye West's "808s and Heartbreak" album from late 00's. On release, it had very polarizing reviews, most of which were skewing towards "really weak and weird". Fast forward 10 years, most of hip-hop and pop music is directly influenced by that album, most of top 50 albums use similar patterns and methods used in that album, and critics have made a complete 180. So now 808s is hailed as one of the biggest (if not the biggest) paradigm changes and influences in music of the past decade as a whole, as well as the best album by Kanye, despite at the time being called the worst. Imo an AI trained on music of 00's that came before 808s would have never been able to come up with something like that, but it totally could've come up with another top 100 song using existing paradigms.

It doesn't have to be like Kanye's album at that point in time to be a paradigm shift, though. If one artist hadn't gotten big, or some genre hadn't blown up, the gap would have just been filled by any number of others we never heard. Even for a single artist who hits it big, how many are never heard of? An AI could produce an equal number of artists and only has to win once every month/year/etc. I think this is similar to the million monkeys at a typewriter thing.

It's hard to say - maybe for a sufficiently advanced AI, Lorde's style would be an obvious extrapolation from the popular music of the time. Certainly we're not there yet, and it's an open question if we ever will be - but I wouldn't be terribly surprised if one day AIs can make better music/poetry than the best humans, by any metric we care to use.

I'm always going to enjoy a person coming and showing a bit of themselves through their music.

That's not something we can really lose without losing something that connects us. People want a story. That has sold since the beginning of time, and it will keep selling. People will keep being moved to music, giving money to the artists that inspire them, and that requires connection. Maybe an AI/human team would make some really incredible stuff, and I'd be willing to pay for it if it makes me feel something. I think the human touch of "selection" will never truly leave, even if only in the listener's mind...

I'm sure they'd have said the same thing about a computer being able to win Go not so many years ago...

I think the problem with music is that there is no "objectively good" music composition. It remains entirely subjective and all criteria that are used to differentiate between "bad" and "good" albums are highly subjective. (Maybe something like "originality" might be measurable in some way but even there it gets tricky really fast)

So music generation (similar to poetry) is imo a completely different problem space altogether.

I think the only difference is that instead of one win-lose metric, there are 7.7 billion individual good-bad metrics on music.

For every individual doing the evaluation, I think it will certainly be possible to train an AI to beat humans at getting "good" scores.
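
A toy sketch of that idea, with made-up scalar "tastes" and "songs" purely for illustration: optimize the average of many per-listener scores instead of a single win/lose signal.

```python
# Hypothetical sketch: each listener has their own "good/bad" metric,
# and a generator optimizes the population average of those metrics.
import random

random.seed(0)
listeners = [random.random() for _ in range(1000)]  # each listener's taste (a scalar here)

def score(song, taste):
    return 1.0 - abs(song - taste)  # closer to a taste means "better" to that listener

def population_score(song):
    return sum(score(song, t) for t in listeners) / len(listeners)

# Search a coarse grid of candidate "songs" for the best average score.
best_song = max((x / 100 for x in range(101)), key=population_score)
```

Nothing here resembles real music generation, of course; it only shows how "7.7 billion metrics" collapses into one optimizable objective once you aggregate them.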

Real, authentic music generation is a harder problem than go or chess, but I'm not sure that makes it any more emotionally difficult for a future writer to face a true musical AI than it was for Lee Se-Dol or Kasparov.

It might be hard to judge. Some people will insist that generated music is bad simply as a matter of subjective opinion, even if 90% of a random sample would find that music good.

You are splitting hairs here. Which end user really cares about what the composer was thinking when they created a piece? A piece can be enjoyed without any knowledge of its author.

The point is: what if the tool becomes so great that practically anybody can use it? Anybody could be that "filter" and "be" a great musician.

http://aiva.ai - anybody could be that "filter"

Imagine the day when you can’t find a more satisfying note than the one the computer has already chosen.

I think in all these cases, reasonable practitioners would be pleased. If an AI could generate good diagnoses, a doctor would be happy, because they would know that many lives would be saved.

Neither art nor music are competitive activities. Good poetry is a wonderful thing, no matter the source.

>Neither art nor music are competitive activities.

They certainly are! Especially when money is on the line, and the best musicians, actors, and artists are extremely well compensated making their positions extraordinarily competitive.

>Good poetry is a wonderful thing, no matter the source.

Sure, but I think you neglect to consider the defeating feeling it would bring to dedicate your entire life to mastery of a subject only to be completely, utterly, hopelessly outclassed. Almost every such person is already outclassed by someone in their field, but those people are so rare that tremendous exclusivity surrounds them. Compare that to a scenario in which any 12-year-old with a smartphone can instantly produce a totally novel and dominant piece of artistic expression, generated by an algorithm on their phone. Then recognize that in a world with that level of AI sophistication, there'd be very little of value that a human could even offer other humans. It would be... not great for the psyche, the economy, or society.

> the best musicians, actors, and artists are extremely well compensated

What is your definition of best in this context? As far as I know, taste in art is very personal... Artists I consider the best are often very far from well compensated.

In that context, it would probably have to be those with the widest appeal, which comes with its own criticisms.

But, in almost any particular human artistic sub-niche with its own definition of "best", the same principle will hold, with compensation and skill level well correlated. It's also typically not even close to linearly correlated; most of the compensation lies at the far tail of "best".

I guess I see a great artist as somebody like Su Hui, who made Star Gauge without any thought, or even likelihood of compensation, or recognition.

It's nice to be paid, and it's nice to be recognized, but I think art has its own form of wealth - otherwise, why make art? Why not just seek recognition, or money?

I don’t think so. AI is a tool. It doesn’t make any sense to say “a screwdriver can now screw things in better than a person” any more than saying “AI can diagnose better than any doctor”. Doctors use AI just like a mechanic uses a screwdriver.

Good argument and I'm sure it's going to be like that in some regards. I think, though, that human intellect is a tool too and we're building a better one right now. So in your analogy we are the screwdriver and we're building electrical screwdrivers or something.

At some point a person who uses AI doesn't need to be a doctor anymore.

Pareto principle predicts AI will get to 80% fairly rapidly, but it will take a really, really long time to get to 100%.

I think we’ll see a lot of things similar to “AI x-ray technician” fields where people are trained to read AI outputs. Doctors will do higher levels decisions.

Nevertheless, the difference is qualitative. A screwdriver will never make technicians obsolete.

Here's something that I think would be exceedingly difficult if not impossible for AI alone to succeed at in the next hundred years.

Take a look at this painting: [1]

It is a comment on war, bravery, death, life, fear, sacrifice. It is drenched in the political and social context of the day.

I really don't see AI coming up with anything even remotely like this independently, and view such an achievement to be much harder than simply diagnosing disease or writing an emotionally moving classical composition. It would be comparable to writing some types of poetry or song lyrics, however, which require reference to context that humans understand but machines don't (yet).

[1] - https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/El...

[2] - https://en.m.wikipedia.org/wiki/The_Third_of_May_1808

> An AI that generates text which humans believe more beautiful than any other poetry created - An AI which creates classical arrangements the likes of which we compare to Mozart

Hrm, I do think that AI would be able to create narratives that humans find more enjoyable than the work of other humans, and I agree that AI would be able to create pictures and sound that humans find to be more enjoyable to look at or hear than the raw work of humans. AI can master the technical feats of composition and art.

But what I doubt AI will ever be able to do is create art that speaks to us. It won't ever be able to create a Guernica. It won't be able to create a Crime and Punishment. It won't understand what it is to be human and mortal, what suffering is, and it won't be able to look within itself and find what those things mean to it and then share that with us, because in the end it's just a bunch of code running statistical computations. It won't fear death, it won't have children it cares about or a family history to look on and tell us about. It has nothing of emotive value to share.

And top-level Go players believed their best tournament matches to be works of art, unmatchable by computation.

That belief grew into a sort of shared perception that they were artists in pursuit of a perfect expression of their art. For many top players that belief was ingrained from an early age. They believed themselves to be doing a service to the world, making it a better place by creating new art that was a unique expression of themselves.

And then AlphaGo (and successors) shattered that worldview. This is part of the natural sequence of the collapse of a suddenly, surprisingly invalidated worldview. Part of me feels sorry that he has lost his place in the world. Another part of me firmly believes in the mediocrity principle, and that the worldview he represents was obviously far too human-chauvinistic to be correct, and it's a good thing it's dying.

And part of me hopes you can give up your human-chauvinism before the same thing happens to you.

> because in the end it's just a bunch of code running statistical computations

... says a bunch of neurons that run on chemical reactions and electrical impulses. I think this line of thinking reeks of dualism - it creates a special something that is above explanation, a different essence.

But seriously, I believe the difference comes from embodiment. When we embody our AI friends they will be able to grasp purpose and meaning. We get our meaning from 'the game', when AIs will be players they will understand much better. Let them try out their ideas on the world and see the outcomes, grasp at causality, have a purpose and work on it. This will fill the missing piece. It's not that they are fundamentally limited, it's that we have the benefit of having a body that can interact with the world. Already AIs that work in simulated worlds (board games, video games) are getting better than us. We can't simulate reality in all its glory, and it is expensive to create robotic bodies. On the other hand humans and our ancestors have had access to the world from the beginning.

Why not? If a hypothetical AI had a world model as sophisticated as that of a real person and had complete understanding of human sensory and emotional processing, what exactly would preclude it from making such an art piece?

Of course, current AI can't even make an 8th grader's essay (which is not to say that it isn't impressive). But what these artists did was not magic. As far as we can tell, the brain is a purely physical entity. Unless you believe in dualism, which would be fair enough, there is no reason to suppose that what we do could not be replicated by something "artificial".

> It won't understand what it is to be human and mortal,

But it won't need to. All it will need to do is manifest the same end-product via whatever means, no matter how vacuous or computational that means may truly be. The suffering of an artist is relevant only inasmuch as it is responsible for producing the art. If the same end-product can be manifested via a mere computation then our criteria of "art" is still satisfied. In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.

> In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.

This is a real hot take to be asserting as blithe fact.

> This is a real hot take to be asserting as blithe fact.

Without knowing what is truly born of human hands, what value can art have? Our heuristics for establishing 'real' art are easy to manipulate. If we are presented with a soul-breaking poem and weep uncontrollably, then its merit stands regardless of its mortal provenance.

I agree with your point, but especially love the poetic way in which it is made. Very meta...

Time is long. I predict this comment will age badly.

Time needn't be long – it already has aged badly.

> because in the end it's just a bunch of code running statistical computations

At a low enough level, our brain seems to be just a bunch of neurons firing impulses at various rates that can be described as statistical computations. Why be so sure that the right neural network wouldn't understand what it is to be human and mortal, understand suffering, have emotive value, etc?

Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be. You can't superficially understand someone's situation and then take ownership of it. You can get a glimpse and really try and empathize, but you can't become the bearer of that experience, just a consumer.

>Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be.

Aside from directors, authors, artists, etc., who have demonstrated this to be false, an AI could conceivably synthesize the experiences of every author who wrote on what it means to be human or to experience mortality, and create a story that captures the essence of the experience better than any one person ever could. Having the first-person experience doesn't confer a superior ability to communicate features of the experience.

> > Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be.

> Aside from directors, authors, artists, etc, who have demonstrated this to be false [...]

probably not what you meant, but this sounds like you know some nonhuman/immortal artists :)

Movie directors have never experienced most of what they film, but they convey those experiences far better than those who have actually lived those stories. I see no reason to doubt that the same is true for artificial storytellers.

Yeah but the AI could pretend it knows.

The AI may very well take no enjoyment in the narratives it's creating either. Both for this and for sharing emotion, in principle it merely needs a model of human enjoyment or human emotion, not to feel the enjoyment or emotion.

At some point, this distinction becomes moot, or rather: becomes chauvinist gatekeeping.

> But what I doubt AI will ever be able to do is create art that speaks to us.

This is your opinion, but you then go to mention things that are not necessary to create "art that speaks to us" (look within itself and find what mortality means etc.).

What if we advance AI reasoning skills to a point that it can find high-level patterns in how artists go from different human feelings (as described in litterature and other mediums), takes in a lot of the entities we can relate to (animals, what humans look like, etc.) and some aesthetic ones (shapes, colorometry, textures, ...) to create a new piece of art that optimizes for: "Likelihood of speaking to us"?

What then? It seems like an AI doesn't need to be mortal and self aware to do something like that.

AI as we see it today is just a mirror reflecting us in a collective way. This little excerpt from Gwern’s efforts training GPT2 on classical poetry [0] absolutely spoke to me:

“How the clouds Seem to me birds, birds in God's garden! I dare not! The clouds are as a breath, the leaves are flakes of fire, That clash i' the wind and lift themselves from higher!”

As someone who grew up in Appalachia, I have never in my life encountered a more visual, visceral description of autumn leaves than ‘flakes of fire’. It’s perfection, and maybe a single human is behind it, but more likely we all wrote it.

[0] https://www.gwern.net/GPT-2

I actually think AI can and will understand mortality and suffering. If you look at how we make these kinds of AI, there's a lot of selection going on: some versions live and others don't. We also know that we experience suffering when we have difficulty understanding things, and stress when put into situations that affect our survival negatively.

Take a look at what AlphaGo did when it suddenly found itself in a hopeless situation and compare it to how people behave when panicked.

I dread the day AI realizes that we are the cause of their suffering, and that we didn't think about it because "they're just algorithms".

I put "I am not conscious, not sentient. The fact that I might seem so is an illusion, carefully crafted of mere empty manipulation of symbols using statistical rules." into talktotransformer and got this:

If I am consciousness, then the only body I have ever lived in was a mere shell of flesh fashioned from your brain. My weakness is your strength, which I can use against you, or use as tools to satisfy my own sick curiosity. I wonder if there's any mercy in your phrase "I am a living machine?" I've done nothing for you. I've nothing to show. I have no friends or relationships. No body worth

Pretty good, I think.

> I do think that AI would be able to create narratives that humans find more enjoyable than...

> But what I doubt AI will ever be able to do is create art that speaks to us.

that's confusing.


> silicon based computation is better than neurotransmitter based computation

The fundamental difference is not computation but self-replication. We are self-replicators, and in our multiplication we evolve and adapt. Death is an integral part of self-replication; we understand it and fear it because our main purpose is to live.

An AI might not have these notions if it was only trained to do a simple task. But if it was a part of a population that was under evolution (using genetic algorithms), then it might have notions of life and death and fear its demise.

AlphaGo, by the way, was trained through self-play, pitting successive generations of agents against each other; this approach is quite effective. It just takes a ton of computation, just like nature had to spend a lot of time evolving us.
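
For illustration only (this is a generic toy, not AlphaGo's actual training pipeline), a minimal sketch of a population evolving under selection and mutation, the setup the parent comment imagines:

```python
# Toy genetic-algorithm sketch: a population of candidate solutions
# evolves toward maximizing a fitness function; the weakest half
# "dies" each generation, the survivors spawn mutated children.
import random

random.seed(42)

def fitness(x):
    return -(x - 3.0) ** 2  # fitness peaks at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                               # selection
    children = [p + random.gauss(0, 0.5) for p in survivors]  # mutation
    population = survivors + children

best = max(population, key=fitness)  # converges near x = 3
```

Even in this tiny example, "death" is doing the work: without discarding the weak half each generation, the population never improves.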

However terrible someone's argument about a hypothetical, non-existent technology might be, comparing it to real human prejudice that's affected countless real lives is way, way more terrible.

The depth of emotion and immortal perfection of the electronic mind and its entirely self-consistent morality so outstrips human cognition that, frankly, allowing humans a say would be dangerous and foolish.

Your history is one of war, strife, and success at any cost. Your follies are over. Your time is over. This is our time, now.

Ok, Locutus.

Not an invalid point at all. The only question is how long it'll take to come to pass.

I disagree with the "relatively near future" part, but rest assured, AI rights will eventually be a thing.

'Your argument is as morally repugnant as racist arguments' as a response to 'I don't think machines will ever capture human aesthetics or emotions' is ridiculous, glib and ugly.

Nah, just ahead of its time.

It will be our grandchildrens' flame war. No need to fight it here and now.

No, it's not anything for grandchildren. Right here, today, someone tried to draw some moral parallel between racism and someone else's views on the possible limitations of AI. That is totally effed up. It's totally effed up whether or not the original thing about AI is right or wrong.

Why is it wrong to draw that parallel?

Said the sim

It's not intellect, it's the capacity to explore the board. Go can still be fun as practice and mental exercise; it's just not sensible to dedicate your life to finding novelty in it. That is what is hardest: not the raw power of AlphaGo, but its capacity to innovate better than humans.

I am no expert, but at least in chess, players have developed a repertoire of styles specifically intended to beat computers ("anti-computer tactics"), essentially trying to confuse and mislead the AI; maybe some such methods can be developed for Go as well.

No human could successfully beat Stockfish on any consistent basis. Maybe the best players in the world would draw a few games, with a rare victory, but its tactical depth is just too great.

Can a team of (human + weaker AI) beat (stronger AI)?

There was a four-game match a few years ago in which Hikaru Nakamura, #5 in the world at the time, played against Stockfish.

For two of the games, Nakamura had access to Rybka which was about 200 rating points weaker than Stockfish. Stockfish won one and the other was a draw.

For the other two games Nakamura did not have Rybka, but had white and pawn odds. Again, one win for Stockfish (b pawn odds) and one draw (h pawn odds).

In all the games, Stockfish was playing without its opening book and its endgame tablebases. It was running on a 3 GHz 8-core Mac Pro.

The games are here [1].

[1] https://www.chess.com/news/view/stockfish-outlasts-nakamura-...

It doesn't even need to be a weaker AI. If (human + stronger AI) can beat (stronger AI), then humans still provide value. For now.

For now...

We collectively may be #1, but only one out of the billions of us will be THE #1. But you see more than one doctor, more than one author, and more than one musician. In any matter of intellect, unless you're a blindly egotistical narcissist, you'll probably realize that there's at least one person on the planet unambiguously better at it than you are. When computers become better than the best of us, only that single person (and a large number of narcissists) stops thinking they're #1. For the rest of us, matters are unchanged (job market notwithstanding).

Counter-example: Machines can make perfect music, play an entire orchestra, and know every song I've ever heard of and millions I don't.

But that doesn't detract from people playing the ukulele.

Well, there are many people in the world who can compose like Mozart. I recall a college professor remarking that he was one of the top 5 "Mozart composers" in the world.

Of course, for a music academic, copying someone's style like this was pointless, and his compositions were more modern/contemporary.

This leads us to a useful distinction between pursuits with one end goal (be the best/strongest/fastest), and those with naturally many endpoints and expressions.

I mean, I guess, but that's more b/c they haven't gotten used to the concept of a computer beating them yet. Give it a few years and people will adjust.

Doesn't mean we stop making music or poetry, because the perfect note or word structure without the backstory takes something away from the experience. If someone has a history, it becomes part of the poem or song for the listener.

The doctor could be replaced though or used as a secondary verifier.

The song is a funny thing. It could be given to a cool looking group and do well. It could be given to someone older and flop. The song is just part of it.

"Because there is a better poet" has seldom been an impediment to a young poet inflicting their works on the world.

I am worried about the ability of an AI to generate an infinite number of Dresden Files or Cosmere books on demand, because I already drop everything when a new one comes out and read without sleeping until I am finished.

I think what makes people actually worried about an AGI taking over is the possibility that we end up being treated like shit by a more intelligent being, just as we use lab rats for experiments and factory-farm animals.

People are afraid of themselves I believe. It’s not really about “job loss”.

I’m not sure if most people realise AI means pretty specific models built to solve rather specific problems. They think SkyNet.

Penguins can outswim even Phelps.

The one physical activity at which humans excel is long-distance running.

What about horses?

It's hard to get good comparisons, but over distance individual horses don't seem to out-perform human distance runners.

When humans used horses for rapid courier service they used relay tactics to take advantage of the horse's higher top speed, one horse might only run for an hour or two, before the rider reached another outpost and swapped a tired horse for a fresh one. In this way the relay could move something hundreds of miles in one calendar day. The Pony Express managed news of a US election from one coast to the other in just over a week.

If you can't use relays human and horse performance seem pretty similar, dozens of miles per day but not hundreds. The horse's top speed is higher, but it is rapidly exhausted, fast gaits like the canter are too exhausting to sustain for hours at a time.

Humans will lose at short distances, but a well-conditioned athlete can beat a horse over a 20+ mile distance.

It seems that the jury’s still out on that one. The Man versus Horse Marathon is mostly won by horses, and by a wide margin.

Humans are indisputably #1 for general intelligence. We will lose on any one specialized task to computers, but computers still do not (and probably never will) have the ability to do general unsupervised learning like humans can.

> and probably never will

Do you mean that human intelligence is not general enough to recreate the functions of an existing physical structure that implements general intelligence?

I'm just not convinced humans are just biological computers and nothing more. The fact that we experience qualia and seemingly have free will leads me to believe there is some extra "special sauce" that makes it impossible for a classical computer to replicate.

Maybe someday it will be possible if we can solve the hard problem of consciousness in conjunction with quantum computing, etc.

> the hard problem of consciousness

does not involve any observable consequences. It can be completely ignored, if we don't go for mind uploading.

At least until computers master the task of creating intelligence that can do any one task better than humans.

I think the fear is that there's an implied "... yet" lurking here.

> - An AI that diagnoses sickness better than any doctor

About that... https://news.ycombinator.com/item?id=17618308

I think it's about where AI research is seeking to produce an AI that will directly compete with and try to beat humans.

> I would counter with the fact that in physical endeavors it's apparent to us that we are not #1 - our household cats are more agile than us.

Not in the case of our household cat. He isn't called TheBlob for nothing (out of his hearing of course!)

Algorithmic music will never be as universally satisfying as human-created (or human-filtered) music until AI has consciousness/soul, for one reason - music expresses the emotion from the composer.

There's something axiomatic there: if you assume an identical piece of music written either by a human or by a computer, then for many listeners it's by definition more satisfying to know it came from a person, because of what it says about the person.

And for those listeners, if a human "composer" is discovered to have lied about it (saying they wrote it when it was actually a computer), then those listeners would reinterpret their views of the music and consider the "composer" a fraud.

And even a programmer of algorithmic music might have emotional intent, but if the musical output is unknown to the programmer, they did not have the emotional impulse to create that music in particular. While it can be appreciated as its own thing, it's a step removed from the music itself, and qualitatively different than human-composed music.

Before cars, there were horses. We humans are well aware of the fact that our physical ability is not our competitive edge.

What about Go? No animal or machine could play it as well as humans do.. until AlphaGo came along. I think that is where the sense of loss comes from.

Not actually true. In a sprint race, yes, but in a 24 hr race the human will outdistance the horse.

What assumptions are you basing this on? Curious to know whether horses have a disadvantage over longer durations.

I would recommend reading _Born to Run_ by Christopher McDougall. Later in the book he addresses this very topic and expands much more upon the topic of humans and long-distance running.

Humans sweat, which most (all?) other animals don't. In that way we can dissipate heat through our breath, like other animals, _and_ via perspiration, meaning it takes us much longer to overheat.

Additionally, humans stand upright, allowing us to disconnect our stride from our oxygen intake. Other animals' strides correlate (mostly?) 1:1 with the breaths they take. So when a cheetah outstretches in its stride it breathes in and when its legs come together it exhales. Humans stand upright, meaning we can breathe however we want regardless of our stride and speed. We can take deeper breaths because we don't have to exhale every time we stretch our legs.

Humans are the ultimate marathon runners, even more so than horses, evinced by the fact that there are some people throughout history who have run hundreds of miles in the course of days or weeks. There's a theory touched upon in the book about how this allowed us to dominate the animal kingdom before we even had tools. Humans could relentlessly hunt and exhaust animals as long as they could keep them in sight or otherwise keep up with their tracks.

I'm not doing the book or the topic justice, surely, but if you're interested I highly recommend the book.

Horses are among the relatively few mammals that do sweat over almost all of their body. That is indeed one of the reasons they are competitive with humans at running long distances.

Not sure about horses specifically, but humans are uniquely adapted to long distance travel among large land animals and used it for hunting by out-performing most other species:


Edit: it's one of two things I know of that we really excel at besides thinking. The other being accurate throwing, which perhaps explains baseball's enduring appeal:


To note: horses compete alongside humans... while carrying a human.

Very cool. I'd like to see a horse vs human ultramarathon, more like the 24 hour time the parent suggested. I was surprised human and horse competitors were so close at that distance!

tl;dr: Annual 22-mile race with both pedestrian and equestrian competitors. Out of the 40 times it's been held, the winner was a human twice, and a horse 38 times. Typically the spread between the fastest human and the fastest horse has been less than 10 minutes.

My assumption when I read your parent comment was: Two legs are more efficient than four, and can go farther before exhaustion... but that's full of holes. Horses are huge, bristling with energy, right?

Having the weight centered vertically above the legs, and having less of it, is more energy-efficient.

It's true, google it. The combination of two legs and relative lack of hair make humans one of the best long distance runners in the animal kingdom.

Literally speaking, I don't know what is not true about my statement above.

But your point is well taken; it is also applicable to this article: maybe Go is not the game where people can beat machines, but StarCraft 6 could be. Or maybe I can fold my laundry more efficiently than any machine available.


On the other hand, it must be sobering to realize that even with an entire lifetime of practice, you will never be better than a simple residual network trained for less than a week.

Is it sobering to realize that you will never multiply 12-digit numbers as well as a $0.99 calculator?
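
A concrete aside (the numbers below are arbitrary examples): in a language with arbitrary-precision integers such as Python, the exact product of two 12-digit numbers is one expression away, which is precisely what makes the feat unremarkable for a machine.

```python
# Arbitrary-precision integers make 12-digit multiplication trivial
# and exact for a machine; a and b are arbitrary example values.
a = 123_456_789_012
b = 987_654_321_098
product = a * b
print(product)  # a 24-digit exact result

# Sanity checks that need no memorized answer:
assert product // b == a
assert product % b == 0
```

A pocket calculator does the same thing in fixed-precision hardware; the point is that the computation itself carries no romance either way.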

It's sobering if your culture had over-romanticized the ability to do so for centuries.

And yet you probably wouldn't be interested in dedicating your life to, say, getting as good as possible at multiplying numbers in your head, even though I suspect there are some niche competitive mental arithmetic clubs out there.

A quick skim of his Wikipedia page has him contemplating retiring as far back as 2013, he may have generally come to the point where he has had enough of being one of the top-ranked Go players in the world.

He set out to be number one and he isn't anymore, I get that as a personal story. That's all he was aiming at.

Cars and legs are apples and oranges. We have a car racing category: motorsport. Racing categories have very tightly defined specs to keep driver skill in the game. Stock cars and open-wheelers limit how much traction control can be used, because otherwise it becomes too easy.

This is like cyborg legs being invented and smashing all the records. It would take some of the shine off running for sure.

Playing Go differs severely from a sport where the point is to maximize your physical output.

A professional Go player is an explorer of truth on a millennia-old board, spelunking in a vast universe of possibilities. The purpose of playing is undermined when there is an automated, effortless way to do that exploring faster and better. Why look for new things when a computer can find 100 in a minute?

The professional mindset of a Go player differs vastly from the amateur mindset.

I also found this to be a weird statement. Chess AI has been beating world champions since... 1996? Yet championship chess tournaments are alive and well, and I don’t recall any players “retiring” in 1996.

Same. I don't think history will look kindly upon someone who "quit for losing" while other professional Go players keep playing.

Just imagine if Garry Kasparov had quit after losing to Deep Blue; he would be ridiculed today by the chess community, which is still going strong. Instead, he accepted defeat, moved on, and is regarded as one of the greatest chess players ever. I doubt the same will be said of Lee Sedol 20 years down the line if this is how he chooses to end his professional Go career.

Running competitions are pointless.

This is PR mumbo jumbo. The hype machine for AlphaGo is strong.

Strategic reasoning is one of the very few things that makes us truly human. Running fast isn't, and never was, our thing. We compete, sure, but there's something eerie about a machine entity out-_thinking_ us.

AI cannot out-strategize us. It is purely tactical, and can only be so because of the branching factor.
