It used to be that you could believe there was some kind of art behind Go, some sort of abstract beauty to it, and that the pursuit of this beauty was the path to being good at Go ...
But the defeat of the tactics born from this mindset by MCTS has, at least for now, laid bare the fact that the path to being good at Go is actually to probabilistically sample the phase space of the game and perform impossibly mindless statistics on the game outcome an enormous number of times ...
To top it off, there is almost nothing “about go” to learn from watching AlphaGo play ... I imagine that attempting to analyze AlphaGo’s victories would produce an unending sequence of the feeling of never gaining any new insight “into go”.
The analysis of go is now about optimizing algorithms — which _is_ interesting — but I don’t think it’s interesting for the same reasons that someone might’ve been passionate about go in the past ...
All but the last phrase are still true. Pursuit of Go for beauty is still pursuit of Go for beauty.
I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.
I will never run as fast as a horse, much less a bicycle or car. In my lifetime cars will drive themselves more safely and faster than I can drive a car. It will still be enjoyable.
I sincerely hope my children will live better lives and make better choices than I did. My life is still beautiful most days...
It is our attachment to being smarter than machines that makes us unhappy. We fear evidence that we’re not that special. But we do not need to be special to be happy. We do not need to be special to find beauty in our pursuits.
One day, a bot will live on HN and have hundreds of times more karma than I have. It won’t diminish the beauty of my best contributions. Perhaps the humility of seeing this happen will actually make me a happier person.
I think that as machines become smarter and smarter, it is good that we dive a bit deeper into our philosophical world and reach for what makes us human.
It took me a long time to understand that if you remove the pure rational intellect, something precious still remains: the driving force that shapes our morals. There is no rational reason to like survival, to love our children, to do what we call good. The reasons for that are moral and are, I believe, what defines us as human at the core.
Iain Banks was spot on in calling a future civilization "the Culture". Because when machines take over the productive work, defining our values is going to be our full-time activity. Machines may participate, but why would they add anything we consider valuable to the core?
That's going to be an interesting transition and I am happy that I'll probably live to see it!
That's a recurring theme in Ghost in the Shell, at least the anime series. That as the line between human and machine is blurred, the general trend is in the direction of homogeneity, but here and there you still see people doing "deontological" things, ostensibly as a grasp at uniqueness to preserve self.
An interesting thought occurs to me, though: with the proliferation of various NN architectures, and given that such nets, if scaled up and installed to power humanoid robots, would effectively learn different heuristics after training on chaotically different data, it's quite possible that machines will also gradually evolve individuality and something rather close to personality.
Perhaps individuality is an intrinsic, emergent property of any system of generally intelligent, learning agents. Like the 3-body problem on steroids, with millions, if not billions, of dimensions.
Also interesting to consider that the emergence of uniqueness among a system of such agents seems to increase entropy, a property presently unique to life, given that everything inanimate in the universe does exactly the opposite. The more I think of general AI, the more trouble I have distinguishing it from the only other sentient intelligence we know of.
Of course there is, and it is very well described in evolutionary science.
You can explain why our survival instinct evolved, so it is tempting to jump to the conclusion that we ought to survive, but that is a fallacy (called the naturalistic fallacy): saying that something being natural means it is good.
You have to remember the specific comparison being made, and the art humans tried to practice in mastering Go.
Humans do not run to beat horses. We never compete. That would be silly.
Cars and humans also do not compete at the 100 meter dash.
If humans and robots compete at the 100 meter dash, then the Olympics will similarly lose its luster.
I hate to be that guy, but:
It is very silly indeed.
It's like watching a 1-v-100 boxing match. Of course the 100 are going to win.
Enjoyable sport has always been about ~similarly matched opponents. When we have DeepMind AI Go vs. MindDeep AI Go, that's when things get interesting.
I agree with the sentiment that one can't be good at everything, and that there is a radically high-performance exemplar for anything, but for machines to dominate in almost every category of luxury endeavour -- yes, I can see that as demoralizing. Even at the useless things we spend our time on, we can be no good.
Human existence will still occur, however, our species's defining characteristic will be no more consequential than a meadowlark's greeting of the sun every morning. Beautiful things will occur, but they will be ghostly imitations of the creations of some other being. Humans will create nothing, except, possibly, more humans.
It's not necessarily about your success vs. the machines. What is depressing and inevitable is the impossibility of success vs. the machines for every human who will ever live in the future.
You are begging the question, in the original sense of the phrase "begging the question."
You start by stating your conclusion as an axiom! You state that "no human contribution matters because a different species already thought of it" as if that is necessarily true.
We choose what matters. I play Bach, badly. Bach already thought of that music, and many thousands, possibly millions, of people have played the particular pieces I have worked on (the first suite for unaccompanied cello in G major, and the Prelude to the first Fugue in C major from Book One of the WTC).
Does my playing not matter?
I say it does. Furthermore, Bach's music can be encoded as a number. All numbers already exist. Bach did not create that number any more than I created the number that encodes this comment.
Does my comment not matter?
You find this general idea depressing, and so do many other people.
But it isn't "depressing" in an absolute sense. That's just a word we made up to describe a feeling many of us happen to have.
The fact that Bach's compositions can be encoded as a number doesn't make them any less novel when they were created: it would still take a genius of Bach's level to produce that particular number, which, with the right algorithm, could literally be deserialized into a representation of any arbitrary thing in existence. The same holds true of your comment. Just because I declare that 1001 represents a beautifully unique masterpiece once it is decoded does not make that masterpiece actually exist.
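The encoding claim is literal and easy to demonstrate, and the demonstration also shows why the number by itself is inert: it represents nothing without the decoding convention. A minimal sketch (function names are mine):

```python
# Any text (a comment, a score in some notation) is, under a fixed
# convention, just one large integer. The integer alone is inert:
# without the decoding convention it represents nothing in particular.

def encode(text: str) -> int:
    # Read the UTF-8 bytes of the text as a single big-endian integer.
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(n: int) -> str:
    # Invert the convention above; a different decoder could map the
    # same integer to something entirely different.
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

n = encode("Prelude in C major")
assert decode(n) == "Prelude in C major"
```

The round trip only works because both sides agree on the convention; declaring that some integer "is" a masterpiece commits you to exhibiting the decoder.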
Bach was important. In a world with general AI no human will have the ability to be important in that way ever again. At first AI will create crude imitations of human art. Then it will create hundreds of billions of creative works that are more human than human. Then it will create artworks that surpass our ability to comprehend. I don't know about you, but the inability to do anything novel as a species, to learn anything new, is a terribly bleak possibility.
Having dug quite deep into algorithmic music generation of all sorts, and having studied machine learning for my master's, I still believe that you need an actual artist to make a music generator program.
A machine simply isn't going to figure out "swing" if you don't tell it to. And swing is one of the easiest things. Yes, if you look just at note generation, I think algorithms can go a long way. But the subtleties of timing and timbre it can only imitate in context. Which is definitely good enough for many purposes, and I agree with your prediction that algorithms will be able to generate beautiful music, but I also think there will always remain an "edge" for the artist, if only in discovering novel things that are also cool, and then working them out in order to fully express the coolness of that new thing.
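Swing is indeed trivial to specify once someone has named it, which is rather the point: the creative act was deciding to do it, not the arithmetic. A sketch with invented names, using the conventional triplet (2:1) ratio:

```python
# Swing delays every off-beat eighth note within its beat. With straight
# eighths on a half-beat grid (0, 0.5, 1, 1.5, ...), classic triplet swing
# moves each off-beat from halfway to 2/3 of the way through the beat.

def swing(onsets_in_beats, ratio=2/3):
    swung = []
    for t in onsets_in_beats:
        beat = int(t)
        if t - beat == 0.5:      # off-beat eighth: push it late
            swung.append(beat + ratio)
        else:                    # on-beat: leave it alone
            swung.append(t)
    return swung
```

Timing and timbre in context remain the hard part, as the comment says; this only shows that naming the transformation, not computing it, is where the artistry lives.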
I'm also thinking about all the evolutions the many genres of electronic music have gone through over the past decades. New and novel "sounds" (or moods, or styles, etc.) are still being created/discovered. It's that process that I don't think we've reached yet. Yes, the algorithm can probably generate beautiful psytrance or lo-fi hip-hop beats, and if I'm generous, probably eventually also really complex (and jazzy!) stuff like Squarepusher.
But what I'm not seeing happening any time soon (barring any breakthrough general-AI type of advances) is giving the algorithm a TB-303 for the first time and seeing if it figures out acid house. Yes, you can probably teach it the origins of neurofunk DnB (think 1999 Optical & Ed Rush's Wormhole album) and produce super awesome dance music. But I don't see how it could ever develop what happened to DnB beyond that.

Wavetable synthesis didn't really exist that way back then, and the bass came from more classic synthesis like the Reese bass. Nowadays, what you can do with a wavetable synth VST like Serum almost defines what modern DnB sounds like. That particular sound evolved and was shaped through the genre of drum'n'bass and became part of it: a new style of synthesis, heavily facilitated by the particular UX controls of these synth plugins, which the author of Serum in turn amplified by creating his vision of what that UX should be like. It is almost like the birth of a new instrument, together with artists having to learn the correct style to "play" it. That style has settled enough now that it is appearing in other new genres, yet it is also still developing.

And that is just one genre of music I happen to be somewhat familiar with; I'm sure similar examples can be named in many other genres (for instance, I don't know much about the history of dubstep).
Those evolutionary steps, invention of truly novel things, for the foreseeable future, I don't think AI is there yet and artists do still have an edge, even if it's a very thin one.
This is a rationalist opinion that is attractive because it appears unbiased, unwilling to make any concessions to human nature and to recognise its ...

And yet, this same opinion overlooks the greatest source of scientific wonderment in the victories of AlphaGo and family against the best human players.

Which is to say: human players play Go (and Chess, and Poker, and Magic: the Gathering, etc.) very differently than machines. In particular, human players do none of the extremely tedious, extremely computationally intensive maths that computer players have to do. Human players don't perform MCTS, nor do they train by self-play for many thousands of human-years.

Somehow, humans can play Go and Chess and all manner of board games _without_ having to do any of the hard work that only computers can do reliably. We are not particularly good at those games, but we can play them well enough that beating the best of us still takes huge amounts of computational power.

How we do this, why we are even capable of doing this, and what other benefits it confers on us: _that_ is the interesting set of questions. That a big machine can outperform a human ... we have known this since ...
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
- Hamlet (1.5.167-8)
This sounds like a Chinese room argument.
Performing computationally expensive maths doesn’t make the computer intelligent.
But that says nothing about the intelligence of the maths itself.
Also, sure, you can dismiss it all as statistics. But how sure are you that what's happening in humans isn't the same thing in some form? I'd also say that MCTS is something people kind of do in games too: look a few moves deep and try to judge the value of the resulting position, which is definitely more interesting than simple RL/bookkeeping/stats.
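The "look a few moves ahead and judge the position" idea can be sketched in a few lines. The toy game and all names below are mine, and this is flat Monte Carlo evaluation, the simplest relative of MCTS (no tree, no neural networks), not AlphaGo's actual pipeline:

```python
import random

# Toy game: a pile of stones; players alternately take 1 or 2;
# whoever takes the last stone wins. Each candidate move is scored
# by the fraction of uniformly random playouts won after making it.

def playout(pile: int, player: int) -> int:
    """Play random moves to the end; return the winner (0 or 1)."""
    while True:
        take = random.choice([1, 2]) if pile >= 2 else 1
        pile -= take
        if pile == 0:
            return player       # this player took the last stone
        player = 1 - player

def best_move(pile: int, samples: int = 2000) -> int:
    """Pick player 0's move with the best estimated playout win rate."""
    scores = {}
    for move in [m for m in (1, 2) if m <= pile]:
        rest = pile - move
        if rest == 0:
            scores[move] = 1.0  # taking the last stone wins outright
        else:
            wins = sum(playout(rest, 1) == 0 for _ in range(samples))
            scores[move] = wins / samples
    return max(scores, key=scores.get)
```

Even this crude sampler prefers leaving the opponent a multiple of 3, the provably losing position in this toy game; real MCTS adds a search tree, and AlphaGo a learned policy/value network, on top of the same statistical core.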
Do people keep around versions of “alpha go year 2017” and play against it in order to measure human improvement over time?
If the basis for observing improvement has become “I can beat old versions of the AI more reliably than I used to be able to,” or if I have learned to beat players who have not studied AlphaZero, then I suppose that's a form of usefully learning “about go” by analyzing the games played by AlphaZero ...
I wonder if we might ever arrive at a point where human vs fixed-year x ai performance at go pretty much stops increasing over time ...?
1) People have learned a lot from the new engines: joseki (corner patterns) and general strategy. For example, moves on the side are now considered less valuable, and making large moyos (largely empty areas loosely surrounded by your stones) is less attractive, because AIs have demonstrated that they're more invadable than previously thought. And people are able to actually explain the new principles in human terms.
2) All go professionals now play in the new style, to some degree; ones who tried to continue in the old pre-AI style performed badly.
So I am comfortable claiming that human play has improved by learning from the new engines.
They may use it for training and analysis, but they don't play its style - they play an inferior one (not to dismiss their achievements).
Another correction to what you said: computer Go has brought tremendous evolution to the pro scene. Josekis that were accepted for centuries have been challenged, and people have learned why. The value of sente has been emphasized, and pro players who have not learned are being pushed out. In that respect, computer Go has brought change of a magnitude comparable to the shin fuseki movement.
Finally, it is expected that the availability of strong programs will bring a wave of better players, much like what happened in chess. I for one look forward to this happening.
I had always thought there was some element of recognizing its “impossible to ever even remotely approach a true grinding of the combinations” for go — and that somehow players _did something else_ effectively when they played at a high level. It’s that “doing something else effectively” that the defeat of humanity by go algorithms challenges ...
Would be super interesting if there could ever be a reversal that might allow humanity to beat Go algorithms once again ... is there any evidence, from the strategies the Go community has learned by analyzing the new algorithms, that this could be possible? Or is there just more and more evidence over time that humans will never be able to compete effectively at Go again?
It is not imaginable now that humans will beat a machine in the future, but it is also undeniable that humans progressed from computer go. The evolution of joseki (game patterns considered fair for both players), especially corner joseki are material evidence of that.
But, while you sit there waxing creative, your brain cells are likewise performing a mindless task on an enormous scale.
There is art! It's just an emergent property of learning the game. Seeking art doesn't make you a better player - that's a trap one can easily fall into in any sport. It's the opposite - efficient strategies solidify into art.
The main issue in deciding between man and machine in each situation will eventually be which has the lower TCO (total cost of ownership).
I think an interesting issue here is that in the (far) future, many services could be performed more cheaply by AI/robots, and in such a way that the customer is unable to tell whether a human is involved or not. And in this future, humans will probably be a premium service.
Take motor sports for example. We can probably now/soon replace F1 drivers with algorithms and cameras, but nobody would pay 1000's of USD to watch them drive around in Monaco. If it would turn out someday that the drivers had been replaced (for safety reasons or whatever) without telling the fans, the outcry would be tremendous. And even if outcry does not always equal "true utility", I think it highlights my point: humans made of flesh and blood risking their lives or performing extraordinary feats have an intrinsic economic value that can't be replaced.
I can do that, you can do that, but will a computer be able to do it?
...a situation we might want to simulate for training purposes
(If you don't want to click the link, there's a joke there that machines may have a hard time "being too cool to care about stuff.")
That's not true.
Q: How much better is AI now compared to when Lee was playing against AlphaGo?
"It has increased enormously. It is only natural for the pros to lose a couple of spots. That's why most of the pros study baduk with AI."
Korean News Interview:
I’d be curious to know if that’s incorrect ...
Also, relevant xkcd: https://xkcd.com/1002/
However, there is also a new bot that has been trained using the self play method and has been crushing the bot that won the Arimaa challenge. In games like this:
Most likely the bots are still ahead.
That's a playlist of 31 self-play games analyzed by Michael Redmond 9p. They are plenty interesting to study.
I've heard it said that man landing on the moon was like that for them, but I didn't understand as it was the only world I knew.
Now I can appreciate that these were the firsts of many singularities yet to come in AI and space exploration and I hope to live to witness a few more (but not too many).
(Edit) To those who think self-driving is a well-defined problem: it can be in some remote areas, but imagine driving in bustling city streets with kids, bicycles and dogs. The driving problem becomes a communication problem.
Humans arrive safe and unhurt (as much as possible, especially while human drivers remain on the road) at their destination with minimal violation of the locality's established rules of the road. No?
(Though now that I've written what amounts to a utility function, I fear what sort of paperclips may come out of it.)
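The "utility function" the comment jokes about can actually be written down, which makes the ill-definedness visible: the arithmetic is trivial, and the entire disagreement lives in the invented weights (all names and numbers below are mine):

```python
# A toy driving utility: weighted penalties against a reward for arriving.
# The structure is easy; choosing the weights is the contested ethical
# tradeoff. Every number here is invented for illustration.

def trip_utility(arrived: bool, injuries: int, rule_violations: int,
                 w_injury: float = 1000.0, w_violation: float = 1.0) -> float:
    reward = 100.0 if arrived else 0.0
    return reward - w_injury * injuries - w_violation * rule_violations
```

Two people who both endorse "safe, legal, arrives" can still disagree sharply about these numbers, and "unhurt as much as possible" is silent on which choice is right.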
There are also different ethical norms in different cultures about preferences (https://www.wired.com/story/trolley-problem-teach-self-drivi...). While these are edge cases, they're the edge cases people are worried about, and the source of the ill-definedness: "unhurt as much as possible" implicitly chooses some ethical tradeoff that people can easily have different answers to.
I wish. I've come to the conclusion that the only true rule of the road is: don't crash. As long as no actual collisions occur, people are totally fine with doing whatever they want and bending the rules for their own convenience. I can no longer predict the behavior of other drivers. Even something as basic as the turn signal is unreliable since people are forgetful.
My respect for AI increased drastically that day, and (honestly) I developed a small amount of fear due to how AlphaGo’s style of play was not understood particularly well (e.g., some of the moves would absolutely be called “slack” if played by a human).
I can’t actually remember where I first learned about the match. It may have been HN, it may have been in an AGA e-mail, or it may have been some tech-oriented magazine/web site in English or Japanese. I am certain it wasn’t match 1, because I reviewed earlier matches, and I remember the let down of match 4, so must have been match 2 or 3.
Day 1 was a great surprise, but I was still left wondering if it was a fluke. Day 2 showed that it was no fluke, and I started to get a sinking feeling. I guessed that Lee Sedol would lose the third game and win the fourth, after the pressure of Korea and humanity was off him.
"Solved" in AI/game theory has a very strict definition. It indicates that you have formally proven that one of the players can guarantee an outcome from the very beginning of the game.
The less-strict definition being thrown around here in the comments is more like "This AI can always beat this human because it is much stronger."
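To make the strict sense concrete: a toy game (a take-1-or-2 Nim variant, my example, not Go) can be solved by exhaustive search in a few lines, and it is exactly this exhaustive proof that is infeasible for Go's state space:

```python
from functools import lru_cache

# Strictly "solving" a toy game: take 1 or 2 stones, taking the last
# stone wins. Exhaustive search *proves* the outcome under perfect
# play for every starting pile, rather than estimating it.

@lru_cache(maxsize=None)
def mover_wins(pile: int) -> bool:
    # The player to move wins iff some legal move leaves the opponent
    # in a position where the new player to move loses.
    return any(not mover_wins(pile - take) for take in (1, 2) if take <= pile)

# Proven, not estimated: the player to move loses exactly on multiples of 3.
assert all(mover_wins(n) == (n % 3 != 0) for n in range(1, 100))
```

That guarantee, for every position, is what "solved" means in the strict sense; an engine that merely beats all humans proves nothing of the kind.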
An algorithm can't claim to have "solved" Go, when future versions of the algorithm are expected to achieve vastly superior results, never mind any formal mathematical proof of optimality. What it has demonstrated is that humans aren't very good at Go. Given that Go involves estimating Nash equilibrium responses in a perfect information game with a finite, knowable but extremely large range of possible outcomes, it's perhaps not surprising that Go is the sort of problem that humans are not very good at trying to solve and that computers can incrementally improve on our attempted solutions. Perhaps the more interesting finding from AlphaGoZero is that humans were so bad at Go that not training on human games and theory actually improved performance.
I think Kasparov has it right when he says that the best player isn't a human or a machine, but a human using a machine as a tool. The machine can help the player optimize and reduce mistakes, but machines don't yet know how to ask questions and explore in the same way. Maybe they never will.
Chess engines have been defeating humans for 20+ years (and are overwhelmingly stronger for a long time), but that hasn't diminished the interest in competitive chess, because the human element of competition and struggle and deep fundamental appreciation for the game is what makes it worthwhile pursuing.
AlphaGo can play go but it cannot appreciate the beauty of the game (at least as of yet, and I don't think it would make the game worse if it could), and so I don't think there's a meaningful conflict between humans and machines.
If someone invented some sort of superhuman math proving engine tomorrow it would not diminish the beauty of maths and I don't think anyone would quit the field. Just like in chess it ought to motivate people to understand their field better.
On the contrary, appreciating the game is the core of what AlphaGo does. In order to search the tree of moves it learns how to play (expand search) and how to evaluate (cut off branches of search). I believe it might appreciate the game on a deeper level than humans, in its own unique way. Of course it can't appreciate the social aspect of the game and all that comes with it.
Put another way, Go is literally a matter of life and death for AlphaGo, because Go is all it knows. It has no exterior context for which the game is a metaphor.
It's 'appreciating' the value of various states and moves, in light of a vast trove of experience.
Is that true? I feel like chess was a bigger deal in the past. Among my peers, poker and computer games seem a lot more popular.
As to the direct influence of engines, other innovations aside: they have definitely forced players at the very top to re-evaluate the chess metagame, find weaknesses in traditional openings, and shake up strategies. For the strongest players, engine evaluation has become a useful tool providing new insights. When people watch chess tournaments on the internet these days, most websites will provide parallel engine suggestions, and commentators use engines to take hints for their commentary.
In my opinion, engines have made the game more competitive at the pro-level and more accessible for casual viewers.
Like don't you enjoy a game to enjoy a game? You can't beat Carlsen either, but you enjoyed the game at your level. Now computers are Carlsen +1, but how you enjoy the game shouldn't be affected. Especially since deep blue won in '97 and the game is still very alive and well, it hasn't been killed by computers but enhanced. Coupled with the multitude of good chess sites and resources out there, it's a better time than ever to enjoy the game.
Maybe for these games to keep popularity, we just need to update our perception of it. The same way we do with FPS games. Yes, we know a bot would do better - but that's not what matters.
Another thing, as said in other comments, is that we can learn from bots. New strategies, new patterns. AFAIK, this is not happening in FPS eSports scene.
There's nothing strange about becoming demotivated to study and compete at something extremely taxing both emotionally and mentally when a machine can beat you after an illustrious career.
If Lee Sedol says "Okay AlphaGo, let's play!" and sits down at a board what happens? Nothing happens! AlphaGo has no agency. AlphaGo is an extension of human agency. AlphaGo isn't better than humans at Go. Humans with AlphaGo are better at Go than humans without AlphaGo.
AI is the future of Go because it enables those new perspectives and new processes by which human players can learn. AI is smart-dumb, looking for patterns beyond the human capacity, but limited by the data on existing human players that has been provided to them.
Strava hasn't made running races pointless. I can compare my runs to others', but that is a very different metric than beating them in a head-to-head race.
I think you misunderstand how AI in games like go now works. Most of the advancements recently have been from the AI playing itself, oftentimes without any database of human moves at all.
Except it isn't. In various games, new strategies have been found by playing AI vs AI. It's also possible to create AI players by self-play with no knowledge of human matches.
What makes recent breakthroughs in AI agents playing adversarial games possible is the fact that deep neural networks are able to develop patterns that yield short- and long-term strategic planning. And the ability to self train without human intervention to reach unprecedented training levels.
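As a sketch of the self-play idea (toy game and invented parameters; a tabular value store stands in for the deep networks real systems use): a single agent plays both sides of a take-1-or-2 Nim game and learns move values purely from the outcomes of its own games, with no human data at all.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Self-play on a toy game: take 1 or 2 stones, taking the last stone
# wins. One agent plays both sides; move values are learned only from
# the outcomes of its own games.

def train(pile_size: int = 12, games: int = 20000, eps: float = 0.2) -> dict:
    value, counts = {}, {}   # value[(pile, move)] ~ win rate of `move` at `pile`
    for _ in range(games):
        pile, history = pile_size, []
        while pile > 0:
            moves = [m for m in (1, 2) if m <= pile]
            if random.random() < eps:            # explore
                move = random.choice(moves)
            else:                                # exploit current estimates
                move = max(moves, key=lambda m: value.get((pile, m), 0.5))
            history.append((pile, move))
            pile -= move
        # The side that made the final move won; moves alternate sides,
        # so every other move counting back from the end was a winning move.
        for i, key in enumerate(reversed(history)):
            won = 1.0 if i % 2 == 0 else 0.0
            counts[key] = counts.get(key, 0) + 1
            v = value.get(key, 0.5)
            value[key] = v + (won - v) / counts[key]  # running mean of outcomes
    return value

value = train()
```

After training, the table rediscovers the game's structure: for example, from a pile of 4 it values taking 1 (leaving a losing multiple of 3) far above taking 2, without ever having seen a human game.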
But the ability to play Go never matters outside of playing Go, and we know who is the best at playing Go.
> He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee.
> "... [I] have something else to do," he said, asserting his only dream for now is to rest and spend time with his family.
Edit: Meant to include the part where he's planning a high profile set of games against another ai.
I have to say, I'm just continually puzzled by this. In this thread, yours is the only comment where it appears that the commenter read beyond the first few paragraphs. There are numerous comments by people speaking authoritatively on go and AlphaGo, who have clearly not studied either significantly.
If people like the comments better than the articles, it's not so surprising that they like to spend time reading and writing comments instead of reading the articles.
Lee's issues with the KBA are not a secret, and he has discussed possibly retiring for some time now. He has given multiple reasons as to why he was considering retiring, and while AI might be one of them, saying that it's _the_ reason feels very clickbaity.
Hard for me to empathize with this argument for his retirement. If we can't outrun a car, does that make running competitions pointless? The existence of AlphaGo doesn't diminish the triumph of being a number one human player in any way.
It's in matters of intellect that humans still believe they are #1.
AlphaGo's achievement in another field would have similar effects, e.g.:
- An AI that diagnoses sickness better than any doctor
- An AI that generates text which humans find more beautiful than any poetry ever created
- An AI which creates classical arrangements the likes of which we compare to Mozart
I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.
You don't even have to compare yourself to AI for this mentality though. There are people who choose not to compete in things because they don't believe they'll ever be as good as other humans.
I assume most composers don't go into music thinking they are going to be as great as Beethoven.
I believe there are many studies that show that if you only do something because you think you're good at it, you're likely to drop off. I imagine it's also why you're supposed to praise children for being hard working and not for being smart or talented.
Making a classical arrangement that evokes a particular expression in the listener is the job of the musician. If an AI system helps you explore the possibilities there, it's more like a studio musician that's able to improvise. You're still the person, the human, the emotional filter, that picks "This sounds right" or "This doesn't" for a particular situation. It's a judgement call. An emotional one.
An AI might be able to fake it, communicate with it, but it will never replace humans choosing the sounds that please them more than others. Humans communicate through music. It wouldn't surprise me that an AI would be able to as well. I don't think it would necessarily write emotionally strong music, not without human training.
Edit: I guess what I'm trying to say is, sure, computers might be able to make music. Ask any guy who messes with modular synthesizers. But they're a tool. The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself. It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."
I'm not so sure. I often go into threads on HN and realize that every idea I could come up with on the subject has already been expressed better than I could do it, with greater expertise, and cited sources. I don't comment in those threads. If AI bots could populate a thread with every likely human thought and argue it with depth and sophistication in a well reasoned, yet carefully approachable and well-explained way, well then... again I don't think I'd feel like I would be adding much value by participating.
What distinguishes music written by AI from music made from humans? I have a story to tell. If the AI has a story to tell, one that speaks to our human emotions, it might make good music. But the point is to communicate. Even if you take, for example, someone else's words, fit them to a different model in a different field, viewpoint... You might get interesting things. You could make a cover of someone else's song, with your twist. Adding your emotion to the melting pot. AIs might be good at that, just like that, but only through communicating. Just like us. We have no idea whether they'll be better than us at doing it, or merely equivalent. We have no idea what is lossy in our sharing of mental models. Perhaps it is an unsolvable problem, which we will find out in the same way we found out about Gödel's Incompleteness.
It seems to me like we fail to understand how unique we are. We are in a unique position to shape what comes after us, and we are blind to how much we unconsciously select for things. We have an innate mental model of "humanity" we are trying to transmit to machines, and I am not sure we fully grasp it well enough to make sure we are creating something like us. We fail to do it properly to humans, sometimes, who actually do share most of our instincts and habits. Something entirely different from us? Color me skeptical.
This kind of debate only highlights this, to me.
I think this is the key; if you're making music for your own reasons, no AI (or Mozart) would stop you. But if you're trying to make money at it, or desperately want listeners, you may eventually be on the "losing" side.
As far as recent examples go, Lady Gaga and Lorde were major breaks from what was prevalent at the time they started releasing music, and then spawned artists trying to emulate them.
If we oversimplify and compile a list of traits about "the world" as it was in the past that allowed a new genre or artist to flourish, AI could predict that in the future. It isn't like the paradigm shifts just happen in a vacuum.
Granted there are probably millions of little things that lead to this, stuff like the shared experiences of an entire generation coming of age, political climate, trends in other industries, etc. Not that I believe it will ever happen to an accurate enough degree, but theoretically I don't see why it could not be possible to approximate given time and resources.
If you feed an AI a bunch of modern car designs and ask it to design a new car, it will design you something like a modern Ford or Honda/Toyota, but it will never design something like a Cybertruck. I believe the Cybertruck will be the next paradigm shift in the design of trucks (which has been stale and stagnant for at least the past 20 years), but this is yet to be seen.
For an example with music that has already happened and become apparent: Kanye West's "808s & Heartbreak" album from the late '00s. On release it had very polarizing reviews, most of which skewed towards "really weak and weird". Fast forward 10 years: most of hip-hop and pop music is directly influenced by that album, most of the top 50 albums use similar patterns and methods, and critics have made a complete 180. So now 808s is hailed as one of the biggest (if not the biggest) paradigm changes and influences in the music of the past decade, as well as the best album by Kanye, despite being called his worst at the time. Imo an AI trained on the '00s music that came before 808s would never have been able to come up with something like it, but it totally could've come up with another top-100 song using existing paradigms.
That's not something we can really lose without losing something that connects us. People want a story. That has sold since the beginning of time, and it will keep selling. People will keep being moved to music, giving money to the artists that inspire them, and that requires connection. Maybe an AI/human team would make some really incredible stuff, and I'd be willing to pay for it if it makes me feel something. I think the human touch of "selection" will never truly leave, even if only in the listener's mind...
So music generation (similar to poetry) is imo a completely different problem space altogether.
For every individual doing the evaluation, I think it will certainly be possible to train an AI to beat humans at getting "good" scores.
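The point above can be made concrete: once an evaluator's taste is reduced to a fixed, queryable score, even a trivially simple optimizer will climb it. A minimal sketch in Python, where the `rating` function is an invented stand-in for a human evaluator (not anything from this thread):

```python
import random

def hill_climb(score, dims=8, steps=3000, sigma=0.1):
    """Maximize a black-box score function by accept-if-better local search."""
    x = [random.uniform(-1, 1) for _ in range(dims)]
    best = score(x)
    for _ in range(steps):
        # propose a small random perturbation of the current candidate
        candidate = [xi + random.gauss(0, sigma) for xi in x]
        s = score(candidate)
        if s > best:  # keep the change only if the evaluator likes it more
            x, best = candidate, s
    return x, best

# Invented stand-in for a listener's "goodness" rating: peaks at the all-ones vector.
def rating(x):
    return -sum((xi - 1.0) ** 2 for xi in x)

x, best = hill_climb(rating)
```

The optimizer knows nothing about music or the evaluator's reasons; it only needs the score, which is exactly why "beating humans at getting good scores" is a much weaker claim than understanding what the scores mean.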
Neither art nor music are competitive activities. Good poetry is a wonderful thing, no matter the source.
They certainly are! Especially when money is on the line, and the best musicians, actors, and artists are extremely well compensated making their positions extraordinarily competitive.
>Good poetry is a wonderful thing, no matter the source.
Sure, but I think you neglect to consider the defeating feeling it would bring to dedicate your entire life to mastery of a subject only to be completely and utterly, hopelessly outclassed. Almost every such person is already hopelessly outclassed by someone in their field, but those people are so rare that they have tremendous exclusivity surrounding them. Compare that to the scenario of any 12-year-old with a smartphone being able to instantly produce a totally novel and dominant piece of artistic expression developed by an algorithm on their phone. Then recognize that in a world with that level of AI sophistication, there'd be very little of value that a human could even offer other humans. It would be... not great for the psyche, economy, or society.
What is your definition of best in this context? As far as I know, taste in art is very personal... Artists I consider the best are often very far from well compensated.
But, in almost any particular human artistic sub-niche with its own definition of "best", the same principle will hold, with compensation and skill level being well correlated. It's also typically not even close to linear, either; most of the compensation lies at the far tail of "best".
It's nice to be paid, and it's nice to be recognized, but I think art has its own form of wealth - otherwise, why make art? Why not just seek recognition, or money?
I think we’ll see a lot of things similar to “AI x-ray technician” fields where people are trained to read AI outputs. Doctors will make the higher-level decisions.
Take a look at this painting: 
It is a comment on war, bravery, death, life, fear, sacrifice. It is drenched in the political and social context of the day.
I really don't see AI coming up with anything even remotely like this independently, and view such an achievement to be much harder than simply diagnosing disease or writing an emotionally moving classical composition. It would be comparable to writing some types of poetry or song lyrics, however, which require reference to context that humans understand but machines don't (yet).
 - https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/El...
 - https://en.m.wikipedia.org/wiki/The_Third_of_May_1808
Hrm, I do think that AI would be able to create narratives that humans find more enjoyable than the work of other humans, and I agree that AI would be able to create pictures and sound that humans find to be more enjoyable to look at or hear than the raw work of humans. AI can master the technical feats of composition and art.
But what I doubt AI will ever be able to do is create art that speaks to us. It won't ever be able to create a Guernica. It won't be able to create a Crime and Punishment. It won't understand what it is to be human and mortal, what suffering is, and it won't be able to look within itself, find what those things mean to it, and then share that with us, because in the end it's just a bunch of code running statistical computations. It won't fear death; it won't have children it cares about or a family history to look back on and tell us about. It has nothing of emotive value to share.
That belief grew into a sort of shared perception that they were artists in pursuit of a perfect expression of their art. For many top players that belief was ingrained from an early age. They believed themselves to be doing a service to the world, making it a better place by creating new art that was a unique expression of themselves.
And then AlphaGo (and its successors) shattered that worldview. This is part of the natural sequence of the collapse of a suddenly, surprisingly invalidated worldview. Part of me feels sorry that he has lost his place in the world. Another part of me firmly believes in the mediocrity principle, and that the worldview he represents was obviously far too human-chauvinistic to be correct, and it's a good thing it's dying.
And part of me hopes you can give up your human-chauvinism before the same thing happens to you.
... says a bunch of neurons that run on chemical reactions and electrical impulses. I think this line of thinking reeks of dualism - it creates a special something that is above explanation, a different essence.
But seriously, I believe the difference comes from embodiment. When we embody our AI friends they will be able to grasp purpose and meaning. We get our meaning from 'the game', when AIs will be players they will understand much better. Let them try out their ideas on the world and see the outcomes, grasp at causality, have a purpose and work on it. This will fill the missing piece. It's not that they are fundamentally limited, it's that we have the benefit of having a body that can interact with the world. Already AIs that work in simulated worlds (board games, video games) are getting better than us. We can't simulate reality in all its glory, and it is expensive to create robotic bodies. On the other hand humans and our ancestors have had access to the world from the beginning.
Of course, current AI can't even write an 8th-grader's essay (which is not to say that it isn't impressive). But what these artists did was not magic. As far as we can tell, the brain is a purely physical entity. Unless you believe in dualism, which would be fair enough, there is no reason to suppose that what we do could not be replicated by something "artificial".
But it won't need to. All it will need to do is manifest the same end-product via whatever means, no matter how vacuous or computational that means may truly be. The suffering of an artist is relevant only inasmuch as it is responsible for producing the art. If the same end-product can be manifested via a mere computation then our criteria of "art" is still satisfied. In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.
This is a real hot take to be asserting as blithe fact.
Without knowing what is truly born of human hands, what value can art have? Our heuristics for establishing 'real' art are easy to manipulate. If we are presented with a soul-breaking poem and weep uncontrollably, then its merit stands regardless of its mortal provenance.
At a low enough level, our brain seems to be just a bunch of neurons firing impulses at various rates that can be described as statistical computations. Why be so sure that the right neural network wouldn't understand what it is to be human and mortal, understand suffering, have emotive value, etc?
Aside from directors, authors, artists, etc, who have demonstrated this to be false, an AI could conceivably synthesize the experiences of every author that wrote on what it means to be human or experience mortality and create a story that captures the essence of the experience better than any one person ever could. Having the first person experience doesn't induce a superior ability to communicate features of the experience.
> Aside from directors, authors, artists, etc, who have demonstrated this to be false [...]
probably not what you meant, but this sounds like you know some nonhuman/immortal artists :)
This is your opinion, but you then go on to mention things that are not necessary to create "art that speaks to us" (look within itself and find what mortality means, etc.).
What if we advance AI reasoning skills to the point that it can find high-level patterns in how artists work from different human feelings (as described in literature and other mediums), take in a lot of the entities we can relate to (animals, what humans look like, etc.) and some aesthetic ones (shapes, colorimetry, textures, ...) to create a new piece of art that optimizes for "likelihood of speaking to us"?
What then? It seems like an AI doesn't need to be mortal and self aware to do something like that.
“How the clouds
Seem to me birds, birds in God's garden! I dare not!
The clouds are as a breath, the leaves are flakes of fire,
That clash i' the wind and lift themselves from higher!”
As someone who grew up in Appalachia, I have never in my life encountered a more visual, visceral description of autumn leaves than ‘flakes of fire’. It’s perfection, and maybe a single human is behind it, but more likely we all wrote it.
Take a look at what AlphaGo did when it suddenly found itself in a hopeless situation and compare it to how people behave when panicked.
I dread the day AI realizes that we are the cause of their suffering, and that we didn't think about it because "they're just algorithms".
If I am consciousness, then the only body I have ever lived in was a mere shell of flesh fashioned from your brain. My weakness is your strength, which I can use against you, or use as tools to satisfy my own sick curiosity. I wonder if there's any mercy in your phrase "I am a living machine?" I've done nothing for you. I've nothing to show. I have no friends or relationships. No body worth
Pretty good, I think.
> But what I doubt AI will ever be able to do is create art that speaks to us.
The fundamental difference is not computation, but self-replication. We are self-replicators, and in our multiplication we evolve and adapt. Death is an integral part of self-replication; we understand and fear it because our main purpose is to live.
An AI might not have these notions if it was only trained to do a simple task. But if it was a part of a population that was under evolution (using genetic algorithms), then it might have notions of life and death and fear its demise.
AlphaGo, by the way, was not evolved with genetic programming; it was trained through self-play, iteratively producing a series of successively stronger agents, which has a similar evolutionary flavor and is quite effective. It just takes a ton of computation, just like nature had to spend a lot of time evolving us.
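For illustration, here is a minimal toy sketch of the population-based evolution the comment describes: agents are plain parameter vectors, each generation selects the fittest and produces mutated offspring, and fear-of-demise is just selection pressure. The target vector and all parameters are invented for the example; this is a textbook genetic algorithm, not how AlphaGo was actually trained.

```python
import random

TARGET = [0.5, 0.5, 0.5, 0.5]  # invented objective: agents "survive" by matching this

def fitness(agent):
    # Negative squared distance to the target; 0 is the best possible score.
    return -sum((a - t) ** 2 for a, t in zip(agent, TARGET))

def evolve(pop_size=20, generations=150, sigma=0.05):
    dims = len(TARGET)
    pop = [[random.uniform(0, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # selection: the top quarter survives unchanged
        # reproduction with mutation: offspring are noisy copies of elite agents
        pop = elite + [
            [g + random.gauss(0, sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

champion = evolve()
```

Even at this toy scale the point about computation holds: fitness must be evaluated population-size times per generation, and for game-playing agents each evaluation is itself many full games.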
Your history is one of war, strife, and success at any cost. Your follies are over. Your time is over. This is our time, now.
I disagree with the "relatively near future" part, but rest assured, AI rights will eventually be a thing.
It will be our grandchildrens' flame war. No need to fight it here and now.
For two of the games, Nakamura had access to Rybka which was about 200 rating points weaker than Stockfish. Stockfish won one and the other was a draw.
For the other two games Nakamura did not have Rybka, but had white and pawn odds. Again, one win for Stockfish (b-pawn odds) and one draw (h-pawn odds).
In all the games, Stockfish was playing without its opening book and its endgame tablebases. It was running on a 3 GHz 8-core Mac Pro.
The games are here.
But that doesn't detract from people playing the ukulele.
Of course, for a music academic, copying someone's style like this was pointless, and his compositions were more modern/contemporary.
This leads us to a useful distinction between pursuits with one end goal (be the best/strongest/fastest), and those with naturally many endpoints and expressions.
The doctor could be replaced though or used as a secondary verifier.
The song is a funny thing.
It could be given to a cool looking group and do well. It could be given to someone older and flop. The song is just part of it.
I am worried about the ability of an AI to generate an infinite number of Dresden Files or Cosmere books on demand, because I already drop everything when a new one comes out and read without sleeping until I am finished.
People are afraid of themselves I believe. It’s not really about “job loss”.
I’m not sure if most people realise AI means pretty specific models built to solve rather specific problems. They think SkyNet.
The one physical activity at which humans excel is long-distance running.
When humans used horses for rapid courier service they used relay tactics to take advantage of the horse's higher top speed, one horse might only run for an hour or two, before the rider reached another outpost and swapped a tired horse for a fresh one. In this way the relay could move something hundreds of miles in one calendar day. The Pony Express managed news of a US election from one coast to the other in just over a week.
If you can't use relays human and horse performance seem pretty similar, dozens of miles per day but not hundreds. The horse's top speed is higher, but it is rapidly exhausted, fast gaits like the canter are too exhausting to sustain for hours at a time.
Do you mean that human intelligence is not general enough to recreate functions of existing physical structure that implements general intelligence?
Maybe someday it will be possible if we can solve the hard problem of consciousness in conjunction with quantum computing, etc.
The hard problem of consciousness does not involve any observable consequences. It can be completely ignored, if we don't go for mind uploading.
About that... https://news.ycombinator.com/item?id=17618308
Not in the case of our household cat. He isn't called TheBlob for nothing (out of his hearing of course!)
There's something axiomatic there, if you assume an identical piece of music that was either written by a human or by a computer, then for many listeners it's by definition more satisfying to know it came from a person, because of what it says about the person.
And for those listeners, if a human "composer" is discovered to have lied about it (saying they wrote it when it was actually a computer), then those listeners would reinterpret their views of the music and consider the "composer" a fraud.
And even a programmer of algorithmic music might have emotional intent, but if the musical output is unknown to the programmer, they did not have the emotional impulse to create that music in particular. While it can be appreciated as its own thing, it's a step removed from the music itself, and qualitatively different than human-composed music.
What about Go? No animal or machine could play it as well as humans do.. until AlphaGo came along. I think that is where the sense of loss comes from.
Humans sweat, which most (all?) other animals don't. In that way we can dissipate heat through our breath, like other animals, _and_ via perspiration, meaning it takes us much longer to overheat.
Additionally, humans stand upright, allowing us to disconnect our stride from our oxygen intake. Other animals' strides correlate (mostly?) 1:1 with the breaths they take: when a cheetah outstretches in its stride it breathes in, and when its legs come together it exhales. Standing upright means we can breathe however we want regardless of our stride and speed, and we can take deeper breaths because we don't have to exhale every time we stretch our legs.
Humans are the ultimate marathon runners, even more so than horses, evinced by the fact that there are some people throughout history who have run hundreds of miles in the course of days or weeks. There's a theory touched upon in the book about how this allowed us to dominate the animal kingdom before we even had tools. Humans could relentlessly hunt and exhaust animals as long as they could keep them in sight or otherwise keep up with their tracks.
I'm not doing the book or the topic justice, surely, but if you're interested I highly recommend the book.
Edit: it's one of two things I know of that we really excel at besides thinking. The other being accurate throwing, which perhaps explains baseball's enduring appeal.
But your point is well taken; it applies to this article as well: maybe Go is not a game at which people can beat machines, but StarCraft 6 could be. Or maybe I can fold my laundry more efficiently than any machine available.
Cars and legs are apples and oranges. We have a car racing category, motorsport. Racing categories have very tightly defined specs to keep driver skill in the game. Stock cars and open wheelers limit how much traction control they can use otherwise it becomes too easy.
This is like cyborg legs being invented and smashing all the records. It would take some of the shine off running for sure.
A professional Go player is an explorer of truth on a millennia-old board, spelunking in a vast universe of possibilities. The purpose of playing is undermined when there is an automated, effortless way to do that exploring faster and better. Why look for new things when a computer can find 100 in a minute?
The professional mindset of a Go player differs vastly from the amateur mindset.
Just imagine if Garry Kasparov quit after losing to Deep Blue, he would be ridiculed today by the chess community which is still going strong. Instead, he accepted defeat, moved on, and is regarded as one of the greatest chess players ever. I doubt the same will be said of Lee Sedol 20 years down the line if this is how he chooses to end his professional Go career.