This is a retread of the traditional I'm-actually-a-dualist argument, and a pretty naive version of it at that. The brunt of the argument is the citation of Searle's Chinese Room, which is treated as proven fact. I'll give it credit as a powerful intuition pump, but it really just reduces to the dogmatic divide between those who believe that consciousness can be an emergent property of mechanical systems and those who believe consciousness is the domain of some kind of non-mechanical, non-physical thing, such as a soul.
Some other notes:
1) I dislike casting statistical learning and symbolic systems so starkly against each other. They are both facets of the same underlying principles, and the relation is better illustrated as a continuum rather than a sharp divide.
2) To succumb to participating in the retread: we don't know enough about the nature of language right now to know whether the premise of Searle's Chinese Room even stands on its own. We don't know if such a book of translation is possible, and we don't know how to measure its accuracy. This is an old question; cf. the recent kerfuffle over what the Turing Test actually means.
3)
>>>A machine needs to at least have an analog of a desire to do something before it does it. For a biological system, it “wants” to live. This desire emerges from several millennia of evolution. A machine doesn’t want food, of course, but it has no direct reason to “want” to live.
We need operational definitions of all of the terminology involved before we can make statements like this. It's dangerous to take these understandings of 'desire', 'want', etc. as givens when we don't even really know how they apply to -us-. This is where an understanding of the history of philosophy is helpful.
I agree with the fundamental premise that AI isn't as worrying as a lot of these overhyped press releases make it out to be. But I don't think Searle is on the right path here.
If I had to do it over, I think I would shrink the rest of the piece and expand the last section. (Maybe I still can; it is not on paper, after all.)
Anyway, thanks for your thoughtful feedback.
I don't necessarily disagree with most of what you say, but here are a few responses:
1. For what it is worth, I think consciousness might be an emergent property of machines, but not of the sorts of machines we currently have. That is to say, I don't think symbolic logic or machine learning are those sorts of machines.
2. Why do you dislike casting statistical and symbolic machines against each other? (Because this is the internet: serious question.) I'm alluding to your note 1 here.
3. I am aware of some of the original criticisms of Searle's Chinese room, but I am not sure what you are alluding to here (in your note 2).
4. You make a fair point about defining desires, wants, and so forth. This is quite a departure from many of the other points, which question the need for desires or wants at all. I would be very curious how you believe that would add an important dimension here. (Again, because this is the internet: serious question.)
Thanks again for reading it and taking the time to comment.
To the extent that the Chinese Room is interesting, it's also irrelevant here. It's a worthwhile thought experiment for considering consciousness and learning, but as you say, it's dualism. That has very little impact on the here-and-now of questions like "are we all going to die?"
The "analog of desire" passage was particularly egregious this way. If an AI follows its programming and thereby kills me (or kills everyone), I'll have very little interest in whether it desired that outcome.
Asserting a difference between desires and the goals one advances is a tough philosophical case (or semantic pedantry, depending), and not one I care to entrust my survival to.
I appreciate your breakdown of the article, which managed to miss the state of several arts at once.
Searle's Chinese Room argument has certainly generated a lot of debate over the years, and there have been many, many objections and replies, but I've never been able to get a good idea of how many people actually agree with it.
> Their very limited and simple goals prevent them from even wanting to “take over” or “rule over us.”
I don't think that's the main concern regarding AI (correct me if I'm wrong). What Hawking, Musk, and Gates fear is a weaponized AI: one that doesn't necessarily "want" to take us over or rule over us, but one that is specifically programmed to do so.
Nuclear warheads are built to blow up. That doesn't necessarily mean they "want" to, but they are dangerous all the same.
"...a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection... It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips."
Also:
"Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling the solar system with smiley faces (Yudkowsky 2008)."
Thanks for this clear and direct response. I see how it speaks, quite directly, to the first section. I thought I was responding to this critique in the last section, but it was not enough.
To me, it seems to refute the argument in the article: that AI has to have some kind of "will" in order to be dangerous, and that because AI doesn't "desire" to take over the world, it can't harm us. I believe the paperclip example shows that isn't the case.
Or to simplify: I can mangle my arm with heavy machinery without it wanting to harm me. It's somewhat ridiculous that people view conscious things as more harmful; if anything, they've proven to be less harmful. Before you throw the nuke example at me, please consider that we have yet to see what a being that was not conscious but capable of nuking would do.
This is a deeply flawed rehash of some of the field's least interesting commentary.
Among other things, the likes of Musk and Hawking, while brilliant, are in no way experts on artificial intelligence. Rebutting them is far less meaningful than rebutting Moravec, Russell, Bostrom, Sutton, or any of the other actual experts in the field who have expressed concerns.
More to the point, a discussion of what an AI "wants" is sophistry at best. That the Tesla autopilot does not "want" to kill anyone should be of little comfort when it kills someone. No one talking seriously about AI as an existential risk is envisioning Skynet, and no one interested in real consequences cares whether an extinction-driving AI has human-style desires.
Thanks for reading it, even if you're unhappy with it.
I agree that Musk et al. are not AI experts. I wanted to respond to their public comments, which I do not agree with. I am not necessarily at odds with the "actual experts," as you put it, so I didn't respond to them.
I also agree that the AI experts have substantive positions, and I do not claim to have undermined them.
This last point has been repeated many times. I have clearly failed to address it in my piece. I summarize the point as: an AI may not have desires, but it can have goals.
I did think about it, and I tried to address it in the last section, but clearly it was too little and too late. I may try to rework it to address it more fully.
I addressed this elsewhere (on your 'ask for criticisms' post), but I think I misunderstood your intent.
As a rejection of Musk/Gates/Hawking, I definitely endorse this. Those voices are pushing a questionably-informed view of the topic (Gates a bit less so?) that's had far too much public influence. Something (the title? my own biases?) led me to conclude that you were trying a larger response to the AI community, which is what I objected to so strongly.
That's a great summary of the point (as I meant it). I saw that it was addressed at the end. I still came away with the feeling that the middle was irrelevant in light of it (again, perhaps not if you're responding to Musk), and the response was inadequate, but this may be about scope and intended audience.
This article confuses how we talk about AI with its potential outcomes.
It presupposes that AI will only do what we ask it, not that we'll program AI to, for instance, "protect us from enemy nations." It also seems to presuppose that we'll be able to understand why it does what it does.
> "However computers change, they won’t have innate desires. They will be programmed by people. It still won’t be in their nature to want things."
Regardless of the truthfulness of this (what is an innate desire beyond programming, whether by evolution or by people?), it doesn't matter whether the desires are innate or programmed. We will definitely program them to want things and to do things like trade stocks and drive us places. If we get it wrong, it can go haywire.
I'd suggest the writer spend more time trying to understand what many of the smartest people of our time are saying before declaring them wrong.
Giving a thoughtful reading to AI Revolution or pretty much any other serious work on AI risk would have preempted this article, and it's a shame that didn't happen.
To the extent that this is a useful piece, I think it's a reminder that citing Gates/Musk/Hawking on AI is much less useful than citing an actual expert in the field.
The point you raised is almost unanimously the major issue people have with the piece. I did think about it and try to address it in the last section, but clearly failed.
I will probably rewrite it to address it, so thanks for raising it as well.
The claims about the method of teaching AIs are, to me, moot. With the rate at which technology changes, we would be foolish to assume that the current method will last even a couple of years.
The bigger fear surrounding AI is our reliance on it and the accountability at the time of a failure. Tesla's Autopilot has been around for less than a year, and it feels like it is under fire on HN weekly. How much is the NSA relying on deep learning to identify "bad" behaviors? How long until drones use image analysis to identify targets and act on them in real time?
My more immediate fear is not with AI itself, but with our implementation of it. I am not fearful of an Ex Machina-style robot on the loose. Instead, my concern is that humans will begin to rely on machines to make decisions I don't even think humans should be allowed to make.
I agree with this point. But I think that issue is already with us. For example, algorithms are grading GMAT essays, giving out credit scores and so forth.
That is more a fear of the increasing complexity and opacity of modern life.
I think you are right: that is probably the right thing to worry about with AI as it actually exists.
I will know we should start fearing AI when AI is writing pieces on Medium saying we shouldn't fear it :-)
But more importantly, the good/bad conversations are really about sentient[1] AI and not programmatic AI. Essentially the argument boils down to, "If a sentient AI is created that exists in, and can control, inter-connected computer systems, that AI will be able to take action to satisfy any particular 'want' it might have."
So it gets "scary" if you create an AI that "wants" something. And I could imagine this being done innocuously, for example by creating a DeepMind-style trading system that "wanted" to "win the securities game" by maximizing the value of its holdings, the risk being that there are very destructive ways to achieve that (killing off your competition, for example). But in those scenarios, when it is clear the program is operating outside of its control parameters, the question would be: how would it have already figured out that it could be turned off/deleted, and so protected itself against that? And that is why sentience is required: by recognizing itself apart from its programming, it recognizes threats to its existence.
I tend to lean more toward the "unlikely to be an issue in the near future." camp.
[1] Sentience being the condition that the program actually understands its own existence apart from the environment.
When Skynet takes over, do you care if it was because machines developed "free will" or because they were "shuffling bits of paper"? I think the fear of AI is independent of technical definitions of what goes on inside, and people are not comforted by explanations of why machines don't actually think on their own.
I think a lot of people have responded with this critique, which you have stated succinctly here. In my defense, I thought the last section of the essay responded to that point, but clearly it did not.
Basically, I think if the fear is not willful competition, then it is the opacity of complexity, which we deal with today. But I might have to address this elsewhere.
I can't understand seriously asking that question.
There are infinite ways for some superintelligence to be misaligned with our goals, and very few ways for it to go right. We're the ones building it; it's not some inevitable fate that we'll build a superintelligence whose motives/utility function are beyond us. Now is the time to think about it and get it right.
First off, I would feel very uncomfortable contradicting Elon Musk, Bill Gates, Stephen Hawking and Bill Joy in one fell swoop. Who am I to judge, though. Argument from authority can be fallacious, sure, so let's not go there.
Think about this for a minute. By the same argument you're making, we are just lumps of organic molecules replicating with DNA, so surely we can't feel anything, right? The thing you're not considering here is emergent behaviour.
Now, you haven't talked about self-programming machines in your article, and I'm pretty sure that's what all the really smart people you've rebuked in your subtitle are scared of. Do a quick Google search for "self-programming" AND "machine learning" AND "genetic". If you've read Dawkins' The Selfish Gene, you should be getting goosebumps right about now. If not, and you are interested in AI in any way, I cannot stress hard enough how badly you need to go out and get that book.
I was also surprised to see you didn't include Nick Bostrom's book Superintelligence (2014) in your quotes. If you haven't checked it out, I would highly recommend it. It goes deep and wide into how a sudden, exponentially growing spike of machine intelligence could impact our society.
I am resisting the urge to respond to every single comment, but many of them are a version of the following:
The AI can have goals without desires, and those might cause us harm.
I agree with this, but I thought the last part was supposed to address this:
>>>To be perfectly fair to Musk, there is another component to his argument. Musk argues, about the stand-in for humanity, that “he’s sure he can control the demon. Didn’t work out.”
>>>On this view, the primary danger is not that the AI will take our resources, or compete with us, but that we don’t really know what may happen. This is a more general kind of concern that applies equally to, say, nuclear weaponry or genetic modification.
I wish I could pick one or two people on here, and just ask why that wasn't an appropriate or convincing section of the post.
> This is a more general kind of concern that applies equally to, say, nuclear weaponry or genetic modification.
This is an interesting point. Concern about the possible threat posed by some technology like genetic modification will often get dismissed as being "anti-science", while concern about another technology like AGI (artificial general intelligence - Skynet) usually doesn't. You have people like Bill Gates who both admonishes people for being worried about genetic modification and admonishes people for not being worried about futuristic AI.
Particularly interesting since concerns about genetic modification (or present-day AI) are more grounded in reality than concerns about AGI. Something like the paperclip maximizer is like worrying that Monsanto will genetically engineer triffids (killer plants). I can imagine the reaction most people would have to such concerns.
You raise an interesting point, and one that I don't think gets much attention. (The point about the contrary attitudes towards AI and biotech.)
Bill Joy's essay took the twin issues together, but I think you are right in that today one gets more press than the other. I don't know enough biology to know if those concerns are valid or not.
It might be cultural, in the sense that "the culture" these days is faced with ever-improving AI on an annual basis (Siri, Amazon Echo, etc.), so the lay person feels qualified to extrapolate, whereas with biotech I can't tell that my corn is more resistant, so I don't see the change.
Part of the reason is that it assumes nuclear weaponry and genetic modification are not major, major issues. More so the latter than the former. I'd say the consequences of biotech need to be paid attention to a lot more, for the same reasons we should pay more attention to the consequences of AI.
I'm one of the harsher voices here, so I'll respond if you're interested.
1. The goal/desire section was reasonable as a narrow way to rebut people like Musk who go for demon/Skynet analogies about human-hating AI. But it frustrated me because it seemed to assign real-world consequences to semantics. The subtleties of the Chinese Room are by definition unrelated to its (flawless) behavior, so they're rather out of place in a discussion of whether AI is a serious risk.
It's also possible that you meant that section to say that AI won't come up with unexpected desires, or won't self-modify. I think this undervalues the possibility of an algorithm that has self-modification as an (unintended) goal, like Asimov's suicidal Multivac. I think it also underestimates the ability of a good AI to deal in higher-order reasoning - subjugating humanity is a perfectly good intermediate goal for a paperclip maximizer.
2. You did address your last section to exactly this concern, but with all respect I felt like it was a cop-out.
You do make one excellent point: too many AI thinkers assume that the unimaginable future will be conveniently close to their imagined one. Kurzweil in particular should be embarrassed to say "we can't predict life after the singularity, but I predict that we'll all be happy immortal cyborgs!" Beyond that, though, I didn't find a convincing reason not to fear AI.
Declaring something unpredictable is a far cry from declaring it safe, and people like Musk and Bostrom are not pretending to predict the exact course of a post-singularity AI. Rather, they're highlighting possible risks and calling for pre-singularity guidance/restrictions. It's gauche to tell stories about post-singularity life, but not to worry about whether we all die in the transition.
More broadly, I dislike Wittgenstein's sentiment as it's typically applied. Speaking from ignorance is a bad idea, but so is treating passivity as superior to action. Continuing on our current path is a choice as much as any other. Choosing not to fear AI is as much a choice as choosing to fear it.
I'm always happy to receive feedback. Well, that might be the wrong way to phrase it: I value thoughtful feedback (especially since it is a rather long piece).
1. I understand your frustration with the Chinese Room, but I thought it was easier than going through a Prolog linear simplex program, and then peeling back the hidden layer of a neural net, and saying "Are you guys worried about getting from here to there?" And Searle had done a much better job of intuition priming than reading through programs would have. Strictly speaking, you are correct: the point was that a lack of consciousness is externally indistinguishable from consciousness if the behavior is appropriate. I also think the amount of time spent on that section is what drives people to overemphasize it, which was a mistake.
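To make that alternative concrete, here is a minimal, purely illustrative sketch (a toy example, not code from the essay) of what "peeling back the hidden layer" looks like: train a tiny network on XOR, then print the hidden-layer weights. All you find inside is a small matrix of numbers, nothing that looks like a desire.

    # Toy sketch: a 2-4-1 sigmoid network trained on XOR with plain
    # backpropagation. "Peeling back" the hidden layer reveals only numbers.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)             # hidden-layer activations
        out = sigmoid(h @ W2 + b2)           # network output
        d_out = (out - y) * out * (1 - out)  # gradient of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0, keepdims=True)

    print("hidden-layer weights:\n", W1)     # just a 2x4 matrix of floats
    print("outputs:", out.ravel().round(2))  # the four XOR predictions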
2. "Declaring something unpredictable is a far cry from declaring it safe..."
That is true. And I don't mean to declare it safe. I think there is a real worry, which you and others have brought up, but it's not a different worry specific to AI. We live with that worry today: when someone's credit report, in error, makes them labor for 15 extra years, that is a form of "AI" that is with us, and that we suffer from, today.
The Wittgenstein quote is overused, I know. I felt like a college kid, but it closed the whole thing off nicely.
I hope I was thoughtful enough to be worthwhile here. I also have the feeling that maybe I was just frustrated because I wasn't your target audience.
You're absolutely right that Musk's vague handwaving shouldn't convince us AI is deadly, and too many people parrot those lines without criticism. Seeing your responses, I have the feeling that you know this ground and simply weren't writing for people who are already familiar with the orthogonality thesis and the specifics of the topic.
(If that's true, then my frustration was simply that I thought you were claiming to rebut AI risk, not specific AI risk claims. Sorry.)
1. Searle and Mary's Room and the like are definitely fascinating. I think I just didn't follow your transition from "this is interesting" to "this is a good guide to AI risk". (And yeah, I misjudged how important you considered it based on length and block quoting.)
2. I think I still go further. I do think that AI is an X-risk worry. I think that "unpredictable" still has the potential (within, say, a century) to become a world-altering crisis. Fundamentally, nothing here persuaded me that the Bostrom/Yudkowsky fear of a bootstrapping intelligence is less likely than I already thought.
And I think that limiting our analysis to the individual-level problems of AI error and bias and side effects threatens to blind us to that. But those are still problems worth discussing, and they may have reciprocal insights with the x-risk problem.
You did close it off nicely. Consider it a pet peeve, because I hate "I don't know" as a way to justify a specific course of action. I'm not sure that's what you're doing, but I've been annoyed by that quote serving that purpose in the past.
I always understood the talk of "desires" and "taking over" to be a strategy to explain the real problems of AI. I don't think these people actually think that AI will lead to a future like I, Robot.
Cuz there ARE major problems with automated computer systems already, problems that only get worse as we automate more. Kevin Slavin has a good talk on the subject, and he brings up the occurrence of "flash crashes" in the stock market now that we rely on high-frequency/algorithmic trading.
This is too dismissive of the possibility of the development of consciousness. John Searle's argument is famous, but it's not the final word on the subject.
Most of the arguments against the possibility of computers developing consciousness are just as strong against the possibility of biological life developing consciousness -- yet here we are! Why would the rules be different for brains made of silicon than for brains made of meat?
Besides, AI can be dangerous whether it is conscious or not. After all, in the Chinese Room argument, the output is the same for a brain and the room, even though the internal processes are not.
Even a fully autonomous, self-driving car is not intelligent. It's just advanced software. Shouldn't we wait for actual AI to exist before worrying about it? Otherwise, we're worried about something that may never exist. Every argument about the dangers of AI or the singularity clearly assumes that such AI is possible, without any basis for said assumption. I'm going to continue being a skeptic till I see some proof. Until then, it's like worrying about zombies. I have enough things to worry about that aren't fantasy.
AI simply working to satisfy the 'benign' goals given to it by human programmers, without our having adequately considered all possible consequences (which, guaranteed, we won't), can be incredibly dangerous.
You should read some of Eliezer Yudkowsky's writing on the topic, or Nick Bostrom's 'Superintelligence'. I think they articulate quite well the concerns that prompt many of the statements from Musk, Hawking, etc.
Intentionality...free will...Chinese box...consciousness... the usual philosophical bullshit.
The more likely threat is a corporation run by a machine learning system set to maximize shareholder return and nothing else. When an AI can make more profitable decisions than a human, an AI will be put in charge. Investors will demand it. Think about that for a while.
At least this will solve the problem of excessive CEO pay.