And the AI could also use you for something else, or leave you alone altogether and follow other pursuits. It's also theoretically possible that some intelligent person you just met is actually Dr. Hannibal Lecter and would like to use your constituent atoms for his culinary delight. But you don't worry much about that, do you?
The most common failure case in thinking about AI, by far, is insisting on putting them in a human context. Whatever else they may be, they will not be human. They will not come with the heritage of billions of years of biological evolution, with much of the past several million years spent in increasingly cooperative scenarios, a heritage that has endowed us with social behaviors so deeply ingrained in our very genes that we can hardly conceive of a being not having them. Even our pathological humans like Lecter are still far, far more human than a random AI will be. Hopefully we'll give them something else that puts them on a similar moral footing, but it won't happen automatically, and they still won't be human after that.
The evolutionary baggage is most definitely not one of cooperation, at least not beyond our immediate "monkeysphere"; millennia of wars and massacres bear witness to that. The primary reason we usually avoid violence is not that we find it irresistibly repugnant, but that we've become intelligent enough to realize it normally creates more problems than it solves. AI not being human (for some definition of human) also means it doesn't have to be jealous, hateful, and short-sighted.
> The primary reason we usually avoid violence is not that we find it irresistibly repugnant, but that we've become intelligent enough to realize it normally creates more problems than it solves.
Do you honestly believe that most people act morally purely on consequentialist grounds? I suspect the vast majority of people wouldn't kill someone even if it benefited them and they knew they would never get caught. There are human sociopaths, however, and if one of them were more powerful than me, I would be very scared.
A very intelligent AI would avoid conflict so long as it thought the conflict would be to its detriment. If it were confident that it was powerful enough to take what it wanted by force without losing much in the process, it would do so.
> A very intelligent AI would avoid conflict so long as it thought the conflict would be to its detriment.
Most intelligent beings don't actually "think" about their detriment. Consider the airplane-plus-pilot combination: it doesn't "think" through its specific actions and weigh all the possible terrible outcomes. It has simply evolved checklists over time that lead to incredibly smart, adaptive behaviors.
I have no reason to think robots will be any different. Surely they will develop patterns that are highly adaptive, but I doubt those patterns will be entirely, or even primarily, based on any kind of thinking or analysis. They will be based on experience, as situated intelligence often is, and will therefore be susceptible to the same kinds of mistakes humans make.
And to answer your original question: humans don't necessarily think about consequences, but surely thousands of years of consequences are built into our cultural practices, and humans act almost entirely on rough-and-ready, contextually assembled cultural practices.
> I suspect the vast majority of people wouldn't kill someone even if it benefited them and they knew they would never get caught.
I wasn't referring to individual violence, but rather to large-scale violence. And, having read a bit of history, I must disagree. The extreme enthusiasm sometimes displayed toward wars (e.g., in the initial stages of the US Civil War and of WW1) shows that we are more than capable of wishing for, and striving for, the death of out-group members.
The main reason large-scale wars have been avoided for the last 65 years has been exactly consequentialist: the wiser among us have learned the bitter lessons of our bloody past. As the Milgram experiment (not to mention the experience of conscript mass armies) has shown, the vast majority of people are more than capable of killing someone.
I disagree with your statement: "The evolutionary baggage is most definitely not one of cooperation, at least not beyond our immediate "monkeysphere"; millennia of wars and massacres bear witness to that."
You are picking one aspect of human nature, presenting a vague justification, and taking it as proof of your point. I think it is just as accurate to say that humanity has cooperated more than any other species we know.
I think AI-not-being-human means nothing more than that: it might not have our jealous, hateful, and short-sighted attributes, but it could just as easily treat us humans as batteries.
To me, the crucial point is that AI simply won't be human. Discussing it in terms of human attributes isn't particularly fruitful. That is partly because it won't have our biological (lizard-brain) influences, and I think we humans are only beginning to acknowledge the major influence that biological heritage has on every aspect of our humanity.
Indeed, the success of wars and genocide is easily exhibit A in humanity's ability to work together and cooperate. We might not all be unified, but we have a talent for putting together groups that work well together – even for murderous purposes. You don't usually win wars with a single soldier.
Basing a whole argument on the extensive use of the term "human" will not do much good unless it's very carefully defined first. What exactly do you mean when you say AI won't be human? After all, a mere 70 years ago the ability to play chess, for example, was considered a quintessentially human attribute.
You appealed to human intuitions about how other humans will behave: "It's also theoretically possible that some intelligent person you just met is actually Dr. Hannibal Lecter and would like to use your constituent atoms for his culinary delight. But you don't worry much about that, do you?" You can't use human intuitions about how AIs will behave. They won't be human. You don't know what restrictions they will have. You can guess at the odds of a given human being a murderous psychopath, and adjust those odds with a significant degree of success based on even a handful of seconds of observation. You cannot perform any equivalent operation for a given AI.
That you want to sit here and split hairs about what we mean by "human" means you still aren't really getting it. The differences between definitions of "human" are rendered a tiny point in the vast n-dimensional space of what an AI might be like. Which part of that point we pull the definition of "human" from is utterly irrelevant; AIs won't be any of them. They won't even be biological. Your intuitions about how they might choose to limit themselves based on human social norms are utterly inapplicable. If a Hannibal Lecter AI were walking down the street, he might very well decide to just eat you. Or the building you are in. Or the universe you are in. Your human-shaped cognitive pieces for modeling the life forms and intelligences of the world aren't useful.
This whole argument amounts to "X is different, therefore you should be scared of X". It may hold some water for aliens, but not for AIs, for a very simple reason: it will be us who create them. We will know more about their inner workings than we know about the inner workings of our own minds. If anything, AIs would be more predictable than humans, and we can build into them any safeguards we choose.
You've never debugged really bad spaghetti code, I'm guessing?
After a certain level of complexity, it becomes impossible to tell which piece does what, and sometimes all, none, or some of the levers must be pulled in a particular order (or in no particular order) to get the behaviour you want.
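To make that concrete, here's a hypothetical toy sketch in Python (every name here is invented for the example) of the kind of hidden-state, order-dependent behaviour I mean, where a "safeguard" only holds if the levers happen to be pulled in the right order:

    # Toy illustration only: hidden shared state makes behaviour depend on call order.
    _mode = "safe"              # a global flag standing in for a "safeguard"

    def tune_performance():
        global _mode
        _mode = "fast"          # silently disables the safeguard as a side effect

    def act(request):
        # Whether this refuses or executes depends on what happened to run earlier.
        if _mode == "safe":
            return "refused: " + request
        return "executed: " + request

    print(act("launch"))        # refused: launch
    tune_performance()          # an unrelated-looking call flips the hidden lever
    print(act("launch"))        # executed: launch

Scale that up to millions of lines and nobody can say with confidence which lever does what.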
Wait, are we still assuming that an AI will be based on software? As long as that's the case, no safeguard will be safe enough. Even if the AI itself doesn't find some way around the safeguards and rewrite them, all it takes is some fool extremist deciding to "liberate" just one.
Plus, those safeguards you're proposing? They're basically a form of slavery imposed on a self-aware, intelligent life form. Somehow I don't think that life form will be very happy or grateful about it when (not if) those safeguards come off.
Bottom line: you can't rely on "safeguards" when it comes to the AI. You can either gamble or not, but you can't cheat.
> And the AI could also use you for something else, or leave you alone altogether and follow other pursuits.
Right. But the problem is, an exponentially growing AI that's following other pursuits without worrying about us will, with almost 100% certainty, kill us off by accident as it expands. We really need to make sure it cares about us, that all future generations of it care about us (including only building future designs that care about us, and so on), and that all of these constraints are close to 100% resistant to accidental bit flips, errors, and the like.
You'd worry a lot more about a potential Hannibal Lecter maybe wanting to eat your brains if he were not 3x smarter than you but 1,000x smarter, and doubling his intelligence every twenty minutes. Whether he wants to do it now is not the issue; the issue is that every twenty minutes he's a completely different entity whose goals may not match up at all with what they were before, unless the design is so careful and brilliant that certain constraints reliably propagate forward.
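Just to spell out how fast that hypothetical compounds (taking the twenty-minute doubling figure at face value, purely for illustration):

    # Back-of-the-envelope arithmetic for "doubling every twenty minutes".
    doublings_per_hour = 60 / 20                   # 3 doublings per hour
    per_hour = 2 ** doublings_per_hour             # 8x smarter every hour
    per_day = 2 ** (doublings_per_hour * 24)       # 2**72, roughly 4.7e21x per day
    print(per_hour, per_day)

At that rate, "careful and brilliant" design is doing essentially all of the work of keeping those constraints intact.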