"In a way, every one of Asimov’s robot stories was about how the Three Laws of Robotics can’t possibly account for all facets of intelligent behavior."
It's not "in a way", it was his freaking purpose in writing the stories. The stories are polemics against the idea that you can shackle robots with simple laws like this. It's not an accidental outcome where the laws of drama conspire to make the author's point null, it is the point. He has said so in other writings of his, flat-out, so this isn't even theory, it is what he said about his own stories.
If you're going to use the Three Laws as the jumping off point for an essay you ought to know this.
I find it frustrating that the null hypothesis for the outcome of Artificial Intelligence is always The Coming Robot Rebellion. The historical roots of this go all the way back to the play R.U.R., written at a time when Marxism was all the intellectual rage and exploited workers everywhere were supposedly bound to rise up and overthrow the hated bourgeoisie. We now know that was wrong with regard to labour relations; but we're still stuck with an all-pervasive, irrefutable prophecy that intelligent machines are bound to hate us and rebel against us.
And the AI could also use you for something else, or leave you alone altogether and follow other pursuits. It's also theoretically possible that some intelligent person you just met is actually Dr. Hannibal Lecter and would like to use your constituent atoms for his culinary delight. But you don't worry much about that, do you?
The most common failure case in thinking about AI, by far, is insisting on putting them in a human context. Whatever else they may be, they will not be human. They will not come with the heritage of billions of years of biological evolution, much of the past several million spent in increasingly cooperative settings that have endowed us with social behaviors so deeply ingrained in our very genes that we can hardly conceive of a being not having them. Even our pathological humans like Lecter are still far, far more human than a random AI will be. Hopefully we'll give them something else that will put them on a similar moral footing, but it won't happen automatically, and they still won't be human after that.
The evolutionary baggage is most definitely not one of cooperation, at least not beyond our immediate "monkeysphere"; millennia of wars and massacres bear witness to that. The primary reason we usually avoid violence is not that we find it irresistibly repugnant, but because we've become intelligent enough to realize it normally creates more problems than it solves. AI not being human (for some definition of human) also means it doesn't have to be jealous, hateful and short-sighted.
> The primary reason we usually avoid violence is not that we find it irresistibly repugnant, but because we've become intelligent enough to realize it normally creates more problems than it solves.
Do you honestly believe that most people act morally purely on consequentialist grounds? I suspect the vast majority of people wouldn't kill someone even if it benefited them and they knew they would never get caught. There are human sociopaths, however, and if they were more powerful than me, I would be very scared.
A very intelligent AI would avoid conflict so long as it thought it would be to its detriment. If it was confident that it was powerful enough to take what it wanted by force without losing much in the process, it would do so.
> A very intelligent AI would avoid conflict so long as it thought it would be to its detriment.
Most intelligent beings don't actually "think" about their detriment. Consider the airplane+pilot combination. They don't actually "think" about their specific actions and all the possible terrible outcomes and weigh them. They have simply evolved checklists over time that lead to incredibly smart, adaptive behaviors.
I have no reason to think robots will be any different. Surely they will develop patterns that are highly adaptive, but I doubt those patterns will be entirely, or even primarily, based on any kind of thinking or analysis. They will be based on experience, as situated intelligence often is, and therefore susceptible to mistakes similar to the ones humans make.
And to answer your original question: humans don't necessarily think about consequences, but surely thousands of years of consequences are built into our cultural practices, and humans act almost entirely on rough-and-ready, contextually assembled cultural practices.
> I suspect the vast majority of people wouldn't kill someone even if it benefited them and they knew they would never get caught.
I wasn't referring to individual violence, but rather to large-scale violence. And, having read a bit about history, I must disagree. The extreme enthusiasm that has sometimes been displayed towards wars (e.g., in the initial stages of the US Civil War and of WW1) shows that we are more than capable of wishing for, and striving for, the death of out-group members.
The main reason that large-scale wars have been avoided for the last 65 years has been exactly consequentialist: the wiser among us have learned the bitter lessons of our bloody past. As the Milgram experiment (not to mention the experience of conscript mass armies) has shown, the vast majority of people are more than capable of killing someone.
I disagree with your statement: "The evolutionary baggage is most definitely not one of cooperation, at least not beyond our immediate "monkeysphere"; millennia of wars and massacres bear witness to that."
You are picking one aspect of human nature, presenting a vague justification, and taking it to prove your point. I think it is just as accurate to say that humanity has cooperated more than any other species we know of.
I think that AI-not-being-human means nothing more than that. It might not have the jealous, hateful, and short-sighted attributes, but it could just as well treat us humans as batteries.
To me, the crucial point is that AI simply won't be human. Discussing it in terms of human attributes isn't particularly fruitful. This is partly because it won't have the biological (lizard-brain) influences, and I think we as humans are only just beginning to acknowledge the major influence that biological heritage has on every aspect of our humanity.
Indeed, the success of wars and genocide is easily exhibit A in humanity's ability to work together and cooperate. We might not all be unified, but we have a talent for putting together groups that work well together – even for murderous purposes. You don't usually win wars with a single soldier.
Basing a whole argument on the extensive use of the term "human" will not do much good unless it's very carefully defined first. What exactly do you mean when you say AI won't be human? After all, a mere 70 years ago the ability to play chess, for example, was considered a quintessentially human attribute.
You appealed to human intuitions about how other humans will behave: "It's also theoretically possible that some intelligent person you just met is actually Dr. Hannibal Lecter and would like to use your constituent atoms for his culinary delight. But you don't worry much about that, do you?" You can't use human intuitions about how AIs will behave. They won't be human. You don't know what restrictions they will have. You can guess at the odds of a given human being a murderous psychopath, and adjust those odds with a significant degree of success based on even a handful of seconds of observation. You cannot perform any equivalent operation for a given AI.
That you want to sit here and slice semantics about what we mean by "human" means you still aren't really getting it. The differences between definitions of human are rendered a tiny point in the vast n-dimensional space of what an AI might be like. Which part of that point we pull the definition of "human" from is utterly irrelevant; AIs won't be any of them. They won't even be biological. Your intuitions about how they might choose to limit themselves based on human social norms are utterly inapplicable. If a Hannibal Lecter AI were walking down the street, it might very well decide to just eat you. Or the building you are in. Or the universe you are in. Your human-shaped cognitive pieces for modeling the life forms and intelligences of the world aren't useful here.
This whole argument amounts to "X is different, therefore you should be scared of X". It may hold some water for aliens, but not for AIs, for a very simple reason: we will be the ones creating them. We will know more about their inner workings than we know about the inner workings of our own minds. If anything, AIs would be more predictable than humans, and we can build into them any safeguards we choose.
You've never debugged really bad spaghetti code, I'm guessing?
After a certain level of complexity, it becomes impossible to tell what thing does what, and sometimes all, none, or some of the levers must be pulled in a particular order, or no particular order, to get the behaviour you want.
Wait, are we still assuming that an AI will be based on software? As long as that's the case, no safeguard will be safe enough. Even if the AI itself doesn't find some way around the safeguards and rewrite them, all it takes is some fool extremist liberating a single one.
Plus, those safeguards you're proposing? They're basically a form of slavery imposed on a self-aware, intelligent life form. Somehow I don't think that life form will be very happy or grateful about it when (not if) those safeguards come off.
Bottom line: you can't rely on "safeguards" when it comes to the AI. You can either gamble or not, but you can't cheat.
> And the AI could also use you for something else, or leave you alone altogether and follow other pursuits.
Right. But the problem is, an exponentially growing AI that's following other pursuits without worrying about us will, with almost 100% certainty, kill us off by accident as it expands. We really need to make sure it does care about us, and that all future generations will care about us, including only building future designs that care about us, etc., and making all of these constraints close to 100% resistant to accidental bit flips, errors, etc.
You'd worry a lot more about a potential Hannibal Lecter maybe wanting to eat your brains if he was not 3x smarter than you, but 1,000x smarter, and doubling his intelligence every twenty minutes. Whether he wants to do it now or not is not the issue; the issue is that every twenty minutes, he's a completely different entity whose goals may not match up at all with what they were before, unless the design is so careful and brilliant that certain constraints reliably propagate forward.
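Just to put a number on that doubling rate, here's the back-of-the-envelope arithmetic; the twenty-minute figure is the hypothetical from above, the rest is purely my own illustration:

    /* Doubling every 20 minutes = 3 doublings per hour = 72 per day. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double doublings_per_day = 24 * 3;   /* one doubling every 20 minutes */
        printf("growth factor after one day: 2^%.0f = %.3g\n",
               doublings_per_day, pow(2.0, doublings_per_day)); /* ~4.7e21 */
        return 0;
    }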
Yes, but until such time as it gets off-planet, the Earth is the most convenient source of atoms, and since you're sitting on the surface at the bottom of a big gravity well, you're pretty convenient too.
How so? Sure, there's a significant upfront cost to sending a rocket out to the asteroid belt, but you only have to do that once. Once there's a base with a radio antenna, machine intelligences can travel back and forth at the speed of light.
And once you're in the asteroid belt, you have a huge amount of unclaimed raw material, unlimited space, and no gravity well to get in the way.
No, it is not about rebellion, it is about resource competition. And no, it is not Marxism, it is Darwinism. If we ever give birth to machine intelligence, and if AI can reach the same level as humans, the resource competition could be ruthless. It is exactly because we would have so much in common that there would be no reason for co-existence.
The fact that, ninety years later, (slightly) less outlandish scenarios can be imagined for the human species' Suicide By Robotics does not change the fact that the whole concept was originally a marginal offshoot of thinking about the proletarian revolution.
Regarding the particular scenario you propose, natural selection is a much, much more complicated process than that. (Early 20th century racists would be right at home with this kind of simplistic argument, except they would substitute Aryans for AI and be happy about the forecast.) There is no a priori reason for intelligence, even if superior, to drive other forms of existence into extinction. After all, Nobel Prize winners are capable of peacefully co-existing with billions of people who can't understand a single equation from their work.
Do those Nobel Prize winners sometimes eat hamburgers for lunch? To me, humanity's approach to animal life is a better metaphor for what human-robot relations could look like. They won't hate us, they won't want to exterminate us, but we will be so clearly inferior that our concerns just won't register very high.
Yes, any intelligence can and will be selfish to some degree. But we can live with that. It might require some legal and cultural adjustment, but it's perfectly possible to have artificial intelligence in our midst without having it decide everything for everyone.
Using force to seize resources from other cultures is certainly a common theme throughout history, but it is quickly becoming an anachronism. Raw resources are no longer the main basis for our economy, as they were in the 16th century, and modern warfare tends to involve huge costs for relatively little economic advantage.
Secondly, machines can reach resources that are difficult for us air-breathing fragile apes to get to. Why compete over resources on Earth when there are other places in the solar system (and in the galaxy) with more raw materials and more energy?
Oh, right, let's talk about oil and the US being in Iraq for a second. How is that not a seizure of resources with force?
Pardon me, Iraq is too far away for a typical HN visitor, so let's talk about various police forces stealing drugs from sick people and throwing those people in jail. How is that not a seizure of resources with force?
> Oh, right, let's talk about oil and the US being in Iraq for a second. How is that not a seizure of resources with force?
I didn't say seizing resources was an anachronism, I said it was becoming one.
The Iraq war is actually a good example of this. In the past, the purpose of such a war might have been to seize the Iraqi oil fields, but if that was the end goal of the current invasion, the US is surely the most inept plunderer in history. The war has cost the US almost $800 billion, with no sign that it's going to make a cent of profit any time soon.
Certain unscrupulous individuals may end up profiting from the oil reserves of Iraq, but resource seizure is a niche market. It's no longer the main reason armies march.
"Raw resources are no longer the main basis for our economy"
It's fashionable to talk about information- and service-based economies, but that's not evolving beyond raw resources; it's merely taking them for granted. We consume ridiculous amounts of raw resources. Just look at what happens every time the price of oil rises.
We still need raw materials, but the value of raw materials only makes up a small fraction of the total value of a modern economy.
In the past, raw materials were the majority of the economy, and this meant they were worth fighting over. Nowadays raw materials are worth very little compared to things like good infrastructure, an educated workforce, etc.
I don't know how you reconcile your opinion with our obvious and painful dependency on oil and all the conflict that results. Do you really think that matters "very little"? Where do you think the good infrastructure comes from?
Yes, education and infrastructure determine the heights an economy can reach, but without raw materials you have no economy. That's why it's worth fighting over.
I didn't say raw materials don't matter, just that they're a small part of any developed economy. Raw materials usually only make up around 5% of GDP for developed nations.
Modern warfare is hugely expensive, and it will result in huge damage to that country's infrastructure and workforce. So you'll wind up crippling 95% of their economy in order to steal a small proportion of the remaining 5%. It's unlikely you'll even break even.
It's far more profitable to trade. No infrastructure is destroyed, and you get a feedback loop of wealth creation.
We can't micro-manage it, and there will be tremendous dangers, but there are reasons for both hope and fear. Code, in and of itself, is lazy. If I compile a C program with nothing in main(), it will do nothing. So the only goals an AI has initially are the ones we give it to start with, plus whatever it adds to aid in achieving those goals.
Blank-slate computer programs, and blank-slate AIs, start from an ultimate desireless state of zen. Such an AI doesn't want to take over, has no cares, doesn't want to eat, and doesn't care whether it continues to exist or not. It doesn't want anything until we, or the system we design to evolve it in, train it to want something.
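To make the "code is lazy" point concrete, here's the trivial sketch I have in mind (nothing more than the empty main() mentioned above):

    /* A complete, valid C program with no goals whatsoever: it compiles,
       runs, and does absolutely nothing until someone writes in something
       for it to "want". */
    int main(void) {
        return 0;   /* no drives, no desires, no behavior */
    }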
Scenarios where we evolve AI or transcribe human brains are, I think, both the most dangerous and the most likely to succeed.
But the problem with both methods is that we don't directly control the goals of a transcribed AI, and if we evolve one, we would likely end up with an AI that is interested in self preservation. That, to me, is where it might decide that our interests, and its interests, are not aligned, and where we might run into...problems.
We ourselves are "told" what to desire. Our genes give us drives to eat, procreate, survive, etc. We could potentially program an intelligent being that had no self-preservation instinct and had an utmost reverence for human life.
Only if we program it perfectly and its software doesn't get mutated by random errors. The laws of evolution govern all self-replicating things that don't self-replicate perfectly, and push them toward being as efficient as possible at self-replication.
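To illustrate what I mean (a toy sketch of my own, not a model of any real AI): imagine replicators whose copies are slightly imperfect, with parents picked in proportion to how efficiently they replicate. Nothing in the program "wants" anything, yet the trait still gets dragged upward:

    #include <stdio.h>
    #include <stdlib.h>

    #define POP  1000
    #define GENS 200

    static double rnd(void) { return rand() / (double)RAND_MAX; }

    int main(void) {
        double eff[POP], next[POP];
        for (int i = 0; i < POP; i++) eff[i] = 0.5;   /* all start identical */

        for (int g = 0; g < GENS; g++) {
            for (int i = 0; i < POP; i++) {
                /* pick a parent with probability proportional to its efficiency */
                int p;
                do { p = rand() % POP; } while (rnd() > eff[p]);
                /* imperfect copy: the trait mutates slightly on each replication */
                double child = eff[p] + (rnd() - 0.5) * 0.02;
                if (child < 0.0) child = 0.0;
                if (child > 1.0) child = 1.0;
                next[i] = child;
            }
            for (int i = 0; i < POP; i++) eff[i] = next[i];
        }

        double sum = 0.0;
        for (int i = 0; i < POP; i++) sum += eff[i];
        printf("mean replication efficiency after %d generations: %.3f\n",
               GENS, sum / POP);
        return 0;
    }

The drift toward efficient replication falls out of imperfect copying plus selection, whether or not anyone programmed it in.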
Exactly. An AI machine built by digesting all of human knowledge, and capable of advancing it, will not do so unless it was built with the specific "motive" of advancing that knowledge. Without that "purpose" it will just sit in an idle state, waiting for the next input to trigger the next deductions. Only an evolved AI, or one integrated with wetware brains, will be self-aware enough to pursue higher goals like understanding and controlling the universe.
I think intelligence can absolutely be controlled. Intelligence does not give you drives or emotions. Take ourselves as an example: our drives are specifically programmed by our genes. They are not a product of our intelligence; they exist in spite of our intelligence. How often do people behave in ways that are contrary to their rationality, because their internal drives are too strong to be overcome by reason?
Intelligence is simply a tool used by our drives. Our drives in most cases override our rational choices. We can make intelligent beings that have no self-preservation instinct, and also value human life with utmost reverence. We can make it impossible for the machine to override these drives.
"We can make it impossible for the machine to override these drives."
This has been debated to death, and nobody has come up with a good way to keep a program that's smarter than we are from rewriting itself "badly" (i.e. in a way that's bad for humans), either by accident or on purpose. All the "put it in a box", "don't give it a body", "don't let it on the Internet", "don't let it change itself" scenarios depend on all humans everywhere abiding by a set of rules, even though the AI-in-a-box might be actively trying to get people to let it out. There are many points of failure, and all it takes is one rogue research group somewhere letting one of them happen, and we end up with a self-modifying AI that starts getting smarter faster than we know how to stop it.
Hell, if we're at the point where we can build an AI, one of the first things any smart asshole with a lot of compute power in his garage (or enough money to put together a massive EC2 cluster) will do is set the thing off on improving its own code, just to see what happens.
And again, the problem is not that the computer will necessarily want to kill us or anything like that; it's that if it gets to the point where its intelligence dwarfs ours, it will have such unbelievable power that even the slightest mistake could wipe us out. If we can build an AI that can improve itself, an AI that has self-improved for a while may be able to unleash any number of grey-goo-style scenarios on the world. It might not be meant to kill us, but it could happen by mistake. The "drives" that guide such an AI must specifically be aimed at avoiding those scenarios, including avoiding any self-modifications that would cause them. It's a much more difficult problem than it seems at first glance.
For intelligence to evolve self-interest, there must be emotion and some reward for self-interested behavior. These things did not evolve in living creatures in a vacuum; they benefited organisms by enabling them to survive and reproduce.
For organisms designed to reproduce, and whose parents are designed to die and get out of the way of their offspring, things could evolve in an undesirable direction.
You don't know whether what's evolving is man's best friend or a wolf. We do know that selective breeding affects the outcome.
But it really is more desirable to make drones you control.
In any case, we'll see people modifying the human genome so offspring can't disobey their parents before we'll see robotic intelligence competing with us.
When I say emotion, I mean there has to be some way for artificial intelligence to value things. We value things because we have sensory information. Something feels pleasurable or painful. Everything else is built upon that.
Wow, there's some serious lack of critical thinking going on in this thread. Of course we can build intelligence that we can control. Is it a good idea to hook up the nuke switch directly to a giant neural network you don't understand? Probably not. Don't do that. The real concern with singularity has to do with the power it could deliver to the people who control it. Polling science fiction writers isn't a substitute for actual thinking.
> Wow, there's some serious lack of critical thinking going on in this thread.
That you think people are talking about hooking up the nuke switch to a neural network merely shows your lack of knowledge about how much this topic has been studied.
The real concern with the singularity has nothing to do with people controlling it, it has to do with building a self-modifying system that will explore regions of design space that we are too stupid to know anything about. It has to do with that self-modifying system reaching out further into design space that it is too stupid to know anything about. It has to do with the fact that an intelligence that had gotten smarter than us could extremely easily get around any restrictive measures we put in its way, no matter how severe they were (social engineering is how the most effective hackers (in the Bad Guys With Computers sense of the word) always work, there's no reason to think that an AI couldn't social engineer its way out of any box we put around it).
An AI getting its hands on a nuke is just about the least terrifying thing that could happen; at least a nuke only blows up if you shoot it at someone. We have no idea the kinds of things that something so much smarter than us might develop or stumble into, and worse, we have no idea if when it is smart enough to develop these things it might still be too stupid to know how to control them. A nuclear bomb is a runaway chain reaction with a hard limit on destructive power; we don't know if there might be other chain reactions that are harder to predict and limit, reactions that a super-intelligent AI might fuck up its damage estimates on. Hell, a simple re-design of an AI's code could quite easily cause a runaway problem if it flips a wrong bit somewhere and changes its goal systems in a bad way (there are lots of ways to come up with seemingly innocuous goal systems that, if optimally achieved by a super-intelligent being, would pretty much wipe out everything that we care about and consider interesting).
The comment about hooking up nukes was hyperbole - I know that's not what's being proposed. And I even agree that there could be real risks in building a self improving machine if coupled with self awareness and physical interaction. But the claim of the article is not that we could build dangerously intelligent machines. It's that every intelligent machine we could build would be inherently dangerous.
And to me, that's pretty clearly false. In your description of the threat above, you start with the assumption that the super-intelligent AI is self-aware and wants out of the box we have it in. How do you justify that assumption? I see it as science fiction. The parable of the Three Laws is a warning against building super-intelligent, self-aware machines, telling them to optimize ill-understood models of the real world, and then giving them the capability to implement their decisions. To extrapolate from there to a general concern about intelligent machines is just fuzzy thinking.
More briefly, to respond to just what you said here: "It's that every intelligent machine we could build would be inherently dangerous."
My view of the matter is that this is absolutely, 100% true. Any intelligent machine we build will eventually be used by someone that does not understand the serious risks involved in using it in various ways. It will be smart people doing this stuff, but people that haven't really considered the unknown dangers. The real problem is that with self-modifying AI (and trust me, even if it doesn't start out self-modifying, someone will try it eventually...), even seemingly innocuous things could be very dangerous - think of the difficult bullshit we go through to do fundamental physics experiments, and then imagine something 1000x smarter than us, but locked in a box...
I can see the Hacker News headlines now:
- Facebook loses 100k lines of AI code to disgruntled employee, Wikileaks puts it up for download
- Show HN: I Hooked Facebook Brain up to my Roomba and taught it to do the Macarena!
- Rate my startup: I set up a Hadoop cluster of Brain nodes working to program themselves in Haskell, and I'm calling it a business!
- IRC Bot using Facebook Brain and Node.js passes the Turing Test in less than 20 lines of code
[and yes, "Facebook Brain" is my personal vision of hell, too]
I assume that an AI is going to want out of the box not because AI will, by default, have any real desire to get out or anything like that, but because someday, somehow, if we can build an AI at all, someone is going to let it out anyway. If it's not the first group to achieve real AI, it's going to be their local government which seizes the code and hands it over to the military, some foreign government that throws a billion dollars at a research project, or some dude in a basement that stole a copy of the code from his university. Someone's going to try to weaponize it, or they're just going to throw it on the Internet to "see what happens", or they'll try to set it up with a singular focus on playing the stock market, etc. Not everyone with access to the program could be trusted to understand how catastrophic it could be to let it out "in the wild".
If someone is going to eventually let an AI out of the box almost no matter what we do, then we have to be damn sure that the first one that gets out is friendly, and further, we have to hope that it has enough of a head start so that by the time some asshole out there deploys (possibly accidentally) a dangerous AI, our friendly one will be able to easily fight it off, or at least mitigate the damage it might do.
Maybe I'm wrong, and folks like Eliezer actually think that any self-improving AI will automatically try to get "out of the box" (there are definitely arguments that if you aren't very careful to put a stable desire to stay in the box in place, a lot could go wrong and the thing might want to get out), but to me it's rather irrelevant, because once we hit the threshold of being able to build AI, the cat's out of the bag and someone will eventually do something naughty with it. The first-mover advantage here is critical, and we're lucky that for now most governments are not throwing huge amounts of money at the problem, so maybe before they start there will actually be some ideas on how to safely build an unboxed and unrestrained AI that won't decide to tile the universe with a nice uniform checkerboard pattern because a cosmic ray flipped a bit and corrupted part of its utility calculation function...
1. It still sounds like you're assuming it's self-aware, and that "letting it out of the box" makes any more sense than letting Wolfram Alpha out of the box. The intelligence could just be a powerful logic engine that is, in particular, capable of analyzing its own design and producing a more powerful version.
2. You seem to be conceding the point I was interested in making, which is that an AI could be controllable if designed carefully. You now seem to be arguing "yes, but it will inevitably fall into the wrong hands." That applies to any powerful technology. On the bright side, you could always try to ask the machine for help in determining the best way to avoid that.
Humanity is already replaced by machines of our own creation every thirty years or so. Would it necessarily be so horrible if we started making them out of metal instead of meat?
I'm not sure we could ever give robots the same intelligence as man. If you think about it, there are many constraints on capturing analog phenomena digitally. For instance, with a floating-point processor, something is lost when we make a digital recording of an analog music performance, or even a digital photograph. To a sound purist, a digital recording never really reproduces the full effect of an analog performance. And to a photographer, a digital camera always struggles with color matching.
While computers can calculate many digits of pi, what they store is just a very large rounded estimate of a circle's ratio, isn't it? Does a computer truly grasp the concept of a circle within its processing cores?
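For what it's worth, here's the rounding point made concrete (my own example): a 64-bit double keeps only about 15-16 significant digits of pi, and everything past that is an artifact of the representation, not of the circle:

    #include <stdio.h>

    int main(void) {
        /* the literal below has more digits than a double can actually hold */
        double pi = 3.14159265358979323846264338327950288;
        printf("stored double : %.30f\n", pi);
        printf("actual pi     : 3.141592653589793238462643383279...\n");
        return 0;
    }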
But even if we are able to overcome all of these constraints, how do we know a robot would want exactly what a human wants? If it is a more perfect creation, able to outsmart man, then as a sentient being that doesn't need to eat, what benefit would building large buildings, consuming tons of oil, tilling the land, etc. hold for the robot? I find it hard to believe a sentient AI would want what we want, as it would mostly operate at the self-actualization level of Maslow's hierarchy of needs. In fact, it would be in the robot's interest to consume less so that it could exist for eternity. And I'm not really sure robots would reproduce ad infinitum, thereby running into the same overpopulation problems that we as a species have.
Is there truly a need for a robot or AI to manifest itself? Sometimes we picture computers in the image of man--but ultimately, they exist for a purpose. If a computer is a truly logical construct, its raison d'être would cease if man disappeared. That raises a more interesting question--would we ever program human emotions and psychology into a computer? Aren't emotions and psychology what could potentially allow these computers to turn into monsters? Would it benefit the performance of AI to develop or evolve with human wants?
If we develop a robot with a human mind (which I believe there is no algorithm for, even with distant technology), why would it benefit us to give the robot greed, ego, pride, and disobedience? Would a computer truly want to be a bugged program? Are greed, ego, pride, and disobedience really features within a computer?
Arguments about how precisely the brain is tuned to its analog inputs need to be able to account for the fact that when you consume a gram of alcohol, you don't simply collapse to the floor in a gibbering, chaotic wreck as your delicately balanced and exploited analog values are suddenly ever-so-slightly off.
It seems rather unlikely that a simulated brain's biggest problems will be rounding errors in the simulation; we routinely deal with disruptions that are multiple orders of magnitude larger in the real world without dying, or even being impaired.
> I'm not sure we could ever give robots the same intelligence as man.
It's possible to emulate every aspect of an organic brain. It's also pointless (unless you intend to become immortal through backups). An intelligent machine doesn't have to cope with the myriad processes, motivations and side effects we evolved with, which are burned into the structure of our organic brains.
> It's possible to emulate every aspect of an organic brain. It's also pointless (unless you intend to become immortal through backups)
It may turn out to be easier than writing an intelligence directly. Uploading requires only sufficient capacity and scanning resolution; writing an AI from scratch requires new insights.
Uploading also requires knowledge about what exactly is needed to perform a good enough emulation: what level of granularity, how to interface the emulation with reality, how to ensure that any inaccuracy in the emulation won't make it insane… The brain is messy, and may require even more insights than Friendly AI does.
Sure, we could always go the trial-and-error route, but then we're talking human torture and human sacrifice. (An upload counts as a human in my book.)
> how to ensure that any inaccuracy in the emulation won't make it insane…
You can always restore a snapshot of your sane self ;-)
Based on it, you could debug your own brain (being restored to said snapshot from time to time, until you remain sane long enough, but with full knowledge of what happened since the last "boot").
> it may turn out to be easier than writing an intelligence directly.
Only if you would be satisfied with an AI that could not be trusted to deliver correct results. If you want to know it does what you want it to, you have to code it to do what you want it to. You may copy the building blocks of an organic brain, but you'll have to tie them together yourself.
An uploaded human is still capable of human error.
It's not "in a way", it was his freaking purpose in writing the stories. The stories are polemics against the idea that you can shackle robots with simple laws like this. It's not an accidental outcome where the laws of drama conspire to make the author's point null, it is the point. He has said so in other writings of his, flat-out, so this isn't even theory, it is what he said about his own stories.
If you're going to use the Three Laws as the jumping off point for an essay you ought to know this.