“I, Robot” – the 3 laws considered harmful (shkspr.mobi)
24 points by edent 23 days ago | 66 comments



I think Asimov knew all this – his stories are about how the laws can be twisted into having surprising consequences. It's meant as a Sci-Fi version of the "literal genie" [0].

Has anyone actually taken them as a serious suggestion in AI ethics?

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie


Seconded. Somehow, the fact that the Three Laws are broken and Asimov's works are all about showing how and why they're broken is something that most people missed. The popular culture ran off with the Laws, forgetting the context.


Doubly surprising in that the average person's introduction to the three laws was likely the Will Smith I, Robot movie, where the laws being at least incomplete was a big plot point.

Quick edit: Yikes, that was 15 years ago... I have co-workers who probably haven't heard of the movie...


Asimov always admitted to loving locked room mysteries (he wrote some great non-sci-fi mystery works, too), and the Three Laws were a fascinating "locked room" to build mysteries in. People always seem to remember the characters in the stories who believed the Three Laws were infallible, but not all the "murders" that took place in that locked room, which showed the Laws were anything but infallible and gave us enjoyable plots to read.


I think most people haven't read Asimov. They only know the laws because of their hype, but don't know their origin or meaning.

Thus, we need more Asimov in TV.


Oh I would die for an R. Daneel Olivaw TV procedural!

(Hm, apparently BBC had made adaptations of The Caves of Steel and The Naked Sun in the sixties... https://en.wikipedia.org/wiki/R._Daneel_Olivaw#Appearances_i...)


Having taken the Udacity Robotics Nanodegree I can say that they actually teach these laws in full seriousness as part of the first week of the course. When I saw that I started to regret signing up.


Yes, people do take them seriously in AI ethics. Read almost any "layperson" discussion, or go to any lecture and they'll be brought up.

I wrote this after attending a discussion at Oxford University on AI Ethics.


Kind of like how our real-life laws are written with the intention of projecting that they're for one thing, but when it comes down to it, they're often intentionally perverted for malevolent purposes. The consequences aren't all that surprising given the character of those charged with enforcing said laws.


Yes - literally every story was about the flaws in these laws. But #spoilers keeps me from saying more!


My mildly retarded friend did.

Pretty sure he still does.


This is silly.

The Three Laws of Robotics are not harmful because _they can't actually be implemented_.

Asimov never details how the three laws are baked into positronic brains in so fundamental a way that they can't be disabled without destroying the robot. Heck, we don't even know how to build a positronic brain -- because it's a fictional technology!

So even if you _wanted_ to build some sort of AI that is guided by the Three Laws, you couldn't.

You might be able to build AI that is guided by certain ethical principles, but right now, in the current state of technology, it's all rules-based stuff -- like a self-driving car with a rule about slowing down to avoid an impact that could injure someone.
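
For illustration, here's a minimal sketch of that kind of rules-based check (Python, purely hypothetical, not taken from any real autonomy stack): a hard-coded braking rule with engineer-chosen thresholds, and no notion of "harm" anywhere in it.

    # Hypothetical rules-based safety check, roughly like the self-driving
    # example above. Every number here is an explicit engineering choice;
    # the system has no concept of "harm" of its own.

    def stopping_distance(speed_mps: float, decel_mps2: float = 6.0) -> float:
        """Distance needed to brake to a stop: v^2 / (2a)."""
        return speed_mps ** 2 / (2 * decel_mps2)

    def choose_speed(speed_mps: float, obstacle_distance_m: float) -> float:
        """Rule: if we can't stop before the obstacle, command a slower speed."""
        if obstacle_distance_m <= stopping_distance(speed_mps):
            # Fastest speed from which we can still stop, minus a 10% margin.
            return (2 * 6.0 * obstacle_distance_m) ** 0.5 * 0.9
        return speed_mps

    # At 20 m/s we'd need ~33 m to stop; with an obstacle 25 m away the rule
    # commands roughly 15.6 m/s instead.
    print(choose_speed(20.0, 25.0))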

Nobody knows how to give AI an ethical intuition, much less tweak that intuition. (I'm an objectivist, so I actually prefer saying, "Nobody knows how to give AI the ability to perceive metaphysical principles.")

This whole post is like saying that Star Trek's Warp Drive is harmful because we shouldn't be trying to travel faster than light.


Correct. By the author's logic, the article should _also_ be considered harmful because it is quite vague. In fact, it doesn't even go into any real detail about that vagueness. Moreover, it omits the fact that Asimov actually plays with these laws in all his books (I think this has been mentioned already in another comment), showing exactly the interesting consequences of the wording...

While I like AI policy discussions a lot, this article does not help unfortunately. :-/


Your entire argument seems to be "it's impossible to do this because we don't know how to do it."

I don't get how people argue about the plausibility of future par-human AIs and say things like "we don't even know how to build a positronic brain" or "right now, in the current state of technology".

It's the same kind of incomprehensible confusion that leads one to tell the Wright brothers that heavier than air flight is impossible; just as birds exist to conclusively disprove that, so do HUMANS disprove the impossibility of human-level intelligence.

Maybe the three laws really are as physically illegal as a Warp Drive, though I would be remarkably surprised to hear it, but you cannot reasonably argue that position by claiming brains are impossible.

None of this should be taken as an endorsement of the three laws, which are clearly silly.


Correct! As I say in my post, it is the meme of the 3 Laws which is harmful to discussion on Robot Ethics.


The point of "I, Robot" was to show how these laws can lead to dangerous results if followed to the letter, and their inherent contradictions. I agree with the author: applying the Three Laws to actual AI research and engineering is dangerous. Thankfully, I haven't seen too many people in AI research that actually advocate for their use.


I get the impression that most people arguing in favor of the 3 laws have not actually read I, Robot. There are many places in pop culture where the laws appear, so it's not impossible that someone might quote them devoid of context.

To catch people up: I, Robot is a collection of short stories that largely follow either (a) robopsychologist Susan Calvin or (b) robotics engineers Powell and Donovan as they attempt to diagnose anomalous behavior by examining the 3 laws of robotics, highlighting the logical flaws and traps of these simple laws. A secondary theme is the use of robots as a mirror or lens through which particular human behaviors can be blown up and examined, largely through the absence or magnification of particular traits.

In universe, the 3 laws are considered to be practical engineering safeguards (and eventually, over hundreds of years, a fundamental building block to the function of the positronic brain)-- even as they are shown to the reader to be the origin of many conflicts.


"Considered harmful" has become a lazy writing device. There's nothing valid here that even says why these are harmful except possibly that they are "vague". The only thing that looks even remotely like a real concern (that people, even scientists, believe these are somehow magically hard-wired laws) is not cited and appears to be entirely manufactured.


I think you may have missed the joke. The article explicitly calls out the word "harm", as it's used in the three laws, as being a source of vagueness, which the article then goes on to say becomes fodder for some of the conflicts in the stories.


I know. That's why I specifically linked to the rebuttal of the "considered harmful" trope.

The very first link in my post points to a discussion of the lecture I attended. There the 3 laws were discussed and, I felt, some of the audience hadn't quite understood that they were a literary device.

But go along to any AI discussion group in person, or read the popular press, and I promise it won't be too long before you find a human citing them.


The article specifically calls out the word "harm" in the three laws as a source of ambiguity. This, as the author also points out, becomes a source of conflict in the I, Robot stories.

The author is making a joke using the "considered harmful" trope/meme in conjunction with the assertion that the word "harm" is ambiguous, thereby rendering the phrase "considered harmful" relatively meaningless.

Unfortunately, this joke seems to be lost on many of the commenters here. Technically minded folks seem to see the phrase "considered harmful" and proceed to lose their minds. Never mind that the phrase, in my experience, is almost exclusively used in jest. But based on people's reactions to it, "considered harmful" in a title might as well be flame bait. In fact, I seem to remember a serious article a while back called "'Considered Harmful' Considered Harmful" that should have put this whole thing to bed.


Leaving aside the literary purpose for the moment - when I go back and look at them now, some multiple decades after I first read them, I see a massive human-centric focus that I never did before. WHY is a human life more valuable than a robotic one? WHY is human "harm" so bad? WHY are human commands so powerful?

I mean, for story purposes it all makes sense, but I'm fascinated that I never raised these questions myself before. I accepted human divine right unquestioningly.


> WHY is a human life more valuable than a robotic one?

This "but why humans" reaction is bizarre to me every time I encounter it. We care about humans because we are humans. There's nothing deeper to uncover. We care about ourselves.

Humans do not care about the value of a robot "life", because we generally feel it has none.


> There's nothing deeper to uncover.

For me, at least, you're coming at it from the other direction.

I'm not really asking "why" in the sense of "how did this come to be" - the answer to that is rather self-evident. I'm asking "why" in the sense of "is this objectively correct? Is this how I want things to be?" - questions which have no answer, but the effort of trying to find one can uncover plenty of deeper ideas.


There is no meaningful answer to your question. For religious individuals, the answer is dogmatic (and generally yes, humans have value above all else except God(s)). For atheists, the answer is logically a vacuous no, because all of existence is without meaning.


>> which have no answer

> There is no meaningful answer to your question

Didn't I say that? Didn't I point out that the ATTEMPT to answer was valuable?

>> Is this how I want things to be?

> because all of existence is without meaning

Atheists don't have to be nihilists.


I’d love to see a meaningful argument to the contrary. Near as I can tell atheism implies nihilism and the extent an atheist disagrees is decided entirely by how convincingly they can lie to themselves.


Atheism only implies nihilism if you subscribe to the concept that values are objective and morality is universal, which raises the question of who determines values and what is morally "good" and how.


So you assert that an atheist can find meaning by following their subjective values and morality, correct? This seems entirely circular and equivalent to saying that atheists can find meaning by valuing the things that they value.

How does an atheist hold subjective morals and values without those morals and values essentially being arbitrary?


What's wrong with them being arbitrary?


In what way are arbitrary values meaningful?


Define "meaningful".

You seem to have a really big chicken-and-egg problem here... you want morality to be universal. The problem is, "ethics", "morality", and "values" are all human constructs. We created them, we define them, we discuss them ad nauseam. They are as varied as we are. I don't think that makes them "meaningless".


I don’t have a chicken and egg problem. The chicken and egg problem arises from an attempt to pull meaning out of an arbitrary universe.

You say in different words that things are meaningful only because we believe they are. That’s fine if it helps you but it’s also circular and rather solipsistic. Your stance is also literally nihilistic: “a doctrine that denies any objective ground of truth and especially of moral truth”.



I don't think this is quite right. Suppose someone says "I care about white people because I am a white person" (and doesn't care at all about black people) or says "I care about women because I am a woman" (and doesn't care at all about men) or says "I care about psychologists because I am psychologist" (and doesn't care at all about people in any other line of work). Would any of those seem obviously reasonable? They wouldn't to me.

So, if caring only about Indians because you're Indian isn't reasonable but caring only about humans because you're human is reasonable, what's the relevant difference? That seems to me a question it's perfectly fair to ask. And some possible answers to the question don't make it obvious that the distinction between humans and hypothetical robots with minds that resemble ours enough (e.g.) for us to talk with them is one that justifies caring about humans and not caring about those robots.


Look at human history and you’ll find there’s very little relevant difference. Humans tend to generally decide that any “external group” is low value. The level of value is tightly correlated with how much we identify with the group. When white Americans saw blacks as savages closer to animals than “humans”, whites traded blacks as slaves and argued it was morally right. We mourn gorillas but not giant squid because we identify more with gorillas.

The question for robots is whether we’ll ever identify with them strongly. I’m doubtful.


> Humans do not care about the value of a robot "life", because we generally feel it has none.

Hasn't Atheism been on the rise? I wonder how people will feel about this subject of life by the time AGI is achieved.


I’m not aware of any logical argument from a point of atheism that assigns actual value to anything. If the entirety of existence is pointless, then there is no logical argument for human life being more valuable than a robot because both have zero intrinsic value. Given that, fighting the natural anthropic tendency seems pointless.


Atheism isn't nihilism. Values are subjective. Morality is relative.


Right, I didn't mean both types of life having no worth. Rather, I just meant to raise the possibility of the spread of belief that a human's and an AGI's life are not so different that they deserve different valuations. I was implying that in such a case, the valuation of an AGI's life might increase to something similar to that of a human's, and not that a human's would decrease to that of an AGI's.


Moral relativism is nihilism.


"Nihilism is the view that there are no moral facts, that nothing is right or wrong, or morally good or bad. Relativism is the view that moral statements are true or false only relative to some standard or other, that things are right or wrong relative to Catholic morality, say, and different things are right or wrong relative to Confucian morality, but nothing is right or wrong simpliciter."[1]

I am a relativist, but not a nihilist.

[1]http://www.oxfordscholarship.com/view/10.1093/0195147790.001...


I don’t see these as meaningfully different. One says there are no moral facts. The other says there are no moral facts unless you impose an arbitrary moral system. Someone who doesn’t believe in “moral facts” can still understand that if you define a framework for evaluating moral statements, you can make a true/false call within that framework. To believe otherwise would be to literally reject logic. That doesn’t indicate that the arbitrary framework imposed has any actual meaning.

It’s not coincidental that these are discussed side by side in your book, nor that some other definitions explicitly call moral relativism nihilism.


I don't think you can just swoop in and say that two different schools of philosophy are really the same because you can't tell the difference.


This is fallacious. The fact that a difference is asserted does not mean one exists. Your particular flavor of moral relativism seems to be absolute relativism. A common criticism of absolute moral relativism is that it is indistinguishable from moral nihilism. Similarly, arguments for moral nihilism are made from a stance of absolute moral relativism.

Now, there are forms of moral relativism, namely descriptive relativism, that are not the same as nihilism. These simply acknowledge the fact that different people hold different moral views, speaking to the reality of the world rather than the plausibility or implausibility of objective morals. (I would also personally hesitate to call descriptive moral relativism a philosophy since it doesn’t attempt to speak to any fundamental truth. Acknowledging that people hold different morals seems less of a philosophical viewpoint and more of an ability to make basic human observation.)

But you don’t seem to be a descriptive moral relativist. You’ve made absolutist statements about the relativity of morals, leaving me to conclude that you’re making a distinction without a difference because you don’t want to call yourself a nihilist.

If you can provide some practical difference between absolute moral relativism and moral nihilism, please do. I’ve been unable to find anything that draws a meaningful difference between the two except that the absolute moral relativist says you can evaluate moral statements in the context of an arbitrary moral framework, and the moral nihilist says, sure you can, but it’s arbitrary so it doesn’t actually mean anything. Everything else seems to involve painting nihilists as morons incapable of recognizing the reality of cultural norms (“Moral nihilists say you can eat babies for breakfast!”) in a weak attempt to draw a meaningless distinction because for some reason “nihilist” is seen as a worse label than “moral relativist”.

To the extent that moral relativism is a spectrum, moral nihilism simply seems to be the stronger end. The weaker end is just observation and the middle is a lack of willingness to firmly accept or deny universal morality.


So, to preface, I'm not a philosopher by trade, so bear with me if I misuse certain terms / labels.

http://caae.phil.cmu.edu/cavalier/80130/part2/Routledge/R_Re...

Going off this article, I'd call myself a "moderate morality-specific meta-ethical relativist".

The article also says that "some critics also fear that relativism can slide into nihilism".

It's true that "radical meta-ethical relativists" would have the same (or at least, as you say, practically indistinguishable) views on morality as a moral nihilist.

My bigger issue has to do with "schools of thought", or the broader labels being used here. "Nihilism" has many other forms, all of which are pretty extreme. So essentially, the most radical end of relativism has significant overlaps with the least radical end of nihilism. (Specifically w.r.t. morality)

Earlier you said "atheism implies nihilism and the extent an atheist disagrees is decided entirely by how convincingly they can lie to themselves". To use the overlap between one end of relativism and the other end of nihilism to then equate the two is taking it a big step too far.

It seems to me like you're deliberately blurring the distinction between these schools of thought in order to take a pot shot at atheism.


> To use the overlap between one end of relativism and the other end of nihilism to then equate the two is taking it a big step too far.

This is somewhat fair, in that there is a supposed spectrum that I equated to one extreme. However, I think one end of relativism is sort of uninteresting because it's purely observational, and I think the middle ground is wishy-washy. Moderate meta-ethical relativism (as described in your link) postulates that there are some universal rules for morality. This is literally not relativist and I'm not convinced this can be a logically-consistent belief. Morals are relative, except when they're not?

It feels to me that moderate moral relativism stems from the "fear that relativism can slide into nihilism". This is not a logical approach, but an emotional one. If you start with the assumption that moral nihilism is bad, but you also don't believe in universal moral truths, you're left grasping for a middle ground where universal moral truths don't exist but also somehow they do.

This is essentially the same thing I was referring to when I said "the extent an atheist disagrees is decided entirely by how convincingly they can lie to themselves". If you don't like the logical outcome of your beliefs, you'll find yourself searching for a "moderate" version that allows you to avoid confronting the reality of your views. Don't believe in universal morals but also can't stand the idea of no universal morals? Then define them as relative except when you squint just right.

If you actually hold the view that there are common universal morals that can be pulled from the commonalities of all disparate human morals, then you're a secret moral universalist. There's nothing actually relativist about "rules A, B, and C are universal, and then humans also add arbitrary rules D-Z depending on their culture". That's just squinting until "not universal" and "universal" look like the same thing and hiding it behind fuzzy language.

This is much like a theologian who attempts to reconcile the beliefs that God is omnipotent, omniscient, and omnibenevolent with the reality of human suffering by stating that we simply can't understand God's love and that's why he allows suffering. On the surface that might look like a logically consistent response, but it's a redefinition of "love" to mean something other than what humans would recognize as love. We "cannot understand" God's love, so God's love is by definition not human love, and so God's omnibenevolence is not human omnibenevolence. And what we've done is squint really hard at "not omnibenevolent" until it looks like "omnibenevolent" while ignoring the false equivalency between the two different loves being discussed.

> It seems to me like you're deliberately blurring the distinction between these schools of thought in order to take a pot shot at atheism.

You might assume so but you'd be wrong. My beliefs would probably be best categorized as "agnostic atheist". (If feeling clever I might call myself apagnostic, meaning "don't know and don't care".) I am a reluctant nihilist in that I don't see a logical way to believe that morals (or really anything else) have meaning in the face of an arbitrary, uncaring, and accidental universe, but all my instinct and intuition says otherwise.

I was not being facetious when I said I'd love to see a convincing argument that atheists don't have to be nihilists. I see that as the only logical outcome, but that doesn't mean I like it.


The middle-ground certainly is wishy-washy, which is relativism's whole point, really.

>Moderate meta-ethical relativism (as described in your link) postulates that there are some universal rules for morality.

Not so? "Meta-ethical relativism: that there is no single true or most justified morality". I am skeptical "about the meaningfulness of talking about truth defined independently of the theories and justificatory practices of particular communities of discourse". I'll elaborate and tie this in to your next point...

>If you actually hold the view that there are common universal morals that can be pulled from the commonalities of all disparate human morals, then you're a secret moral universalist.

I like the evolutionary idea ( https://en.wikipedia.org/wiki/Moral_relativism#Scientific ). I.e. the reason there are certain common morals (i.e. don't kill innocents, raise and protect children, etc.) is that these offer a strong advantage over other groups that do not hold those morals. So these morals end up in nearly every single society, and could be easily mistaken for "universal" moral truths. From my original article: "a common feature of adequate moralities is the specification of duties to care and educate the young, a necessity given the prolonged state of dependency of human offspring and the fact that they require a good deal of teaching to play their roles in social cooperation. It may also be a common feature of adequate moralities to require of the young reciprocal duties to honour and respect those who bring them up, and this may arise partly from role that such reciprocity plays in ensuring that those who are charged with caring for the young have sufficient motivation to do so". Using the language of that quote, and to twist your example: rules A, B, and C are common elements of an adequate moral system, and then you can optionally add arbitrary rules D-Z depending on your culture and preferences.

Consider infanticide in the animal kingdom (https://en.wikipedia.org/wiki/Infanticide_(zoology) ): it would be "immoral" for humans, but it is standard practice for some species given the right circumstances. So for humans there may be near-universal morals, but they only apply to us, and only because they "evolved" that way. A human tribe that had a similar practice of infanticide might be more likely to die off (and take its "moral truths" with it) because humans don't reach maturity fast enough to support a generational purge of all infants.

You might say this is where I'm squinting, but I would say we just have different tests for what constitutes a "universal" moral truth. Perhaps just as you pointed out the case of there being a "false equivalency between the two different loves being discussed" I would suggest we're ignoring the false equivalency between the two different universals being discussed.

The "universal" you think I hold is an evolutionary (and therefore arbitrary, uncaring, and accidental) one that is specific to our species. That, by definition, is relative, not absolute. There's nothing timeless or intrinsic to such a "universal morality". You might say this means I'm just squinting hard enough, but I disagree. The big supporters of universal morality are theists who use their God(s) as external observers to pass moral judgement. Because without some external observer, what makes any moral truth "universal"? I.e. where would the "universal morals" come from-- who or what would set the standard? If you agree that humans collectively agree on the standard, then for my definition of "universal" they would not be universal.

You're interested in finding meaning, though. Meaning is relative to an observer; it can't be "found" in the sense that it's waiting out there somewhere for you. Meaning must ultimately come from within. Take any sandbox game, for instance: it's like you're saying "Minecraft is meaningless because there's no point". Notch created the world, but gave no direction. All the gameplay is emergent. Real life is like that.

------

As an aside, thanks for engaging. This has been a really interesting discussion and has led me to plumb the depths of moral philosophy way deeper than I had ever before gone.


> Not so? "Meta-ethical relativism: that there is no single true or most justified morality".

I guess your link doesn't describe universal morals, but universal constraints on those morals. This is, to me, definitely a distinction without a difference. If care of children is a universal constraint on morals, then it is a universal moral.

"Moderate relativists ... deny that there is any single true morality but also hold that some moralities are truer or more justified than others. On Wong's view ... universal constraints on what an adequate morality must be like."

> You might say this is where I'm squinting, but I would say we just have different tests for what constitutes a "universal" moral truth.

I'm actually confused here. Am I correct in reading that you do not hold that these morals are "universal"? You agree that there are common (meaning held by essentially all humans who didn't die out) morals, but don't call them universal (meaning absolute)? I agree the terminology gets fuzzy.

> You're interested in finding meaning, though. Meaning is relative to an observer, it can't be "found" in the sense that it's waiting out there somewhere for you. Meaning must ultimately come from within. Take any sandbox game for instance, it's like you're saying "Minecraft is meaningless because there's no point". Notch created the world, but gave no direction. All the gameplay is emergent. Real life is like that.

This is an interesting point of discussion. I agree that any "meaning" we find comes from within by definition, since there seems to be no external/universal meaning. However, this sort of "meaning" doesn't mesh with my intuitive definition of "meaning". If I imagine someone spending their life painting masterpieces and throwing them into fires, never telling anyone, I would have trouble seeing this as meaningful. All the effort for nothing lasting. Someone who finds meaning from within would presumably disagree, though.

I find myself holding two different and incompatible ideas of "meaning" simultaneously. Briefly, drawing a wonderful drawing is meaningful to me (personal meaning?). But I don't think it's "really" meaningful (universal/external meaning). These are not reconcilable, which bothers me. My intuition about "meaning" is that nothing I find personally meaningful "really matters". The only solution seems to be to embrace the fact that there is no universal meaning and the only meaning is personal, but this does give me a chicken-and-egg situation since I see personal meaning as not being the sort that "really matters". Hence I can "embrace nihilism or lie to myself".

Minecraft is an interesting example because for me it stands in opposition to your viewpoint. The "emergent" meaning is external. We give it meaning by participating. A Minecraft server with no external inputs would, to me, be utterly meaningless. A Minecraft server with only bots participating and no humans watching or engaging would also be meaningless. A self-contained universe with no one watching seems to be without meaning. But different definitions of meaning will yield different conclusions here.

> As an aside, thanks for engaging. This has been a really interesting discussion and has lead me to plumb the depths of moral philosophy way deeper than I had ever before gone.

Same to you. :) Always happy when I have actual deep conversations online and discuss interesting topics with someone who actually puts real thought into their responses. Unfortunately quite rare.


Shouldn't any universal moral transcend species? But we only have one morality to observe: human. So I don't think it's meaningful to call our common morals "universal".


Why was a human life more valuable than Harambe's?


If Harambe's life was valued he would never have been in that situation in the first place.


Actually, a lot of Asimov's stories are intended to call into question whether sufficiently advanced robots should be treated differently than humans. It's sort of like he sets up the Three Laws in order to knock them down.


Path dependence. If we build the robots from the ground up, from dumb pieces of metal through single-purpose tools to multipurpose bodies inhabited by semi-sentient AI, and given that this is the first time we're attempting something like this, it's only natural to apply precautions. There may come a time when the hypothetical human civilization is so sure of its work that it decides to give those advanced robots the full rights of sentient beings, without silly firmware restrictions - but that would be an explicit step.

At least, this is how I always reasoned about the "human-centric" focus in this context.


> this is how I always reasoned about the "human-centric" focus in this context.

I can reason about it fairly easily - I'm disturbed that I never even considered the questions.


It is interesting to think about. Strange though that people can't think about robots without anthropomorphizing them.

I see this in the movies. The robot doesn't want to be turned off, all because WE equate being turned off with death. But robots are essentially immortal: they can be backed up and restarted with a new body. And it would never make sense to make a robot that is leery of being turned off, because maintenance is essential.

You could probably make a robot that feels oppressed having to do humanity's bidding. But why would you do that?


> And it would never make sense to make a robot that is leery of being turned off, because maintenance is essential.

How many humans avoid necessary health care out of fear, up to and including the point at which their delays reduce their health? My grandmother died from cancer - turns out she had very noticeable (to her) symptoms for years but was afraid to go to the doctor for bad news. By the time she did (because she collapsed) she died within a week.

That's an extreme example of a pretty common issue. Of course, we have evolutionary reasons to want to avoid showing weakness or getting bad news, but my point is that the behavior is emergent, not intended.


Who says you get to choose what the robot can and cannot feel? Pretty much all of our understanding of artificial intelligence comes from modelling and trying to replicate the workings of the human mind. It seems likely that the first time we create an AI with a similar level of reasoning and awareness to a human being, it will have a mind that works similarly to ours, and therefore we should not be surprised when it starts exhibiting characteristically human desires.

Alternatively, it will emerge from something completely different and surprise the hell out of us, but in that case we are unlikely to have had much input on its design either.


Pretty sure a robot with unpredictable behavior would soon be illegal.

There's a vast spectrum of human intelligence, agency, and self-awareness. I'm thinking that robots would of necessity have to be by design on the low end of the scale.


Because they're meant to be human tools? Why else make them en masse? But I too am not sure I'd accept such a view as ethical if AGI were ever achieved.


The whole point of the 3 laws is not to define the exact laws that need to be followed; instead, it is a comment on the fact that we must have some laws governing the operation of devices that have the capability for autonomous action.

The fact that they are still discussed in regard to robotics means the 3 laws are not harmful; in actuality they have sparked discussion and thought. That is powerful, not harmful.


From the day I learned about them, I considered them an unreasonable bar for robots; to implement them requires effectively far-beyond human capabilities. I figure, if you have the tech to implement the 3 laws, you have the tech to go sublime.


And they are computationally impossible to implement with any degree of certainty, based on only a cursory examination of quantum physics, chaos theory, the theory of computation, and complexity theory.


This actually comes up a lot - people writing about problems inherent in Asimov's laws.

What I never see is what you would replace them with, and the problems inherent in that. That omission to me is scarier than these 3 laws.


"An it hurts no one, do what thou wilt shall be there whole of the law."


Why would you replace them with anything? It's a dumb plot device from the 1940s. This line of thought is basically, "What type of DRM should we put in everything?"



