Has anyone actually taken them as a serious suggestion in AI ethics?
Quick edit: Yikes, that was 15 years ago... I have co-workers who probably haven't heard of the movie...
Thus, we need more Asimov in TV.
(Hm, apparently BBC had made adaptations of The Caves of Steel and The Naked Sun in the sixties... https://en.wikipedia.org/wiki/R._Daneel_Olivaw#Appearances_i...)
I wrote this after attending a discussion at Oxford University on AI Ethics.
Pretty sure he still does.
The Three Laws of Robotics are not harmful because _they can't actually be implemented_.
Asimov never details how the three laws are baked into positronic brains in so fundamental a way that they can't be disabled without destroying the robot. Heck, we don't even know how to build a positronic brain -- because it's a fictional technology!
So even if you _wanted_ to build some sort of AI that is guided by the Three Laws, you couldn't.
You might be able to build AI that is guided by certain ethical principles, but right now, in the current state of technology, it's all rules-based stuff -- like a self-driving car with a rule about slowing down to avoid an impact that could injure someone.
Nobody knows how to give AI an ethical intuition, much less tweak that intuition. (I'm an objectivist, so I actually prefer saying, "Nobody knows how to give AI the ability to perceive metaphysical principles.")
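The "rules-based stuff" point can be made concrete with a toy sketch. This is a hypothetical illustration of the self-driving-car example above, not any real autonomous-vehicle system; names like `Obstacle`, `braking_distance`, and the deceleration constant are all made up for the example.

```python
# Toy sketch of a hard-coded, rules-based safety constraint.
# Hypothetical names and numbers throughout -- purely illustrative.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance ahead of the vehicle, in meters
    is_person: bool

def braking_distance(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to stop from speed_mps at constant deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)

def should_brake(speed_mps: float, obstacles: list[Obstacle]) -> bool:
    """Brake if any person is within stopping distance plus a fixed margin.
    There is no ethical 'intuition' anywhere in here -- just an explicit
    condition a programmer wrote down, which is the point."""
    margin_m = 5.0
    needed = braking_distance(speed_mps) + margin_m
    return any(o.is_person and o.distance_m <= needed for o in obstacles)
```

The rule only covers the cases its author anticipated; everything outside the `if` condition is invisible to it, which is exactly the gap between rule-following and anything resembling ethical perception.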
This whole post is like saying that Star Trek's Warp Drive is harmful because we shouldn't be trying to travel faster than light.
While I like AI policy discussions a lot, this article does not help unfortunately. :-/
I don't get how people argue about the plausibility of future par-human AIs and say things like "we don't even know how to build a positronic brain" or "right now, in the current state of technology".
It's the same kind of incomprehensible confusion that leads one to tell the Wright brothers that heavier-than-air flight is impossible; just as birds exist to conclusively disprove that, so do HUMANS disprove the impossibility of human-level intelligence.
Maybe the three laws really are as physically illegal as a Warp Drive, though I would be remarkably surprised to hear it, but you cannot reasonably argue that position by claiming brains are impossible.
None of this should be taken as an endorsement of the three laws, which are clearly silly.
To catch people up: I, Robot is a collection of short stories that largely follow either (a) robopsychologist Susan Calvin or (b) robotics engineers Powell and Donovan as they attempt to diagnose anomalous robot behavior by examining the 3 laws of robotics, highlighting the logical flaws and traps of these simple laws. A secondary theme is the use of robots as a mirror or lens through which particular human behaviors can be blown up and examined, largely through the absence or magnification of particular traits.
In universe, the 3 laws are considered to be practical engineering safeguards (and eventually, over hundreds of years, a fundamental building block of the positronic brain's function) -- even as they are shown to the reader to be the origin of many conflicts.
The very first link in my post points to a discussion of the lecture I attended. There the 3 laws were discussed and, I felt, some of the audience hadn't quite understood that they were a literary device.
But go along to any AI discussion group in person, or read the popular press, and I promise it won't be too long before you find a human citing them.
The author is making a joke using the "considered harmful" trope/meme in conjunction with the assertion that the word "harm" is ambiguous, thereby rendering the phrase "considered harmful" relatively meaningless.
Unfortunately, this joke seems to be lost on many of the commenters here. Technically minded folks seem to see the phrase "considered harmful" and proceed to lose their minds. Never mind that the phrase, in my experience, is almost exclusively used in jest. But based on people's reactions to it, "considered harmful" in a title might as well be flame bait. In fact, I seem to remember a serious article a while back called "'Considered Harmful' Considered Harmful" that should have put this whole thing to bed.
I mean, for story purposes it all makes sense, but I'm fascinated that I never raised these questions myself before. I accepted human divine right unquestioningly.
This "but why humans" reaction is bizarre to me every time I encounter it. We care about humans because we are humans. There's nothing deeper to uncover. We care about ourselves.
Humans do not care about the value of a robot "life", because we generally feel it has none.
For me, at least, you're coming at it from the other direction.
I'm not really asking "why" in the sense of "how did this come to be" - the answer to that is rather self-evident. I'm asking "why" in the sense of "is this objectively actually correct? Is this how I want things to be?" - questions which have no answer, but the effort of trying to find one can uncover plenty of deeper ideas.
> There is no meaningful answer to your question
Didn't I say that? Didn't I point out that the ATTEMPT to answer was valuable?
>> Is this how I want things to be?
> because all of existence is without meaning
Atheists don't have to be nihilists.
How does an atheist hold subjective morals and values without those morals and values essentially being arbitrary?
You seem to have a really big chicken-and-egg problem here... you want morality to be universal. The problem is, "ethics", "morality", and "values" are all human constructs. We created them, we define them, we discuss them ad nauseam. They are as varied as we are. I don't think that makes them "meaningless".
You say in different words that things are meaningful only because we believe they are. That’s fine if it helps you but it’s also circular and rather solipsistic. Your stance is also literally nihilistic: “a doctrine that denies any objective ground of truth and especially of moral truth”.
So, if caring only about Indians because you're Indian isn't reasonable but caring only about humans because you're human isn't reasonable, what's the relevant difference? That seems to me a question it's perfectly fair to ask. And some possible answers to the question don't make it obvious that the distinction between humans and hypothetical robots with minds that resemble ours enough (e.g.) for us to talk with them is one that justifies caring about humans and not caring about those robots.
The question for robots is whether we’ll ever identify with them strongly. I’m doubtful.
Hasn't Atheism been on the rise? I wonder how people will feel about this subject of life by the time AGI is achieved.
I am a relativist, but not a nihilist.
It’s not coincidental that these are discussed side by side in your book, nor that some other definitions explicitly call moral relativism nihilism.
Now, there are forms of moral relativism, namely descriptive relativism, that are not the same as nihilism. These simply acknowledge the fact that different people hold different moral views, speaking to the reality of the world rather than the plausibility or implausibility of objective morals. (I would also personally hesitate to call descriptive moral relativism a philosophy since it doesn’t attempt to speak to any fundamental truth. Acknowledging that people hold different morals seems less of a philosophical viewpoint and more of an ability to make basic human observation.)
But you don’t seem to be a descriptive moral relativist. You’ve made absolutist statements about the relativity of morals, leaving me to conclude that you’re making a distinction without a difference because you don’t want to call yourself a nihilist.
If you can provide some practical difference between absolute moral relativism and moral nihilism, please do. I’ve been unable to find anything that draws a meaningful difference between the two except that the absolute moral relativist says you can evaluate moral statements in the context of an arbitrary moral framework, and the moral nihilist says, sure you can, but it’s arbitrary so it doesn’t actually mean anything. Everything else seems to involve painting nihilists as morons incapable of recognizing the reality of cultural norms (“Moral nihilists say you can eat babies for breakfast!”) in a weak attempt to draw a meaningless distinction because for some reason “nihilist” is seen as a worse label than “moral relativist”.
To the extent that moral relativism is a spectrum, moral nihilism simply seems to be the stronger end. The weaker end is just observation and the middle is a lack of willingness to firmly accept or deny universal morality.
Going off this article, I'd call myself a "moderate morality-specific meta-ethical relativist".
The article also says that "some critics also fear that relativism can slide into nihilism".
It's true that "radical meta-ethical relativists" would have the same (or at least, as you say, practically indistinguishable) views on morality as a moral nihilist.
My bigger issue has to do with "schools of thought", or the broader labels being used here. "Nihilism" has many other forms, all of which are pretty extreme. So essentially, the most radical end of relativism has significant overlaps with the least radical end of nihilism. (Specifically w.r.t. morality)
Earlier you said "atheism implies nihilism and the extent an atheist disagrees is decided entirely by how convincingly they can lie to themselves". To use the overlap between one end of relativism and the other end of nihilism to then equate the two is taking it a big step too far.
It seems to me like you're deliberately blurring the distinction between these schools of thought in order to take a pot shot at atheism.
This is somewhat fair, in that there is a supposed spectrum that I equated to one extreme. However, I think one end of relativism is sort of uninteresting because it's purely observational, and I think the middle ground is wishy-washy. Moderate meta-ethical relativism (as described in your link) postulates that there are some universal rules for morality. This is literally not relativist and I'm not convinced this can be a logically-consistent belief. Morals are relative, except when they're not?
It feels to me that moderate moral relativism stems from the "fear that relativism can slide into nihilism". This is not a logical approach, but an emotional one. If you start with the assumption that moral nihilism is bad, but you also don't believe in universal moral truths, you're left grasping for a middle ground where universal moral truths don't exist but also somehow they do.
This is essentially the same thing I was referring to when I said "the extent an atheist disagrees is decided entirely by how convincingly they can lie to themselves". If you don't like the logical outcome of your beliefs, you'll find yourself searching for a "moderate" version that allows you to avoid confronting the reality of your views. Don't believe in universal morals but also can't stand the idea of no universal morals? Then define them as relative except when you squint just right.
If you actually hold the view that there are common universal morals that can be pulled from the commonalities of all disparate human morals, then you're a secret moral universalist. There's nothing actually relativist about "rules A, B, and C are universal, and then humans also add arbitrary rules D-Z depending on their culture". That's just squinting until "not universal" and "universal" look like the same thing and hiding it behind fuzzy language.
This is much like a theologian who attempts to reconcile the beliefs that God is omnipotent, omniscient, and omnibenevolent with the reality of human suffering by stating that we simply can't understand God's love and that's why he allows suffering. On the surface that might look like a logically consistent response, but it's a redefinition of "love" to mean something other than what humans would recognize as love. We "cannot understand" God's love, so God's love is by definition not human love, and so God's omnibenevolence is not human omnibenevolence. And what we've done is squint really hard at "not omnibenevolent" until it looks like "omnibenevolent" while ignoring the false equivalency between the two different loves being discussed.
> It seems to me like you're deliberately blurring the distinction between these schools of thought in order to take a pot shot at atheism.
You might assume so but you'd be wrong. My beliefs would probably be best categorized as "agnostic atheist". (If feeling clever I might call myself apagnostic, meaning "don't know and don't care".) I am a reluctant nihilist in that I don't see a logical way to believe that morals (or really anything else) have meaning in the face of an arbitrary, uncaring, and accidental universe, but all my instinct and intuition says otherwise.
I was not being facetious when I said I'd love to see a convincing argument that atheists don't have to be nihilists. I see that as the only logical outcome, but that doesn't mean I like it.
>Moderate meta-ethical relativism (as described in your link) postulates that there are some universal rules for morality.
Not so? "Meta-ethical relativism: that there is no single true or most justified morality". I am skeptical "about the meaningfulness of talking about truth defined independently of the theories and justificatory practices of particular communities of discourse". I'll elaborate and tie this in to your next point...
>If you actually hold the view that there are common universal morals that can be pulled from the commonalities of all disparate human morals, then you're a secret moral universalist.
I like the evolutionary idea ( https://en.wikipedia.org/wiki/Moral_relativism#Scientific ). I.e. the reason there are certain common morals (i.e. don't kill innocents, raise and protect children, etc.) is that these offer a strong advantage over other groups that do not hold those morals. So these morals end up in nearly every single society, and could be easily mistaken for "universal" moral truths. From my original article: "a common feature of adequate moralities is the specification of duties to care and educate the young, a necessity given the prolonged state of dependency of human offspring and the fact that they require a good deal of teaching to play their roles in social cooperation. It may also be a common feature of adequate moralities to require of the young reciprocal duties to honour and respect those who bring them up, and this may arise partly from role that such reciprocity plays in ensuring that those who are charged with caring for the young have sufficient motivation to do so". Using the language of that quote, and to twist your example: rules A, B, and C are common elements of an adequate moral system, and then you can optionally add arbitrary rules D-Z depending on your culture and preferences.
Consider infanticide in the animal kingdom (https://en.wikipedia.org/wiki/Infanticide_(zoology) ): this would be "immoral" for humans, but it is standard practice for some species given the right circumstances. So for humans there may be near-universal morals, but they only apply to us, and only because they "evolved" that way. A human tribe that had a similar practice of infanticide might be more likely to die off (and take its "moral truths" with it) because humans don't reach maturity fast enough to support a generational purge of all infants.
You might say this is where I'm squinting, but I would say we just have different tests for what constitutes a "universal" moral truth. Perhaps just as you pointed out the case of there being a "false equivalency between the two different loves being discussed" I would suggest we're ignoring the false equivalency between the two different universals being discussed.
The "universal" you think I hold is an evolutionary (and therefore arbitrary, uncaring, and accidental) one that is specific to our species. That, by definition, is relative, not absolute. There's nothing timeless or intrinsic to such a "universal morality". You might say this means I'm just squinting hard enough, but I disagree. The big supporters of universal morality are theists who use their God(s) as external observers to pass moral judgement. Because without some external observer, what makes any moral truth "universal"? I.e. where would the "universal morals" come from-- who or what would set the standard? If you agree that humans collectively agree on the standard, then for my definition of "universal" they would not be universal.
You're interested in finding meaning, though. Meaning is relative to an observer, it can't be "found" in the sense that it's waiting out there somewhere for you. Meaning must ultimately come from within. Take any sandbox game for instance, it's like you're saying "Minecraft is meaningless because there's no point". Notch created the world, but gave no direction. All the gameplay is emergent. Real life is like that.
As an aside, thanks for engaging. This has been a really interesting discussion and has led me to plumb the depths of moral philosophy way deeper than I had ever before gone.
I guess your link doesn't describe universal morals, but universal constraints on those morals. This is, to me, definitely a distinction without a difference. If care of children is a universal constraint on morals, then it is a universal moral.
"Moderate relativists ... deny that there is any single true morality but also hold that some moralities are truer or more justified than others. On Wong's view ... universal constraints on what an adequate morality must be like."
> You might say this is where I'm squinting, but I would say we just have different tests for what constitutes a "universal" moral truth.
I'm actually confused here. Am I correct in reading that you do not hold that these morals are "universal"? You agree that there are common (meaning held by essentially all humans who didn't die out) morals, but don't call them universal (meaning absolute)? I agree the terminology gets fuzzy.
> You're interested in finding meaning, though. Meaning is relative to an observer, it can't be "found" in the sense that it's waiting out there somewhere for you. Meaning must ultimately come from within. Take any sandbox game for instance, it's like you're saying "Minecraft is meaningless because there's no point". Notch created the world, but gave no direction. All the gameplay is emergent. Real life is like that.
This is an interesting point of discussion. I agree that any "meaning" we find comes from within by definition, since there seems to be no external/universal meaning. However, this sort of "meaning" doesn't mesh with my intuitive definition of "meaning". If I imagine someone spending their life painting masterpieces and throwing them into fires, never telling anyone, I would have trouble seeing this as meaningful. All the effort for nothing lasting. Someone who finds meaning from within would presumably disagree, though.
I find myself holding two different and incompatible ideas of "meaning" simultaneously. Briefly, drawing a wonderful drawing is meaningful to me (personal meaning?). But I don't think it's "really" meaningful (universal/external meaning). These are not reconcilable, which bothers me. My intuition about "meaning" is that nothing I find personally meaningful "really matters". The only solution seems to be to embrace the fact that there is no universal meaning and the only meaning is personal, but this does give me a chicken-and-egg situation since I see personal meaning as not being the sort that "really matters". Hence I can "embrace nihilism or lie to myself".
Minecraft is an interesting example because for me it stands in opposition to your viewpoint. The "emergent" meaning is external. We give it meaning by participating. A Minecraft server with no external inputs would, to me, be utterly meaningless. A Minecraft server with only bots participating and no humans watching or engaging would also be meaningless. A self-contained universe with no one watching seems to be without meaning. But different definitions of meaning will yield different conclusions here.
> As an aside, thanks for engaging. This has been a really interesting discussion and has led me to plumb the depths of moral philosophy way deeper than I had ever before gone.
Same to you. :) Always happy when I have actual deep conversations online and discuss interesting topics with someone who actually puts real thought into their responses. Unfortunately quite rare.
At least, this is how I always reasoned about the "human-centric" focus in this context.
I can reason it fairly easily - I'm disturbed that I never even considered the questions
I see this in the movies. The robot doesn't want to be turned off, all because WE equate being turned off with death. But robots are essentially immortal, they can be backed up and restarted with a new body. And it would never make sense to make a robot that is leery of being turned off, because maintenance is essential.
You could probably make a robot that feels oppressed having to do humanity's bidding. But why would you do that?
How many humans avoid necessary health care out of fear, up to and including the point at which their delays reduce their health? My grandmother died from cancer - turns out she had very noticeable (to her) symptoms for years but was afraid to go to the doctor for bad news. By the time she did (because she collapsed) she died within a week.
That's an extreme example of a pretty common issue. Of course, we have evolutionary reasons to want to avoid showing weakness or getting bad news, but my point is that the behavior is emergent, not intended.
Alternatively, it will emerge from something completely different and surprise the hell out of us, but in that case we are unlikely to have had much input on its design either.
There's a vast spectrum of human intelligence, agency, and self-awareness. I'm thinking that robots would of necessity have to be by design on the low end of the scale.
The fact that the 3 laws are still discussed in regard to robotics means they are not harmful; in actuality, they have sparked discussion and thought. That is powerful, not harmful.
What I never see is what you would replace them with, and the problems inherent in that. That omission to me is scarier than these 3 laws.