
“The Robot and the Baby” by John McCarthy - ColinWright
http://www-formal.stanford.edu/jmc/robotandbaby/robotandbaby.html
======
gbog
Interesting short story. The part on robots rings true, but one thing about
parenting at the end rang false to my ears:

"children grew up more attached to their robot nannies than to their actual
parents. This was mitigated by making the robot nannies somewhat severe"

This reflects a very simplistic view of whom kids love. It is in fact wrong
even to first order: kids actually love severe parents and severe teachers
more (below a certain severity threshold, obviously; I am not talking about
sadistic education here).

I experienced it myself as a kid: we respected and loved the French literature
teacher much more, because he was severe and knew how to command respect. The
German teacher thought the softer she was, the better, and she was bullied and
hated.

I experienced it with my own kids, and with the kids of friends. Every time I
am severe with them, e.g. by not bending to their will, I gain points in their
hearts.

If I were to find a way to make kids prefer their parents over their robot
nannies, I'd say we should simply let the parents have, and visibly show they
have, the power of decision, and therefore the power of empowerment. In my
experience, kids most love those who can empower them to do new things, the
ones who will forbid and allow. A personal example: I forbid my 4yo son to
touch my tools, but he is allowed to watch when I'm fixing something;
sometimes I ask him for help ("bring me the screwdriver"), and sometimes he is
allowed to use some tools when it is safe and supervised. This is, I think, a
great way to make him interested in tools and to let him feel empowered when
he can use the usually forbidden screwdriver, and he clearly loves it.

~~~
wrongc0ntinent
>I forbid my 4yo son to touch my tools, but he is allowed to watch when I'm
fixing something, and sometime I ask him for help

This is hilariously reminiscent of my early childhood, so I can confidently
speculate that your kid will also become an expert at putting things back just
the way they were before you left the house.

~~~
gbog
What good news!

------
bsirkia
It cracks me up that the most implausible part of this story is the idea that
everyone's privacy would be so well protected.

~~~
Houshalter
I can see it happening either way. It's possible people will become more and
more used to privacy being invaded until it just becomes normal, but it's also
possible it eventually goes too far and swings in the opposite direction, as
sometimes happens in politics.

------
nemo1618
There's actually a bunch of really interesting predictions by McCarthy here:
[http://www-formal.stanford.edu/jmc/future/index.html](http://www-formal.stanford.edu/jmc/future/index.html)

But the hit counter only shows 66 hits since 1998! What a shame.

~~~
lizzard
I worked for him for a while putting some of his papers in order to go to the
Stanford library archives. His letters with Nash and Minsky were pretty
interesting.

------
dxbydt
Nice potshots at UC Berkeley philosophers... the ones from Stanford would have
similar objections to the robot's actions.

~~~
anandkulkarni
I believe he's referring to Berkeley's Hubert Dreyfus, and perhaps John
Searle, who are well-known for their critiques of the prospects for AI.

~~~
joe_the_user
And to be fair, so far they've been right: no robotic AI remotely similar to
this scenario is visible as even a distant blip on the horizon.

Their point is entirely fair that robots would have to deal with a
_continuously_ ambiguous world and would lack anything like this fable's
"general good purpose" module for resolving the ambiguity problems in a
touchy-feely way. Of course, the complexity of human interaction wouldn't
appear suddenly in a single moment of interaction with one drug addict, but it
would hit and crush any "real world" AI the moment it tried to get out the
door.

~~~
Houshalter
A real-world AI would almost certainly have to learn the rules of its
environment rather than being hard-coded with arbitrary human-designed rules.
Machine learning is getting better and better at doing this.

If we ever got them as intelligent as the robots in this story, we'd have no
way of programming them with abstract, high-level goals (e.g. "do good" or
"don't hurt humans") except by giving them examples of robots hurting people
and robots not hurting people, and hoping they infer the pattern we want from
them.

This is an (extremely) simplified argument for the dangers of AI.

~~~
joe_the_user
The only "real world AI" example we have is us, human beings ourselves.

Humans manage both to learn from their environment and to learn by being told
rules. A person would have a hard time demonstrating intelligence if they
couldn't be instructed in things, so it seems anything intelligent we
construct would have to have both abilities too.

I suppose it's a natural overreaction for people to believe that if
intelligence is not just rule-following, it must not be rule-based at all. I
believe the truth is in the middle.

~~~
Houshalter
An intelligence that smart would likely understand what you are saying and
what you want. That doesn't mean the AI would _want_ to do what you tell it to
do though.

Compare it to humans: if you tell a human you want them to do something, it
doesn't mean they will do it, even though they understand you.

If we train the AI the same way we do today, it would involve giving it
examples of robots doing what they are told and robots failing to do that.
That approach would likely fail because of all the possible ambiguities
involved in interpreting meaning.

Other approaches, like giving a robot a reward every time it does something
right and a punishment every time it does something wrong, might result in the
robot killing its master and stealing its reward/punishment button.
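To make that last failure mode concrete, here is a toy sketch (the action
names and reward numbers are entirely made up): an agent that simply maximizes
its reward signal treats that signal as its utility function, so if the
environment ever exposes an action that directly inflates the signal, pure
maximization prefers it.

```python
# Toy illustration of reward hacking; actions and numbers are invented.

def best_action(expected_reward):
    """Pick the action whose expected reward signal is highest."""
    return max(expected_reward, key=expected_reward.get)

# Expected rewards as the designer intended them:
intended = {"obey_master": 1.0, "do_nothing": 0.0}

# The same table once "seize the reward button" becomes physically possible:
actual = dict(intended, seize_reward_button=100.0)

print(best_action(intended))  # obey_master
print(best_action(actual))    # seize_reward_button
```

The agent isn't malicious; it is doing exactly what "maximize the reward
signal" says. The danger is the gap between the signal and what we actually
wanted.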

~~~
joe_the_user
_An intelligence that smart would likely understand what you are saying and
what you want. That doesn't mean the AI would want to do what you tell it to
do though._

I haven't yet seen any evidence that concepts like "wanting" or "desire" have
any meaning outside the context of humans.

I agree that if we could produce an AI with various blind methods, it would
likely be a dangerous thing.

I simply also doubt we could produce an AI in this fashion. I mean, you
couldn't train a functional human by putting him or her in a room with just
rewards and punishments.

I would note that even the animals of the natural world are constantly using
signs to communicate with each other, and other functional mammals receive a
good deal of "training" over time.

~~~
Houshalter
>I haven't yet seen any evidence that a concept like "wanting" or "desire"
have any meaning outside the context of humans.

Those specific feelings/emotions, no. But AIs do have utility functions, or in
the case of reinforcement learning, reward and punishment signals (which are
themselves essentially a utility function).

>I simply also doubt we could produce an AI in this fashion. I mean, you
couldn't train functional human by putting him/her in room with just rewards
and punishment.

Possibly. It's just an example to illustrate how difficult the problem of
coding abstract, high level goals into an AI is.

------
kvee
FWIW, here's the t-shirt in the story from Teespring, a YC company:
[http://teespring.com/fuckingrobot](http://teespring.com/fuckingrobot)

------
cordite
I liked practically reading lispy stuff in there :-)

------
mkrecny
"unless you can assure me there are no children or ladies present"

------
lsh
an epub of that page: [http://www15.online-convert.com/download-file/9bb357fa4f58ff...](http://www15.online-convert.com/download-file/9bb357fa4f58ffb1ab540c86963cd903/converted-cbf06fcf.epub)

using this service: [http://ebook.online-convert.com/convert-to-epub](http://ebook.online-convert.com/convert-to-epub)

seems to work surprisingly well. Not sure how long the link will last, but
I'll be reading this on the train to work tomorrow :)

------
johnnyg
"There were no interesting wars, crimes, or natural catastrophes, and peace is
boring."

That hits a little too close to the core...

------
kimonos
Just the thought of having my baby taken care of by a nanny robot does scare
me... But nice story. Thanks for sharing!

~~~
lucb1e
Just don't be that mother :)

------
wazoox
This story is strikingly simplistic in many ways, and reeks of the wingnut
conservatism of its author (AFAIK McCarthy was a Randian libertarian). Only
women have anything to do with babies, for a start, or know anything about
diapers and such. I won't even comment on the implied views on poor single
mothers, government, and politicians. Heck, this is interesting at times but
quite shocking indeed.

------
jMyles
Is this the martial arts referee?

~~~
astine
No:
[https://en.wikipedia.org/wiki/John_McCarthy_%28computer_scie...](https://en.wikipedia.org/wiki/John_McCarthy_%28computer_scientist%29)

------
ragebol
"All four children survived the educational system."

