
When her best friend died, she rebuilt him using artificial intelligence - Futurebot
http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot
======
drdeca
Why are people acting like a simple chatbot is somehow a recreation of a
person?

It isn't a person, it's a chatbot.

There is even a company saying that they will try to revive people with a
combination of chatlogs and brain scans or something.

Chatlogs aren't going to bridge the gap between what you can get with brain
scans and what you need to do a full emulation.

Even if humanity does succeed in doing complete mind simulations, humanity
will not have thereby solved death. It will have, at best, delayed it for a
very very long time.

And then what? ;)

~~~
narrator
Isn't this Ray Kurzweil's brain uploading? You make a computer that can
accurately simulate a person and then, according to Kurzweil, the person is no
longer dead and is completely equivalent to the previously alive human being.

I don't buy it of course, but isn't that the theory?

~~~
feral
In fairness to Kurzweil, he isn't talking about a simple chatbot. That's a
pretty big difference.

But beyond that, there's an argument that there's no clear distinction between
a simulation of a person and the person.

If you want to get into this, start out by telling me the difference between
A) a new sorting algorithm you wrote vs B) a simulation of the new sorting
algorithm. In particular, why B isn't also running the same algorithm.
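
To make the thought experiment concrete, here's a rough sketch (the code and names are mine, just for illustration): a bubble sort run directly, and a "simulation" of it, i.e. an interpreter stepping a state machine whose transition rules encode the same algorithm. Both perform the identical sequence of comparisons, which is the point - it's hard to say B isn't "really" running the algorithm.

```python
def bubble_sort(xs, trace):
    """Run bubble sort directly, recording every comparison in trace."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            trace.append((xs[j], xs[j + 1]))
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def simulate_bubble_sort(xs, trace):
    """A 'simulation': an interpreter stepping a state machine whose
    transition rules encode the same algorithm."""
    state = {"xs": list(xs), "i": 0, "j": 0}
    n = len(xs)
    while state["i"] < n:
        if state["j"] < n - 1 - state["i"]:
            j = state["j"]
            a, b = state["xs"][j], state["xs"][j + 1]
            trace.append((a, b))
            if a > b:
                state["xs"][j], state["xs"][j + 1] = b, a
            state["j"] += 1
        else:
            state["i"] += 1
            state["j"] = 0
    return state["xs"]

data = [5, 1, 4, 2, 8]
t1, t2 = [], []
assert bubble_sort(data, t1) == simulate_bubble_sort(data, t2)
assert t1 == t2  # identical sequence of comparisons, step for step
```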

If that's not an objection, there's the question of being "completely
equivalent to the previously alive human being" - pragmatically, it seems
impossible to me to fully recreate a 'completely equivalent' human from their
writing/output; but, if you're willing to accept 'so similar you can't tell',
the idea is no longer as clearly ridiculous as it first seemed to me:

How big is the space of potential human minds?

Let's say you've got an incredibly powerful intelligence: it's got someone's
DNA and their writings/output. And let's say it has a lot of data on humans,
and has deduced a pretty good model of how their minds form, given their DNA
and experiences. How much can it now deduce about how a deceased person's DNA
was expressed, and the person they became, given their experiences? How small
is the space of minds it could narrow in on, from that data? Can it find a
small enough space that another human can't tell the difference?

How hard is it to model a human mind? I can make pretty good predictions about
what people I know well will say in response to many situations.

But I don't know; I don't think we know enough about minds to say one way or
the other.

~~~
narrator
The problem with the analogy to a sorting algorithm is that a sorting
algorithm is obviously computable. What if being a person involves something
like protein folding, with O(2^n) complexity, that the body does trillions of
times a second but that is infeasible for any size of computer at large n?

~~~
feral
That seems a very specific/narrow point, not a general objection.

But anyway: I guess it's possible human biological hardware does something
difficult to simulate on a computer (at least one similar to our current
architectures).

Given a super advanced intelligence, though, I doubt adding hardware that does
protein folding (or the protein folding analogue operation) fast - in biology
if necessary - is going to be a major obstacle.

And that's if it can't come up with an algorithm that does an equivalent
operation, that runs well on whatever hardware it already has.

~~~
narrator
If we could compute everything in a living system in polynomial time or better
then we could design drugs inside of a computer and that technology would be
worth billions!

Seriously, until we are doing drug design with computers, we don't really
understand biological systems well enough to simulate them inside a computer.
Drugs interact with folded-up proteins all the time, so this requires
simulating protein folding, and that algorithm has exponential time
complexity - basically intractable for anything but extremely trivial
problems.

------
evolve2k
"An uncomfortable truth suggested by the Roman bot is that many of our flesh-
and-blood relationships now exist primarily as exchanges of texts, which are
becoming increasingly easy to mimic"

This. I was reflecting with a friend today how with modern social networks
there is this strange feeling of being connected yet somehow disconnected. I
think the above paragraph captures this same essence.

We discussed that maybe digital technology is akin to medicine: it's all about
the dosage. Chat to the bot a little and it may be healing; chat all the time
and it's probably not going to be good for you.

How do we navigate these tools that can connect us with the world and yet
often leave us feeling so alone at the same time?

~~~
rublev
I think our psyche is extremely delicate, and pretending we can mentally
manage and sort the constant influx of information is fooling ourselves -
because by exposing yourself to so much information daily, you're altering the
way you think and rewiring your mind.

I'd like to think that I'm above being influenced by subliminal propaganda in
media (especially television), but I'm not. I've caught myself a few times
where I held a strong opinion about something I knew nothing about and I was
able to trace it back to what originally influenced me. When pressed on
certain opinions I couldn't say anything concrete.

Life has been a lot better since I deleted all my social media accounts (900+
'friends' on FB). I really only keep in touch with about 20 of those
connections, and I've made it a point that texting is only for details, not
chatting. Since talking on the phone and making efforts to see people in real
life, I've been a lot happier. My world feels less overloaded, there are no
strings connecting my life to social media, and the control is back in my
hands. I can feel how I want to feel about things without carefully summing up
and absorbing how others feel about my experiences. Social media and texting
and all that shit is just another, even more shaved-off abstraction of human
interaction (boiling interaction down to characters on a screen). I draw the
line at the human voice; nothing below that, save for carefully thought-out
and written letters, is 'human' enough for me to really derive any lasting
pleasure from.

It's depressing for a very long time after deleting social media accounts; it
feels like your entire social rug has been suddenly swept right out from
underneath you. I didn't back any of it up or save any of it. And you know
what? I don't remember _any_ of it. I don't remember the conversations because
there were hundreds a day. I don't remember any of the comments or banter left
on photos, or the photos themselves. It all blends into one. All of it was
meaningless; I didn't grow closer to any of those people. I discovered more
about them, sure.

Cutting out popular media and social media (or at least trying my hardest to)
has been mentally freeing. I get to ask people 'Oh yeah? Why don't you tell me
about it?' when they say 'Hey, have you heard of <current event>?'

I have no idea what's going on anywhere and I love it.

------
lancefisher
Caprica, the prequel to Battlestar Galactica, uses this idea as a major plot
element.

[https://en.m.wikipedia.org/wiki/Caprica_(TV_series)](https://en.m.wikipedia.org/wiki/Caprica_\(TV_series\))

------
Rhapso
This has been one of the reasons life logging has been interesting to me.

It might be a while before we can "upload" your brain, but we can build a
predictive model of your behavior and leave an interactive "autobiography"
much sooner.

~~~
space_ghost
The British sci-fi sitcom Red Dwarf had a character that was just such an
interactive recreation.

~~~
pmoriarty
Philip K. Dick, in the 1950s and 60s, wrote about simulations of famous people
(like Einstein or Lincoln), of ordinary people's deceased relatives, and of
other humans used for pleasure or entertainment. He also wrote of living
people, famous or not, who turned out to be simulations.

------
butz
A Black Mirror episode in 2013 ("Be Right Back") had the same plot.

~~~
topbru
Such an incredible show, new episodes later this month on Netflix, can't wait.

~~~
dudouble
Thanks for this. I loved this show and had no idea. Can't wait either!

------
beeskneecaps
I can definitely sympathize here. There are so many people I wish I could have
one last conversation with, and this would maybe help. I, personally, would
_really_ not want to be reanimated and potentially misrepresented by Markov
chains and some early deep learning system.

------
neoteo
The Max Headroom series 2 episode "Deities" was about a Vu Age church that
sold A.I. recreations of loved ones as salvation.

~~~
tudorw
This is well worth a watch. Prescient.

------
Esau
An interesting article, although I disagree with the use of the term
"artificial intelligence" to describe what was created.

In some ways, I can sympathize with Ms. Kuyda. Both my father and grandmother
died in 2013, and even though they are gone, I still get birthday alerts for
them on my phone. I don't have the heart to remove them.

------
billconan
I really want to know the model behind this chatbot.

~~~
kelvich
I bet the whole source code of that engine can be found in the Basic Usage
section of [https://github.com/jsvine/markovify](https://github.com/jsvine/markovify)
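
If it is markovify-style, the core really is tiny: a word-level Markov chain built from chat logs. A minimal pure-Python sketch of that idea (the function names and toy corpus are mine, not the actual engine):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def make_sentence(chain, start, max_words=12, seed=None):
    """Random-walk the chain from a start word (markovify's make_sentence
    does roughly this, plus sentence-boundary bookkeeping)."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        nexts = chain.get(out[-1])
        if not nexts:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(nexts))
    return " ".join(out)

# Toy "chat log" corpus; a real bot would train on thousands of messages.
logs = "i miss you . i miss the old days . you know i do"
chain = build_chain(logs)
print(make_sentence(chain, "i", seed=0))
```

Words that occur more often as a successor get picked proportionally more often, which is what makes the output vaguely sound like the source author.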

~~~
billconan
I wasn't able to find proof. Are you sure?

~~~
kelvich
That's just a guess.

------
fixxer
I find this unseemly.

Grief is a vital part of the healing process. This strikes me as a part of a
goal to avoid/numb it.

~~~
kuyda
It's exactly the opposite. What happens today is that there is no ritual,
nothing that allows us to remember and to face it. We try to avoid our
feelings by shoving them down as deep as possible, as fast as possible. It
doesn't help. We all need a ritual that works in the digital age to process
grief - this is like sitting shiva.

~~~
fixxer
I disagree with your statement to my core. I think we're going to have to
accept philosophical differences on this one.

Sorry for your loss.

------
zeristor
Wasn't this the core of Stanisław Lem's Solaris?

Albeit reconstructing the personality of a loved one from someone else's
memories.

------
RandomName2020
It ultimately boils down to whether you accept John Searle's Chinese room
argument or not. (BTW, his claim is not that conscious machines are
impossible - we are such machines - but that they cannot be implemented as
digital symbol manipulators; we first need to figure out what exactly makes
biological machines conscious.)

