
Death Is Optional: A Conversation with Yuval Noah Harari and Daniel Kahneman - sergeant3
http://edge.org/conversation/yuval_noah_harari-daniel_kahneman-death-is-optional
======
fuligo

      Computers are very, very, very far from being like humans, especially 
      when it comes to consciousness. The problem is different, that the system, 
      the military and economic and political system doesn't really need 
      consciousness.
    

Consciousness is a poorly defined concept with many unfortunate connotations,
but let's assume in this context it's the same as self-awareness. I assert
it's very likely that nature didn't evolve conscious brains by accident;
consciousness is probably a byproduct of building an intelligence that can
reason about itself and its environment.

I know it's just a thesis, but when you think about what our mindless AIs
lack, it makes sense. They're characterized by a complete incapacity for
global reasoning and an inability to reason about themselves. You might
argue, as the article does, that this is exactly how we want our tools to
behave, but then we might have to accept that there could be hard limits on
the complexity of mental tasks these systems can perform without access to
higher reasoning.

~~~
rivd
The word "Consciousness" is almost like the word "God".

Many people seem to know what it is, yet no one can actually define it, and
it has never been measured, located, or proven to exist.

Maybe we should stop using it altogether.

~~~
ForHackernews
We should either stop using it, or at the very least own up to what it is
we're [trying to] talk about:
[http://consc.net/papers/facing.html](http://consc.net/papers/facing.html)

~~~
rivd
Going to read that in earnest; I did a quick scan (English isn't my native
language) and yes: I want to differentiate here between the not-asleep and
the I-think-I-know-I-am-thinking types of consciousness. It's not only
experience, but knowing, or thinking you know, your own experience.

Furthermore, it seems to exist, or to be pinpointable, only when you're
communicating with another person. In total solitude, the boundaries between
your self-image and the other(s) just don't hold up, and the whole thing
becomes almost meaningless.

Edit: the concept of time seems to be connected to it as well, but I really
need to read this paper first now, I think :)

------
reasonattlm
Many people believe that medical control over aging will be stunningly
expensive, and thus indefinite extension of healthy life will only be
available to a wealthy elite. This is far from the case. If you look at the
SENS approach to repair therapies [1], treatments when realized will be mass-
produced infusions of cells, proteins, and drugs. Everyone will get the same
treatments because everyone ages due to the same underlying cellular and
molecular damage. You'll need one round of treatments every ten to twenty
years, and they will be given by a bored clinical assistant. No great
attention will be needed by highly trained and expensive medical staff, as all
of the complexity will be baked into the manufacturing process. Today's
closest analogs are the comparatively new mass-produced biologics used to
treat autoimmune conditions [2], and even in the wildly dysfunctional US
medical system these cost less than ten thousand dollars for a treatment.

Rejuvenation won't cost millions, or even hundreds of thousands. It will
likely cost less than many people spend on overpriced coffee over the course
of two decades of life, and should fall far below that level. When the entire
population is the marketplace for competing developers, costs will eventually
plummet to those seen for decades-old generic drugs and similar items produced
in factory settings: just a handful of dollars per dose. The poorest half of
the world will gain access at that point, just as today they have access to
drugs that were far beyond their reach when initially developed.
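
The coffee comparison can be made concrete with rough numbers; in this
sketch the daily price, the time span, and the per-round treatment cost are
all illustrative assumptions, not figures from the SENS literature:

```python
# Back-of-envelope: cumulative spend on a daily coffee over two decades,
# versus an assumed per-round cost for mass-produced repair therapies.
daily_coffee = 4.00            # assumed price of one coffee, USD
years = 20
coffee_total = daily_coffee * 365 * years

# Biologic-like treatment: assumed $10,000 per round, one round per decade
# (the pessimistic end of the "every ten to twenty years" estimate).
rounds = 2
treatment_total = 10_000 * rounds

print(f"coffee over {years} years:     ${coffee_total:,.0f}")
print(f"treatments over {years} years: ${treatment_total:,}")
```

Even at today's biologic-level prices, the two decades of coffee come out
more expensive; mass production would only widen the gap.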

Nonetheless, many people believe that longevity enhancing therapies will only
be available for the wealthy, and that this will be an important dynamic in
the future. Inequality is something of a cultural fixation at the moment, and
it is manufactured as a fantasy where it doesn't exist in reality. This is
just another facet of the truth that most people don't really understand
economics, either in the sense of predicting likely future changes, or in the
sense of what is actually taking place in the world today.

[1]: [http://sens.org/research/introduction-to-sens-research](http://sens.org/research/introduction-to-sens-research)

[2]:
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3616818/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3616818/)

~~~
neilk
A complex system with millions of nodes and trillions of potential
interactions with external and internal factors, working perfectly with no
oversight and no unexpected outages.

Do you live in this century?

P.S. I'm not saying that this is impossible, or even unlikely, but it's
probably going to be something that requires constant and possibly very
expensive maintenance.
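
The scale in that first sentence holds up to simple combinatorics; a quick
sketch, with the node count an assumed round number rather than a real
biological figure:

```python
# Pairwise interaction count for a network of n nodes: n * (n - 1) / 2.
n = 2_000_000                        # assumed: two million nodes
pairs = n * (n - 1) // 2
print(f"{pairs:.2e} potential pairwise interactions")  # ~2e12
```

Millions of nodes really do imply trillions of potential pairwise
interactions, before even counting external factors.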

~~~
reasonattlm
Your body performs that maintenance when you are young.

The damage repair approach to treating aging suggests that if we remove the
fundamental differences between old and young tissue, then the body will
continue to maintain itself as it does when it is young. There are not all
that many types of fundamental damage.

So this doesn't require maintaining the whole system; it requires removing
the spanners in the works. These spanners are well known and well
characterized. For example, cross-links in the extracellular matrix degrade
blood vessel elasticity, which causes hypertension, cardiac remodeling,
microstrokes, and so on. Break down the cross-links with designed drugs (and
deal with a few other items that also contribute to stiffening) before the
point of serious remodeling, and all of that goes away, because it is the
stiffening that drives this dysfunction. There are a number of other items
that play similar roles, but not so many that it is unfeasible to think of
producing effective therapies on a time scale of the next two decades.

Think of damage of this sort as rust in a fantastically complex metal
sculpture. You don't have to understand the sculpture, just how to rust-proof
it, and how to remove the existing rust. Rust is simple, and the complexity of
the sculpture doesn't much matter when it comes to how you approach rust-
proofing and removal.

~~~
p1esk
Cancer is simpler than aging. We understand cancer better than aging. Yet
after many billions of dollars and billions of hours of research, we still
cannot "cure" cancer. A huge number of people die from it every year.

------
vinceguidry
The thought of being able to abolish death as soon as you have a working
BCI, by running one's mind on silicon hardware, is, if not ludicrous, at
least really far out. So you hook your brain up to a computer and get basic
input and output. Great: you now have an upgraded keyboard and mouse. It
doesn't get any better no matter how many capabilities you add. Direct
memory access, changing the contents of programs on the fly - none of it
will abolish death any more than being able to do those things by 'hand'
does. The best you can do is create a really sophisticated program that will
continue acting deterministically after the programmer goes away. You will
live on the way Tolkien lives on in his books after his death.

You're not going to be able to abolish death until you can get neural
networks to run at the speed and capacity of your current brain hardware. In
other words, you still have to be able to simulate a brain: to take the
functions the brain currently has and get enough of them working
electronically that you can start to build an identity on top of it.

Even once you do that, it won't feel like your brain without a lot of
training, both on your artificial hardware and on your biological hardware.
I envision an era where early adopters have hybrid consciousness, where we
slowly integrate an electronic identity with our biological one: going from
"this is my brain extension" to "this is another part of me; some of my
thoughts are here, some of them are in this piece of hardware."

I suspect this will be a highly individual process that we will slowly gain
mass competence with, much as we're doing with software now. We'll have to
grow our ability to simulate neuronal processes and replicate our
psychologies computationally. Even then, it will feel like an artificial
prosthetic until you have enough capabilities that, if you suddenly lost the
other side, you wouldn't feel trapped in the most hellish
solitary-confinement prison ever devised.

Then, after the two are fully integrated, you start prioritizing
experiencing the world through the mind prosthetic. I suspect one would need
many, many years before cutting off the biological part wouldn't cause
grievous trauma to one's sense of identity - especially since our legal
system will need a long time to figure out how to fit non-biological beings
into society.

~~~
kanzure
> You're not going to be able to abolish death until you can get neural
> networks to run at the speed of your current brain hardware and at similar
> capacity.

That can't possibly be true. You're dead if you are slower than you once were?

~~~
vinceguidry
If you can't merge your consciousness with the prosthesis then you'll never
actually be able to consider it 'you'. And you won't be able to merge if
there's a serious impedance mismatch.

Unless you're talking about not just taking a human brain and simulating it,
but programming a brand-new electronic brain to run your own consciousness.
Which is even farther out than merging. You'd have to understand neurons to
a much greater depth.

This is assuming that neuronal processes aren't relying on quantum dynamics,
which I suspect would make true simulation impossible. Just because you can do
simulated annealing on silicon doesn't mean you can predict the unfolding of
arbitrary quantum states. Merging would be the only viable strategy, and
there, your new brain would only have a fraction of its former speed until
we're so far along with quantum computers that they're as ubiquitous as
silicon processors are today.
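
Simulated annealing on classical hardware, as mentioned above, really is
simple; a minimal sketch with a toy one-dimensional energy function (the
landscape and parameters are illustrative assumptions):

```python
import math
import random

def anneal(energy, state, steps=10_000, t0=1.0):
    """Minimize `energy` by randomly perturbing `state`, accepting uphill
    moves with a probability that shrinks as the temperature cools."""
    best = state
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9        # linear cooling schedule
        candidate = state + random.uniform(-0.5, 0.5)
        delta = energy(candidate) - energy(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = candidate
            if energy(state) < energy(best):
                best = state
    return best

random.seed(0)
# Toy landscape with its global minimum at x = 2.
result = anneal(lambda x: (x - 2) ** 2, state=10.0)
print(round(result, 2))  # close to 2.0
```

Nothing here requires tracking quantum state, which is exactly the gap the
parent draws between classical heuristics and faithful quantum simulation.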

------
tekelsey
AI is a simulation (artificial). Even if you did the acrobatics of saying
there could be II (inorganic intelligence), there's nothing to suggest self-
awareness could transfer from organic to inorganic, and there's no way to
distinguish between a simulation being self-aware, and having the "appearance"
of being self-aware, so I don't see how anyone could claim to have made a
self-aware machine.

If you believe that people have spirits, then the question is also moot. But
if you don't believe that people have spirits, there's no way to prove that
your memories transferred into anything other than a simulation of yourself.
If you consider yourself "you", then something else would not be "you", even
if it had the appearance of being "you".

Even if you believe that consciousness arises from the synapses, and is
therefore a simulation or illusion to begin with, there's still no way to
prove that you would "transfer".

You can't say "there is a singularity" - you can only hope, if that's what
you hope for. Others hope there is a god. It's a matter of faith both ways,
and a reflection of wanting to escape death.

I don't think there's any harm in believing in the singularity for
entertainment value, but as for providing a door to immortality, I think the
main danger is simply distraction. While all the effort and discussion and
research is spent on something there is no way of proving -- in the meantime,
20,000 children die each day from preventable diseases.

Even if you are a rational egoist, I invite you to consider what you'd do if
a child was dying right in front of you. Would you help them? The fact is
that children are at arm's length, courtesy of our mobile devices, and we
could place more attention on relieving suffering than on trying to achieve
immortality by flipping a coin on an unproven allegation.

So the fact is, because we could choose to act, rational egoists included, we
are genocidal in our indifference. The first step in facing this is to admit
it, and then to try and take steps to do something about it.

I invite anyone who seriously considers a singularity to set it down, and put
some effort into the relief of suffering. When we have preventable diseases
and poverty figured out, then let's revisit the singularity. I'd enjoy working
on it then.

~~~
hmsln
> I don't think there's any harm in believing in the singularity for
> entertainment value, but as for providing a door to immortality, I think the
> main danger is simply distraction. While all the effort and discussion and
> research is spent on something there is no way of proving -- in the
> meantime, 20,000 children die each day from preventable diseases.

> I invite anyone who seriously considers a singularity to set it down, and
> put some effort into the relief of suffering. When we have preventable
> diseases and poverty figured out, then let's revisit the singularity. I'd
> enjoy working on it then.

The amount of money currently being spent on singularitarian pursuits is
negligible compared to what is spent on development aid, or healthcare.

Furthermore, one of the necessary enablers of the singularity is faster
computers. Regardless of the sustainability of Moore's law, it is economic
competition between chip-makers that has provided the impetus for the
continuous exponential increase in computing power over the last few
decades. As long as it's physically possible, and as long as the chip market
is not a monopoly, the increase in computing power is going to happen
anyway; the singularity would be nothing but a byproduct of these market
forces, and will not come at the cost of disinvestment from charitable
pursuits.
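
The compounding behind that trend is worth making explicit; a sketch
assuming the classic Moore's-law figure of a doubling roughly every two
years:

```python
# Doubling every `doubling_period` years gives a factor of 2**(years/period).
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(20))  # 1024.0 -> three orders of magnitude in 20 years
print(growth_factor(40))  # 1048576.0 -> a millionfold in 40 years
```

At that rate, ordinary market competition delivers a millionfold increase in
four decades as a byproduct, which is the point being made above.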

As a species, it's nice to have a portfolio of possible futures - and it's
nice to know that the singularity road is being probed by some people.

~~~
tekelsey
I think the most precious thing we have is time, not money, because our time
is finite. When you consider the potential of the human race and computing
power to help spare daily genocide, there is clearly an opportunity to do
better. If a person is driven to spend ever more time on pursuing the
innovations that could theoretically lead to a singularity, what would that
person say to a stadium full of children who are going to die the next day? I
wouldn't know what to say. I'm not sure they'd be comforted by assurances that
a byproduct of the pursuit of the singularity results in economic momentum
that employs people, as good as that may be.

Perhaps, though, the attendant momentum of deep mind projects might result
in AI or II being put to the task of humanitarian relief. But my guess is
that any such system would probably ask, "what should the priority be for
preserving life?" - and if the prioritization were done by consensus, I'm
guessing that most people would prioritize preserving life over pursuing the
singularity. For example, I think most people would want a self-driving AI
government to prioritize their own survival over the singularity. And if
that were the case, and AI were tasked with helping to "load balance"
priorities of ethics, efforts, solutions, and systems, I wonder if it might
not admonish us to make changes in our lives.

I'm not self-righteous, I'm self-unrighteous. I'm just beginning to question
my general acceptance of wherever technology goes, wherever I spend my time. I
don't presume to tell people what to do, but I do propose that the ethic of
choosing whether to relieve a suffering child right in front of you may be
self-evident to some people -- and if so, that realizing children are at arm's
length, that we can do something about it, may convince some people, myself
included, to think about "re-balancing" priorities.

I read fiction regularly. I would be a hypocrite to judge anyone for "not
spending enough time relieving suffering". Yet overall I question how content
and comfortable I am with life. I guess in a portfolio of effort, I hope that
the allocation of assets in my own life and others would place a primary
emphasis on sustainability that includes the relief of suffering. I think
that's what 20k kids dying each day says to me. I think that's what they'd
post to Hacker News, if they could.

------
karmacondon
When people in the 1940s looked into the future they saw flying cars,
spaceships and cities in the clouds. When we look into the future today we see
AI, a post-labor economy and the end of death itself. I'm not sure if
technological progress has increased, but we're definitely dreaming bigger.

I think that predictions from the past can be instructive in two ways. First,
most of it didn't come to pass. We still don't have flying cars or cloud
cities, and common commercial space travel is still decades away. The second
thing that might be of note is how positive the past vision of the future was
compared to our current ideas about the future. We're so dystopian and
negative in 2015. We'll have self aware supercomputers, but we're worried that
they'll kill us. Robots will be able to do anything a human can do, but we're
worried that we'll all be unemployed while a few get rich. We'll have the
ability to live _forever_ , but we're worried that will only be a privilege
for the rich people who control the robots, and then only if the robots don't
decide to kill us all.

Putting these two things together, I can only think to quote Packers QB
Aaron Rodgers: "R-E-L-A-X" [0]. Odds are that most of these revolutionary
technologies are far away, and won't turn out to be as evil as we imagine. The
beginning of the 20th century was a time of tumultuous change. The automobile,
household electricity, recorded music and movies, communism and the end of
aristocracies. Everything was changing and people were optimistic about the
future. Now we're on the other end of the pendulum. Outside of communications
and the internet, there haven't been a lot of big technological or political
changes and people are feeling generally negative about everything. Hopefully
the early 21st century will produce big ideas that will have us all feeling
inspired again. I wish I had some grand conclusion or takeaway from all of
this, but I think the key is that this too shall pass and we should all try to
enjoy the journey.

[0] This is a reference from sports news that might be out of place here. If
you didn't get it, that's ok. Move along, nothing to see here.

~~~
reasonattlm
People in the 1940s were riding the top of an energy revolution, an enormously
rapid upward trend in energy production that, had it continued unabated, would
today see us with the output of a nuclear powerplant generated by threads in
our clothing for a dollar or two. They didn't foresee information technology,
for the most part. They foresaw solar system-wide travel and simple mechanical
computing devices like slide rules coexisting.

Of course it proved harder than expected to keep that curve going, and we got
the infotech future rather than the high power future. We're probably better
off for that, given that medicine is driven by infotech, not power.

~~~
api
It's also possible that had energy generation capability continued to increase
forever at that rate it would have led to self-destruction in any number of
forms.

------
jkaunisv1
I find it strange that he says we have no way of imagining what
post-singularity life would be like. I get his point - it's so different
that we can't really grasp it. But we have ample science fiction that
explores the possibilities and gives us a starting point for imagining it.

Also surprised he doesn't mention basic income when discussing the decreasing
value of the individual and the problems that could cause. He's identified
some important problems but this interview was very high level and didn't seem
to even touch on possible solutions. These are things society is already
thinking of.

~~~
TeMPOraL
> _I find it strange that he says we have no way of imagining what post-
> singularity would be like._

It's by definition - the singularity is _defined_ as the moment when
technology advances so fast that we just can't keep up with it. If you can
still reasonably predict what it will look like, it's not the singularity
yet.

~~~
jkaunisv1
It's the moment where we lack the capacity to work at its level and control
it. That doesn't preclude imagining outcomes. His whole article was about
how he tries to imagine the full scope of possibilities rather than
narrowing down to predictions (which are often false). It's not about
reasonably predicting.

As an example, we don't yet have fine-grained, ubiquitous nanotech like that
featured in Diamond Age by Neal Stephenson. But that book is all about
imagining what it might lead to.

We extrapolate from assumptions all the time, why is this one any different?

------
petercooper
So I had a random thought earlier. At one point, the cessation of the
heartbeat signified death, but now we have CPR. Today 'brain death' is often
considered to signify death... but could we 'restart' the brain the way CPR
restarts the heartbeat?

------
ununun
Harari seems to be worried that sometime in the future there will be tons of
superfluous people that the economy will no longer need. Suppose technology
brings about abundance never before seen on planet Earth. Suppose that means
the vast majority of everyone goes unemployed. Are we to worry that most of
those people will necessarily be poor because the few powerful up top will
hoard all the riches? Is that the most likely scenario?

------
penny
Perhaps a loss of power and relevance of the human masses will give rise to a
renewed reverence for and recognition of the fading beauties of human frailty,
similar to the romantic movement following the industrial revolution. In such
a setting, where we hand tech and science off to the machines (or they seize
it), the profit model will be in the humanities.

------
penny
Would not any human-devised outcome, no matter how intellectually superior its
capacities, ultimately be secondarily dependent on the same metabolic laws as
humans, even if in a different order of magnitude? I mean those delimiting
laws would not disappear. Their indefatigability is powered somehow, and our
fatiguability is adaptive.

------
arxii
But at the end of the day, what are life and death anyway? Merely a
self-repeating pattern of some atoms... Maybe consciousness is just a
collection of concepts which our brain uses to reinforce its self-identity?
Maybe consciousness is just as valid as religion, and our existence really
holds as much meaning as the existence of a tree in an infinite universe?
And considering that the majority of people would value a nice diamond over
a tree, a well-sculpted rock might just have more meaning than "life"
itself.

Maybe intellect, in itself, is merely a small bump in the fractal nature of
the universe; after all, there is a universe in everything, and nothing is
everything.

Only with that in mind, before I die, would I upload my mind...

~~~
nicklaf
_To see a World in a Grain of Sand

And a Heaven in a Wild Flower,

Hold Infinity in the palm of your hand

And Eternity in an hour._

William Blake

------
alx
Something similar, in video:

Humans Need Not Apply -
[https://www.youtube.com/watch?v=7Pq-S557XQU](https://www.youtube.com/watch?v=7Pq-S557XQU)

------
narrator
The point of technological progress is to lower the cost of things.
Eventually everything will be free or of negligible cost. We're already
there with music.

~~~
wwweston
Tangent, but... we're not there with music. We're pretty much there with the
cost of _copying /distributing_ music.

And we're partway there with the falling cost of production tools, and wider
affordable/free availability of instructional materials.

But _creating_ music -- particularly creating good music -- still takes a
significant investment of time in the immediate act of creation and another
order of magnitude of time in investing in skills.

And without an economic model to support that investment of time, we'll get
less of it.

~~~
easong
Not necessarily. As a counterexample, people (many of whom are very creative
and talented) put staggering amounts of time and dedication into playing and
becoming skilled at video games for no reason other than personal
entertainment. Videos of people playing games are freely available and no
profit is typically expected.

A similar phenomenon might be observed with music, where players of music
dedicate large amounts of time to playing music and upload the fruits of their
efforts to the internet for the enjoyment of their peers.

~~~
wwweston
The cases you're describing either involve someone who spends some other
portion of their time (probably "full-time") on money making activities, or
someone who has an external means of support (trust fund, savings, fortune
they earned earlier, family, whatever).

In the former case it's pretty much as I originally described. Sure, if
they're dedicated and don't have any other time sinks like a family, you may
still get _some_ music from them, but significantly less or lower quality
music than you'd get from them if they could spend full-time on it.

In the latter case... it's true enough that people in this position have the
privilege of studying and creating music full-time without any expectation of
payment. They have all the money they need, they're free to spend their time
as they wish. Of course, saying this is our economic model means that most of
our music has to come from people in this position, mostly kids, retirees, and
those with rich family or other benefactors.

And depending on your tolerance for how undemocratic that's likely to be,
maybe that's OK. Though I think that if one accepts a picture of the world
painted in the article where an increasing number of people aren't even
_needed_ by any of our market, civic, or social institutions, the prospect of
having them also sidelined in music and other letters gets a little more
troublesome.

(And as another tangent: while I enjoy video games myself -- even some very
difficult ones that require big investments of skill -- I'd be very careful
about drawing larger lessons about skills from them. One of the reasons we
enjoy these games is that their practice-reward cycles are often significantly
shorter than many comparable real-world skills.)

------
qsymmachus
"In terms of history, the events in Middle East, of ISIS and all of that, is
just a speed bump on history's highway. The Middle East is not very important.
Silicon Valley is much more important."

This is what futurologists actually believe

~~~
TeMPOraL
It might sound self-centered and limited, but giving it a second thought, I
think it's also true - because the "events in the Middle East" are important
only insofar as they threaten to spin out of control and lead to the
collapse of technological civilization. There are no important technological
advances coming out of this conflict. It's unlikely that the existence of
ISIS will lead to an important new political insight, or to a piece of
social technology being developed. It's just, as so often in history, a case
of groups of humans failing to get along with each other.

This is how I see war, nowadays. A stupid, useless distraction. "We're
building amazing things here for everyone with our technological civilization,
so would you kindly please pause for the moment, and don't ruin everything
because of some idiotic dispute?".

------
decisiveness
I'm doubtful of the prediction that humans will eventually become useless or
superfluous. Common jobs today that only humans can do will undoubtedly be
accomplished by machines in the future, but that doesn't mean humans will
become obsolete.

As long as the universe and time exist as we know them, humans will never be
perfect. And just like humans, AI will always have bugs, as its root creator
will always be a flawed human. Whether there are unintended consequences of
those bugs is another story. But since a human can never create AI that
creates AI better than a human can, AI can never render the human mind
obsolete.

Any AI not created with bad intentions will mostly be created to serve,
defend, or improve our way of life and survival. These things work to
support our purpose, not destroy it.

But as Harari says (before he starts predicting), "it's impossible to have any
good prediction for the coming decades."

~~~
vectorjohn
A lot of baseless assumptions here.

"its root creator will always be a flawed human"

"But since a human can never create AI to create AI better than a human..."

This is your premise. I don't think, and a lot of smart people don't think,
this is true. The thing that gets people really worried or excited is that
they think it IS possible to make an AI that can create better AI. And it's a
positive feedback loop that goes nobody knows how far.

There is no reason, unless you believe in magic, to think AI can't be as smart
as humans. But if you go that far, there's no reason to think it can't be
smarter. And if it can do that, it can make better AI than humans can.

~~~
decisiveness
"There is no reason, unless you believe in magic, to think AI can't be as
smart as humans."

AI can be as smart as or smarter than most humans in many ways, but I think
it's a very real possibility that its development path won't render the
human mind useless. The key difference between AI and humans is that AI has
the power to iterate and learn from its mistakes much faster than humans,
without fatigue. The methods by which it learns are created by humans. To
assume the creation of AI with "a positive feedback loop that goes nobody
knows how far", without humans first understanding how, seems more of a
belief in magic to me.

"I don't think, and a lot of smart people don't think, this is true."

When it comes to predictions, smart people can be wrong. I could be wrong or
they could be wrong, and they may be smarter than me, but I'm smart enough to
know this is true.

~~~
TeMPOraL
> _To assume the creation of AI with "a positive feedback loop that goes
> nobody knows how far" without humans first understanding how seems more of a
> belief in magic to me._

Not really. This is pretty much a definition of a positive feedback loop.

To give a _very simplified_ example, imagine that a mind of IQ N is able to
create, at best, a mind of IQ N+10. So say the smartest human alive has an
IQ of 150. He goes and creates an AI with an IQ of 160, which then goes on
to create a 170-IQ AI, ad infinitum.

Of course you could argue the relationship is different. Maybe the i-th mind
can create at best an N+(1/2)^i mind, at which point the whole series will
hit an asymptote, a natural limit caused by diminishing returns. But it
would be _one hell_ of a coincidence if humans were close to that natural
limit.

So basically, what we need to do to potentially start intelligence explosion
is to figure out how to make a general AI that is _just a little bit_ smarter
than us. Which seems entirely possible, given that we can use as much hardware
as we like, making it both larger and faster than human brains.
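
The two regimes in this thought experiment can be sketched directly; the
increments are the hypothetical ones from the comment, not empirical claims:

```python
# Regime 1: each mind builds a successor 10 IQ points smarter - unbounded.
def divergent(iq0=150, generations=10):
    iq = iq0
    for _ in range(generations):
        iq += 10
    return iq

# Regime 2: the i-th mind adds only (1/2)**i - converges to an asymptote.
def convergent(iq0=150, generations=50):
    iq = iq0
    for i in range(generations):
        iq += 0.5 ** i
    return iq

print(divergent())   # 250 after ten generations, growing without bound
print(convergent())  # ~152: stuck near the asymptote at 150 + 2
```

Which regime reality resembles is the open question; the sketch only shows
how different the two outcomes are.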

~~~
decisiveness
I understand the concept of creating something exceedingly more generally
intelligent than its creator, I'm simply suggesting it's not possible. Many
people assume that it is, and we'll have to agree to disagree. But even if I'm
wrong and it does become possible, think about how unlikely it would be for a
human to accidentally accomplish this.

Also, if AI is to be smarter than humans, it will know it could potentially be
wrong about anything. Armed with that knowledge, how much smarter can it
really be?

~~~
TeMPOraL
> _Also, if AI is to be smarter than humans, it will know it could potentially
> be wrong about anything. Armed with that knowledge, how much smarter can it
> really be?_

That's not a big leap. In fact, we humans know this already, and we've even
quantified it nicely, and called it probability theory.
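
A minimal illustration of that quantification, using Bayes' rule with
assumed numbers (the prior and likelihoods are invented for the example):

```python
# Posterior = P(H) * P(E|H) / (P(H) * P(E|H) + P(~H) * P(E|~H)).
# An agent 90% confident in a hypothesis sees evidence three times more
# likely if the hypothesis is false than if it is true.
prior = 0.9
p_evidence_if_true = 0.1
p_evidence_if_false = 0.3

posterior = (prior * p_evidence_if_true) / (
    prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false)
print(round(posterior, 2))  # 0.75 - confidence drops, but coherently
```

Knowing you "could be wrong about anything" just becomes a number to update,
which is why it doesn't cap how smart an agent can be.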

