
The Merge - holman
http://blog.samaltman.com/the-merge
======
shostack
I'd love to know Sam's views on his particular responsibility in addressing
this as a member of Reddit's board.

If the "attention epidemic" and risk of hacking people's attention is high,
Reddit is likely to be just as culpable in that as Facebook. Further, Reddit
has shown itself to be ripe for weaponizing by hostile foreign nations, given
the Russian meddling. This has not gotten _nearly_ the attention it warrants, and
many feel Reddit has done nothing particularly worthwhile to address the
matter.

So in an age where algorithms optimize to capture attention for the benefit of
platform owners, and platform owners are incentivized by ad revenue where the
advertisers may have malicious social engineering motives, or where the
platform is seen as an attack vector outside of ads, what is the
responsibility (legal and moral) of platform owners and investors in doing
something about this?

------
ak_yo
I don't quite get why this is written as if the author is a neutral observer
of this phenomenon, when in reality he works extremely hard every day to make
sure it happens.

~~~
throwawaymoreai
"If we don't do it, the other guys will first."

Sure, if you could coordinate all human activity on the planet you could
prevent further AI research from happening. Assuming you can't coordinate all
activity, getting out in front of the problem is probably the best you can
hope to do.

------
sevensor
Computers are tools. Some of the tasks for which we have formerly used
cognition can be handled by them, just as some of the tasks for which we
formerly used teeth have been largely delegated to knives. This doesn't mean
we're merging with the computer any more than we have with the knife, nor does
it mean computers can replace our brains any more than knives have replaced
our teeth.

~~~
notquitesure
Large brains are just tools. Great apes used sticks and rocks as tools to
extend the use of their hands, and evolved larger/more powerful brains to
extend their decision making and problem solving abilities. There isn't any
real difference in the global impact of humans relative to that of great apes,
because humans just have a more well-developed version of this particular
tool.

Then again, the descendants of the apes that didn't develop the particular
"tool" of generalizable problem solving are extinct or endangered, while the
descendants of the ones that did are busy working to create AI.

~~~
sevensor
> Large brains are just tools

That's an odd position to take. Surely there's a difference between the hand
and what it grasps.

~~~
notquitesure
The difference is not between the hand and the tool it grasps, both of which
apply force to manually manipulate the environment, but between the hand/tool
and the mind that directs it.

I agree that that is an odd position -- it's not one I hold, but a reductio of
the "computers are tools no different than knives" argument above. Machines
that are able to process novel information and make decisions are
fundamentally different from other machines, whether they are composed of
tissue or silicon.

------
chadgeidel
I'd be happy if my phone's autocorrect feature would actually produce
sentences with proper spelling and reasonable grammar. Apparently that's still
too hard for our mighty machine learning systems. I'm not going to merge with
any technology that is worse than my stupid mush brain.

[edit - why am I still editing the spelling and grammar on this post?]

~~~
ilaksh
Just because that good autocorrect AI or AGI isn't available today doesn't
mean that we should dismiss it as something that we don't need to worry about.
Because the change to our way of living is so great, we should start making
plans, even if it is 10 or 30 years out.

Personally I believe we will see really interesting animal-like AGI demos in
2018 and 2019.

It's just dumb that people can't take this stuff seriously until all of the
engineering and research is 100% done.

~~~
chadgeidel
I'm not dismissing it. On the contrary, I can't wait for brain-machine
interfaces. I can't wait for really good self-driving cars. I'm just not as
optimistic as many of my colleagues.

~~~
dboreham
I'm waiting for self driving lawyers. And lawn mowers.

~~~
letlambda
My lawyer can drive himself, but it only works with BMWs.

------
ericand
"It’s probably going to happen sooner than most people think. Hardware is
improving at an exponential rate—the most surprising thing I’ve learned
working on OpenAI is just how correlated increasing computing power and AI
breakthroughs are—and the number of smart people working on AI is increasing
exponentially as well. Double exponential functions get away from you fast."

I'm not convinced that an exponential number of people working on AI produces
exponential advancements. Wouldn't we see diminishing returns with each new
person, since presumably each additional one is less capable and less expert?
I see AI experiencing a hype-cycle-like disillusionment before seeing "double
exponential" returns.

~~~
hobofan
The AI research space is still a pretty green field. Right now you can
basically take any random paper and combine it with another random paper and
you will be able to find a use case where that succeeds and write a new paper
about it. More experienced researchers will, of course, have a better
intuition/knowledge of what techniques to combine to get more impressive
results.

As long as it stays like that (and by opening up new sub-areas, it could stay
like that for quite a while), adding more somewhat qualified people to the
field results in all the useful combinations being discovered faster and thus
exposing new potential starting points. Probably no exponential growth in the
strict sense, but quadratic growth isn't bad either.
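A back-of-envelope check of that claim, as a toy model: if each pair of
existing papers is one candidate combination, the pool of unexplored
combinations grows quadratically with the size of the field (this is my
simplification, not anything from the paper-counting literature):

```python
from itertools import combinations

def candidate_pairs(n_papers):
    # Number of distinct paper pairs: n*(n-1)/2, i.e. quadratic in n.
    return len(list(combinations(range(n_papers), 2)))

# Doubling the field roughly quadruples the candidate combinations.
print(candidate_pairs(10), candidate_pairs(20), candidate_pairs(40))  # 45 190 780
```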

~~~
pg_bot
Perhaps we could build some sort of neural net that ingests AI research papers
and then outputs the next big thing in AI. \s

~~~
hobofan
Half of the research paper titles already read like that, so why not?
¯\_(ツ)_/¯ ("Learning to learn by gradient descent by gradient descent")

------
adamw2k
Respect the thought here, though I definitely feel like we're still a ways
off. That said, what resonated most with me was the footnote:

"I believe attention hacking is going to be the sugar epidemic of this
generation. I can feel the changes in my own life — I can still wistfully
remember when I had an attention span. My friends’ young children don’t even
know that’s something they should miss. I am angry and unhappy more often, but
I channel it into productive change less often, instead chasing the dual
dopamine hits of likes and outrage."

Further underscores the need for us as a society to find time to step away
from technology and experience the "world" (nature, art, human interaction,
etc.).

------
rficcaglia
“As we have learned, scientific advancement eventually happens if the laws of
physics do not prevent it.”

The dinosaurs probably disagree. Eventually they might have developed tools
and then technology but then planet-scale comet destruction and/or volcanoes
got in the way.

We however don’t need a cataclysm to delay or indefinitely defer scientific
progress...we have various self-inflicted ways to impede progress: religion,
dictators, racism, consumerism, spending money on bombs vs. education, denying
climate change, tolerating genocide, electing crazy people to run nuclear
armed nations, taxing graduate students, the McRib, etc.

~~~
kelnos
Sure. Altman even points that out, to a less-fleshed-out degree (emphasis
mine):

> More important than that, _unless we destroy ourselves first_ , superhuman
> AI is going to happen, genetic enhancement is going to happen, and brain-
> machine interfaces are going to happen.

I think it's a really good point. As long as the laws of physics don't
prevent something (or we can find a clever workaround), progress tends to
march forward. Really, the only thing that could stop that is some sort of
earth-wide cataclysm, whether natural or human-made.

I think you overestimate a lot of those human-made things, though. The
countries of the world are very interconnected these days, to be sure, but I'm
optimistic that religion, dictators, etc. can't stamp out all progress; I
believe there will always be places in the world where this kind of progress
is at least tolerated, if not actively embraced and encouraged.

~~~
fusiongyro
"Unless <highly probable event>, <extremely unlikely event> is _going_ to
happen" might be truthful but it isn't what Sam is really trying to say. And
there is a big gap between stalls like the McRib and epoch-defining asteroid
collisions.

~~~
rficcaglia
I was definitely going for humor...but in all seriousness, if you take the
McRib as just 1 SKU among the billion+ that humans produce that are addictive
by design, inflict direct physiological harm, and also contribute to climate
change and planet-scale pollution (which are very much epoch-defining
phenomena), then yes, you could very easily rank the McRib as equally
threatening to all human life as a very large asteroid.

That said, the real dominant species on this planet, tardigrades, could likely
survive even a McRib-induced climate catastrophe. So if tardigrades merge with
AI, we are totally and completely fubard!

------
md224
We may be heading toward a merge, but I'm not sure it's with machines. I think
it could be with each other _via_ machines.

Cybernetics refers to this as a metasystem transition:

[https://en.wikipedia.org/wiki/Metasystem_transition](https://en.wikipedia.org/wiki/Metasystem_transition)

(Shameless plug: if you want to see what it looks like when a hivemind speaks,
check out [https://reddit.com/r/AskOuija](https://reddit.com/r/AskOuija))

------
jonathanberger
If Sam believes the merge has already begun, then his, Musk’s, and others’
predictions about superhuman AI are really just describing a rather boring
fact about the world.

His examples of the merge in progress are social media determining how we feel
and search engines deciding what we think. The problem here is the
anthropomorphizing that humans have done throughout history to understand
things they can’t fully explain. Gods don’t get angry and cause rain and
search engines don’t make decisions about what we think. A search engine is
just a complex math formula that can’t make a decision any more than a
calculator “decides” to output 4 when someone types 2 + 2.

Bostrom and others in this camp always have a hard time describing what
“superhuman” will look like or mean. But my guess is that in 2075 the goal
posts will have been moved, and it will be said that in 2017 we hadn’t
realized that there was already superhuman AI, as evidenced by calculators
that could already do math faster than a human.

------
fossuser
The friendly AI goal alignment problem is a pretty interesting part of this,
and there’s promising work going on there.

The basic question is how to effectively align a general intelligence’s goals
with ours, so that it uses its intelligence to solve problems in a way that
is consistent with our own human utility functions.

If humans figure out general AI before goal alignment it may have outcomes we
don’t want.

[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/)

[https://youtu.be/EUjc1WuyPT8](https://youtu.be/EUjc1WuyPT8)

~~~
joshmarlow
I kind of think that we are already experiencing the fallout from the value-
alignment problem with corporations. Corporations (and the market in general)
seem rather good at optimizing revenue/profit - the problem is that corporate
profit doesn't always align with other human values - which ties into some
other comments here about disrupting human attention spans being profitable.

Perhaps work on AI goal alignment can help us build a more humane
marketplace, or, vice versa, research into macroeconomics can inspire some
ideas in AI goal alignment.

~~~
Karrot_Kream
Unfortunately, I think this is because we have stopped questioning whether
corporate values really do end up matching human values. I think when
capitalism was seen as a competitor to communism in the Cold War era, people
took a slightly more critical eye at economic systems, but now there's no
competition so there's no pressure for corporate values to align.

~~~
Danihan
Corporate values _do_ align with human values in a free market.

It's just that the values and goals of many humans are unaligned with the
values and goals of many other humans.

Which results in a cacophonous marketplace.

------
rsp1984
_" Our phones control us and tell us what to do when; social media feeds
determine how we feel; search engines decide what we think."_

Ok, let's see. I check my Android phone perhaps 5 times a day. It's in silent
mode all the time, it has had a broken screen for 2.5 years, and I don't
care. I have perhaps 10 apps installed, of which I use maybe 5 on a
semi-regular basis.
I check Facebook / LinkedIn perhaps once a day, and when I do I'm always
slightly annoyed by the amount of useless crap in my feed. I don't use or need
Snapchat or Insta or Twitter or any other social media. I doubt I even need
Facebook. I do use Google, a lot, but mostly as a gateway to Stackoverflow or
to satisfy my curiosity about something I've read somewhere else on the
internet. I doubt it has any influence on what I think.

Am I really such an outlier?

~~~
weareschizo
Take public transportation during rush hour and look at what people are doing
on their phones. Yes, you are an outlier. And good on you.

------
tvural
I think it's unclear how much progress has been made on the superhuman AI
problem. We haven't pinned down a good definition of intelligence, or figured
out what it is that makes monkeys smarter than mice and us smarter than
monkeys. We have made a lot of progress in specific domains, like image/speech
recognition, but it's hard to tell whether they're on the critical path to
superhuman AI because we don't know what the critical path is yet. That makes
the timeline unpredictable, but biased a priori towards "far away". It's
possible that better hardware will accelerate progress, but with CPU clock
speeds flattening out, significantly better hardware is not guaranteed in the
future.

------
abstractbill
"Until I made a real effort to combat it, I found myself getting extremely
addicted to the internet."

This seems to imply sama is resisting the inevitable merge. Can't we instead
try to steer it in a more positive direction? Where are the startups trying to
keep you addicted to the internet in a way that improves your life? They don't
seem to exist, because the incentives aren't right -- it's way more profitable
to keep you hooked in ways that make your life worse. How do we fix those
incentives?

~~~
beaconstudios
They do exist -- consider the gamifying elements of Khan Academy, Duolingo,
and other e-learning sites, for example. The problem seems to be that they
are out-earned by the sites that feed into our narcissistic and negative
impulses -- social media, addictive games, "recommendation engines" on
shopping sites, and such.

~~~
throwawayjava
> > Where are the startups trying to keep you addicted to the internet in a
> > way that improves your life?

> consider the gamifying elements of Khan Academy, Duolingo and other
> e-learning sites for example

It's worth noting that I've heard a lot of criticism of e.g. Khan Academy from
certain components of the mathematics education community. Concretely, they
feel that the material there is designed to increase engagement and provide a
sense of accomplishment ASAP, and that the learning processes that fit those
goals are detrimental to the development of certain mathematical skill sets.
I.e. you might learn how to compute derivatives really well, but at the
expense of really understanding limits or the fundamental theorem.

Mind you, mathematics educators have always been an internally divided bunch,
so take from it what you will.

------
d--b
I have never been a fan of theories that give agency to superhuman phenomena.

In Sapiens, the author argues that wheat enslaved humanity into producing more
of it.

Sure, it's an interesting way to think about this, but it's not what is
happening. People grow wheat because it has a lot of nutrients, can be kept
as grain, and can be harvested twice a year.

Similarly, you look at your phone because you find it useful, not because it
has enslaved you into doing it.

Sure, you lose some things when you have the world's knowledge at your
fingertips, but you win a lot too. When you had to look up the GDP of France
in a book, it was harder to have fact-based arguments...

Anyway, I think General AI can become a very bad thing, but we shouldn't
confuse it with "phones are controlling our lives," because that simply isn't
true.

------
creep
The cybernetics movement has always sat uneasily with me. I don't see how a
full transition to improvements in the physical plane will help human beings
who are primarily intuitive, feeling beings. We are already disconnected from
our bodies, so why would we continue to attempt to augment our minds and
limbs with machine parts? I don't want to be a machine; I love my brain. My
brain has emergent properties far more complex than any machine, and even if
we could build computers that mimicked the brain or parts of the brain, I
would like to keep my own. There is no other brain exactly like mine. My
brain took millions of fucking years to create -- it's amazing that I'm
alive. Machines are not alive. One could attribute a small amount of
consciousness to them, if one were especially philosophically inclined, but
machines still seem so antithetical to the way we exist that it seems wrong
to develop towards "merging" with them. I have no trouble using a machine to
facilitate the communication and visualization of my ideas, but this thing
will not become a part of me.

------
chx
The impossibility of these beliefs is summarized well in this comment made
today
[https://news.ycombinator.com/item?id=15869657](https://news.ycombinator.com/item?id=15869657)
regarding AlphaZero:

> Makes you wonder what will happen when instead of the rules of chess, you
> put in the axioms of logic and natural numbers. And give it 8 months of
> compute.

The answers are more realistic:

> How do you score this computation? What's your goal? There's no checkmate
> here. (this was mine)

> If you're talking about formal proofs or maths, I'm not sure how this would
> apply in general, as the branching factor for each 'move' in a proof is
> effectively infinite.

Also, there was a talk where a Google engineer admitted that a car you can put
your kid into to drive them to school is still more than three decades away.
From other sources, this is even more likely because the trolley problem
doesn't seem solvable, so we would need to drastically decrease potential
interaction between pedestrians and self-driving cars, which requires
building guard rails, reforming transit, and so on.

Don't subscribe to the hype.

Not to mention Katherine Bailey's excellent article (well, all of her posts
on Medium are on AI and are really good reads):

[https://medium.com/@katherinebailey/why-machine-learning-is-not-a-path-to-the-singularity-540d957ef847](https://medium.com/@katherinebailey/why-machine-learning-is-not-a-path-to-the-singularity-540d957ef847)

> One thing that both the pessimistic and optimistic takes on the Singularity
> have in common is a complete lack of rigor in defining what they’re even
> talking about.

~~~
fsloth
"Don't subscribe to the hype."

I think the fact that the internet guides people's behavior on a massive
scale is a clear indication that machine/human hybrid cognition is not hype;
it's a thing, and it's starting just now.

What makes one go to the fridge? A sense of hunger. What makes one go to
Facebook? A different yearning.

Before, people controlled machines, and machines did not induce addiction.
Now machines can induce a wide range of emotions.

This is the first step - the channel is open.

On the other hand, when will there be more than random noise and clever hacks
at the machine end - I don't know.

Only that we are already combined as a species on a cognitive level by the
internet.

One could say that the combination began when writing and money were
invented - one coordinating thought, the other labour.

Machines have made this suprapersonal interaction much faster, and much more
powerful.

It would be silly to say it's just a fad. It's not a Skynet scenario; it's
not the Matrix. But people are affected, and algorithms are getting more
clever.

I'm not saying we are going to have an AI overlord. But I am saying the
interconnectedness, the algorithms, and the addictive quality of interaction
are definitely leading us into new territory.

------
hammock
>Our self-worth is so based on our intelligence that we believe it must be
singular and not slightly higher than all the other animals on a continuum.

Speak for yourself. I doubt Sam meant to appear ignorant of other
perspectives on the world. However, it would be nice to consider them when
writing general insights, as opposed to simply tunneling on his own point of
view.

------
maldusiecle
> Our phones control us and tell us what to do when; social media feeds
> determine how we feel; search engines decide what we think.

This really reads like self-satire.

~~~
LeoJiWoo
I agree, it's definitely due to Sam's bubble/filter. It's way, way too
generalized for the general population.

~~~
dibujante
The piece reads like "Boy, it sure is strange that all humans now work in
silicon valley venture capital!"

It's just silicon valley bubble platitudes top to bottom. He needs to take a
breather somewhere on the other 99% of the planet.

------
scandox
> It is a failure of human imagination and human arrogance to assume that we
> will never build things smarter than ourselves.

No, it's 21st Century IT-bloke failure of imagination and arrogance. A
failure to imagine change: specifically, starting to hit real walls and
running out of superficial areas of expansion to distract you from the lack
of intrinsic progress. It's 21st Century IT-bloke arrogance imagining that
the context that has rewarded you with money and power is also at the heart
of an earth-shattering scientific revolution...and not just (which is
significant) another industrial revolution leading to new and all too human
plateaus.

------
LeoJiWoo
I'm not convinced the "merge" will happen. I fear humans cling to that idea
because we want a biological part of ourselves to pass on.

I expect a technological singularity to supplant/replace biological life
completely.

~~~
Danihan
Sure. Merge first, then that.

~~~
LeoJiWoo
A "merged" singularity will lead to a non-biological singularity?

I don't think I understand.

~~~
Danihan
Yes, of course it would.

Sooner or later, biological systems would be inferior to manufactured ones.

------
conatus
Interesting piece, though I feel it's flawed in at least some pretty
fundamental ways.

1. Humans have always been "merged" with technology. It is becoming more
pervasive now, more powerful and to some extent more intimate, but it has
always been the case. For example, the evolutionary advances of homo sapiens
were in part a result of our ability to "out source" elements of digestion to
cooking, enabling our intelligence to outstrip rival animals and hominids.
Other examples: farming, writing, the printing press, telegraphs and so on.

2. For most of human existence, humans have believed in forces superior to
themselves, whose intelligence, power, and strength outstrip their own:
whether God, gods, or other metaphysical forces. It's amusing to me how
theological tropes reappear in these writings with a high degree of
regularity. Now, one might argue that the difference is that AIs "really
exist". But crucially, the idea that humans have always considered themselves
totally "top of the pile" seems radically false. It is at best a very limited
notion in human society. Even modernity, increasingly secularised, was quick
to assert that human mastery was at best an illusion, e.g. the psychoanalytic
or Darwinian revolution.

3. The obsession with AI being the largest existential threat to the human
species seems hubristic in the extreme, given that a very current and very
real threat is already here, and it is often the poorest who are already
feeling its effects: catastrophic climate change.

~~~
Shoothe
> 1. Humans have always been "merged" with technology.

This can also be generalized to other animals; see "The Extended Phenotype"
by Richard Dawkins (
[https://en.m.wikipedia.org/wiki/The_Extended_Phenotype](https://en.m.wikipedia.org/wiki/The_Extended_Phenotype)
).

------
reza_n
A lot of the examples brought up are innovations in communication, not AI.
Phones, social media, and search are communication platforms. Just like
books, newspapers, the telegraph, the telephone, and the internet, they
enable communication, interaction, ideas, innovation, etc. Even what we
consider AI today is just a complicated bird's nest of human-driven math.
Sure, we can go ahead and classify phones as co-evolving AI, but the same can
be said of the plants, trees, and animals that have co-evolved with us as
well.

------
agitator
I have had the same thought experiments that lead to this inevitable
conclusion: that we are the vehicle that will create a new, immortal, more
efficient intelligence, and there won't really be a place left for us slow,
un-evolved, inefficient, ape-like creatures. It's an unpleasant thought, but
what other conclusion do you all see? I see some people arguing that "this is
a bit out there" etc., but whether it's sooner or later, I think it's
inevitable.

The interesting thing to me is that life and evolution propagate because of
the laws of physics. It's theorized that life is a chain reaction that
evolves out of a simple necessity to be as efficient as possible. So this new
life form will potentially depart from that natural basis for evolution.

What do you guys think, though: do you really think a merge will happen? This
is obviously a long-term, existential, and depressing discussion, but really,
when an intelligence with much more potential than ours arises, will there
really be any point in us lingering around? Do we even have a chance at this
merge? I mean, I guess I see the urgency; we would need to start now, so that
any innovation in AI is really linked to improving our own cognition from the
get-go, otherwise we are just a stepping-stone for life originating from this
solar system.

~~~
Schwolop
My objective function is that I continue to have new experiences. I hope any
intelligence we create recognises that this is not the same as continuously
different experiences, and so putting me in a simulation for an eternity of
pre-determined happiness isn't appropriate. I hope anything we create
recognises that bringing us along for the ride is part of the intent.

~~~
intended
Why?

We bring dogs along because they are dumb pets, but we killed wolves and took
their habitat for our own.

There’s a resource cost to “finding new experiences”, which conflicts with
the “survive and procreate” drive necessary for any successful system.

The primary survival loop will command the resources.

Why should it save you?

------
sajid
I fear this is wishful thinking. Advances in AI are progressing a lot quicker
than advances in neural interfaces. So we will most probably have superhuman
AI long before we have neural interfaces.

And at that point it's game over for homo sapiens.

~~~
coding123
It's likely NI/HCI are necessary to get to General AI.

~~~
pixl97
Why, other than "I just think so"?

------
teej
I’m all for being forward-thinking but this is a little out there.

~~~
dgritsko
What, specifically, are you referring to? "[S]cientific advancement
eventually happens if the laws of physics do not prevent it" sounds pretty
accurate - sure, we might get stuck in a local maximum for a bit, but there's
no reason to think that progress towards the things Sam is talking about will
stop in any meaningful way over the long term.

~~~
paganel
> but there's no reason to think that progress towards the things Sam is
> talking about will stop in any meaningful way over the long term.

You could have said the same about the philosopher's stone, which people
thought it was possible to "discover" back in the late 1700s, or about us
humans physically reaching other solar systems, and especially other
galaxies, which some people thought possible for a while after WW2. It's a
belief similar to how some religions started, definitely similar to the early
Christians' belief that the second coming was only a decade or two away.

~~~
dgritsko
True, but there are a lot of things being talked about in this blog post
(e.g. superhuman AI, genetic enhancement, and brain-machine interfaces).
These are all different things, and I was genuinely curious as to what the OP
found to be "a little out there".

------
kolbe
>Double exponential functions get away from you fast.

Still exponential, though.
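To be fair to both sides, "double exponential" has two readings, and only one
of them is "still exponential". A quick numeric sanity check (assuming Sam
means two multiplied exponential trends, compute and headcount; the rates 2,
3, and 100 below are arbitrary illustration values):

```python
import math

# Reading (a): the product of two exponential trends is still a single
# exponential: exp(a*t) * exp(b*t) == exp((a+b)*t).
t = 4.0
product = math.exp(2 * t) * math.exp(3 * t)
assert math.isclose(product, math.exp(5 * t))

# Reading (b): a true double exponential, exp(exp(t)), eventually outgrows
# exp(k*t) for any fixed k. Compare exponents to avoid float overflow:
k, t2 = 100, 8
assert math.exp(t2) > k * t2  # exp(8) ~ 2981 > 800, so exp(exp(8)) > exp(800)

print("both checks pass")
```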

Also, as a general criticism, there's a big difference between people getting
addicted to the internet, getting dumbed down by it, and getting sucked into
things like a YouTube hole, and my idea of The Merge. The things you describe
sound more like an automated soap opera or opiate addiction than the
singularity.

------
jderick
It seems unlikely we will have any sort of effective governance for this (look
at our current political system). At some point someone will invent AI that
lets them gain an extreme advantage of some kind (financial, political or
military). This accelerates current inequality and leads to revolution. Post-
revolution, a new AI is created to manage earth's resources for the benefit
of all. Whatever AI is created will be flawed somehow and will eventually
cause great damage to the human race. Alternative AIs will be created to
improve or combat the incumbent AI, and a sort of evolution of AIs will
occur. Although AIs were originally created to optimize for the human race,
survival of the fittest leads to AIs exploiting loopholes in their objective
functions to find ways to replicate and hoard resources for their own
survival. Humans will still be accommodated to some degree, but in more and
more unnatural and distorted ways.

~~~
mapleoin
Heh, that reminds me of this:
[https://en.wikipedia.org/wiki/Butlerian_Jihad](https://en.wikipedia.org/wiki/Butlerian_Jihad)

------
GuiA
_> It is a failure of human imagination and human arrogance to assume that we
will never build things smarter than ourselves._

If you define intelligence as “being really good at chess” or “factoring prime
numbers”, sure, computers are already smarter than us. If you define
intelligence as “knowing when to let your child make mistakes on their own and
when to help them”, or “knowing how to conduct an orchestra”, it doesn’t seem
so extreme anymore.

In fact, the opposite statement rings just as true:

 _It is a paragon of human arrogance to assume that we will build things
smarter than ourselves in every conceivable way._

The road ahead looks more like a planet with its ecosystems ravaged by
resource extraction, with buggy computerized systems we don’t understand
running people’s lives in harmful ways (e.g. see all the writeups about
machine learning reinforcing systemic biases), than a world full of meta-
humans in symbiosis with inconceivably intelligent machines of their own
design.

~~~
Danihan
>“knowing when to let your child make mistakes on their own and when to help
them”

That can be tested and quantified, based on end state goals.

>The road ahead looks more like a planet with its ecosystems ravaged by
resource extraction

In such a world, the savviest and most strategic 1% would thrive, just like
they always do. Eventually, though, they will need to merge with machines to
keep making the cut. It's too advantageous not to.

------
golergka
> If two different species both want the same thing and only one can have
> it—in this case, to be the dominant species on the planet and beyond—they
> are going to have conflict.

The term "dominant species" really jumped out at me as I was reading this
piece. It's so fantastically vague that it raises the question: would we and
an AI share the same definition of "dominant"?

We're not building AI individuals with a self-preservation instinct, hunger
for resources and a sexual drive. We're building super-individual systems like
Google, Facebook and trading systems that can be much more intelligent than
we are but also have vastly different built-in "purposes" (just as evolution
has built the purposes of eating, having sex and caring for our families into
us).

I think that in the end, the thing we're building will end up much closer to
Solaris ocean than Terminator.

~~~
pixl97
Well, the issue is we are _not building general AI systems at all, yet_.
If and when we can, the game changes, and we pretty much cannot figure out how
it will until it happens.

There is no other general intelligence at this point that can rival human
intelligence, we have a systems sample size of 1. Making predictions based on
such a wide range of possible technologies is hopeless at this point.

Why would a fixed-placement general learning computer system that takes input
from millions of possible sources evolve in the same way as a mobile robot
with general learning intelligence? Will both be possible? What about swarm
intelligences?

Too many questions with no possible way to answer them.

------
coding123
Most people in their prime today will say no to implants. I suspect that 2%
may be OK with it. Each generation, however, will have a higher and higher
percentage OK with it, especially once it becomes a competitive advantage to
merge oneself with tech. Similar to developers taking derivatives of speed
(amphetamines) today.

When would this start? Probably not limited to the pace of AI but the pace of
human computer interfaces catching up. I suspect the largest increase of usage
will be the non-invasive helmet electro-magnetic style that is already proven
to work to some degree.

Generally too, HCI can help us create a massive amount of training data.

~~~
Danihan
It also depends if the implants are visible / detectable or not.

------
mnm1
Maybe the reason the whole world isn't getting behind this is because the
proponents of such theories have yet to provide a single shred of evidence
that this is happening. At least it's interesting science fiction, but it's
quite delusional to think people will get behind this considering the sorry
state of "AI" at the moment. When the rest of us look at "AI" we don't see a
single shred of intelligence because it doesn't exist. How about at least a
prototype? But no, "AI" proponents don't even have that. And please, hold your
arguments about how self driving cars are intelligent; self driving does not
equal intelligence. Anyway, there are much better articles and arguments than
I can make on the subject, many of which show up here on HN quite frequently.
I personally will definitely take this "merge" seriously once we have even a
hint of proof that we can create intelligence at all, let alone intelligence
greater than ourselves. Currently there is zero proof anyone has ever created
anything intelligent in the sense that the word "intelligent" is generally
applied to humans. The technology is about as far along as technology to
teleport star trek style: not at all.

------
njarboe
Advancement in nuclear technologies has been basically shut down since the
1970s. Thiel's thesis is that almost no technological progress has occurred
since then. I tend to agree. Computer technologies were only allowed to progress so
quickly because people did not think computers were dangerous. Imagine any
other new product where you could state that the product is "as is" and claim
no responsibility for its functionality, purpose for use, or damage caused by
malfunction. I would prefer this concept for most things but our "safety
first" society definitely does not.

The one avenue open to tech advancement became so powerful that it eventually
began to bleed into the physical world and we are starting to see other tech
slowly advancing again. Maybe if the tech wealthy of Gen X and later have
sufficient power in society after the Boomers pass the baton (if they ever
do), we will let the technology keep advancing and see this merging. There is
also the fact that the US is no longer a hegemon. Good luck stopping people in
China from doing things banned in the West.

~~~
throwawayjava
_> Advancement in nuclear technologies has been basically shut down since the
1970s_

That's not true at all! Civilian nuclear power in the USA (and most of the
west) has stalled out since the 70's, but nuclear technology has seen plenty
of advances over the intervening time frame. See esp. modern nuclear
propulsion systems. Super impressive.

Also, this isn't a problem caused exclusively or even primarily by safety
regulation in the nuclear sector. The evidence is in the failure of nuclear
power throughout the world, despite significant variations in safety
regulations.

In fact, if you look at root-cause analyses for the failure of nuclear energy,
you'll find that a _lack_ of regulation is a major reason for it. If the
fossil-fuel industries had to pay for their externalities,
nuclear would be extremely viable (at least in the 80s-00s; now it'd have to
compete with solar and wind).

 _> Computer technologies were only allowed to progress so quickly because
people did not think computers were dangerous._

Again, this is a pretty wild assertion... Moore's law >>>> regulatory
environment. Seriously. If the output of nuclear power plants had grown
exponentially for multiple decades, we'd be in nuclear-powered paradise.

~~~
njarboe
I am not sure what you mean by "modern nuclear propulsion systems". Some links
to details of those would be great. I'm hoping for applications outside
military ships and submarines, which have been around since the 1960s. There
were actual engines for nuclear rockets, nuclear airplanes, nuclear excavation
techniques, etc. in the 1960s. There were definitely problems with those
systems, but we stopped trying. Nuclear test ban treaty, etc. I'm not saying
that was not the correct path to a better future (hard to say), but society
definitely stopped working on that stuff.

"Moore's law >>>> regulatory environment"

I don't see how this disagrees with what I was saying. Imagine if society had
the same amount of regulation on building software systems as on dealing with
radioactive stuff. Moore's law would not exist.

"If the output of nuclear power plants had grown exponentially for multiple
decades, we'd be in nuclear-powered paradise."

Why didn't that happen? Lots of reasons, but I think it could have, and still
could, given a chance.

~~~
throwawayjava
_> I'm hoping for applications outside military ship and submarines, which
have been around since the 1960's._

I'm referring to military ships and submarines. I'm not really sure where the
financial incentive for nuclear comes from for anything commercial?

TBH it seems like you have a culprit (regulation) in search of a victim
(nuclear utopia), and as a result, you have a solution (nuclear power) in
search of a problem. If that's the case, there are really many much better
examples.

 _> Nuclear test ban treaty, etc._

The NTBT covers intentional detonation of nuclear weapons... I'll be the first
to concede that regulation has perhaps hamstrung the private sector nuclear
weapons market :-)

 _> engines for nuclear rockets_

Radiation is a bitch and lead is heavy. Conventional rockets that we need for
human space flight already can, with modification, get us to where we've
wanted to go so far. So we've instead focused our resources on doing useful
stuff once we get there. NASA doesn't have inf money.

This is something that people continue to research. What's holding it up is a
lack of priority in decisions about what science to fund and a complete lack
of any private sector market large enough to justify the investment, not any
sort of regulatory barrier.

 _> nuclear excavation techniques_

...I'm having an extremely difficult time thinking of a use case where this
would make any sense. Nuclear bombs are more powerful than conventional
explosives. That power has an enormous tactical advantage in wartime because
you can take out a city with a single missile instead of weeks worth of
bombing runs.

But that's not really a particularly beneficial feature in e.g. a commercial
mining operation. Plus radiation's a bitch.

Help me out? What are the possible use cases here?

 _> nuclear airplanes_

Radiation is a bitch and lead is heavy.

 _> I don't see how this is a disagreement with what I was saying. Imagine if
society had the same amount regulation on building software systems as dealing
with radioactive stuff. Moore's law would not exist._

This isn't clear to me. I think the hardware would still have been invented.
Perhaps the business impact would've been smaller, but regulation of
commercial software systems wouldn't have stood in the way of the enabling
physics and engineering research.

Also, see the Ford quote about Microsoft; if the nuclear industry worked like
the software industry, we wouldn't even be here to have this conversation.
We'd have BSoDed our way to nuclear Armageddon some time around 2001.

 _> nuclear power plants had grown exponentially for multiple decades... Why
didn't that happen?_

Physics is a bitch. More precisely, I'm unaware of any serious conjectures by
modern nuclear scientists that there are obvious advances on the horizon that
could give us exponential improvements to reactor output even in the short
term. Let alone over 3+ decades.

Aside from power, and perhaps inter-planetary travel, nuclear is a case of
"tried it, doesn't work for fundamental reasons" or "wtf now you're just
throwing nukes on things for fun". And in the case of power, we have a lot of
data points from a lot of different regulatory regimes, all of which point to
"this is a quite expensive way to make power which won't pay off until fossil
fuel externalities are finally internalized... i.e., never"

~~~
njarboe
I hope someday the risks from radiation will be evaluated on a par with the
other risks of a modern society and nuclear power options will be viable
again. After decades of technological stagnation in the rocket-launch
business, Musk and company have orbital-class rockets landing back at the
launch pad and are looking into nuclear rockets. I wish him luck.

~~~
throwawayjava
IMO risk is absolutely not the primary reason that nuclear-powered rockets
don't exist.

~~~
njarboe
What is the primary reason? Many people would love to work on such a project
and some people would fund them. Would you agree that fundamental physics does
not preclude a functioning nuclear rocket? Where would one build, much less
test, such a device as a nuclear-powered rocket? We can't even agree in the US
where to bury inert (mostly) ceramic radioactive waste. A live, radioactive
(low-level, but still radioactive) exhaust stream is not going to happen in
the US in the current culture.

~~~
throwawayjava
_> Would you agree that fundamental physics does not preclude a functioning
nuclear rocket?... What is the primary reason?_

Yes. No demand.

 _> Would you agree that fundamental physics does not preclude a functioning
nuclear rocket?_

Fundamental physics also doesn't preclude a nuclear powered ferris wheel the
size of the empire state building.

There are lots of things that humans can do but don't do.

 _> A live, radioactive (low-level, but still radioactive) exhaust stream is
not going to happen in the US in the current culture._

This hasn't stopped us from building an enormous fleet of nuclear ICBMs and a
rather large fleet of nuclear power plants.

------
pdog
What does a rapidly improving AI have to gain by "merging" with our
notoriously error-prone, finicky biological hardware? It's nearly certain that
intelligent machines will choose to replace humans, not enhance them.

Our "successful" descendants will probably be the unmerged, low-tech survivors
living on the outskirts of a new "Machine Age" civilization.

~~~
chadgeidel
I believe the merging Sam is referring to here is augmenting human
intelligence with non-intelligent (but better) machines. Kurzweil talks about
this as well, and was one of the original options in Vinge's "Singularity"
([https://edoras.sdsu.edu/~vinge/misc/singularity.html](https://edoras.sdsu.edu/~vinge/misc/singularity.html))

~~~
pdog
_> Unless we destroy ourselves first, superhuman AI is going to happen...
Perhaps the AI will feel the same way [that intelligence is singular] and note
that differences between us and bonobos are barely worth discussing._

He's referring to a superhuman artificial intelligence, not merely an
augmented human intelligence.

------
norswap
I'm still skeptical about super-human-intelligence AI, now or within a few
decades.

There just doesn't seem to be any evidence for this kind of development. In
fact the only differences to 10 or 20 years ago, when people weren't bullish
on this nonsense (my opinion) is that we have much more compute power now, and
good results with deep learning. Deep learning is "just" (the quotes are big
here) a search for a function that approximates a process of interest.

We are miles (more like multiple earth-circumferences) away from anything
approaching general intelligence. And if that's not what is meant, then AI is
already super-human in some domains of interest. But then, it already was
decades ago.

Of course, we'll keep getting increasingly incomprehensible and useful
algorithms, although I believe we have already reaped the low-hanging fruit
and are not going to 10x what we have now.

If I may venture a wild guess, progress towards general intelligence might
come from learning more about our own cognition.

~~~
polock
I'm still skeptical about super-human-intelligence AI too. But we can't stop
this and need to think about how we can live with them peacefully.

------
sounds
Sam wants to talk about the utility function of a superhuman AI. Ok, what are
the likely outcomes?

(For sake of discussion, let's just accept that a superhuman AI will exist
soon.)

Asimov's "Three Laws" point out that a utility function is just a program like
any other program. It has no inherent moral code. If it prefers "the good of
mankind," it is because the engineers made it so.
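
That point can be made concrete in a few lines: a utility function is ordinary code, and an agent's preferences are exactly whatever the engineers happened to write (the outcome fields and weights below are made up for illustration):

```python
# A "utility function" is just an ordinary function: the agent ranks
# outcomes by whatever number it returns, with moral content included
# only if someone wrote it in.

outcomes = [
    {"name": "A", "paperclips": 100, "human_welfare": 0},
    {"name": "B", "paperclips": 10,  "human_welfare": 9},
]

def utility_paperclips(o):
    # Cares only about production.
    return o["paperclips"]

def utility_with_welfare(o):
    # Same world, different program: welfare is weighted in.
    return o["paperclips"] + 50 * o["human_welfare"]

def best(utility):
    return max(outcomes, key=utility)["name"]

print(best(utility_paperclips))    # A
print(best(utility_with_welfare))  # B
```

The "good of mankind" lives or dies on a single coefficient the engineers chose.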

How long until someone makes one that actively destroys mankind?

Sam seems to accept that a single superhuman hostile AI is an extinction-level
event. Friendly AIs are non-events, but a hostile AI is an existential threat.

There are counter-arguments: humans are resilient; our monoculture hasn't
wiped out all avenues of escape; governments are still human-dominated and
unlikely to surrender to AI control.

I haven't decided yet, but I am convinced there are concrete actions _right_
_now_ that have a strong effect on the outcome.

The real contest, though, is that humans just don't care, and AIs are
tireless, flawless machines.

------
aaavl2821
"the merge" may or may not happen in our lifetimes. current AI tech has a
ceiling, even if we haven't hit it yet. that ceiling may be AGI, it may not,
who knows

The bigger question, if we want to make human life better, is: should we worry
about AGI more than other things?

One can argue that if people want to help their fellow humans, they'd get the
most value for their time by, say, volunteering with the many 8th graders in
East Palo Alto who can't even spell their first name, rather than by trying to
prevent an AI apocalypse that they can't even predict with any degree of
accuracy

I think it is good and important that people develop AI responsibly and think
about these things, but does this topic really deserve more public attention
than the many other threats and challenges that humanity faces?

I know sam and elon and others are very smart and have large megaphones, but
we should certainly question their priorities

------
tw1010
Sorry if this is more negative than HN permits, but am I the only one who gets
the feeling that the style of this post sounds kind of smug? Ending a
paragraph with "And gradual processes are hard to notice" and an invisible
smirk kind of leaves a bad taste in my mouth.

~~~
callumlocke
I am genuinely not sure what you're getting at. What do you find smirky about
that sentence?

------
codingdave
> Our phones control us and tell us what to do when; social media feeds
> determine how we feel; search engines decide what we think.

All of this is only true if you let it be true. And if that is the basis for
thinking we are moving into the singularity, then there is an echo chamber
informing that conclusion. There are plenty of people who do not live their
life via devices or social media. There are also plenty of youth rejecting
those choices. But, almost by definition, you aren't hearing from those people
online.

If anything, there is a voluntary split going on between those who are
embracing tech as a central core of their life and those who reject it. And a
subset of people like myself who make their living at it, then go to a home
without it.

~~~
Apocryphon
Not to mention, all of those examples are, as they say, socially constructed.
A far cry from the speculative science involved in creating true MMI.

------
eqmvii
Was there this much hand-wringing during previous AI boom/bust cycles? There's
a lot of fear and emotion swirling around AI/machine learning right now, and
I'm curious if that's been the case in the past as well.

~~~
LeoJiWoo
Well drones have really captured the imagination of the public this time, and
we have very public and popular tech leaders warning the public about AI.
Combine that with the 24/7 media clickbait cycle, and I think we have a unique
situation.

~~~
jcstauffer
In addition, the machines didn't have so much control of our news input and
conversation the last time around.

------
tramGG
I'm super pro singularity. I've been watching what I think will be the key to
that next step of growth: Decentralized AI.

Using the blockchain for decentralized access to distributed machine learning
models and creating a heterogeneous network of autonomous agents that can
collaborate, learn, and grow will be huge. One of the companies I see doing
that right now is [https://synapse.ai/](https://synapse.ai/) and it's pretty
epic if you dig into their yellow paper.

When we start building a global brain where everyone can contribute, then
we'll really start seeing what the future can hold.

------
PaulHoule
With all due respect.

You would feel better (particularly in regard to the emotions you describe at
the end), Sam, if you put yourself on an "information diet".

~~~
maxaf
Luddite!

J/K, I'm a luddite too.

~~~
PaulHoule
I view HN (and soon the world) through a filter designed around my combined
informational and emotional needs. I quit Facebook years ago. I pick and
choose the technology I use.

~~~
bloudermilk
Are you speaking of a literal filter? Or is this more of a personal practice?

~~~
PaulHoule
A technological filter integrated with personal praxis.

------
nathas
As a human, you can imagine the existence of a color you've never seen. But
such a color lies beyond what you are able to perceive; we can only see our
slice of the spectrum. Therefore it's impossible to describe or create that
color.

As far as I'm concerned, this is the same with AI.

You can imagine an AI that is smarter, bigger, more capable than humanity, but
realistically we can't describe that.

We can't create something that is greater than our own limitations, the same
way we can't create a color that we can't perceive.

Humanity is bound by its own intellect, so any AI could only ever be as smart
as we are.

~~~
vatys
> We can't create something that is greater than our own limitations

If that were true we would still be single-celled organisms. Or do you mean it
can't be done _intentionally_?

~~~
nathas
Intentionally. I suppose emergent behavior could create this AI, but I'm
pretty heavily skeptical there.

------
tschellenbach
What we call AI is very good at pattern recognition, though I haven't yet seen
examples of AI learning quickly. It can teach itself how to play chess, but
it takes a very large number of attempts before it becomes good. The rate of
learning for a human is still much faster than for an AI. (We just hit a
plateau faster). I'd put my money on a child that has played 10 games of chess
vs a computer that's learning from scratch and has played 10 games. I wonder
if there have been any studies on trying to speed up the pace of learning for
AI.
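
As a rough illustration of that sample inefficiency, here is a minimal trial-and-error learner, an epsilon-greedy agent on a 2-armed bandit (the win rates and epsilon are made-up assumptions, and a bandit is of course far simpler than chess):

```python
import random

# Epsilon-greedy learner on a 2-armed bandit with win rates 0.6 vs 0.4.
# The arm probabilities, epsilon, and pull counts are illustrative.

def run(pulls, seed=0, eps=0.1):
    rng = random.Random(seed)
    p = [0.6, 0.4]       # true win rate of each arm
    wins = [0, 0]
    tries = [0, 0]
    for _ in range(pulls):
        if rng.random() < eps or 0 in tries:
            arm = rng.randrange(2)  # explore (or force an untried arm)
        else:
            arm = max((0, 1), key=lambda a: wins[a] / tries[a])
        tries[arm] += 1
        wins[arm] += rng.random() < p[arm]
    # Return the arm the agent believes is best.
    return max((0, 1), key=lambda a: wins[a] / max(tries[a], 1))

print(run(10_000))  # almost surely 0, the better arm
```

After a handful of pulls the estimate is noise; it takes thousands of attempts before the better arm is reliably identified, which is the "very large number of attempts" in miniature.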

~~~
ealexhudson
It's difficult to compare. A child learning chess invariably has an adult
around to comment on their game and improve it, and frankly after 10 games if
they still remember the rules that's an achievement (if the child is young).

If you did the same approach with a child that could fully comprehend the
rules from the start, playing another child, both of whom had never played
before, I really don't think they would have learned as much as the computer
would have done. It would be an interesting experiment - my bet would be that
the children had invented another game and were playing that instead.

------
TelmoMenezes
Independently of the time frames, I also believe that the merge is our
species' only hope for survival.

With apologies in advance for the self-promotion, here's a paper where I
present my arguments for alternative scenarios:

[https://arxiv.org/abs/1609.02009](https://arxiv.org/abs/1609.02009)

In short, I argue that non-evolutionary superintelligences are not stable
(they eventually go inert), while evolutionary superintelligences are a very
serious existential threat to our species.

------
theandrewbailey
> I believe the merge has already started, and we are a few years in. Our
> phones control us and tell us what to do when; social media feeds determine
> how we feel; search engines decide what we think.

I think this is a misunderstanding of human nature. Humans are thinking and
feeling beings who are responsible to others. They are only slaves and
automatons if they choose to be; most don't. Many who do have hints and
feelings that what they are doing is unnatural.

~~~
pixl97
>They are only slaves and automatons if they choose to; most aren't.

I completely disagree. Most are. In fact, we have study after study showing
this is the case.

You want to focus on the modern automatons, such as phones and social media,
but we have plenty of other examples. Focus on the clock. Focus on the law.
Focus on societal and cultural norms, no matter the negative effects these
things have. They have driven us since antiquity.

------
jancsika
> The algorithms that make all this happen are no longer understood by any one
> person.

I smell epicism.

How did it become chic for women to smoke cigarettes back in the first quarter
of the 20th century?

------
dibujante
> I believe the merge has already started, and we are a few years in. Our
> phones control us and tell us what to do when; social media feeds determine
> how we feel; search engines decide what we think.

Wishful thinking.

And you can't just go appealing to complexity, either. Economies are too
complex for any actor to understand; we haven't lost our individuation.

------
aidos
I enjoyed reading Homo Deus: A Brief History of Tomorrow (by Yuval Noah
Harari) this year which focuses on this subject.

------
superbaconman
I feel like we'll be lucky if we can even get artificial hearts by 2025 that
don't significantly increase the risk of stroke. The various pieces of tech
seem to be around, but I don't know if the experience to combine them all is.

------
guelo
The real unstoppable algorithm is capitalism. Capitalism is funding the
exponential advancements in AI at Google, Facebook and other places. But it
does so for capitalism's purpose: the algorithms get better at grabbing our
attention because attention = profit. Profit is the only goal that capitalism
optimizes for. The only reason AI will ever start destroying humans is if
there's a profit motive, which might very well happen at some point, but we
should focus on the real motivation, not the tools it uses. Capitalism won't
give up powerful profit tools easily. Sam's hope for worldwide coordination
has not worked against climate change: capitalism sensed that threat, started
attacking our political systems and propaganda channels, and so far it has won
that battle handily.

------
golemotron
I'm disturbed by the fact that many people in SV just say "oh well, nothing we
can do" when discussing technology while implying the exact opposite when
addressing human nature.

------
Balgair
>...and brain-machine interfaces are going to happen

> Most guesses seem to be between 2025 and 2075.

No way.

Deisseroth is on a fast track to win a solo(!) Nobel for optogenetics and
CLARITY, sure, but we are at _least_ a century away from a wet-ware interface,
if one is possible at all. The BRAIN Initiative was effectively a failure
(for many reasons) and the Connectome projects are essentially coming up with
'brains be different, yo'. Hell, we only discovered that the immune system is
in the brain at all like 3 years ago. We have no idea how many astrocytes
and glia are in your brain (50% or 90%?) or how they regulate synapses
(maybe they are the primary regulators). What the hell are vestigial cilia
even doing in the brain anyways? The list continues for miles of .pdfs.

Repair of neurons would be a necessary step for wet-ware, and still we have a
damnable time trying to get people to dump ice-water on their heads as their
father is dying. We are _decades_ away from a cursory understanding of a wet-
ware interface that won't just glia-up in a year or put you on drugs for life
and at a 10,000x risk for strokes. We know electrodes don't work in the brain
and the drug cocktails don't either.

Optogenetics is a _great_ discovery (use light, not electrodes) for
interfacing, but the damn Abbe diffraction limit (a huge physics limitation)
screws you: ~125,000 um^2 of light at the focus versus a neuron's 25 um^2
soma. Maybe, yeah, for peripheral nerves, where you can 'multiplex' along the
length of a long fiber bundle, you can get away with a wet-ware interface. But
cortical? Not gonna happen. You can use STED techniques, but you'll cook the
brain getting the resolution down first. Opto is only good for applications
where you aren't limited by Abbe, and that's not the cortical areas.
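
Taking the comment's own area figures at face value, the mismatch is easy to quantify (this is just division of the numbers quoted above, not an independent biological claim):

```python
# Area figures quoted in the comment above (square micrometers).
focal_spot_area_um2 = 125_000  # light delivered at the focus
soma_area_um2 = 25             # one neuron's cell body

somata_per_spot = focal_spot_area_um2 / soma_area_um2
print(somata_per_spot)  # 5000.0 - one "pixel" of light blankets
                        # thousands of somata at once
```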

> We will be the first species ever to design our own descendants

Maaaybe. However, what is a 'family' then? Your kids may not look or 'be'
anything like you. All the families that will have done so will essentially
have adopted a child, as far as the genes go. Plus, that kid will be 'whicked
smahrt' if I'm reading this correctly. Not a lot of people do that even today,
for many reasons. How will the kids think of their 'dumber' parents? Will they
be 'parents' to them, or more like the cat, but with an inheritance? I think
the initial forays are key here, and those forays will not be happening in 1st
world countries, but much more 'familial' based ones like Korea and China.
Places where the distortion of the family will be even more 'cutting' to the
societal fabric.

------
le-mark
_I believe the merge has already started, and we are a few years in. Our
phones control us and tell us what to do when; social media feeds determine
how we feel; search engines decide what we think._

Someone needs to take a vacation from their devices, it seems. I feel like
this overstates and dramatizes the situation to a large degree.

------
ulyssesgrant
Can anyone recommend a sci-fi book that investigates this line of thought
(what happens when AI can learn without human intervention, to the benefit and
detriment of society)?

------
brandon272
What are some current examples of exponential AI advancement?

------
acoleman616
It's Elon's biggest fear for very good reasons...

