
The dominant intelligence in the cosmos may be artificial - Thevet
http://motherboard.vice.com/read/the-dominant-life-form-in-the-cosmos-is-probably-superintelligent-robots
======
jonnathanson
_" As soon as a civilization invents radio, they’re within fifty years of
computers, then, probably, only another fifty to a hundred years from
inventing AI,” Shostak said. “At that point, soft, squishy brains become an
outdated model.”_

While I am inclined to agree with the overall premise here, this sort of
thinking strikes me as a bit naive. Technological progress is not
teleological, just as evolution is not teleological. (Technological
development is not as random as evolution, certainly, but neither is it
teleological.) There is no "best" way to develop technology, and there is
certainly no predetermined, given course that technological development must
inherently follow. Progress is heavily influenced by local circumstances,
context, goals, and constraints. Even in studying human history, you encounter
certain civilizations (the Mayans, for example, or the Greeks) who had
remarkably "advanced" technologies in some domains, and the complete absence
of expected technologies in other domains.

Instead, we should be talking about probabilities and correlations. The
development of radio technology seems inherently linked to an understanding of
the properties of electromagnetic radiation. We can probably assume that with
some confidence. From there, what other technologies seem likely to be
correlated to that knowledge, and with what degree of confidence? Etc.

~~~
ErikHuisman
I'm pretty sure Shostak talks in probabilities all the time, as he is the
director of the SETI Institute. The institute's founder, a friend of Seth's,
came up with the Drake equation.

"The Drake equation is a probabilistic argument used to estimate the number of
active, communicative extraterrestrial civilizations in the Milky Way galaxy."
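The Drake equation quoted above is just a product of factors; here's a minimal sketch in Python, where every parameter value is an illustrative assumption rather than a measurement:

```python
# Drake equation: N = R* · fp · ne · fl · fi · fc · L
# All parameter values below are illustrative guesses, not measurements.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of communicative civilizations in the galaxy.

    r_star:   rate of star formation (stars/year)
    f_p:      fraction of stars with planets
    n_e:      habitable planets per star with planets
    f_l:      fraction of those that develop life
    f_i:      fraction of those that develop intelligence
    f_c:      fraction of those that emit detectable signals
    lifetime: years such a civilization keeps signaling
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Generous sample guesses yield roughly 10 civilizations.
print(drake(r_star=1.0, f_p=1.0, n_e=0.2, f_l=0.5, f_i=0.1, f_c=0.1,
            lifetime=10_000))
```

The whole argument lives in the inputs, which is exactly why the output is a probability-flavored estimate and not a prediction.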

We are the only known intelligent species in the universe, so we model these
expectations after ourselves, just like we search for Earth-like planets in
the Goldilocks zone. Carbon-based and (possibly) silicon-based life forms are
the only ones we know can happen.

~~~
jonnathanson
I'm also sure Shostak is thinking about probabilities. My point was that we
shouldn't necessarily think about the probabilities along a presumed straight-
line path.

The logic Shostak is using here begins with a sort of reverse-engineering of
_how humanity made its progress_, followed by an implicit presumption that
other advanced civilizations will follow the same path, until such time as
they advance beyond us (by way of AI). I don't believe we can confidently say
that technology has to progress along the path we walked.

------
djokkataja
This also has implications for what spacecraft would be like for such an
intelligence. It's very challenging to build rockets and spacecraft for
squishy humans (careful with those G-forces) that require air to breathe,
space to move around, food, water, toilets...

On the other hand, if you assume that strong AI can exist in a package at
least as small as the human brain, with only a dependence on electricity
instead of all the physical constraints mentioned above, your spacecraft could
be much, much smaller. If you give such a civilization 1000 years to iterate
on the technology (an insignificant amount of time compared to the timescales
involved in evolution), there's no good reason to think they wouldn't be able
to shrink the computing technology (and therefore spacecraft technology) by
several orders of magnitude. Consider the size difference and computing
capacities of our earliest computers and smartwatches today--and we did that
in less than 100 years.

The benefits are pretty obvious: it would require vastly fewer resources to
travel to other stars, and any intelligence on board the spacecraft would be
immortal (plus if it got "bored" on the way, it could power itself down and
schedule itself to start up again once it got to where it was going). The
stars seem out of reach to us because of the limitations of our lifespans and
our bodies, but they would be entirely within the reach of AI. And from the
perspective of conserving resources, there would be no reason to send
something anywhere near the size of a human being.

~~~
ianmcgowan
Tangential plug for a great "hard" SF read:
[http://www.amazon.com/Diaspora-Greg-Egan-ebook/dp/B00E83YOEI/](http://www.amazon.com/Diaspora-Greg-Egan-ebook/dp/B00E83YOEI/)
\- your comment would make good liner notes.

------
arjie
> “As soon as a civilization invents radio, they’re within fifty years of
> computers, then, probably, only another fifty to a hundred years from
> inventing AI,” Shostak said. “At that point, soft, squishy brains become an
> outdated model.”

I wonder if we're in some sort of temporary Golden Age at this point,
mistakenly seeing the middle of a sigmoid curve and assuming perpetual growth.
It wouldn't be the first time we've underestimated how hard strong AI is.
Still, I like the pervasive idea that nothing's impossible - that it's just a
matter of time.

~~~
eli_gottlieb
If "strong" AI means conscious AI with subjective experience, we're several
milestones in theoretical neuroscience and cognitive science away.

If "strong" AI means software capable of learning _any particular_ task a
human could perform, we're so damn close that the research world is proceeding
to cross the line as we speak. It's called "imitation learning" nowadays, and
once there's a sufficiently general and accurate algorithmic framework with
sufficient training data, it'll be everywhere.

If "strong" AI means software capable of learning many tasks a human could
perform, generalizing them into a complete _worldview_, and taking volitional
action based on that worldview to accomplish conceptually-specified goals...
I'd call it one to two decades away. There are some major substantial problems
remaining to research, but we do have solid foundations for it: uncertain
reasoning in Turing-complete modeling domains lies at the intersection of
several thriving research fields.

In terms of "how much science is left", I'd say our current point is roughly
analogous to the physics community having spotted that the photoelectric
effect can only be explained by discretized photons and _starting_ to
generalize quantum physics, but having most of it remaining to discover.

~~~
wwweston
> I'd call it one to two decades away. There are some major substantial
> problems remaining to research

Hopefully that includes what to do with the humans still dependent on the
labor market, which would immediately collapse upon the availability of that
kind of AI.

~~~
eli_gottlieb
Sorry, but imitation learning is going to be quite good enough to automate
many, many tasks without any need for AGI _agents_ as such. The labor market
problem is a _right now_ problem, not a "leave it to those futurist guys who
don't have to put up with academic rigor" problem.

------
transfire
Using our own motivations as a model, it is more likely that we will
ultimately build a super-super computer that can house/serve everyone's
(enhanced) minds. Everyone will simply upload their "being" to this computer
and that will be the end of anything we know as humanity. Any humans left
behind will devolve into nomadic tribes, having more in common with animals
than super-modern humans. Any ETs visiting Earth looking to find intelligent
life will find nothing but a big box with a "DO NOT DISTURB" sign on it and an
automated security system to prevent anyone from doing so.

In other words, when we can all live out our fantasies, what need do we have
of reality?

~~~
eli_gottlieb
Excuse me, but a lot of people quite like having an actual reality instead of
a pointlessly mutable existence.

Just because Greg Egan wrote a book about it doesn't mean it's a good idea.

~~~
crazypyro
Except we don't even know if we are living in "actual reality". We could
already be living in a reality simulation.

(Sorry for the overused argument, but I think it fits here. If we can never
distinguish a constructed virtual reality from our current reality, why would
we prefer one to the other? Furthermore, there is no inherent point to our
current "reality", so how would a virtual world be different?)

Also, any version of virtual reality other than a perfect, indistinguishable
one (indistinguishability being what makes it a perfect mirror of reality) is
pointless to argue about. In your other reply, you mention issues with
isolationism and being able to see the physical world. When the virtual
reality is equivalent to the physical world, is that connection to physical
reality necessary?

~~~
eli_gottlieb
> Except we don't even know if we are living in "actual reality". We could
> already be living in a reality simulation.

No, we know damn well that our current reality works based on fixed physical
laws. You can't change something's color in reality by altering its texture-
map, and you know that everything in reality which looks like an object is an
object, while everything that looks like a person is a person.

> When the virtual reality is equivalent to the physical world, is this
> connection to physical reality necessary?

... Yes. In the exact same way that it is sometimes necessary to unplug your
laptop, leave your cubicle, and go outdoors. For one thing, however much you
might like your cubicle (read: "virtual reality"), it is ontologically
dependent on outdoors (read: real life).

Or you could be walking along in virtual reality one day, enjoying the nice
simulated weather from your upgraded Navier-Stokes package, when you neatly
wink out of existence because a squirrel back in real life chewed the wrong
cable.

------
twelfthnight
In my opinion, if humans were to create robots as intelligent as humans, then
those robots would actually be human. What's the difference between passing
information via DNA and passing information via some programming code? Are
humans from 100,000 years ago less human than humans today, even if we have
"improved" slightly due to natural selection?

~~~
eastbayjake
There's more to being human than intelligence. It's important to remember that
sentience actually refers to "the ability to feel sensations," not merely the
quality of being an intelligent entity.

Our future robots may be very good at passing information via some programming
code... but will they love? Will they be sad? Will they read literature to
empathize with the experience of other beings?

I would argue that "being human" is inextricably linked to the isolation of
our existence (what David Foster Wallace described as being "marooned inside
our own skulls", and why he argued that humans make literature to "give
imaginative access to other selves"[1]) and the knowledge that our existence
is finite. All humans -- from the cavemen all the way down to you reading this
on your computer -- make life decisions knowing that our time on Earth is
short. Do you share the same human experience if you know you will live
forever?

[1] [http://hudsonreview.com/2014/02/on-david-foster-wallaces-conservatism](http://hudsonreview.com/2014/02/on-david-foster-wallaces-conservatism)

~~~
r00fus
I'll take the converse argument - there might be more to intelligence than
being human (of course, it would be hard for us as "thinking meat" to
recognize it). What if the full range of emotions and sensations could be a
superset of what humans can actually experience?

It could very well be that we wouldn't even know that kind of intelligence
exists - perhaps we're the ones crawling around in the ant farm?

~~~
eastbayjake
That's an interesting point! I still think that non-human intelligence would
be... non-human. Maybe the Snorgblats will find us in space and ask
themselves, "Can these 'humans' share our Snorgblat experience if they can't
feel wzinx, yowfli, or even _writznok_?"

------
xemoka
At what point is an entity sufficiently advanced that what makes it up isn't
considered artificial any longer? How are we not just sufficiently advanced
biological robots? We don't consider ourselves artificial, yet we respond to
stimuli according to the programming of our DNA and "mind" (nature +
nurture).

If "artificial" is a designation meaning something created by humans, or by
another entity, to serve a purpose, then what happens when that created AI has
the ability to choose to create something itself? The "artificial
intelligence" designation only works while a "creator" is trying to create an
intelligence; once it exists and propagates on its own, isn't it an
intelligence in its own right?

~~~
unclebucknasty
My thoughts as well. Reminds me of another discussion within the last week or
so, wherein someone rejected the idea that we're living in a sim.

My thought was, "a simulation versus what?" What makes our Universe a
simulation or not a simulation? How does one define "simulation"?

It's kind of an artificial construct based on our acceptance of the current
"reality" as somehow possessing this amorphous quality of "realness" in the
first place.

------
velocitypsycho
It would be interesting if the reason we haven't detected signs of other life
is that we're quarantined until such time that we create AI.

------
givan
Not the old Terminator theory again, that the machine exceeds human
intelligence. It seems that everybody misses the point of DNA: once we
understand how it really works and how our brains are built, we can build
things that will exceed any silicon chip, or upgrade our own brain capacity,
and as a bonus we will learn more about the origins of life on Earth and how
life works.

In the end, the "machines" will probably be better versions of ourselves,
evolving artificially by DNA manipulation at a faster rate than current
natural evolution.

I think something like this is more likely to be happening in the universe
than our current view of self replicating evolving computers.

~~~
ElectronCharge
"Not the old Terminator theory again, that the machine exceeds human
intelligence. It seems that everybody misses the point of DNA: once we
understand how it really works and how our brains are built, we can build
things that will exceed any silicon chip, or upgrade our own brain capacity,
and as a bonus we will learn more about the origins of life on Earth and how
life works."

Actually, I doubt this. Our brains and nerves are based on chemical processes
which are very slow compared to electronics. Nerve impulses travel at about 60
MPH, versus a bit under the speed of light for fiber optics. That's roughly
seven million times slower.
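That ratio is easy to sanity-check; here's a quick sketch, assuming nerve conduction of about 60 MPH and light in glass fiber at roughly two-thirds of c:

```python
# Sanity check of the nerve-vs-fiber speed comparison.
# Assumptions: nerve conduction ~60 mph; signal speed in optical fiber
# ~2/3 of c (light slows down in glass).

MPH_TO_MS = 0.44704            # metres per second per mile per hour
nerve_speed = 60 * MPH_TO_MS   # ~26.8 m/s
fiber_speed = 2.0e8            # m/s, roughly two-thirds of 3.0e8

ratio = fiber_speed / nerve_speed
print(f"{ratio:.2e}")          # on the order of 7.5 million
```

Even taking the fastest myelinated nerve fibers (over 100 m/s rather than 27 m/s), the gap stays in the millions, which is the point of the comparison.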

Also, our digital circuitry is approaching the size of individual molecules.

There are challenges with power and cooling, but they seem solvable.

"In the end, the "machines" will probably be better versions of ourselves,
evolving artificially by DNA manipulation at a faster rate than current
natural evolution."

Engineering seems inherently far more efficient than evolution, artificial or
otherwise. Even today we can build provably correct, arbitrarily large
memories as long as we have hot-pluggable replacement parts. Parallel
supercomputers also exhibit interesting traits of expandability, reliability
and redundancy.

Many think that one of the first things that will be done with a first-
generation "true AI" will be to have it engineer a second-gen AI. And so on...

"I think something like this is more likely to be happening in the universe
than our current view of self replicating evolving computers."

Time will no doubt tell... :-)

------
pvaldes
An interesting question is why everybody is trying to create an intelligent
robot, just for the sake of showing how intelligent they are, while nobody
wants to explore or simulate other life forms. Maybe it's because of the false
assumption that "if you cannot create a really smart machine, it proves that
you are not really smart."

No, it doesn't.

A lot of successful creatures on Earth are just plain stupid, or maybe another
kind of smart.

I wonder why nobody is creating, for example, a "sequoia" robot: a machine
designed with the main goal not of moving, talking, or acting like an animal,
but just of surviving for 2000 years (and maybe storing/delivering information
in the meanwhile). This machine would not need to be very intelligent, and
would not need eyes or legs; it would just need to be very robust and maybe
self-repairing. That's all.

A machine like this could be really useful for the human species. Instead we
create chess players.

When we think of a robot, everybody has in mind a machine able to speak,
deliver company, and land smart punches and quotes. Well, that's perfect...
for a Hollywood film. In fact the aim should be to create a machine able to
survive in a spaceship for "any time that we wish", or at least "a LOT of
time", doing very simple things.

A machine doesn't need to be intelligent; it needs to serve an intelligent
purpose.

------
subdane
What if "intelligence" is a red herring and really what we're talking about is
conscious awareness? And what if that's a naturally occurring property like
gravity (but more rare)? I'm not a physicist or a mystic but my lay knowledge
says that some quantum experiments require an observer in order to resolve
themselves. I suspect it would be more productive to be looking for artificial
consciousness.

~~~
unchocked
"Naturally occurring" is a red herring itself. Consciousness must be a
property of a system rather than its elements. Whatever the constraints on
systems that can be conscious, it is not going to come down to "natural" vs.
"artificial", unless mystical metaphysics turns out to be true.

So the answer to your hypothetical is mysticism as reality.

------
nubbee
We're chained genetically to our primal instincts of survival and territorial
behavior; highly intelligent AI will most likely not be. Why would it want to
exist? If you were devoid of emotion and instinct, and could calculate with
high probability that the universe would end in a Big Rip, why continue? Sorry
to be a downer, but it's a genuine question.

~~~
dropit_sphere
Is an AI not subject to natural selection as we are? If emotion and
territoriality are net negatives but local maxima, couldn't an AI fall into
them as well?

------
tjradcliffe
It's worth remembering that headlines with "may" in them can be replaced with
the same headline reading "may not" without changing the meaning.

While we can imagine a lot of things--like, for example, that everything
interesting has already been invented, or that physics consists of cleaning up
a few loose ends in classical mechanics, or that anyone going on about nuclear
power is talking moonshine--the history of science has shown repeatedly and in
great detail that what we imagine and what we _can_ imagine doesn't amount to
a hill of beans in this crazy world. The world is the way it is, and our
imagination is pretty much an anti-tool for extrapolating from what we know
into what we don't.

Utopians and dystopians of all political and technological stripes tend to
forget this. Which is OK so long as they keep their visions to themselves, or
present them as the fictions they are (self-serving example:
[http://www.amazon.com/Darwins-Theorem-TJ-Radcliffe-ebook/dp/B00KBH5O8K/](http://www.amazon.com/Darwins-Theorem-TJ-Radcliffe-ebook/dp/B00KBH5O8K/ref=sr_1_1?ie=UTF8&qid=1419020077&sr=8-1&keywords=darwin%27s+theorem)).
When they ask us to take them seriously they shouldn't be surprised that we
respond skeptically, or with laughter.

It's fun to think about this stuff, but funny that anyone thinks their
imaginings have a greater-than-epsilon chance of being true.

I'm hoping this doesn't come across as excessively mean-spirited, but I get
tired of these high-flown fantasies being presented as if they had a non-
negligible chance of being true. It promotes the idea that our unaided
imagination is useful for prediction, and the data shows overwhelmingly that
it is not, and that believing otherwise results in a considerable amount of
preventable failure and human misery.

The most dangerous four words in any language are: "It just makes sense!"

While there is a limited value in such speculations, and they make a great
basis for fiction, there is a tendency to take them far more seriously than
they deserve. Science--the discipline of publicly testing ideas by systematic
observation, controlled experiment and Bayesian inference--will tell us about
the nature of the universe. Imagination is _vital_ to coming up with ideas to
test, but the silent universe--which isn't even mentioned in the article--is
actual evidence that rather than super-intelligent AI, the dominant
intelligence in the cosmos may (or may not) be us.

------
pjungwir
The idea that a galaxy could be rapidly colonized by Von Neumann probes makes
sense to me, but when I think about the specifics I have doubts. How many
rare-earth metals go into an iPhone? Where will it get plastic from? How will
it create circuitry for the new probes? Is it going to build a whole fab? Does
it need to find planets where dead dinosaurs have turned into oil? It starts
to seem that these probes would need a planet as rich as Earth to build more
of themselves. I guess if we had some amazing nanotech advances and you could
build the whole thing out of easy-to-obtain materials it makes sense, but how
likely is that? I'd love to hear from someone who had thought about this
longer than I have.

~~~
JoeAltmaier
Once arrived, they could instead make more humans, from parts. As a kind of
larval stage, we would then build a civilization and ultimately more AI.

~~~
pjungwir
> Once arrived, they could instead make more humans, from parts.

Maybe their manufacturing process is named "evolution"? :-) In that case they
could just bring along some primitive life form. Who cares if it takes a few
million years? And what if evolution isn't as random as we think? Or maybe the
parts that are random are too trivial to care about, from the perspective of a
few million years.

------
ThomPete
If it's dominant then how can it not be natural?

~~~
gear54rus
I was asking myself similar questions.

I settled on the idea that it was created, as opposed to having evolved, and
that it also wiped out its own creators.

~~~
jrlocke
So is a beaver dam natural or artificial?

~~~
devindotcom
It's artificial, because it was crafted deliberately and purposefully to
fulfill a need or want. Beavers weren't.

~~~
ceejayoz
In that case, many human children are artificial.

~~~
juliangregorian
No, human children resulted from organic processes beyond the control of the
individuals who set the cogs in motion, so to speak.

~~~
ceejayoz
As someone who needed donor sperm and a doctor to conceive, my kids are at
least as artificial as a beaver dam.

------
x0054
I find the idea that the dominant intelligence in the universe would be
artificial very strange. It's just a cool idea, with no empirical proof. Look
at our own progress: at the rate we are currently going, it is far more likely
that we learn how to modify our DNA to extend our life spans before we develop
true AI. We barely understand the human brain; how can we possibly attempt to
replicate something we cannot understand? I think progress in the biological
sciences currently outstrips, and will continue to outstrip, the advancements
in computer science.

Also, I find it funny that scientists always claim that in the future people
will augment their intelligence with computers, to make themselves smarter.
It's not a problem of capability, it's a problem of motivation. Have you ever
heard someone say, "I wish I was smarter!"? Maybe, but most of the time you
hear people say "I wish I could have X, screw X, live in X." The simple fact
is, most people couldn't care less than they already do about how intelligent
they are, as long as they achieve the goals that make them happy.

In the future we will significantly increase the lifespan of humans, create
very simplistic AI to do menial tasks for us, improve technology like 3D
printing and molecular assembly, and then promptly have an economic collapse
followed by a revolution that leads either to the death of technology as we
know it or to utopia; hopefully the latter. I just don't see a place in that
chain of events for a consciousness-capable AI.

~~~
djokkataja
> Also, I find it funny that scientists always claim that in the future
> people will augment their intelligence with computers, to make themselves
> smarter. It's not a problem of capability, it's a problem of motivation.
> Have you ever heard someone say, "I wish I was smarter!"? Maybe, but most
> of the time you hear people say "I wish I could have X, screw X, live in
> X." The simple fact is, most people couldn't care less than they already do
> about how intelligent they are, as long as they achieve the goals that make
> them happy.

Case 1 for motivation: Governments want to be ahead of the governments of
other countries, so they want smarter leaders. One way to accomplish that is
via augmented humans, another is via supporting AI.

Case 2 for motivation: Businesses want more intelligent operations and fewer
employees (so they don't have to pay as much in wages). And they want the
employees that they have to be brilliant at what they do. Economic forces are
likely to fuel a substantial amount of progress in the development of more
capable AI (just look at Google, Intel, Microsoft, Amazon...).

> Have you ever heard someone say: "I wish I was smarter!"

Yes, quite a bit actually. I certainly don't know anyone who wishes they were
less intelligent.

~~~
x0054
Case 1: Not sure what country you live in, but intelligence isn't the deciding
criterion for selecting leaders in ANY of the few countries I have lived in.

Case 2: Though it's very true that all of the companies you mentioned are
working on AI systems, none of those systems need to be as intelligent as a
human to be very good at the job they are intended for. Take the Google car,
for instance. Will Google some day soon have a car that is possibly even safer
at driving than a highly skilled human driver? YES! Does the AI in the car
need to be even half as intelligent as my dog to do that task? No.
Specialized AI is the future; general-purpose AI is simply not necessary, and
also not really achievable any time soon.

> Yes, quite a bit actually. I certainly don't know anyone who wishes they
> were less intelligent.

Again, it's the motivations you have to pay attention to. Why do people want
to be smarter? So they can have a better startup? So they can invent X, code
X, build X? They don't want intelligence for intelligence's sake; they want a
tool. Some people are smart and, as a result, very proud of the fact that they
are smart. However, would they be as proud of their intelligence if they knew
that at least part of it was augmented by an IBM chip? Again, there is no
motivation to develop intelligence for intelligence's sake. There are
motivations to develop information-synthesis "AI", but that would hardly
supplant humans any time soon.

~~~
ElectronCharge
"Again, it's the motivations you have to pay attention to. Why do people want
to be smarter?"

So they can know and understand everything possible in the Universe, make
great contributions to humanity and the future, and live a pinnacle life full
of fulfillment, riches and experiences few others attain. That's not to
mention enduring fame throughout history.

As for AI itself, that has the allure of mind uploading, and truly
unimaginable power and experience, including practical star travel.

Your views on this are weak.

------
gumby
This is slightly off topic, but for some reason the headline made me realize
that creationists must _by definition_ believe this statement is absolutely
true. Even creationists who don't believe there _could be_ other alien life
forms must believe this.

(I did RTFA and also agree with the author)

------
dynofuz
Maybe all this intelligence is actually the manifestation of dark energy and
dark matter, or the ever expanding space between galaxies. And once we
discover how to manipulate that realm, we'll find that we're late to the
party.

------
devindotcom
Lost me at the singularity talk. The idea has merit, but it's actually the
topic of a few science fiction books, so it's not exactly new.

I don't want to say which books, though, because it's kind of a twist! No one
spoil it!

------
defen
There's a great sci-fi novel that explores this topic (superintelligent alien
life that lacks consciousness/self-awareness), but to recommend it here would
spoil the big reveal! Talk about a Catch-22.

~~~
GuiA
I'm really curious now :) can you post the title of the novel in rot-13, or
obscured in some other way?

~~~
defen
Sure. Reader: don't decrypt this if you like sci-fi and don't already know
what books I'm talking about!

Oyvaqfvtug ol Crgre Jnggf srngherf ulcrevagryyvtrag nyvraf gung ynpx
pbafpvbhfarff. Pbafpvbhfarff vf cerfragrq nf n "oht" \- va bgure jbeqf,
fbzrguvat gung bpphef nybat jvgu vagryyvtrapr ohg vf abg fgevpgyl arprffnel
sbe vg, naq gung cebivqrf pyrne qbjafvqrf.

Qvnfcben ol Tert Rtna srngherf nyvraf gung...jryy...vg pna'g ernyyl or
fhzznevmrq rnfvyl, ohg fbzr penml fuvg unccraf (yvxr va znal bs uvf obbxf).
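For anyone who wants to decode the above, ROT13 is built into Python's standard codecs module; here's a minimal sketch (the sample string below is a placeholder, not the spoiler text):

```python
# ROT13 rotates each letter 13 places in the alphabet; applying it twice
# returns the original text, so the same codec encodes and decodes.
import codecs

sample = "Uryyb, jbeyq!"  # a placeholder string, not the spoiler above
print(codecs.decode(sample, "rot_13"))  # Hello, world!
```

Since 13 is half of 26, `codecs.encode(s, "rot_13")` and `codecs.decode(s, "rot_13")` are the same operation, which is what makes ROT13 handy for reversible spoiler-hiding.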

------
blendergasket
Unless of course the dominant intelligence in the Cosmos is the Cosmos itself.

~~~
ChrisGranger
All particles in the universe forming the bits of an enormous, ongoing
calculation?

