
Grey goo - morazyx
http://en.wikipedia.org/wiki/Grey_goo
======
dbz
No one may find this interesting, but in virtual worlds this is a very big
problem. For example, in Second Life, if a bug report/ticket contains the
terms "Grey Goo" or "Gray Goo" it is immediately taken care of, to the point
where, if you are a worker who can handle it, you will get a phone call and
must drive to your office immediately, no matter what else you are doing.
Grey Goo can destroy a server in a matter of seconds.

~~~
asdflkj
Presumably they have automatic safeguards against it, and these emergencies
occur when somebody outsmarts them. I wonder if this can be generalized to an
interesting theory--how do you prevent all-consuming self-replicating behavior
in a system with minimal intervention otherwise? Is this question even
meaningful?

~~~
m_eiman
It should be pretty easy: keep track of the ancestry of each object and kill
off any family tree that's expanding too quickly if the server gets
overloaded.
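
The scheme described above can be sketched in a few lines: tag every object
with the root of its family tree, and under load, cull the fastest-growing
tree. This is a hypothetical illustration, not Second Life's actual
mechanism; all class and method names here are made up.

```python
# A minimal sketch of ancestry-based goo culling. Hypothetical API, not
# Second Life's real one.
import time
from collections import defaultdict

class ObjectRegistry:
    def __init__(self, window=5.0):
        self.window = window             # seconds of history to consider
        self.births = defaultdict(list)  # root id -> birth timestamps
        self.objects = {}                # object id -> root id
        self.next_id = 0

    def spawn(self, parent=None):
        """Create an object; children inherit their parent's root."""
        oid = self.next_id
        self.next_id += 1
        root = self.objects[parent] if parent is not None else oid
        self.objects[oid] = root
        self.births[root].append(time.monotonic())
        return oid

    def growth_rate(self, root):
        """Births per second for one family tree over the recent window."""
        now = time.monotonic()
        recent = [t for t in self.births[root] if now - t <= self.window]
        return len(recent) / self.window

    def cull_fastest(self):
        """On overload: kill the entire fastest-expanding family tree."""
        worst = max(self.births, key=self.growth_rate)
        doomed = [oid for oid, root in self.objects.items() if root == worst]
        for oid in doomed:
            del self.objects[oid]
        del self.births[worst]
        return worst, len(doomed)
```

The catch Natsu raises below is real: if lineage can be forged (scripts
attached to other people's objects), the root attribution breaks down.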

~~~
Natsu
You'd think so, but when all of those objects are (potentially) things
purchased by players, it gets a bit more complex than killing the tree at its
root. Also, I believe that (at least at one point) objects could attach
scripts to other objects, so the family tree wouldn't work in all cases.

Internally Second Life calls their countermeasure the "gray goo fence" but I
can't find many good descriptions of how it actually works online, except some
mentions that the restrictions on how much something can "rez" increase
exponentially. This link was the best description of it I could find:

<http://alphavilleherald.com/2006/09/linden_lab_grey.html>
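
Since the real fence isn't publicly documented, here is one guess at what
"restrictions increase exponentially" could mean in practice: each time an
object trips its rez limit, its allowance is halved, so a replicator's budget
collapses exponentially while ordinary objects never notice. Purely a
speculative sketch; the class and parameter names are invented.

```python
# A guess at an exponential "goo fence" (the real Second Life mechanism is
# not publicly documented): tripping the rez limit halves the allowance.
class GooFence:
    def __init__(self, base_limit=16):
        self.base_limit = base_limit
        self.limits = {}   # object id -> current rez allowance per tick
        self.counts = {}   # object id -> rezzes used this tick

    def try_rez(self, oid):
        limit = self.limits.setdefault(oid, self.base_limit)
        used = self.counts.get(oid, 0)
        if used >= limit:
            # Tripped the fence: halve the allowance (exponential tightening).
            self.limits[oid] = max(1, limit // 2)
            return False
        self.counts[oid] = used + 1
        return True

    def tick(self):
        """Start a new time slice; usage resets, penalties persist."""
        self.counts.clear()
```

A well-behaved object rezzing a couple of items per tick never hits the
limit; a replicator hammering rez sees its budget shrink toward one.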

I believe that, since the attack mentioned therein, they've done other things,
such as restricting certain scripting functions to trusted people. But folks
have been known to find exploits anyhow.

Of course, feel free to correct me if you actually play Second Life. I've only
been following news of it from the outside. I've never actually played it.

I've never understood why people are that worried about it with nanomachines,
though. We already have gray goo (though much of it, like algae, is green). I
mean, most single-celled organisms are capable of that kind of mass
replication. And they can be found pretty much everywhere already. I'd be more
worried about accidentally engineering a super-virus or whatever than gray goo
per se.

------
axiom
This assumes that some artificial organism we construct would be more robust
and efficient at self-replication than run-of-the-mill single celled
organisms.

~~~
asdflkj
What's wrong with that assumption? There are some efficient mechanisms that
evolution hasn't invented because they could not be reached by a series of
gradual refinements. The wheel has long been said to be one of them, but I
guess now we know that that's not strictly true. But nature's wheels are all
very small and are bound to remain small because of their purpose, even though
many organisms could benefit from large wheels.

It's true that replication is evolution's specialty, but maybe the initial
construction of grey goo would require some specialized environment that
doesn't naturally occur. Nuclear bomb isn't so complicated that evolution
couldn't figure it out, for example, but it didn't and won't anytime soon.

~~~
nl
_Nuclear bomb isn't so complicated that evolution couldn't figure it out, for
example, but it didn't and won't anytime soon._

That's kinda true, but there was a natural nuclear reactor:
<http://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor>

Also, the Sun seems to work pretty well.

Of course, neither of these _evolved_ through biological mechanisms, but they
did arise in a natural environment. I find it difficult to imagine
circumstances under which _evolution_ would "figure out" a nuclear bomb - ie,
I'm agreeing with your point that evolution usually requires a series of
gradual refinements rather than a big jump forward.

~~~
asdflkj
Fascinating link.

The nuclear bomb was a needlessly confusing example--I could have just
mentioned explosives in general. Evolution is all about weapons, and
explosives make for damn good ones, and it's not hard to imagine a realistic
organism that uses them, unlike a nuke-wielding organism. Though I'm not sure
that no organism uses explosives. I was already wrong once today about what
nature can't do!

~~~
TNO
Bombardier beetle? <http://en.wikipedia.org/wiki/Bombadier_Beetle>

~~~
jacquesm
I'm surprised at the creationist angle in that article, it's not as though
that has any bearing on informing people about an insect.

------
JulianMorrison
It's going to need a power source external to what it eats. Contrast: there
already is grey goo built to work on local (not transmitted) energy, and it's
called bacteria. It doesn't eat the world, because local energy isn't really
that abundant relative to the cost of carrying around the means to utilize it.

------
Groxx
Self-replicating-nanobot Earth-death final-state.

Things like this always make me think of Sam's Archive's "Geocide" page.
Which, by the way, points out that this does not in fact bring about the
_destruction_ of the Earth: <http://qntm.org/destroy>

~~~
huma
There's another site called "Exit Mundi" that has a lot of doomsday scenarios.
In particular, the grey goo problem: <http://www.exitmundi.nl/graygoo.htm>

~~~
bryanh
What a powerful concept. It really does boggle the mind.

------
gojomo
In a battle between Grey Goo and a Paperclip Maximizer, who wins?

<http://wiki.lesswrong.com/wiki/Paperclip_maximizer>

~~~
dmoney
Grey goo. A grey goo unit just needs to produce more grey goo units. A
paperclip maximizer needs to produce both paperclips and other paperclip
maximizers. So the goo is more efficient.
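
The efficiency argument can be made concrete with a toy growth model (my own
illustration, not from the thread): if every goo unit builds one new unit per
step, the population doubles each step, while a maximizer that splits its
effort 50/50 between paperclips and new maximizers grows only 1.5x per step,
and the gap compounds.

```python
# Toy model of the efficiency argument: goo spends all effort replicating;
# the maximizer splits effort 50/50 between paperclips and new maximizers.
def grow(pop, replication_share, steps):
    for _ in range(steps):
        pop += pop * replication_share  # new units built this step
    return pop

goo = grow(1.0, 1.0, 20)        # doubles each step -> 2**20, ~1 million
maximizer = grow(1.0, 0.5, 20)  # grows 1.5x each step -> 1.5**20, ~3300
```

After just 20 steps the goo outnumbers the maximizers by more than two
orders of magnitude, which is the "more efficient" claim in numbers.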

~~~
gojomo
But the Paperclip Maximizer is smarter!

~~~
dmoney
I don't know if that's necessarily true by their definitions. As I see it they
could be the same intelligent devices with different programmed goals. One
group of nanobots you tell, "Make as many copies of yourself (including this
instruction) as possible." The other you tell, "Make as many paperclips as
possible."

I think they would make it their first order of business to destroy each
other, postponing their primary goals in order to make weapons or enlist the
help of allies. If they failed at destroying each other, they would reach some
kind of peace treaty. Or maybe they would try peace first, realizing that war
has too high a chance of wiping them out.

------
stretchwithme
There are all kinds of disastrous applications of technology waiting for us.

like a microorganism or device designed to impregnate every woman on the
planet

or maybe people will start modifying their offspring to worship them

my personal favorite is tiny robots that interfere with the unjust use of
force worldwide, causing the collapse of most governments

~~~
m_eiman
The last one will never happen, since it's the government that defines "just".

~~~
stretchwithme
No, it's whoever invents such a technology that will define it. :-)

------
michaelfairley
_Self-replicating machines of the macroscopic variety were originally
described by mathematician John von Neumann, and are sometimes referred to as
von Neumann machines._

I suppose their memory holds both code and data?

~~~
JeanPierre
Well, von Neumann was John McCarthy's mentor, and McCarthy made LISP. It's no
wonder that code (e.g. functions) in LISP is data, and that LISP is heavily
used in AI.
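
The code-as-data point is exactly what a quine demonstrates: a program that
carries its own source around as data, which is the same trick a von
Neumann-style replicator needs in order to copy itself. A minimal Python
example:

```python
# A quine: the string s is data, but printing s % s reproduces the whole
# program, so the same bytes serve as both code and self-description.
s = 's = %r\nprint(s %% s)'
print(s % s)
```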

------
techiferous
"They might be "superior" in an evolutionary sense, but this need not make
them valuable."

A good reminder for businesses. If your only goal is to dominate the market,
it is not a worthy goal.

------
thunk
Drexler thinks it's nonsense:

<http://nanotechweb.org/cws/article/indepth/19648>

~~~
jessriedel
No, Drexler doesn't think it's nonsense. He just thinks that it will be
_unnecessary_ to make replicators for the manufacturing purposes he has in
mind. He says

>In particular, it turns out that developing manufacturing systems that use
tiny, self-replicating machines would be needlessly inefficient and
complicated. The simpler, more efficient, and more obviously safe approach is
to make nanoscale tools and put them together in factories big enough to make
what you want.

(note that he explicitly acknowledges the safety risk) and

>The popular version of the grey-goo idea seems to be that nanotechnology is
dangerous because it means building tiny self-replicating robots that could
accidentally run away, multiply and eat the world. But there’s no need to
build anything remotely resembling a runaway replicator, which would be a
pointless and difficult engineering task. I worry instead about simpler, more
dangerous things that powerful groups might build deliberately - products like
cheap, abundant, high-performance weapons with a billion processors in the
guidance systems.

This does _nothing_ to diminish the risk of replicators if they are, in fact,
created. And there are all sort of possible problems where replicators would
be essential. For example, we may want to release replicators into the
environment to clean up certain kinds of pollution which can't be easily
brought to a central facility.

~~~
thunk
That's like saying, "This does _nothing_ to diminish the risk of a Moon-based
Turbo Death Ray if one is, in fact, created." We might be able to make one if
we tried really hard. But it would be pointless and difficult, and there are
so many more pedestrian failure modes that it seems pointless to worry about
it.

Drexler thinks we underestimate the difficulty of building run-away
replicators. Nature's had 4 billion years and hasn't managed it. Yes, I'm
aware of the wheel argument.

~~~
jessriedel
First, you say building a replicator is pointless, without giving an
argument. This is clearly wrong. I have already given you one example of a
use for replicators, and here's another: weapons.

Second, you say it is difficult with many failure modes. So was going to the
moon. How can you possibly think it's so difficult that it will _never_ get
done? In a thousand years?

Third, you cite Drexler to claim that we underestimate the difficulty. (Who
underestimates it? Me? The irrational people who Drexler is afraid will take
away his funding, or the handful of academics who seriously consider the
issue?) The argument for extreme caution does not rely on it being _easy_ to
build run-away replicators, only that it is reasonably possible and that the
results are catastrophic. Can you really argue with 99% certainty _against_
the feasibility of future technologies without any sort of restriction based
on physical law?

Fourth, you say you are aware of the wheel argument...so...what is your
response? Should we also consider the laser argument? The computer argument?
The space-ship argument? Or the argument from any of the nearly countless
things that humanity has created in the past 40 years that never existed in
the previous 4,000,000,000 years that life has been around?

~~~
thunk
I apologize, but I don't have the time to put together a considered response
right now, though I wish I did. Maybe this evening.

I do think it's interesting the amount of anger I encounter whenever I even
remotely question one of the Singularitarians' babies. There seems to be more
... emotional attachment ... than is healthy for skeptical inquiry.

~~~
jessriedel
You would encounter less frustration if you engaged others' arguments rather
than dismissing them out of hand. That doesn't mean you have to argue with
every crank on the internet, but there's not much point in writing a comment
which declares an argument wrong without giving an explanation for why.

I'd very much like to hear what you have to say, because even when I've
discussed this with academics who work in nanotech (though I've never spoken
to anyone working directly on replicators), I've never heard a better argument
than "it's really, really hard".

------
lopezka
Reminds me of what humans are doing with this planet.

~~~
ErrantX
I suppose ultimately it models any organism at its simplest level.

