No one may find this interesting, but in virtual worlds this is a very big problem. In Second Life, for example, if a bug report/ticket contains the terms "Grey Goo" or "Gray Goo", it is handled immediately, to the point where, if you are a worker who can deal with it, you will get a phone call and must drive to your office right away, no matter what else you are doing. Grey goo can destroy a server in a matter of seconds.
Presumably they have automatic safeguards against it, and these emergencies occur when somebody outsmarts them. I wonder if this can be generalized to an interesting theory--how do you prevent all-consuming self-replicating behavior in a system with minimal intervention otherwise? Is this question even meaningful?
You'd think so, but when all of those objects are (potentially) things purchased by players, it gets a bit more complex than "kill it at the root." Also, I believe that (at least at one point) objects could attach scripts to other objects, so the family-tree approach wouldn't work in all cases.
Internally Second Life calls their countermeasure the "gray goo fence" but I can't find many good descriptions of how it actually works online, except some mentions that the restrictions on how much something can "rez" increase exponentially. This link was the best description of it I could find:
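Linden Lab hasn't published the fence's internals, but the "exponentially increasing restriction" idea can be sketched as a throttle: each additional rez by the same owner within a time window costs exponentially more against a fixed budget, so a replicator chain stalls after a few generations while ordinary scripted creation is barely affected. Everything below (class name, budget, window, doubling cost) is a hypothetical illustration of that idea, not Second Life's actual mechanism:

```python
import time
from collections import defaultdict

class GooFence:
    """Toy rate limiter: each successive rez within a sliding window
    costs exponentially more, so runaway replication hits the budget
    within a few generations.  Illustrative only; the real Second
    Life 'grey goo fence' is not publicly documented."""

    def __init__(self, budget=100.0, window=60.0, base_cost=1.0):
        self.budget = budget              # per-owner budget per window
        self.window = window              # seconds
        self.base_cost = base_cost
        self.events = defaultdict(list)   # owner -> timestamps of rezzes

    def try_rez(self, owner, now=None):
        now = time.time() if now is None else now
        # keep only rezzes inside the current window
        recent = [t for t in self.events[owner] if now - t < self.window]
        self.events[owner] = recent
        # the nth rez in a window costs base_cost * 2**n
        cost = self.base_cost * (2 ** len(recent))
        spent = sum(self.base_cost * (2 ** i) for i in range(len(recent)))
        if spent + cost > self.budget:
            return False                  # fenced: rez denied
        self.events[owner].append(now)
        return True
```

With a budget of 100, the first six rezzes in a window succeed (cumulative cost 1+2+4+8+16+32 = 63) and the seventh is denied (it would cost 64), so an exponentially growing swarm is cut off almost immediately, yet a script that rezzes occasionally never notices the fence.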
I believe that, since the attack mentioned therein, they've done other things, such as restricting certain scripting functions to trusted people. But folks have been known to find exploits anyhow.
Of course, feel free to correct me if you actually play Second Life. I've only been following news of it from the outside. I've never actually played it.
I've never understood why people are that worried about it with nanomachines, though. We already have gray goo (though much of it, like algae, is green). I mean, most single-celled organisms are capable of that kind of mass replication. And they can be found pretty much everywhere already. I'd be more worried about accidentally engineering a super-virus or whatever than gray goo per se.
What's wrong with that assumption? There are some efficient mechanisms that evolution hasn't invented because they could not be reached by a series of gradual refinements. The wheel has long been said to be one of them, but I guess now we know that that's not strictly true. But nature's wheels are all very small and are bound to remain small because of their purpose, even though many organisms could benefit from large wheels.
It's true that replication is evolution's specialty, but maybe the initial construction of grey goo would require some specialized environment that doesn't naturally occur. A nuclear bomb isn't so complicated that evolution couldn't figure it out, for example, but it didn't and won't anytime soon.
Of course, neither of these evolved through biological mechanisms, though they did arise in a natural environment. I find it difficult to imagine circumstances under which evolution would "figure out" a nuclear bomb - i.e., I'm agreeing with your point that evolution usually requires a series of gradual refinements rather than a big jump forward.
Well, we already have "grey goo": microbes. They just can't convert everything into more copies of themselves, so it's not like all matter on earth is going to be consumed.
Also, all a nuclear bomb requires is enough fissionable material in a small enough space that the reaction runs away. I see no reason to suppose that a spot particularly rich in uranium (or other such material) couldn't get squeezed hard enough to explode, or that a critical mass couldn't somehow form by natural processes (particularly ones that put it under very high pressure).
I suspect that the reason we haven't observed this happening is that most of the material is not present in concentrated form and because it decays over time. (In fact, natural fission reactors did operate at Oklo in Gabon around two billion years ago, though they sustained a slow chain reaction rather than exploding.)
But just so you know, people have accidentally assembled critical masses by hand. There's no special magic to it, other than getting a large enough quantity of suitable material in the first place (which is really, really hard). You can read some of the scary things that happened here:
The nuclear bomb was a needlessly confusing example--I could have just mentioned explosives in general. Evolution is all about weapons, and explosives make for damn good ones, and it's not hard to imagine a realistic organism that uses them, unlike a nuke-wielding organism. Only I'm not sure that no organism uses explosives--the bombardier beetle's boiling chemical spray comes close. I was already wrong once today about what nature can't do!
How useful is the wheel without roads? In terms of fuel efficiency, weight efficiency, and off-road capability, the wheel seems considerably less efficient than evolved legged locomotion, at least for the purposes that matter most to animals.
I suspect InclinedPlane was...well, thinking of surfaces other than inclined planes. Some terrain can be crossed with legs but not with wheels (of comparable scale). I assume he was talking about that, rather than making a quantitative comparison across flat surfaces.
I don't have any specific numbers, unfortunately (it's hard to find mpg ratings for oxen, for example). However, if you can provide an example of a one-horsepower wheeled vehicle which can take one to two passengers plus moderate cargo across more than 20 miles per day of rugged terrain, including shallow streams and rivers, and can be powered entirely by resources obtained in situ, then I'll cede the point.
According to a biologist friend, the biggest challenge in working with genetically modified organisms is keeping them alive. Even in a sterile lab they generally end up getting exterminated by native organisms.
It's going to need a power source external to what it eats. Contrast: there already is grey goo built to work on local (not transmitted) energy, and it's called bacteria. It doesn't eat the world, because local energy isn't really that abundant relative to the cost of carrying around the means to utilize it.
I don't know if that's necessarily true by their definitions. As I see it they could be the same intelligent devices with different programmed goals. One group of nanobots you tell, "Make as many copies of yourself (including this instruction) as possible." The other you tell, "Make as many paperclips as possible."
I think they would make it their first order of business to destroy each other, postponing their primary goals in order to make weapons or enlist the help of allies. If they failed at destroying each other, they would reach some kind of peace treaty. Or maybe they would try peace first, realizing that war has too high a chance of wiping them out.
Depends on who starts the game first. The paperclip maximizer would subvert the goo if it got a chance. A running goo swarm might be hard to subvert, though, being very distributed. I suspect the paperclip maximizer would take off and nuke it from orbit.
No, Drexler doesn't think it's nonsense. He just thinks that it will be unnecessary to make replicators for the manufacturing purposes he has in mind. He says
>In particular, it turns out that developing manufacturing systems that use tiny, self-replicating machines would be needlessly inefficient and complicated. The simpler, more efficient, and more obviously safe approach is to make nanoscale tools and put them together in factories big enough to make what you want.
(note that he explicitly acknowledges the safety risk) and
>The popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply and eat the world. But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task. I worry instead about simpler, more dangerous things that powerful groups might build deliberately - products like cheap, abundant, high-performance weapons with a billion processors in the guidance systems.
This does nothing to diminish the risk of replicators if they are, in fact, created. And there are all sorts of possible problems where replicators would be essential. For example, we may want to release replicators into the environment to clean up certain kinds of pollution which can't easily be brought to a central facility.
That's like saying, "This does nothing to diminish the risk of a Moon-based Turbo Death Ray if one is, in fact, created." We might be able to make one if we tried really hard. But it would be pointless and difficult, and there are so many more pedestrian failure modes that worrying about it seems like a waste.
Drexler thinks we underestimate the difficulty of building runaway replicators. Nature's had 4 billion years and hasn't managed it. Yes, I'm aware of the wheel argument.
First, you say building replicators is pointless without giving an argument. This is clearly wrong. I have already given you one example of a use for replicators, and here's another: weapons.
Second, you say it is difficult with many failure modes. So was going to the moon. How can you possibly think it's so difficult that it will never get done? In a thousand years?
Third, you cite Drexler to claim that we underestimate the difficulty. (Who underestimates it? Me? The irrational people who Drexler is afraid will take away his funding, or the handful of academics who seriously consider the issue?) The argument for extreme caution does not rely on it being easy to build runaway replicators, only on it being reasonably possible and the results being catastrophic. Can you really argue with 99% certainty against the feasibility of future technologies that violate no restriction based on physical law?
Fourth, you say you are aware of the wheel argument... so... what is your response? Should we also consider the laser argument? The computer argument? The spaceship argument? Or the argument from any of the nearly countless things that humanity has created in the past 40 years that never existed in the previous 4,000,000,000 years that life has been around?
I apologize, but I don't have the time to put together a considered response right now, though I wish I did. Maybe this evening.
I do find it interesting how much anger I encounter whenever I even remotely question one of the Singularitarians' babies. There seems to be more ... emotional attachment ... than is healthy for skeptical inquiry.
You would encounter less frustration if you engaged others' arguments rather than dismissing them out of hand. That doesn't mean you have to argue with every crank on the internet, but there's not much point in writing a comment which declares an argument wrong without giving an explanation for why.
I'd very much like to hear what you have to say, because even when I've discussed this with academics who work in nanotech (though I've never spoken to anyone working directly on replicators), I've never heard a better argument than "it's really, really hard".