

Digital Reality: A Conversation with Neil Gershenfeld - christianbryant
http://edge.org/conversation/neil_gershenfeld-digital-reality

======
bawana
Back in the day, reading and writing were the privilege of an elite class. Did
the world measurably improve when everyone could do it? Yes, of course. We have
more knowledge, more diversity of things; we are healthier and live longer. Our
lives are richer. We can pack more people into smaller spaces. We can
overstress our ecology; pollution is a human invention. We progressed from the
Middle Ages' definition of a witch to the Nazis' definition of a subhuman.
We traded clubs for guns. We are still humans with an intelligence that has
evolved for survival and predation. Beneath the veneer of civilization we
designed to thwart the animal, we still embody a beast that conflicts with us
daily, and sometimes rages to the surface when hormonally fueled.

In using digital communication and digital computation, we enable our
imaginations to roam vast landscapes of information but thankfully we are
sandboxed in the real world. The new ideas we generate usually can only be
implemented by changing our own behavior or immediately local environment. If
we happen to discover a meme that is popular, we get a lot of people to copy
us, maybe get VC funding and make an entity to powertool it. Or maybe get a
rich government to fund it in the name of defense.

But can you imagine what might happen if we blindly empower savage humanity
with fabs that can make tools that can alter the fabric of life? I have no
doubt that it will happen. And we will see wonderful things. I always wanted
gills, to be able to swim underwater for hours. I'd settle for that
mouthpiece from Star Wars. But how can we protect ourselves from the mutants
who will misuse this? Why empower Al-Qaeda? Our military already deploys
these fabs to FOBs to print weapon parts, etc. Are Islamic militants so
uneducated that they could not operate a fab liberated from an African village?
When IEDs start looking like MREs, it will be too late. Neil was tasked with
answering this question, veiled in the civilized form of 'measuring social
impact'. But he did not answer it.

Some might say this is an argument for integrating an AI, a superhuman one
that could decide when to shut the machine off if it was doing bad things.
Might someone hack the code and insert NOP instructions to disable the AI but
keep the function of the code running the machine? Maybe, but an AI would
detect that change and delete itself before that happens. And maybe in
designing a safe AI, we'll develop a new definition of sentience and
intelligence. But the fabs are here now, and deep learning isn't there yet. And
we haven't even begun integrating Asimov's laws into the code that runs AI. Do
these machines have a remote disable switch? Will Neil eat his words like
Einstein, who tried to take back his suggestion of using nuclear weapons to
end WW2?

~~~
christianbryant
Yes, tech is moving faster than human social evolution, and we continue to push
ourselves hard to speed up the one while neglecting the other, or simply
hoping that tech will solve our social problems. However, does that make this
exploration, this yearning to make the seemingly impossible possible, wrong?

Here's a task: For each of the very valid points you raise, propose a solution
that would allow these incredible new technologies to exist while preventing
the terrible consequences of the scenarios you outline from being an issue.

Now you're on to something...

------
gbog
Just halfway through it, I hope oh I hope he is getting to small self-
assembling digital parts. In fact, just like in New Heroes 6: tiny bots
assembling themselves into bigger things.

[edit: I hope he is going there because I dreamt about this 6 months ago...]

