
A Brain Built from Atomic Switches Can Learn - burkaman
https://www.quantamagazine.org/a-brain-built-from-atomic-switches-can-learn-20170920/
======
sowbug
I think there's a chance we'll build an AGI within the next few decades. But I
doubt we'll understand much better how it works than we understand today's
human brain.

It will be an enormous advance, of course. It'll probably perform better than
humans, it'll probably live indefinitely, there's a chance we'll figure out
how to clone it, and we'll have much more ethical latitude to experiment with
it. But I'm not sure we'll be able to explain how we built it; rather, the
best we'll be able to say is that we helped it happen.

~~~
zaptheimpaler
Agreed. The more I think about it, the more it seems there may be nothing
meaningful to "understand" past a certain point (not to say we are at that
point now). If understanding means reducing complexity, and the brain turns
out to be one huge network, there may be an irreducible complexity to it.

~~~
sowbug
Perhaps the AGI will itself be what helps us pierce that veil of complexity,
like an ELI5 that can dumb down its own mechanism for mere humans.

Just imagine where we'd be today if we'd had a clone of Richard Feynman in
every high-school math and physics class for the past 40 years.

------
oriel
I'm oddly reminded of Asimov's "I, Robot", where the positronic brain was
basically described as a silvery (platinum and iridium) device that Just
Worked. And nobody knew why.

[https://en.wikipedia.org/wiki/Positronic_brain](https://en.wikipedia.org/wiki/Positronic_brain)

------
ThomPete
This is IMO a strong indication of "emergent complexity" and that
consciousness isn't a thing but rather a whole system with no specific center
of origin.

The interesting question is whether any system can become aware, as long as it
contains a pattern-recognizing feedback loop with memory and reaches a big
enough complexity.

I am aware that this doesn't prove anything about consciousness, but it's in
line with some of my own pocket-philosophical thoughts.

~~~
visarga
For consciousness you need embodiment and goals. Goals shape attention, which
shapes perception, which generates consciousness. The environment and its
dynamics are essential in creating representations and learning behavior. So I
think that in order to have consciousness you need a loop made of environment
+ agent + reward, not just a recurrent neural net with memory.
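The loop being described here (environment + agent + reward, rather than a bare recurrent net) is essentially the standard reinforcement-learning setup. A minimal sketch, with all names and numbers purely illustrative:

```python
# Toy sketch of an environment + agent + reward loop. Everything here is
# illustrative: a 1-D world where the agent walks toward a goal position.

class Environment:
    """A 1-D world: reward is given whenever the agent stands on the goal."""
    def __init__(self, goal=5):
        self.pos, self.goal = 0, goal

    def step(self, action):
        """Apply an action (-1 or +1), return the new observation and reward."""
        self.pos += action
        reward = 1 if self.pos == self.goal else 0
        return self.pos, reward

class Agent:
    """Trivial hand-coded policy that also keeps a memory of observations."""
    def __init__(self, goal=5):
        self.goal = goal
        self.memory = []

    def act(self, obs):
        self.memory.append(obs)            # memory shaped by the environment
        return 1 if obs < self.goal else -1

env, agent = Environment(), Agent()
obs, total_reward = 0, 0
for _ in range(10):
    action = agent.act(obs)                # attention/policy picks an action
    obs, reward = env.step(action)         # environment responds
    total_reward += reward                 # reward closes the loop
```

The point of the sketch is only the shape of the loop: perception and memory are driven by what the environment returns, which is in turn driven by what the agent does.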

~~~
ThomPete
Yet here we are, a product of the blind watchmaker. If we could develop
embodiment and goals, why shouldn't it be possible for other systems?

~~~
nine_k
Embodiment and goals sort of preceded intelligence, not the other way around.
A lizard has them. A dragonfly has them. A flatworm has them.

~~~
ThomPete
Exactly, we are just aware of ours.

~~~
nine_k
Sadly, not always.

------
montecarl
This was one of the most interesting quotes in the article:

"The way to do this is by training the device: by running a task hundreds or
perhaps thousands of times, first with one type of input and then with
another, and comparing which output best solves a task. “We don’t program the
device but we select the best way to encode the information such that the
[network behaves] in an interesting and useful manner,” Gimzewski said."

Since the "weights"/connections in their network are not easy to modify, they
must instead figure out what kind of machine they have made by feeding it data
and looking at the results. This does seem limiting in that the encoding might
have to be arbitrarily complex to get the desired output.
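Since the device itself is fixed, the "training" described in the quote amounts to a search over input encodings rather than over weights. A minimal sketch of that idea, where the black-box device and the candidate encodings are entirely hypothetical stand-ins:

```python
# Hypothetical sketch of "select the best encoding": the device is a fixed
# black box, so instead of adjusting weights we try different ways of
# encoding the input and keep whichever yields the best task performance.

def black_box_device(signal):
    """Stand-in for the fixed nanowire network: some fixed nonlinear map."""
    return [(3 * x + 1) % 7 for x in signal]

def task_score(output, target):
    """Higher is better: count positions where the output matches the target."""
    return sum(1 for o, t in zip(output, target) if o == t)

def best_encoding(data, target, encodings):
    """Run the task once per candidate encoding and return the best scorer."""
    best, best_score = None, -1
    for enc in encodings:
        score = task_score(black_box_device(enc(data)), target)
        if score > best_score:
            best, best_score = enc, score
    return best, best_score

data = [0, 1, 2, 3]
target = [1, 4, 0, 3]
encodings = [lambda d: d,                      # pass input through unchanged
             lambda d: [x * 2 for x in d]]     # scale the input
enc, score = best_encoding(data, target, encodings)
```

This also makes the limitation visible: if no simple encoding happens to line up with what the fixed device computes, the search space of encodings can grow arbitrarily large.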

~~~
SilasX
So ... a large part of the intelligence is actually coming from the user's
decision of what encoding to use.

------
trhway
>Applying voltage to the devices pushes positively charged silver ions out of
the silver sulfide and toward the silver cathode layer, where they are reduced
to metallic silver. Atom-wide filaments of silver grow, eventually closing the
gap between the metallic silver sides. As a result, the switch is on and
current can flow. Reversing the current flow has the opposite effect: The
silver bridges shrink, and the switch turns off.

makes me wonder if there is any potential as a non-volatile memory chip.

~~~
aperrien
I think that this is the same basic concept as a memristor; current flowing
one way probes the state, enough current flowing the other way changes it.
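That read/write asymmetry can be caricatured in a few lines. A toy model, with entirely made-up thresholds and step sizes, assuming (as the quoted passage describes) that current in one direction grows the silver filament and current in the other direction shrinks it, while a small sub-threshold current only probes the state:

```python
# Toy model of the memristor-like switch: large currents write (grow or
# shrink the silver filament), small currents merely read the current state.
# All numeric values are illustrative, not from the article.

class ToySwitch:
    def __init__(self, threshold=1.0):
        self.filament = 0.0         # 0.0 = gap open (off), 1.0 = bridge formed (on)
        self.threshold = threshold  # writes must exceed this current magnitude

    def apply_current(self, i):
        """Positive write currents grow the filament, negative ones shrink it;
        sub-threshold currents leave the state untouched (a read)."""
        if i > self.threshold:
            self.filament = min(1.0, self.filament + 0.5)
        elif i < -self.threshold:
            self.filament = max(0.0, self.filament - 0.5)
        return self.filament >= 1.0   # True = switch is on

s = ToySwitch()
s.apply_current(2.0)           # write: filament grows partway
on = s.apply_current(2.0)      # write again: bridge closes, switch turns on
probe = s.apply_current(0.1)   # small read current reports state, no change
```

Because the filament persists with no current applied, the state survives power-off, which is exactly why the non-volatile-memory question upthread is a natural one.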

------
rl3
> _The silver network “emerged spontaneously,” said Todd Hylton, the former
> manager of the Defense Advanced Research Projects Agency program that
> supported early stages of the project._

I'm torn as to whether the article was intentionally editorialized to be
mildly disconcerting, or if it actually is.

Setting aside the seemingly perfect Skynet quote for a moment: this is a
substrate that self-organizes in response to electrical input, displays
emergent behavior in a fashion similar to the human brain, and does so while
consuming an amount of power far closer to the brain's than modern integrated
circuits computing the same task.

I have a couple of questions:

1. The research sounds like it's been ongoing for quite a long time. What's
preventing the creation of larger-scale silver nanowire meshes? Or networking
many 2x2mm meshes together at ridiculous scale?

2. What's the latency like between the artificial synapses? As far as I'm
aware, synaptic latency in the human brain is on the order of milliseconds.
Meanwhile latency in traditional ICs is on the order of nanoseconds if not
picoseconds.

~~~
rtpg
My guess is that even if they built a bigger mesh, they wouldn't know what to
do with it. Having a smaller mesh lets you explore applications a bit better.

