
Learning transistor mimics the brain - EndXA
https://liu.se/en/news-item/laraktig-transistor-harmar-hjarnan
======
jillesvangurp
One popular topic in science fiction is the notion of backing up people to a
computer network, the cloud, etc. Actually, I'm eagerly awaiting Neal
Stephenson's upcoming novel on this topic.

Of course the process of backing up a brain is a bit challenging and not
exactly a solved problem. E.g. scanning microscopically thin slices of brain
is possible but still orders of magnitude off in terms of resolution. Also,
doing that typically destroys the brain under study.

IMHO it is much easier to back up non-organic brains, like the ones proposed
in this article. And building those is a lot more feasible than picking apart
existing ones. With stuff like this and general improvements in electronics,
we're approaching the moment where computer systems matching e.g. the human
neuron count with an equivalent number of electronic neurons become a
reality. Backing those up is going to be a lot easier.

Which brings me to the point that there's actually only going to be a brief
period where electronic brains are not vastly more capable than our own.
Raises all sorts of interesting questions when you start thinking about that.

~~~
neolefty
What is the theoretical energy efficiency of these polymer transistors? The
last time I checked, silicon ran much hotter than neurons, even at its
theoretical limits, even though the feature size is fine.

~~~
abathur
No comment on your actual question, but your last sentence reminded me of
some research I read about in the last few years, premised on learning from
nature: it tries to address both the power-delivery and heat-diffusion
problems posed by "stacking" chips by solving both with a single fluid.
Here's IBM's page for the research:
[https://www.zurich.ibm.com/st/energy_efficiency/redox_flow.h...](https://www.zurich.ibm.com/st/energy_efficiency/redox_flow.html)

The intro section of one of their recent articles is a little meatier than the
research site:
[https://ieeexplore.ieee.org/document/7967719](https://ieeexplore.ieee.org/document/7967719)

------
xiphias2
Achieving even a 100x improvement in the energy efficiency of training and
inference for neural networks sounds too good to be true. Still, as ASICs are
reaching their limits, it makes sense to try lower-level changes in the
architecture.

------
cr0sh
Some of these new devices remind me of the old "memistor" technology used by
the ADALINE/MADALINE machines in the 1960s:

[https://en.wikipedia.org/wiki/Memistor](https://en.wikipedia.org/wiki/Memistor)

These are not to be confused with "memristors": the memristor is a two-
terminal device, while the memistor is a three-terminal device.

If you look at the original papers describing the ADALINE/MADALINE machines,
there's a good description of how these devices were originally made - there's
nothing about them that couldn't be duplicated today in one's garage or
workshop.

So - if you have a hankering to build your own hardware neural "logic"
element...
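
For context, the learning algorithm those papers describe is just the LMS
(Widrow-Hoff) rule, with each memistor's adjustable conductance storing one
weight. A minimal sketch in Python (function and variable names are mine, not
from the paper):

```python
def train_adaline(samples, targets, lr=0.01, epochs=50):
    """Train a single ADALINE unit with the LMS (Widrow-Hoff) rule.

    Each weight plays the role of one memistor's adjustable conductance;
    targets use bipolar (+1/-1) encoding, as in the original machines.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
            err = t - y                                   # LMS error term
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND with bipolar encoding
X = [[-1, -1], [-1, 1], [1, -1], [1, 1]]
T = [-1, -1, -1, 1]
w, b = train_adaline(X, T)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
print(preds)  # → [-1, -1, -1, 1]
```

The key difference from the perceptron rule is that ADALINE adapts on the
linear output before thresholding, which is what made an analog resistive
element like the memistor a natural fit for the weight.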

~~~
tubetime
this is the 1960 paper about ADALINE that discusses the memistor: [http://www-isl.stanford.edu/~widrow/papers/t1960anadaptive.p...](http://www-isl.stanford.edu/~widrow/papers/t1960anadaptive.pdf)

