
A Critique of Pure Learning: What Neural Networks Can Learn from Animal Brains - hardmaru
https://www.biorxiv.org/content/10.1101/582643v1
======
raspasov
Very well said IMO.

Found the bit about planes/birds vs humans/AI very insightful:

"But it remains controversial whether further progress in AI will benefit from
the study of animal brains. Perhaps we have learned all that we need to from
animal brains. Just as airplanes are very different from birds, so one could
imagine that an intelligent machine would operate by very different principles
from those of a biological organism. We argue that this is unlikely because
what we demand from an intelligent machine—what is sometimes misleadingly
called “artificial general intelligence”—is not general at all; it is highly
constrained to match human capacities so tightly that only a machine
structured similarly to a brain can achieve it. An airplane is by some
measures vastly superior to a bird: It can fly much faster, at greater
altitude, for longer distances, with vastly greater capacity for cargo. But a
plane cannot dive into the water to catch a fish, or swoop silently from a
tree to catch a mouse. In the same way, modern computers have already by some
measures vastly exceeded human computational abilities (e.g. chess), but
cannot match humans on the decidedly specialized set of tasks defined as
general intelligence."

~~~
Iv
The constraints put on self-driving cars already show that we clearly expect
machines to exceed human performance.

We don't want an AI to show all the biases of humans; we mostly don't care
about the emotions, the struggle to remember things, the unreliability of
memory, imperfect communication, survival instincts, bodily needs.

Our brain evolved to survive, reproduce, and be part of a fitter group. None
of these parts of the loss function are useful in most AI problems.

We don't ask a robot to drive a forklift like humans do; we want it to be
faster (better, stronger) and more precise. To not tire, to not make mistakes,
to not get bored, to not wonder about the meaning of life.

The more AI advances and the more we understand the human brain, the clearer
it seems (to me at least) that our capacity for rational thought at all is
accidental, a byproduct of other specialties our brain has (we struggle to add
small numbers, but we have accelerated circuits for reading emotions, pose,
and injuries).

We may want to learn a bit more about how our social brain works, but for all
the rational things, I suspect the way our brain works is pretty hacky.

~~~
arethuza
I'm not convinced that we actually use "rational thought" very much - isn't
there a theory that what we think of as "rational thought" is really a
post-hoc narrative constructed to explain our own deeper thought processes to
ourselves?

I suspect this is why "old fashioned" symbolic AI floundered - it was trying
to mechanise how we think we think (or rather how we used to think we think)
rather than the actual underlying processes.

------
bra-ket
>Genome doesn't have sufficient capacity to specify every connection
>explicitly .. and can encode about 1GB of information (page 6)

but the genome (the raw DNA sequence) is only a small part of a much more
complex and dynamic system, which includes regulatory networks, metabolic
pathways, RNA interference, cell signalling networks, and so on [0]. Surely
the number of bits that can be encoded by all of this is much higher than a
gigabyte, probably by many orders of magnitude. The genome is just a blueprint
of machinery that creates other machinery.
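For scale, the paper's ~1GB figure is easy to sanity-check from the size of
the haploid human genome (the ~3.2 billion base pairs used below are a common
outside estimate, not a number from the paper):

```python
# Back-of-envelope check of the ~1GB genome capacity figure.
base_pairs = 3.2e9           # approximate haploid human genome size
bits_per_base = 2            # four nucleotides (A, C, G, T) -> log2(4) = 2 bits
total_bytes = base_pairs * bits_per_base / 8

print(total_bytes / 1e9)     # ~0.8 GB, consistent with the paper's ~1GB
```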

This is not to take away from the authors' point (and complex systems can be
created with simple rules and just a few bits), but the innate structure
itself can be very complex. I'm wondering if there are any neurophysiological
studies showing evidence of a built-in universal grammar, Chomsky's old idea
[1].

[0] https://en.wikipedia.org/wiki/Systems_biology

[1] https://en.wikipedia.org/wiki/Universal_grammar

~~~
PhilWright
You cannot use 1GB to encode more than 1GB of information. A 1GB zip file
does not encode 1TB of information. Sure, you might have a 1GB zip file that
expands to 1TB of data, but not 1TB of information: the 1TB of data has so
much redundancy and inefficiency in how it is represented that it permits
that high level of compression.
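The distinction between data and information is easy to demonstrate with a
standard compressor (1 MB inputs here, just to keep the sketch fast; the
asymmetry is the same at any size):

```python
import os
import zlib

# Highly redundant data (like the "1TB that zips to 1GB" example):
redundant = b"\x00" * 1_000_000        # 1 MB of zeros, almost no information

# Incompressible data: every byte carries fresh information.
random_bytes = os.urandom(1_000_000)   # 1 MB of random bytes

print(len(zlib.compress(redundant)))     # a few KB at most
print(len(zlib.compress(random_bytes)))  # still ~1 MB; nothing to squeeze out
```

The compressor shrinks the redundant input by orders of magnitude but cannot
shrink the random input at all, because there the data size and the
information content coincide.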

~~~
orbifold
This is only true for closed systems. The nervous system incorporates a lot
of external information through learning - similar to how you need much less
information to specify a 1TB storage device than to fill it. The 1GB figure
also neglects epigenetics.
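A toy illustration of specifying far more data than the description itself
contains (the seed value and sizes below are arbitrary choices for the
sketch):

```python
import random

def filled_storage(seed: int, size: int) -> bytes:
    """A tiny description (one integer seed) deterministically specifies an
    arbitrarily long byte stream; the spec stays a few bytes no matter how
    much 'storage' it fills."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(size))

block = filled_storage(42, 1_000_000)
print(len(block))  # 1,000,000 bytes specified by a handful of bytes
```

Rerunning with the same seed reproduces the same stream exactly, which is the
sense in which a short description can stand in for a large artifact.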

------
didymospl
For me this is the most interesting article I've seen on HN this month, thanks
for posting this. I also love the subtle yet justified nod to Kant in the
title.

------
Wook133
I'm currently at work so I've only read the abstract, but isn't this an
obvious alternative to how things are learnt when comparing biological neural
networks and artificial neural networks? And don't we call some instances of
this "pre-wiring" instinct?

------
rudolfwinestock
Wasn't the original point of "neural networks" to imitate biological brains?

