
Apple, Google, and Facebook Are Raiding Animal Research Labs - laurex
https://www.bloomberg.com/news/features/2019-06-18/apple-google-and-facebook-are-raiding-animal-research-labs
======
majos
My attempt at a more accurate summary of the evidence from the article: some
animal research labs focus on neuroscience, and some of those neuroscientists
use machine learning. Sometimes, they get good enough at using machine
learning that tech companies hire them. Some of the senior people -- say, the
type that could lead a lab at a top university -- may eventually get paid over
a million dollars per year.

My impression is that the article is trying to make a connection like "tech
companies are trying to hack our brains...so they're getting into bidding
wars over neuroscientists!" when the reality is more like "tech companies
are always looking for people with machine learning expertise, some
neuroscientists fit that description, and in some cases their research is
directly relevant".

------
ma2rten
I think the story is exacturated. There are some people who move from
computational neuroscience to machine learning research, but it's not that
common.

~~~
bobwaycott
Did you mean _exaggerated_?

~~~
ma2rten
Yes!

------
bognition
Put another way: PhDs who know how to work hard on projects that take years
to complete are leaving academia for high-paying jobs at big tech companies.

------
gnode
> To the relief of some ethicists, we're a long way from AGI, [...]

I often hear statements of this kind, but do they have any basis? While
there's not much to suggest that AGI will be developed soon, it doesn't seem
sensible to me to say it's a long way away, since we still don't understand
how difficult the problem is, nor do we have a clear path toward its
development. It may turn out to be achievable straightforwardly, given a
breakthrough cognitive architecture.

Also, regarding ethical concerns, I don't think AGI itself is the problem,
but rather the broader domain of super-intelligence. Plausibly,
super-intelligence could arise from combining the human mind with machinery
that is not generally intelligent, and would carry the same dangers.

~~~
mantap
I am convinced that we already have the hardware for AGI. Can an
exaFLOPS-scale Google datacenter with millions of TPUs really be less
capable than the 20-watt ball of jelly we all carry around?

The barriers to AGI are in the algorithms. Nobody knows how long it will
take to achieve unsupervised learning as general as the human brain's. It
might take 200 years of slow but steady progress, or somebody could publish
a breakthrough in a paper tomorrow and we could have parity by the end of
the year.
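
A rough back-of-envelope (every figure below is a commonly cited
order-of-magnitude estimate, and counting one synaptic event as one FLOP is
itself a big assumption) suggests the raw numbers already favor the
datacenter:

    # Rough comparison: brain synaptic events/s vs. an exaFLOPS machine.
    # All numbers are order-of-magnitude estimates, not measurements.
    NEURONS = 86e9              # ~86 billion neurons
    SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
    FIRING_RATE_HZ = 10         # average firing rate, order of 1-100 Hz

    brain_ops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ  # ~8.6e15/s
    datacenter_flops = 1e18                                     # 1 exaFLOPS

    print(f"Brain:      ~{brain_ops:.1e} synaptic events/s on ~20 W")
    print(f"Datacenter: ~{datacenter_flops:.1e} FLOPS on ~tens of MW")
    print(f"Ratio:      ~{datacenter_flops / brain_ops:.0f}x")

By that crude count the hardware is already there; the gap is in what we
run on it.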

~~~
trevyn
> _Can an exaFLOPS-scale Google datacenter with millions of TPUs really be
> less capable than the 20-watt ball of jelly we all carry around?_

It is demonstrably less capable in some important dimensions, yes.

~~~
gnode
The brain is very complex, but it's not well understood how much of that
complexity contributes to its computation. Observable phenomena may add to
the computational capacity of the brain, or may just introduce noise that
impairs it.

Do you know of any tasks where the brain has demonstrated better
performance than a computer, and where the hardware is known to be the
limiting factor, i.e. where there cannot be an unknown better program the
computer could run?

~~~
UnpossibleJim
Part of the complexity is that the brain doesn't run purely on binary. A
single input can branch to 4 to 6 downstream nodes, with varying signal
strength (chemical) across each synapse, and possibly a different chemical
release at each one. People try to simulate this with a simple on/off
switch, and things get very difficult, very quickly.
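
A toy sketch of the contrast (the branching factor, weights, and
"transmitter" effects below are illustrative only, not a biophysical
model):

    import random

    def binary_unit(inputs):
        """On/off threshold unit: fires 1 if enough inputs are on."""
        return 1 if sum(inputs) >= 2 else 0

    def graded_unit(x, n_branches=5):
        """One input fans out to several downstream nodes, each getting
        a continuously scaled signal whose sign and strength depend on
        the 'transmitter' at that synapse."""
        outputs = []
        for _ in range(n_branches):
            weight = random.uniform(0.1, 1.0)      # graded strength
            transmitter = random.choice([+1, -1])  # excitatory/inhibitory
            outputs.append(transmitter * weight * x)
        return outputs

    print(binary_unit([1, 0, 1]))  # -> 1 (output is strictly on or off)
    print(graded_unit(0.7))        # -> five graded, signed outputs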

~~~
gnode
If you look at an integrated circuit like a CPU at the gate level, you will
find complexity that goes beyond the logic being implemented -- the snaking
physical routing of the traces; the width and layer of the traces; supply
and ground lines; the orientation of the gates; the number of fins on a
multi-gate FinFET. Someone without a high-level understanding might wrongly
assume that all of these details contribute to the logic.

Sometimes specific details are important, yet do not encode higher-level
information. Supply lines must be able to carry enough current, so changing
their size may break the logic. Clock distribution lines must propagate their
signals at the correct rate, so changing their length may break the logic.
Neither of these factors need be considered in a high-level emulation of a
processor.
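
As a concrete sketch of that distinction, a logic-level model of, say, a
half adder carries none of the physical detail that makes the real silicon
work:

    # Logic-level emulation of a half adder. Trace widths, supply
    # current, clock routing, and fin counts appear nowhere in the
    # model; only the logic survives.
    def half_adder(a, b):
        """Returns (sum, carry) for single-bit inputs."""
        return a ^ b, a & b

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s}, carry={c}")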

In the same way, complexity in the brain may be irrelevant to its function,
redundant, or even problematic. As an example: the vascular anatomy of the
brain is complex, but is not generally considered instrumental to
cognition; it's analogous to the Vdd and ground lines, yet it shapes the
neural matter around it. As we don't understand the mechanics of how
intelligence arises from neurons, it would be wrong to assume all of their
properties are instrumental.

Given that an integrated circuit is an engineered system, the brain is an
evolved biological system, and evolution is in many ways more prone to
unnecessary complexity because of local-optima traps, I would speculate
that much of the brain's complexity is likely not instrumental to
intelligence (though it may be contributory).

