
Discovering How the Brain Works Through Computation - rbanffy
https://engineering.columbia.edu/press-releases/discovering-how-brain-works-through-computation
======
jmiskovic
Yesterday I watched a fascinating talk, The Ghost in the Machine, by AI
researcher Joscha Bach. He claims that computation is enough to support
conscious thought, both in living things and in simulated environments. The
whole talk is densely packed with ideas and concepts (for example, a nice
illustration of the concepts behind back-propagation NNs at 19:20). Here [0] he
lays out his current model of the mind. I have no idea what to make of it, but
the whole talk is incredibly lucid and contains zero fluff or BS (unlike the
linked article).

[0] [https://youtu.be/e3K5UxWRRuY?t=1702](https://youtu.be/e3K5UxWRRuY?t=1702)

------
seven4
“For me, understanding the brain has always been a computational problem,”
says Papadimitriou, who became fascinated by the brain five years ago.
“Because if it isn't, I don't know where to start.”

Interesting - we do tend to approach problem sets via the lens that is readily
available to us.

Slightly unrelated but I've always been fascinated by how useful it is to
overlay two areas of interest and the insights that present themselves when
you focus at the intersection. What's more it seems to work with disparate
ideas as well as it does with adjacent ones. I'm convinced it's a useful way
to generate ideas for startups/businesses.

~~~
throwaway_pdp09
> we do tend to approach problem sets via the lens that is readily available
> to us.

Is it possible to do otherwise? We use the most appropriate tool for a job,
the one we're most familiar with. I can't see an alternative. I could be being
short-sighted, though.

~~~
rbanffy
Sometimes you look at the problem and realize that, first, you'll need to
build new tools to analyze it.

------
seesawtron
Quite interesting. Maass (the senior author) has done some pioneering work on
Liquid State Machines and Echo State Networks (reservoir-computing variants of
RNNs).

In this paper they define a formal system intended to model the computations
underlying cognitive functions. Its basic unit is an "assembly": a set of
excitatory neurons, all belonging to the same brain area, that is capable of
near-simultaneous firing.
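
As a rough illustration (my own sketch, not code from the paper), the model's
core "projection" operation can be reduced to random wiring between two areas
plus a winner-take-all cap: an assembly fires in a source area, and inhibition
leaves only the k most strongly driven neurons active in the target area. The
parameter values here are arbitrary, and plasticity is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000  # neurons per brain area (arbitrary toy value)
k = 50    # assembly size: the "cap", only the top-k neurons fire
p = 0.05  # probability of a random synapse between two neurons

# Random synaptic wiring from a source area to a target area
# (the very assumption questioned above).
W = (rng.random((n, n)) < p).astype(float)

# An assembly in the source area: k co-firing excitatory neurons.
source_assembly = rng.choice(n, size=k, replace=False)

# Projection: each target neuron sums its input from the firing
# assembly, and inhibition keeps only the k most driven neurons.
drive = W[:, source_assembly].sum(axis=1)
target_assembly = np.argsort(drive)[-k:]

print(len(target_assembly))  # k winners form the new assembly
```

In the actual paper this step is iterated with Hebbian plasticity until the
target assembly stabilizes; a single round is just the skeleton of the idea.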

The main assumption in their model that I disagree with is the random synaptic
wiring of the circuits. There is considerable research arguing that synaptic
connectivity is not random but specific in several ways [0,1].

[0] Sporns O. (2011). The non-random brain: efficiency, economy, and complex
dynamics. Front. Comput. Neurosci. 5:5 10.3389/fncom.2011.00005

[1] Motta et al. (2019)
[https://science.sciencemag.org/content/366/6469/eaay3134](https://science.sciencemag.org/content/366/6469/eaay3134)

------
beefman
Paper:
[https://www.pnas.org/content/pnas/early/2020/06/08/200189311...](https://www.pnas.org/content/pnas/early/2020/06/08/2001893117.full.pdf)

------
eleitl
> “So, we have finally articulated our theory about the nature of the “logic”
> sought by Axel, and its supporting evidence,” says Papadimitriou, who is
> also a member of the Data Science Institute. “Now comes the hard part, will
> neuroscientists take our theory seriously and try to find evidence that
> something like it takes place in the brain, or that it does not?”

So don't get too excited yet.

~~~
DennisP
Seems like instead of jumping to neuroscience and fMRI, they should implement
the model and see whether it actually works on practical problems. All they
have is a simulation that they say "fits the data," presumably neuroscience
data.

Demonstrate a breakthrough in machine intelligence and neuroscientists will
probably get interested.

~~~
FakeRemore
Computational modelling is already something that's fairly widespread for
trying to understand the brain. The only novel thing here might be how his
model is designed.

~~~
DennisP
Well yes, but it still seems relevant to just ask _does this work?_ If you
implement it, does it produce some kind of intelligent behavior? If not then
you've disproven the hypothesis, and if so then it might have practical
benefit.

~~~
FakeRemore
I mean, the whole point of implementing this would be to see what insights can
be gained.

From the study:

> The basic operations of the Assembly Calculus as presented here—projection,
> association, reciprocal projection, and merge—correspond to neural
> population events which 1) are plausible, in the sense that they can be
> reproduced in simulations and predicted by mathematical analysis, and 2)
> provide parsimonious explanations of experimental results (for the merge and
> reciprocal project operations, see the discussion of language below)

This was an initial study, and I'm sure they're going to continue putting out
papers exploring the model and how it compares with experimental data. It's
not like everybody's going to see this one study and say "He's right! We
should all use this model!" It's more that as more evidence is provided, the
model becomes more relevant and it might be considered by others to be useful.

~~~
DennisP
By "experimental data" do you mean something like "this had a pattern of
neural activity that looks like what we've observed in biology," or do you
mean "we tried driving a self-driving car with this method and it worked
really well." The latter is more what I'm talking about, whether it's
robotics, image classification, game playing, etc.

~~~
FakeRemore
You do realize you can't just jump straight into "let's see if this thing can
drive a car," right? There are years of research and development ahead before
anything like that. And it's not like they're designing this to get a car-
driving AI; they're trying to find an accurate model of the brain. Maybe, if
results are promising, this can be turned into something like that in 10-20
years, but I wouldn't count on it. Maybe sooner if it turns out to be
particularly promising.

Chances are this is going to evolve radically in different directions. They
might hit dead ends. They might gain valuable insights about how some parts of
the brain work, but be unable to generalize them or go from there to a general
problem-solving intelligence. There are all manner of problems, and I don't
think you realize how complex the brain actually is when you're starting from
first principles like this. They're at the level of simulating what individual
neuron clusters do. That's like looking at electrons interacting and expecting
to build a bridge by manipulating them individually.

~~~
DennisP
I do realize, but I'm not saying let's try to jump right into general
intelligence. Deep neural networks do all sorts of reasonably smart things
right now, including all the things I mentioned. Seems to me we are at a point
where we can test neural models to see whether they actually perform functions
useful to an organism.

~~~
FakeRemore
The first neural networks appeared in the 50s (with significant limitations),
and serious research and funding only arrived in the 80s. NNs didn't become a
thing overnight, and neither will this. I'm not sure why you're expecting this
thing to be proven _now_. Science is slow. It takes time to build evidence,
"prove" things, and figure out how useful a model is.

~~~
DennisP
Perceptrons were applied to image recognition tasks right from the beginning:
[https://en.wikipedia.org/wiki/Perceptron#History](https://en.wikipedia.org/wiki/Perceptron#History)

Applying them to real tasks revealed their capabilities and limitations, and
when computers got better people took it from there.
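
The perceptron from that era is simple enough to show in a few lines. This is
the classic Rosenblatt update rule on a toy linearly separable task (learning
logical OR), a minimal sketch rather than any historical implementation:

```python
import numpy as np

# Training data: logical OR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 1.0         # learning rate

for _ in range(10):                 # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (yi - pred) * xi  # update only on mistakes
        b += lr * (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 1, 1, 1]
```

The convergence theorem guarantees this finds a separating line whenever one
exists, which is exactly the kind of capability-and-limitation (XOR has no such
line) that applying perceptrons to real tasks exposed.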

