
Georgia Tech Researchers Demonstrate How the Brain Can Handle So Much Data - espeed
http://www.cc.gatech.edu/georgia-tech-researchers-demonstrate-how-brain-can-handle-so-much-data
======
MrQuincle
Article is behind a paywall.

What kind of random projections? Reservoir computing can be seen as random
projections: echo state networks, liquid state machines, extreme learning
machines. Does it involve extra constraints? Sparsity? Nonnegativity? Which
random projection schemes do not behave similarly to humans?
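
For context, here's a minimal echo state network sketch (my own illustration, not from the article) of the sense in which reservoir computing is a fixed random projection, in that case upward into a high-dimensional state space:

```python
import numpy as np

# Hypothetical sketch: a reservoir is a fixed random nonlinear projection of
# the input history into a high-dimensional state space; only a linear
# readout on the states is ever trained.
rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # random input weights, never trained
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # random recurrent weights, never trained
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (echo state property)

def run_reservoir(u):
    """Map an input sequence u of shape (T, n_in) to states of shape (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)          # fixed random projection step
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 500))[:, None]
X = run_reservoir(u)  # (500, 200) random features; fit e.g. ridge regression on X
```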

Very difficult to distill more info out of this!

~~~
espeed
Here's a link: "Visual Categorization with Random Projection"
[http://www.cc.gatech.edu/~vempala/papers/categorization.pdf](http://www.cc.gatech.edu/~vempala/papers/categorization.pdf)

~~~
MrQuincle
Thanks!

It randomly projects to a lower-dimensional space (unlike the reservoir
computing methods I mentioned, which project upward). That's indeed one of the
standard ways to do dimension reduction.

What looks new to me is using random weights over a sliding window and
combining that with FAST features. It brings computer vision structure into
random projections. Interesting! (A rough sketch of the patch idea follows.)
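
Here's a minimal sketch of the patch-plus-projection idea (my reading only, with made-up parameters, not the paper's actual pipeline):

```python
import numpy as np

# Slide a window over an image, flatten each patch, and compress all patches
# with one fixed random Gaussian matrix (d = patch*patch dims down to k dims).
rng = np.random.default_rng(0)

def random_project_patches(img, patch=8, stride=4, k=16):
    """Extract sliding-window patches and randomly project each to k dims."""
    h, w = img.shape
    patches = [
        img[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, stride)
        for j in range(0, w - patch + 1, stride)
    ]
    P = np.asarray(patches)                                 # (n_patches, patch*patch)
    R = rng.normal(0, 1 / np.sqrt(k), (patch * patch, k))   # random projection matrix
    return P @ R                                            # (n_patches, k) features

img = rng.random((64, 64))
print(random_project_patches(img).shape)                    # (225, 16)
```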

------
fauigerzigerk
I'm not sure this is really how humans do it. Actually, I'm not even sure
humans recognise all types of images in the same way. It seems to me that
humans use logic in some situations and lower-level features in others, or
maybe both to some degree.

For instance, imagine a view from a car onto a dark country road ahead, lined
with trees. The road bends so you can't see around the corner, but you see a
light getting brighter. Humans would conclude that there must be an oncoming
car and they would focus on where exactly the headlights appear relative to
the middle of the road.

Nothing else in the picture would matter much. The one thing that does matter
cannot be concluded from the picture itself. So humans can compress that image
to two points (the headlights) and a line that represents the middle of the
road. Pretty good dimensionality reduction, I would say.

Machine learning systems can obviously be trained to do something very
similar. But my point is that if you take away the situational context
necessary for focus, what's left may not tell us much about how humans process
images in most cases.

Also, how many examples of car crashes does a machine learning system need in
order to recognise where the headlights should definitely not be? How many
head-on car crashes do humans need in order to learn from their mistakes? But
that's a wholly different subject.

Please don't take this as a criticism of the paper. I haven't (fully) read it
and I know next to nothing about random projection. I'm just generally
wondering about how lower-level features and high-level reasoning are
interconnected. It doesn't seem to be a one-way street (which doesn't
necessarily mean that they are on a collision course, though).

~~~
MrQuincle
The point here is that dimensionality reduction does not need to be done by
inferring higher level concepts like headlights. That humans have ready-made
models at hand in some scenarios is common knowledge.

You can randomly throw stuff away! That's the new thing here.

Of course it is a very rough setup. However, the hypothesis that the brain
performs random projections is interesting, and they actually compared it
against human subjects, which is a plus.

Compared to model-heavy or deterministic "boring" dimensionality reduction
techniques, random projections might also explain secondary effects, such as
better resilience against overfitting.

~~~
fauigerzigerk
_> The point here is that dimensionality reduction does not need to be done by
inferring higher level concepts like headlights._

I get that, and it is a very useful result.

But what I was wondering about is whether humans can ever "switch off" high
level reasoning when they perceive sensory information. Even when the image is
supposedly abstract, humans may be making stuff up, and that might influence
perception and dimensionality reduction, similar to what's happening in a
Rorschach test.

And that's why I don't believe that what the paper shows is actually "How the
Brain Can Handle So Much Data" -- the title of the linked article.

------
dharma1
Here's a good video explaining random projection. It's basically taking
high-dimensional data and reducing its dimensionality so that it's faster to
process. I don't have any background in statistics, but this talk was still
intuitive to follow.

[https://youtu.be/V9zl09w1SGM?t=10m27s](https://youtu.be/V9zl09w1SGM?t=10m27s)
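
The key property (my own quick illustration, not from the talk): a random projection roughly preserves pairwise distances, per the Johnson-Lindenstrauss lemma, so the geometry survives even though downstream processing gets much cheaper.

```python
import numpy as np

# Project 10000-dimensional points down to 500 dimensions with a random
# Gaussian matrix and check that pairwise distances are roughly preserved.
rng = np.random.default_rng(0)
d, k, n = 10_000, 500, 100

X = rng.normal(size=(n, d))                     # high-dimensional data
R = rng.normal(0, 1 / np.sqrt(k), size=(d, k))  # random projection matrix
Y = X @ R                                       # reduced to k dimensions

orig = np.linalg.norm(X[0] - X[1:], axis=1)     # distances before projection
proj = np.linalg.norm(Y[0] - Y[1:], axis=1)     # distances after projection
print(np.median(proj / orig))                   # ~1.0: distances preserved
```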

I skimmed the OP article; it wasn't immediately obvious how they deduced that
the brain does something similar. I would have thought a better understanding
of actual neural processing at the biological level is required.

That's not to say neurons/brains don't reduce data complexity to process it
fast - I'm sure they do. But exactly how that happens probably matters too.
Nature has had a long time to work this out.

------
conjectures
"We extracted small patches from images, just like they do in neural networks
research. Then we used neural networks and humans to identify the images
patches were drawn from. Because humans can do this and neural networks can
too humans must work like neural networks."

Hmm. Did I miss something?

~~~
gone35
A lot: the whole _random projections_ thing.

~~~
conjectures
The random projections that happened to be blurred or overlaid image patches?

My point isn't about whether RP makes for good ML algorithms; it's about
whether human vision is patch-based.

------
dharma1
These guys are working on optical hardware for significantly faster random
projections. Looks interesting:

[https://sites.google.com/site/companylighton/home/press](https://sites.google.com/site/companylighton/home/press)

