
However, there definitely are analogies! Early work on convnets, for example, was inspired by the architecture of the cat visual cortex.

I think the fields have useful things to say to each other, but we're still getting over a (maybe justified) taboo against describing machine learning methods as biologically inspired.




The origins of that analogy are very flimsy:

1) Hubel and Wiesel discovered simple and complex cells in the cat's V1 in the 1960s. They came up with an ad hoc explanation that the complex cells somehow "pool" over many simple cells of the same orientation. No one to date knows how such pooling would be accomplished (i.e., a mechanism that selects exactly the simple cells of similar orientation but different phase, and not others), or whether that pooling happens only in V1 or elsewhere in the cortex as well.

2) Fukushima expanded that ad hoc model into the neocognitron in the 1980s, even though there is exactly zero evidence for similar "pooling" in higher cortical areas. In fact, higher cortical areas are essentially impossible to disentangle and characterize even today.

3) Yann LeCun took the neocognitron and made a convnet, which worked OK on MNIST in the late 1980s. Afterward the approach was largely forgotten for many years.

4) A few years ago, Hinton and some dude who could write good GPU code (Alex Krizhevsky) took the convnet and won ImageNet. That is when the current wave of "AI" started.

In summary, convnets are very loosely based on an ad hoc explanation of Hubel and Wiesel's findings in the primary visual cortex, an explanation that neuroscience today regards as "incomplete" to say the least (more likely completely wrong). This stuff now works to a degree, but the biological inspiration really is minimal; essentially all that survived is the convolution-then-pooling pair sketched below.
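To make the surviving analogy concrete: position-sensitive "simple cell"-like filters (convolution) followed by a "complex cell"-like max over nearby positions (pooling). A minimal numpy sketch, where the function names and the toy edge filter are my own, purely illustrative:

    import numpy as np

    def conv2d_valid(image, kernel):
        """Naive "valid" 2-D cross-correlation: each output unit responds
        to a local patch, like an orientation-selective simple cell."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(feature_map, size=2):
        """Max over a small neighborhood: the output keeps responding to
        the feature regardless of its exact position, loosely like a
        complex cell that ignores phase."""
        h, w = feature_map.shape
        out = np.zeros((h // size, w // size))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = feature_map[i * size:(i + 1) * size,
                                    j * size:(j + 1) * size]
                out[i, j] = patch.max()
        return out

    # Vertical-edge detector standing in for an orientation-tuned filter.
    vertical_edge = np.array([[1., 0., -1.],
                              [1., 0., -1.],
                              [1., 0., -1.]])

    image = np.random.rand(8, 8)
    simple_responses = conv2d_valid(image, vertical_edge)  # position-sensitive
    complex_responses = max_pool(simple_responses)         # position-tolerant
    print(complex_responses.shape)  # (3, 3)

That max over a neighborhood is essentially all that remains of the complex-cell "pooling" idea in modern architectures; whether real complex cells do anything like it is exactly the open question above.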



