McCulloch's argument was that perhaps the gross behaviour of a neural network, viewed as layers of simple transfer functions, is where the real action is, and the rest of the details are just gravy.
The fact that we now give this to undergrads as homework suggests there was some value to the idea.
Students in computer science may implement a perceptron as a homework problem. Students in biology don't do that, nor do they use perceptrons to learn about brains, because perceptrons bear only faint resemblance to biological neurons. Reproducing important biological features of real neurons requires much more complicated software.
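To give a sense of how small that homework version is: a perceptron is just a weighted sum, a hard threshold, and a one-line update rule. A minimal sketch in Python/NumPy (the AND-gate data, learning rate, and epoch count are arbitrary choices for illustration):

```python
import numpy as np

# Minimal perceptron: weighted sum, hard threshold, classic update rule.
# The AND-gate dataset and hyperparameters are arbitrary illustrative choices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # logical AND of the two inputs

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                           # AND is linearly separable, so this converges
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0     # hard-threshold "activation"
        w += lr * (target - pred) * xi        # perceptron learning rule
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # reproduces y
```

Compare that with the biologically detailed simulators linked further down (GENESIS, NEURON), which model compartments, ion channels, and membrane dynamics rather than a single weighted sum.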
I'm not denigrating perceptrons or other neuro-inspired approaches to classification. I'm just pointing out that perceptrons are not a faithful model of neurons.
But it turns out that they don't have to be. We know that radically different low-level implementations can approximate the same higher-level functions given a large enough network and enough training: half-precision floating point, integer, or even binary ANNs, not to mention the wide variety of activation functions (ReLU, sigmoid, tanh, maxout, softmax, etc.). We've also seen increasingly varied ANN architectures applied to the same tasks with good results, so I would expect this to continue to hold true for ever more sophisticated tasks.
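As a toy illustration of the precision point (just a sketch with an arbitrary random network, not a result from anywhere in particular): evaluate the same two-layer ReLU network in float64 and float16 and compare the outputs.

```python
import numpy as np

# Toy illustration: the same two-layer ReLU network evaluated at different
# numeric precisions produces essentially the same outputs. Layer sizes and
# the random seed are arbitrary.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 1)), rng.standard_normal(1)
x = rng.standard_normal((100, 32))

def forward(x, W1, b1, W2, b2):
    h = np.maximum(x @ W1 + b1, 0.0)    # ReLU hidden layer
    return h @ W2 + b2                  # linear output layer

y64 = forward(x, W1, b1, W2, b2)
y16 = forward(*(a.astype(np.float16) for a in (x, W1, b1, W2, b2)))

# The discrepancy is tiny compared to the outputs themselves; the high-level
# function the network computes barely notices the change of substrate.
print(np.max(np.abs(y64 - y16.astype(np.float64))), np.max(np.abs(y64)))
```

The same exercise works with integer or binarised weights after (re)training, which is the stronger version of the claim; the float16 comparison is just the cheapest way to see it.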
I am certain, BTW, that further study of biological neurons will continue to yield insights for the design of ANNs, but it does not at all follow that ANN design will become more similar to biological NNs as a result. Given the completely different substrates, simulating a biologically plausible NN in order to perform a task (for purposes other than gaining further understanding of biological NNs, that is) would be incredibly wasteful and unnecessary, even if your goal is to create an AGI of some sort.
I was disagreeing with someone who wrote that we understand how neurons work and that perceptrons model them "quite well." They do not model biological neurons well at all. I agree that biological fidelity is not important for building useful ANNs.
I presented (a vulgar summary of) McCulloch's hypothesis, not my own. And since I didn't use the words "quite well", you are not entitled to put them in quotes.
OK, thanks. Too late to edit. Adjust flames accordingly. Of course a perceptron is not an accurate model of a biological neuron. But as a reduction to a minimal model it's still pretty darn interesting.
http://www.genesis-sim.org/
https://www.neuron.yale.edu/neuron/what_is_neuron