>Brains are complicated, and there is a huge amount of heterogeneity in how people process information and think about mathematics (and indeed all topics, but it is clearer in mathematics perhaps). Some are very visual, some are big on calculation.

What do you mean by "associative model"? That doesn't map to anything I've heard of in cognitive science, statistics, machine learning, or Good Old-Fashioned AI.

But actually, I would expect different behaviors from an animal that learns language via a purely correlational, discriminative model (like most neural networks) than from one that learns it via a causal model. Causal models compress the empirical data better precisely because they're modelling the sparse bones of reality rather than the abundant "meat" of correlated features. You should be able to generalize better and faster with a causal model than with a discriminative, correlational one.
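As a rough illustration of that last claim, here's a toy sketch (everything in it – the data-generating process, the feature names, the hand-rolled logistic regression – is invented for the example, pure NumPy, nothing from any particular paper): train on data where a spurious feature tracks the label, then test in an environment where that correlation has been severed.

  # Toy setup (assumed, not from the thread): y is caused by x_causal;
  # x_spurious merely tracks y during training and is decorrelated at
  # test time, i.e. the test environment intervenes on the spurious cue.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 5000

  def make_data(spurious_strength):
      x_causal = rng.normal(size=n)
      y = (x_causal + 0.5 * rng.normal(size=n) > 0).astype(float)
      x_spurious = spurious_strength * (2 * y - 1) + rng.normal(size=n)
      return np.column_stack([x_causal, x_spurious]), y

  def fit_logreg(X, y, lr=0.1, steps=2000):
      # plain gradient-descent logistic regression, to keep it dependency-free
      w = np.zeros(X.shape[1])
      for _ in range(steps):
          p = 1.0 / (1.0 + np.exp(-X @ w))
          w -= lr * X.T @ (p - y) / len(y)
      return w

  def accuracy(w, X, y):
      return np.mean(((X @ w) > 0) == y)

  X_train, y_train = make_data(spurious_strength=2.0)  # spurious cue is highly predictive
  X_test,  y_test  = make_data(spurious_strength=0.0)  # intervention removes the cue

  w_corr   = fit_logreg(X_train, y_train)         # free to lean on the spurious feature
  w_causal = fit_logreg(X_train[:, :1], y_train)  # restricted to the causal parent

  print("correlational model, test accuracy:", accuracy(w_corr, X_test, y_test))
  print("causal-only model,   test accuracy:", accuracy(w_causal, X_test[:, :1], y_test))

If the sketch is right, the unrestricted model puts most of its weight on x_spurious and its test accuracy drops sharply once the intervention removes that correlation, while the model restricted to the causal parent performs about the same in both environments – the "generalize better and faster" claim in miniature.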




I think I meant correlational, but it was really just a placeholder for "a different model". You could replace the chimp with some kind of alien whose thinking model is completely, well, alien to us – but still proves its intelligence by virtue of having a spaceship.

I'm not necessarily saying that different models lead to exactly the same behaviour. Clearly, chimps' models don't generalise as well as ours do – they don't have a language model that matches ours in capability, for example – and that leads to different behaviour. But given that their behaviour is generally thought of as less intelligent rather than not intelligent at all, it seems the mechanism itself is not the important thing.



