>I further think that statistical/probabilistic models of language are better for both goals.
Could you give some concrete examples? As a linguist, I don't see statistical models currently giving us much insight in the areas where syntactic theory does. For example, we don't seem to have learned much about relative clauses, ergativity, passivization, and so on through these models. On the whole, statistical methods seem very much complementary to traditional syntactic theory. This seems to be Chomsky's view also:
"A quite separate question is whether various characterizations of the entities and processes of language, and steps in acquisition, might involve statistical analysis and procedural algorithms. That they do was taken for granted in the earliest work in generative grammar, for example, in my Logical Structure of Linguistic Theory (LSLT, Chomsky 1955). I assumed that identification of chunked word-like elements in phonologically analyzed strings was based on analysis of transitional probabilities — which, surprisingly, turns out to be false, as Thomas Gambell and Charles Yang discovered, unless a simple UG prosodic principle is presupposed. LSLT also proposed methods to assign chunked elements to categories, some with an information-theoretic flavor; hand calculations in that pre-computer age had suggestive results in very simple cases, but to my knowledge, the topic has not been further pursued."
Anyway, if you want to pursue this critique of Chomsky further, I'd recommend a bit more background reading. This article gives a fuller explanation of the views he was outlining at the conference: http://www.tilburguniversity.edu/research/institutes-and-res...
>Or do you think Chomsky meant something else by that?
He presumably means what he said, namely that merely creating accurate models of phenomena has never been the end goal of science. You acknowledge this yourself when you say that you take both modeling and explanation to be part of science.