Because GOFAI just observably doesn't work. The ideas are brittle, can't generalize and abstract the way they need to, have made very little progress recently (if any) in an AI context, and you just _don't see_ anything that would argue otherwise.
In contrast, ML methods do work, observably and clearly, and they work in a ridiculously general way, to a degree greater than almost anyone thought (or even thinks) reasonable.
And it's not just my opinion; there's a reason AI conference attendance has shot up by a factor of 10 or so in the last few years, why NeurIPS is the leading one (and even historically-GOFAI conferences are now majority NNs), why the big AI labs with big AI cash are all doing NNs, and why all of a sudden AI is a popular topic outside academia.
If this doesn't answer your question, perhaps answer the opposite; how do you know that it's wrong?
>> If this doesn't answer your question, perhaps answer the opposite; how do you know that it's wrong?
I know the literature. It's my job.
>> And it's not just my opinion; there's a reason AI conference attendance has shot up by a factor of 10 or so in the last few years, why NeurIPS is the leading one (and even historically-GOFAI conferences are now majority NNs), why the big AI labs with big AI cash are all doing NNs, and why all of a sudden AI is a popular topic outside academia.
That's still an opinion- "it's not just my opinion, everyone says so". A.k.a. "It is known", in Dothraki. And of course it is of no consequence who's spending money on what and who's submitting papers where. The volume of research was never a criterion for its quality. Heed thee well the legend of our Lord Geoff Hinton's years in the academic wilderness and how he emerged victorious with the laws of deep learning in his hands.
I think what you've said so far has convinced me you're expressing a personal opinion that is strongly held without a good reason to do so. You make sweeping statements with great certainty, but you don't really seem to know how you know the things you know, so you end up "knowing" some things that you don't really know. For instance, you claimed that "GOFAI" successes just "aren't there" but I listed a few, like Deep Blue or MYCIN - and you didn't seem to have heard of these before (I'm more surprised about not knowing of Deep Blue than MYCIN).
You also claim that "these approaches are not AI". That's a "No True Scotsman" right there. Except there really is no True Scotsman (i.e. "AI" in the sense you use it)- ask Yoshua Bengio:
Bengio: In terms of how much progress we've made in this work over the last two decades: I don't think we're anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals, for perception. And we're gradually climbing this ladder in terms of tools that allow an entity to explore its environment.
Spectrum: Will any of these ideas be used in the real world anytime soon?
Bengio: No. This is all very basic research using toy problems. That's fine, that's where we're at. We can debug these ideas, move on to new hypotheses. This is not ready for industry tomorrow morning.
> For instance, you claimed that "GOFAI" successes just "aren't there" but I listed a few, like Deep Blue or MYCIN - and you didn't seem to have heard of these before (I'm more surprised about not knowing of Deep Blue than MYCIN).
At this point I think we're just hopelessly talking past each other. Of course I know about Deep Blue. I didn't know about MYCIN, but, like, "MYCIN was never actually used in practice", so I don't feel particularly bad about missing that one.
But neither of those challenges my point. If you go back in time 30 years, then sure: to be an AI expert, you had to know GOFAI. That's what the 'OF' stands for.
> I know the literature. It's my job.
Yah, I read the literature too (albeit, it seems, a very different subset). That's not an argument though.
But none of those sources says that e.g. search or planning are not AI fields. That was your original claim, if I'm not mistaken? Anyway it doesn't matter. It's a very strange thing to say and I was just trying to understand what made you say it- strictly out of curiosity.
I too can quote Hinton- from memory and without a link. I remember him saying that the next big thing in AI will come from a grad student who distrusts everything he (Hinton) has ever said. Unfortunately, I won't be that grad student- I haven't heard everything that Hinton has ever said.
I summarized my claim best when I said the following. Whether or not it's an 'AI field' is not very interesting to me, as long as this holds.
---
Like, my point is not about whether you can find the odd person trying to solve intelligence with grammars, or whether formerly-GOFAI conferences still harbour GOFAI research in the corners; my point is that a) these approaches don't work as a way to actually tackle AI, the problem, b) the vast majority of the field does not take them seriously as a method of doing so, regardless of other uses, and c) therefore it's natural, not 'impossible', to gain AI expertise without having much care for those parts of the field.
In contrast, ML methods do work, observably and clearly, and they work in a ridiculously general way, to a degree greater than almost anyone thought (or even thinks) reasonable.
And it's not just my opinion; there's a reason AI conference attendance has shot up by a factor of 10 or so in the last few years, why NeurIPS is the leading one (and even historically-GOFAI conferences are now majority NNs), why the big AI labs with big AI cash are all doing NNs, and why all of a sudden AI is a popular topic outside academia.
If this doesn't answer your question, perhaps answer the opposite; how do you know that it's wrong?