Hofstadter should be COLLABORATING with all those other researchers who are working with statistical methods, emulating biology, and/or pursuing other approaches! He should be looking at approaches like Geoff Hinton's deep belief networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing and contrasting them with his own theories and findings! The converse is true too: all those other researchers should be finding ways to collaborate with Hofstadter. It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
All these different approaches to research are -- or at least should be -- complementary.
Researchers such as Hawkins are well aware of Hofstadter's ideas, and Hofstadter's grad students take his ideas out into the world of AI research with no real need for Hofstadter himself to personally attend conferences. Every one of them would love to use any idea that has been overlooked by the rest of the field to make a name for himself/herself with some career-making breakthrough that can do what humans can do but other AI systems can't.
Hofstadter himself spoke here at Stanford a few years ago to a standing-room-only audience. I don't dispute the notion that mindsets and political agendas can delay the acceptance of (or work on, or resources for) a good idea for years, but anything of use in his work will eventually be put to use. He can keep doing what he's doing, and brainstorming with his grad students, and anything useful they find will be disseminated.
When I look at my daughter developing, from infant to toddler to child, hasn't that been a constant, intensive training? As she recognizes stuff, I give feedback. After a while she starts correlating stuff, and signals for me to give feedback. By the time she's an adult, she will have full control of her intelligence, but no understanding of how it works.
Maybe what we are missing is just the algorithm for information storage and retrieval. If we can master Genetic Algorithms, why not Cellular Databases? Or Chemical Procedures?
So, you and Watson are "very similar" just because both systems lack a perfect understanding of themselves? You don't know that. Your premises look true, but your conclusion doesn't follow from them (or at all). Actually, you probably know that no matter how you spin it, you and Watson are very different.
So don't say you aren't, it's misleading. Not only to others, but to yourself as well. Try to find a meaningful similarity instead.
Truly original ideas are fragile and delicate. They require careful nurturing and devoted protection if they are to eventually flower.
It may be that Hofstadter sees far more deeply than I, and approaches like NuPIC and deep belief networks that seem different to me and therefore in need of synthesis are to him transparently isomorphic and dead ends. The effort it would take him to make me understand why this is so would cost him precious time and progress on his true path.
It is an interesting question why he doesn't want to collaborate with other people, but he is far from alone in that m.o. and he kind of answered it.
I'm still learning AI (my training and dollar-paying job is in chemistry; I am really drawn to Hofstadter's "thinkodynamics" analogy). I think there's something to what you say. I'm playing around with the idea that a perceptron can be used to produce low-level sensory input to the analogy-crafting machinery that Hofstadter outlines in "Creative Analogies" - which, if you haven't read it, you should.
During my free time (which has been lacking of late, hence fewer commits), I'm playing around with something of a manifesto towards this idea of merging Hofstadter's concepts with contemporary AI: https://github.com/ityonemo/positronicbrain
As far as I understand Hofstadter's approach, it has involved something of a call for a synthesis for a long time.
But your argument is sort of doubly ridiculous: it confuses whatever personal loggerheads Hofstadter and other researchers are at with the approaches they are pursuing, and then confuses that with which approach would actually work.
There have been attempts to understand intelligence with intelligence (logic, symbols, reasoning, etc.) for 30 years, to not much effect, while AI and machine learning are now advancing quite steadily, so why the snark? All evidence suggests that the way the brain itself learns is statistical and probabilistic in nature. There are also newer disciplines, like Probabilistic Graphical Models, which are free of some of the traditional downsides of purely statistical methods, in that they can be interpreted and human-understandable knowledge can be extracted from them. This is something that really seems promising, and to some extent it is a union of the old and new approaches, despite the claims of a big division. By contrast, it is hard to see much promise in purely symbolic methods invented merely by some guy somewhere thinking very hard.
I for one am very happy that people seek inspiration in the way the human brain works; that's what science is. If you just come up with things without consulting the real world, it's not science, it's philosophy, the one discipline that has yet to produce a single result.
What you're talking about, I think, is various approaches within the latter group of researchers. I defended Hofstadter in another reply because I find his goals worthwhile in and of themselves - in a "basic research" sense. Discarding anything that's not an optimal solution - the attitude taken by a couple of responses here - ignores a whole lot of interesting science, and as a scientist it bothers me quite a bit.
That said, once we're talking about the goal of understanding the human mind, GOFAI is, to be sure, incredibly old-fashioned, and you won't find me defending Hofstadter anymore as far as his approach. His goal is a worthwhile one, but you're right that his approach shouldn't really be considered an 'underdog'.
Personally, I think the best hope for understanding what intelligence is, in a general sense, comes from non-equilibrium thermodynamics, as in the sort of research going on here:
but that's a can of worms for another post.
As an aside, regarding your last comment, I completely disagree with your view of philosophy. It may not have produced results, but it has guided science. But I agree with the thrust of your sentiment: an AI researcher with the goal of understanding the human mind should be spending as much time studying humans (as in, doing Psych or Cognitive Science experiments) as programming AIs.
I agree that the signal-to-noise ratio in philosophy is very low. (I also strongly agree with the rest of your comment.) But let's be fair: it was philosophy that produced
1. Formal logic
2. Occam's razor
3. Scientific method
4. Church-Turing thesis
2. Yeah, William of Ockham was a philosopher. Score one for philosophy.
3. It looks to me as if almost everything important in the history of the scientific method is down to scientists rather than philosophers -- though, since the word "scientist" wasn't coined until the 19th century and disciplinary boundaries used to be more porous than they are now, they were often called "natural philosophers" and often did a certain amount of what-we-now-call-philosophy as well as what-we-now-call-science.
Francis Bacon is the usual chief suspect for introducing something close to the modern scientific method. He was an experimental scientist as well as a philosopher (though, it seems, not a particularly good one). Galileo was a scientist. Newton was a scientist. I suppose you might want to go back to Aristotle (though I wouldn't) -- but, actually, Aristotle was trying to do science as well as philosophy.
4. Since Church and Turing were both trained and employed as mathematicians, it seems rather strange to credit the Church-Turing thesis to philosophy. (So far as I can tell, all the other important people in its history -- Goedel, Kleene, Post, Rosser, etc. -- were mathematicians too.)
You might consider these topics philosophical by definition. If so, the conclusion would seem to be that even philosophy is often best done by scientists and mathematicians, which doesn't speak well for philosophy as a discipline.
Leibniz was top notch in many, many things: philosophy, math, linguistics, law, diplomacy, engineering, psychology, sociology... lots of things. He is a particularly bad way to start a list that is supposed to support an argument that it was mathematicians who developed formal logic, since he entirely shatters the distinction you are trying to draw.
I think you have misunderstood my argument, which was not that formal logic was developed only by mathematicians with no philosophers involved (that would be nuts) but that saying it was done by philosophers as opposed to anyone else is quite wrong. And that argument would go through just the same even if we classified Leibniz exclusively as a philosopher (which would be just as wrong as classifying him exclusively as a mathematician; my apologies, by the way, for being sloppy about that).
I have to admit I can't imagine how what I wrote turned (on its way into your mind) into an attempt to draw a sharp dichotomous distinction between mathematicians and philosophers, but evidently it did and I'm sorry that I evidently wasn't clear enough. Yes, people can be both mathematicians and philosophers, or both scientists and philosophers, or all three; yes, the boundaries are fuzzy sometimes; it was no part of my intention to imply otherwise.
But answering isn't actually philosophy's job. It's mostly for producing questions, and science will in turn answer those questions (or make them irrelevant).
Chess was considered "an AI problem" - and quite a hard one - back when nobody knew how to write a program that could play a good game. Now chess is beneath consideration because (a) programs can play it, (b) we actually understand how those programs work.
Uh Yeah, you don't understand "Hostadter", you can't be bothered to spell his name right, but you're willing to attack him.
Just FYI, Hofstadter was never in the mainstream of GOFAI but always advocated something of a different path; a path not taken, for good or ill, but not the same path as the one you caricature. You might take notice that your link makes no mention of Hofstadter, for example. Hofstadter's work on analogies, for instance, was by no means a purely symbolic approach.
And the rest of your post is thus ... ignorant and irrelevant [Edit: in that it's not about Hofstadter, just someone's critique of someone else's AI, grr]. Kind of an embarrassment having it here.
I believe he is rooted in the GOFAI tradition, because he tries to understand intelligence by introspection, and I happen to believe this is impossible, since what we are aware of consciously is just a surface-level result of processes in the brain we are completely unconscious of. Whatever descriptions of his work I can find sound pretty much like GOFAI too - you just come up with a program that does something supposedly intelligent:
I am also not attacking him, I am just sceptical, as I would be if someone who is not working in mainstream physics and is ignored by it claimed a breakthrough in understanding matter or whatever.
Hofstadter begins with symbolics but definitively moves beyond them.
The point of something like a "strange loop" is that it can't be encompassed by an ordinary symbolic system.
For someone who takes the idea of a system that's impossible to model further, IMO, check out the biologist Robert Rosen's Life Itself and Essays on Life Itself. He pushes more in the direction of complex systems theory than self-reference (maybe a physics version of the same idea?) but his writings are brilliant. Less fun than Hofstadter's, but more rigorous and with a necessary appreciation of the biological and physical sciences. Anyways, I'm rambling.
This comparison between complementary approaches is an apt analogy for most fields, where the focus shifts every once in a while, when one of the approaches largely hits a wall and most people switch to the other one. A while later, the trends will almost inevitably reverse and draw inspiration from other approaches. The unfortunate thing is that there's no dialogue between the two camps, which makes it that much harder to port good ideas from one context to the other.
I could provide examples from physics research, or for that matter, trends in static-vs-dynamic blogs :P Also, the more "applied" the field, the shorter these cycles are.
DH is the best-known member of a small, stubborn group of AI developers who still believe that "human thought" can be reasoned about and understood in isolation, and that we can build intelligence without simply reducing it to statistics or to brain anatomy.
I applaud his efforts, and find some of the programs he's written both creative and refreshing.
That said, I think one really interesting article here is: "Human-Level Artificial Intelligence Must Be an Extraordinary Science" by Nicholas L. Cassimatis
Brings a lot of challenges into focus.
What you're proposing is an overly reductionist ontology, IMO, and yet I agree that statistics are important, if not critical. (Hint: not the boring linear kind of statistics that you so often see in Big Data)
No one is arguing that they've found a domain that statistics can't be applied to or gathered from.
What is implied is that statistics should not be chosen as the preferred main engine of consciousness--because as far as we can tell, consciousness is more than just a gestalt voting between the cells of the host it's housed in.
But, why waste time with this?
You cannot avoid reducing it to statistics; the vast majority of your cognitive activity is statistical in nature, in the more general sense of the word.
Most here believe statistics are globally applicable, even in domains dealing with the odd, esoteric, or random. With that said..
- Hofstadter tries non-strictly-statistical methods
- "but you can't be non-statistical!"
- lumps Hofstadter into statistical AI > "but Hofstadter is non-conformist!"
- create separate subgroup which Hofstadter can exist within with his experiments
... and on and on. Is this the kind of recursion GEB was talking about?
A lot of us think we know statistics are universally applicable. It's a waste of time to state, and adds little to the discussion overall. Sorry that it came off so negatively, either before in my first post or now. I don't mean it that way.
If you want to understand Gödel's proofs then I recommend the book "Gödel's Proof" by Ernest Nagel and James R. Newman:
Instead of Hofstadter's GEB, read some of his papers, e.g.,
"Analogy as the Core of Cognition"
But there are others who have focused longer on analogy, e.g., George Lakoff:
"Metaphors We Live By"
"Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being":
"Women, Fire, and Dangerous Things"
I own the Nagel and Newman book and probably read it every two years or so.
I also own the FARG book which summarises the work of the Fluid Analogies group. I don't think these papers are as interesting or exhilarating as GEB so I have to disagree with you there.
I didn't really dismiss the book: I read it attentively in its entirety and, as anyone who has read it knows, that is a big book. But in the end I found nothing new or thought-provoking. Entertaining, yes; enlightening, no. "Where's the beef?" came to mind over and over as I moved through the text.
Hofstadter is certainly bright, has a voluminous memory and can be an entertaining writer but GEB is not IMO a contribution to AI. My expectations were undoubtedly too high.
Because he'll have other things to do in the next year?
I find the analogy to Einstein at the end of the article especially funny. I think it's much more likely that people will look upon current defenders of "good old fashioned AI" the way they now look upon people who still searched for the ether after Einstein's discoveries.
In particular, how often do you see a journalist with this much insight?
> When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
FCCA is really brilliant.
I think you're confusing good AI with being 'smarter than humans'.
An AI can still pass the Turing test if it believes in Scientology or any of the other nonsense beliefs that humans have.
Perhaps a terminology change is in order? I would call what people like Norvig and Hinton and Ng do "machine learning", while I would probably use "artificial sentience" to describe the Hofstadter-Dennett camp, as that seems to more closely capture their ultimate goal.
The world needs both approaches, and hopefully the time will come when the gap is bridged.
Edit: Googling, it seems that Hofstadter's response is along the lines of Haugeland. That by describing the translator as a man, we are improperly being asked to identify with him, when in the actual metaphor, the man is only an implementer in a larger system. The larger system actually does understand Chinese. So the claim is that the Chinese Room thought exercise is actually a fallacy.
Hofstadter provides a detailed response to the Chinese Room thought-experiment in "Le Ton Beau de Marot".
See, that's the point: as incredibly awesome and useful as the anti-gravity elevator might be, mankind can't wait around for someone to invent it just to raise stuff or travel in the vertical dimension. And hence all our modern AI systems (including Google, Siri, robots, warehouse management systems, etc.) are powered by this approach.
So should we scrap stairs and elevators in pursuit of anti-gravity? Certainly not; we NEED them right NOW. But does this mean we should stop dreaming about, and working towards, anti-gravity? HELL NO!! We need that too.
And hence, as much as I LOVE Hofstadter (I have had the same approach to AI ever since I was a kid), I still have a very PROFOUND respect for modern approaches, because they help me create some functionally amazing software.
If anything, the human mind seems to me to be a particular algorithm that is flexible, but trades that flexibility for capability in certain problem areas. Using a transportation metaphor, it's like walking versus air travel. Walking is incredibly flexible when it comes to where you can go, but air travel is by far the optimal route to get from coast-to-coast, although you are limited to travelling between airstrips. I feel like focusing on the human brain as the "true" intelligence is like claiming that walking is the only true transportation, instead of focusing on optimal routes for each problem.
In short, the camps have different goals. One is trying to solve problems optimally. The other is trying to understand what intelligence means, biologically. AI as a sub-discipline of Engineering, vs. AI as a sub-discipline of Cognitive Science, perhaps.
This would be a great scenario for a sci-fi novel / movie. An AI that threatens the human race not by achieving sentience and attempting world domination, but by achieving sentience and playing Sudoku instead of doing its job...
Hofstadter's lecture about analogy on YouTube: http://www.youtube.com/watch?v=n8m7lFQ3njk
Also some earlier work on the subject
I have also written a review of this very interesting book, "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking".
I cannot recommend "Creative Analogies" more. I have purchased no less than four copies (two for myself; two for others, including K. Barry Sharpless, who once made a remark about AI that was reminiscent of some of the ideas in CA) over the years. It's even better than "Surfaces".
When I was in college (and GOFAI was still alive) GOFAI researchers themselves portrayed him as very much an outsider.
The better solution is to come up with creative experiments (with human subjects) that can arbitrate between competing implementations. With modern technologies like virtual reality, you can create almost any counterfactual situation and use that to decide between theories that make the same predictions "in the real world".
I was under the impression that Wilhelm Wundt was the father of psychology.