Yes, we can do a lot that we couldn't do 30 years ago. However, this only falls under the banner of "AI" because AI has been redefined from the promises of 50 years ago [1], when the goal of AI was to build "human-level" intelligent agents. In the 1955 proposal for the "Dartmouth Summer Research Project on Artificial Intelligence" [1], revered AI researchers Marvin Minsky, John McCarthy (inventor of Lisp), and Claude Shannon (founder of information theory) claimed that "a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer". They failed at this lofty goal of solving the AI problem over a summer, and over time the field of AI was redefined to mean narrow, fancy mathematical tricks for solving domain-specific problems. Most "AI" is not intelligent at all; instead, these are approaches for searching some search space with a non-general-purpose tool, such as genetic algorithms, neural networks, or first-order-logic theorem provers. A "true AI" would discover regularities [2], or patterns, in any search space, and exploit these patterns to incrementally improve its ability to navigate the search space. Examples of regularities in the world are that navigation is often best performed by sticking to the regularly appearing feature we call a "road" instead of attempting to climb over buildings, and that navigation can be effectively guided by attempting to minimize the Euclidean distance between the current location and the goal location. Instead, most of the currently available tools essentially perform a stochastic search through the state space, possibly guided by some heuristics.
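To make the Euclidean-distance example concrete, here is a minimal sketch (names and grid setup are mine, purely illustrative) of heuristic-guided search: A* on a grid, where the Euclidean distance to the goal is exactly the kind of hand-picked regularity a human supplies and a "true AI" would have to discover for itself.

```python
import heapq
import math

def astar(start, goal, walls, width, height):
    """A* on a 4-connected grid, guided by the Euclidean-distance
    heuristic -- a regularity of spatial navigation supplied by hand."""
    def h(p):
        # straight-line distance to the goal
        return math.hypot(p[0] - goal[0], p[1] - goal[1])

    # priority queue of (estimated total cost, cost so far, position, path)
    frontier = [(h(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls):
                new_cost = cost + 1
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(
                        frontier,
                        (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None  # no route exists

path = astar((0, 0), (4, 4), walls={(2, 2)}, width=5, height=5)
```

Swap the heuristic for a constant and the same code degrades into uninformed (breadth-first-like) search; the point of the post is that the heuristic itself is currently chosen by the programmer, not learned.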
While a lot of progress is being made in the redefined and inaccurately named field of AI, object recognition and speech recognition are still narrow domains. They are not using general-purpose approaches, but instead tweaking and tuning specific algorithms for these areas. In reality, I am not championing "human-level AI", as I think it would be very difficult to duplicate a human's strengths and deficiencies [3] accurately. Instead, I am saying that we should be encouraging research that produces a general intelligence: software that can recognize and exploit regularities (patterns) in the state spaces of many problem domains, and apply the abstracted knowledge gained from earlier experiences to new domains and problems. Such software would have a tremendous impact on society, assisting us in solving and analyzing many problems and likely leading to an "intelligence explosion" after crossing some threshold at which the software is able to recursively improve itself. [4]
You asked "just about every specific useful task humans do better than computers, there are a few dozen AI researchers working on it. Why is that not a reasonable approach to the problem?" How do we take all these domain-specific solutions and get a cohesive general intelligence out of them? I see these domain-specific algorithms as possibly useful to a general intelligence as an optimization, shifting some of the load off of the more general-purpose algorithms, but I don't see how throwing a bunch of face-recognition, path-finding, neural-network, and genetic algorithms into a box would automatically cause a general intelligence to pop out. These research areas seem mostly tangential or supportive to genuine Artificial General Intelligence research.
I think the problem is still that there isn't really a clear definition of general intelligence or how it should be embodied on a computer. I agree that putting a bunch of different algorithms for different tasks in a box won't create something people would be willing to call general intelligence, but what do you expect a general purpose intelligence algorithm to do exactly? That is, if I handed you one, how would you test it? What are the inputs and outputs?
Don't get me wrong, I'd love to create a general artificial intelligence. It's just not clear to me what that means. In the absence of a good definition of the general problem, I think it's perfectly reasonable to pick an extremely hard specific problem (like visual object recognition), and focus on that, under the assumption that in order to completely solve this specific problem you will end up solving the (undefined) general problem.
"A "true AI" would discover regularities [2], or patterns, in any search space, and exploit these patterns to incrementally improve its ability to navigate the search space." -- Actually this sounds like Eurisko, a heuristic search program that modifies its own heuristics: http://en.wikipedia.org/wiki/Eurisko
Also, it's worth pointing out that the different techniques used in different subfields of AI are not always that different. There has been some work on creating very general, powerful frameworks that explain a lot of specific algorithms, like Markov logic networks ( http://en.wikipedia.org/wiki/Markov_logic_network ), which are sufficiently powerful that they subsume all of logic and statistical graphical models (which is to say maybe 90% of all recent machine learning algorithms). With these sorts of general frameworks, new algorithms and approaches in one subfield get ported over to other subfields, and there is more interaction than you might think.
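To give a feel for the Markov logic network idea (weighted first-order formulas defining a probability distribution over possible worlds), here is a toy brute-force sketch. The domain, formula, and weight are made up for illustration, and real MLN systems do not enumerate worlds like this; this just shows the semantics: a world's unnormalized weight is exp(weight × number of true groundings).

```python
import itertools
import math

people = ["Anna", "Bob"]

# Ground atoms: Smokes(p) and Cancer(p) for each person.
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

# One weighted first-order formula: Smokes(x) => Cancer(x), weight 1.5.
w = 1.5

def n_true_groundings(world):
    # count the people for whom Smokes(x) => Cancer(x) holds
    return sum(1 for p in people
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

# Enumerate every possible world and its unnormalized weight
# exp(w * n), per the MLN log-linear semantics.
worlds = []
for bits in itertools.product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    worlds.append((world, math.exp(w * n_true_groundings(world))))

Z = sum(weight for _, weight in worlds)  # partition function

def prob(event):
    """Probability that `event` (a predicate on worlds) holds."""
    return sum(weight for world, weight in worlds if event(world)) / Z

# Conditional query: P(Cancer(Anna) | Smokes(Anna)).
p = (prob(lambda wd: wd[("Cancer", "Anna")] and wd[("Smokes", "Anna")])
     / prob(lambda wd: wd[("Smokes", "Anna")]))
```

With a single formula the answer reduces to the logistic function of the weight, P = e^1.5 / (1 + e^1.5) ≈ 0.82, which is how the weight trades off hard logic (weight → ∞) against pure noise (weight = 0).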
Shane Legg and Marcus Hutter of IDSIA did some recent work on a metric for machine intelligence. ( http://www.vetta.org/documents/ui_benelearn.pdf ) Marcus Hutter's PhD thesis produced a theory of "Universal Artificial Intelligence" called "AIXI" based on Solomonoff induction. Sadly, AIXI is incomputable, and the time and space bounded version "AIXItl" is still impractical. http://www.hutter1.net/ai/aixigentle.htm
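As I recall, the intelligence measure in that Legg and Hutter paper has roughly this form (notation from memory, so check the paper for the exact statement):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where \pi is the agent, E is the set of computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the agent's expected total reward in \mu; the 2^{-K(\mu)} term is the Occam-style prior that weights simple environments most heavily.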
But you're right: we don't know much about intelligence or how it should be embodied on a computer. This is why we should encourage some really smart people to focus on the problem! I sincerely think this is an area that would yield to research. It might take a long time and a lot of brainpower, though...
Lenat's Eurisko is pretty cool. I don't know much about it, but it doesn't sound as general as what is needed.
1. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
2. http://en.wikipedia.org/wiki/Kolmogorov_complexity
3. http://www.singinst.org/Biases.pdf
4. http://en.wikipedia.org/wiki/Technological_singularity