There is an arms race here between Bing and Google - we can thank Microsoft for pressuring Google into making this happen sooner than they otherwise would have.
(1) If new theories of AI proved to be commercially viable, you would expect that people would continue to develop new theories in the pursuit of money and fame, not "abandon" them because the results are "good enough."
(2) There are still lots of smart people working on AI in academia today that were around in the 80s. (And the 70s. And the 60s.) They remain keenly aware of the breakthroughs that occurred then, because they were the ones making them. And they're familiar with the technology of today, because they're using it in their new research. So if any big rocks were left unturned prematurely, I'm sure they're being examined.
Still no idea if that's true of AI, but I found a couple of interesting cites:
Patrick Winston, director of MIT's Artificial Intelligence Laboratory from 1972 to 1997, echoed Minsky. "Many people would protest the view that there's been no progress, but I don't think anyone would protest that there could have been more progress in the past 20 years. What went wrong went wrong in the '80s."
Winston blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the "mechanistic balkanization" of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms. "When you dedicate your conferences to mechanisms, there's a tendency to not work on fundamental problems, but rather [just] those problems that the mechanisms can deal with," said Winston.
They don't go into detail about how the early attempts to commercialize contributed to the problem. This site does, but seems less trustworthy:
In the early 1980s, dark clouds also settled over the MIT Artificial Intelligence Lab as it split into factions by initial attempts to commercialize Artificial Intelligence (AI). In fact, some of MIT's best White Hats left the AI Lab for high-paying jobs at start-up companies.
So it sounds like some smart people in academia in the 80s think that some stones were left unturned, or turned too slowly, and that part of the problem was a refocus on making money on existing discoveries. According to that AI winter link, the tech mostly wasn't ready for primetime yet, presumably making it even harder to raise funds for new research.
Of course I could be wrong, but I think there's a good chance that a lot of "Good Old AI" stuff is still valid, but that it was just too early for it then. Maybe it still is, time will tell.
Elon Musk and the electric car industry are one example.
Actually, electric cars were popular before gas cars were.
But I think your argument falls down when we inspect the merit of the word "popular".
Electric cars have never been mainstream.
Why? Because they are selling it without mentioning why they won't fail where every other attempt has failed. There are huge difficulties in this. Have they turned a corner in the research that changes something? I don't see it in what they've hinted at so far.
First, they are Google, and therefore possess huge quantities of data and the ability, courtesy of their uber MapReduce prowess and ultra-fast custom hardware, to make sense of it.
Second, they bought Metaweb (makers of Freebase) and with it some of the best semantic expertise out there. Toby Segaran is a brilliant dude. His O'Reilly book "Programming the Semantic Web" explains in 20 pages what most books take 150 pages to do: the concept of a URI based graph database and how it enables data to be merged from multiple sources and reasoned over with applications.
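The core idea that book explains - a URI-based graph database where data from multiple sources merges automatically - can be sketched in a few lines. This is an illustrative toy (all URIs and facts below are made up, and a real system would use an RDF library), but it shows why URIs matter: facts are (subject, predicate, object) triples, and merging two datasets is just set union, because shared URIs line up on their own.

```python
# Facts as (subject, predicate, object) triples; URIs are the join keys.
# All URIs and data here are invented for illustration.
source_a = {
    ("http://ex.org/person/marie_curie", "born", "1867"),
    ("http://ex.org/person/marie_curie", "field", "physics"),
}
source_b = {
    ("http://ex.org/person/marie_curie", "field", "chemistry"),
    ("http://ex.org/person/pierre_curie", "spouse",
     "http://ex.org/person/marie_curie"),
}

# Merging heterogeneous sources is just set union: shared URIs align.
graph = source_a | source_b

# Simple query: every field recorded for Marie Curie, across both sources.
fields = {o for (s, p, o) in graph
          if s == "http://ex.org/person/marie_curie" and p == "field"}
print(sorted(fields))  # ['chemistry', 'physics']
```

No schema negotiation was needed to combine the two sources; that is the "merged and reasoned over" property the book builds on.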
I only hope Google open-sources some of their research here for the rest of us.
But also, have you read any of the papers involved? Datalog is pretty simple. It's a restricted, forward chaining prolog. Once you know that, you can recreate most of it from that description alone.
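To give a flavor of "restricted, forward-chaining Prolog": a naive Datalog evaluator just applies its rules to the known facts over and over until nothing new appears (a fixed point). The rules and facts below are a made-up ancestor example, not from any of the papers, but the evaluation loop is the essential mechanism.

```python
# Toy naive forward-chaining Datalog evaluator (illustrative only).
# Facts are tuples: (relation, arg1, arg2).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def step(facts):
    new = set(facts)
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    for (rel, x, y) in facts:
        if rel == "parent":
            new.add(("ancestor", x, y))
    # Rule 2: ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == "ancestor" and r2 == "parent" and y == y2:
                new.add(("ancestor", x, z))
    return new

# Forward chaining: iterate to a fixed point.
while True:
    updated = step(facts)
    if updated == facts:  # no rule derived anything new
        break
    facts = updated

print(("ancestor", "alice", "carol") in facts)  # True
```

The "restricted" part is that Datalog forbids function symbols and unbounded terms, which is exactly what guarantees this loop terminates.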
I thought that, while flawed, Cpedia (from one of the Cuil founders) was a much more interesting push on this idea than Google's currently is.
There is a bunch of data that Google can use because it is made explicitly available. But many sources don't want that.
As an example, consider "book me flights for the cheapest route between Lisbon and Kiev".
It is a trivial thing to do, provided you can get airline data.
But you can't scrape Ryanair's website, because they deliberately put countermeasures in place (e.g. captchas) so you can't do that.
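To make the "trivial given the data" point concrete: once fares are in hand, finding the cheapest route is textbook shortest-path. The fares and airport codes below are invented; the hard part in practice is filling in this dictionary, not the algorithm.

```python
import heapq

# Hypothetical one-way fares (origin, destination) -> price in EUR.
# Getting real numbers like these is exactly what airlines block.
fares = {
    ("LIS", "MAD"): 40, ("MAD", "KBP"): 120,
    ("LIS", "STN"): 25, ("STN", "KBP"): 90,
}

def cheapest(origin, dest):
    """Plain Dijkstra over the fare graph."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, city, path = heapq.heappop(queue)
        if city == dest:
            return cost, path
        if city in seen:
            continue
        seen.add(city)
        for (a, b), price in fares.items():
            if a == city:
                heapq.heappush(queue, (cost + price, b, path + [b]))
    return None

print(cheapest("LIS", "KBP"))  # (115, ['LIS', 'STN', 'KBP'])
```

A dozen lines of search versus an adversarial data-access problem: that asymmetry is the whole point of the comment above.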
e.g. http://richard.cyganiak.de/2007/10/lod/
Imagine an integrated Siri with those kinds of capabilities. It doesn't have to be fully automatic. Letting a secretary do stuff also isn't fully automatic; (s)he's there to free your time for only the important decisions (signing agreements, clicking confirm after having seen the price...).
I guess my point is that if we get to the point where a bot can do generic requests without the aid of a human, captchas will probably not be able to stop it.
This "AI" is just scraping and replacing Wikipedia while serving ads.
I think the situation is closer to "true AI is the key to a usable, if situationally dependent, knowledge graph". Because the world doesn't have a single knowledge graph that you can learn and use in all situations. Certainly, you can find a lot of common instances where the average works, but once you're past that, you need the kind of understanding of language that present-day systems are far from having.
The logical way to overcome this via a "data first" brute force approach is to build personalized knowledge graphs of every potential customer. Which is in effect what every statistically sophisticated large business is attempting.
Both are valid, I'm just curious.
I like WolframAlpha.
Having done research in this field doesn't qualify you to have authoritative opinions on search or AI. I don't necessarily disagree with you on search, but I don't see why a dataset would be the key to AI. AI is a function, not a dataset.
I always try to mention this when people get overexcited about the semantic web. Most of the important stuff that we would like to automate, we also insist that a bot not be allowed to do. It's pretty schizophrenic, in my opinion.
As someone who has done research in the field, did you read the Gizmodo review of Siri?
The set of "knowledge graph" problems which is not, in fact, AI-complete, strikes me as much smaller than most doing research in the field would like us to think.
I hope you won't try to argue that "book me a room" etc. isn't AI-complete. There's an uncanny valley there, and it's deep. You can use Siri for lots of trivial tasks in which bizarre failures are hilarious rather than disastrous, but booking hotel rooms isn't in that set. She can be the best secretary in the world 9 times out of 10, or even 99 out of 100, but the other times she's an insane robot who wouldn't at all mind sending you to Cebu to get kidnapped by the MILF...
2001: Tim Berners-Lee publishes "The Semantic Web" in Scientific American Magazine
2008: Microsoft acquires Powerset
2009: Microsoft touts Bing as a "decision engine"
2010: Google acquires Metaweb
2012: Google introduces "Knowledge Graph"
2007+: work on machine operable Wikipedia (DBpedia, Semantic MediaWiki, Wikidata)
2010: Apple acquires Siri
2010: Facebook introduces Open Graph
2011: Google, Bing, and Yahoo agree on schema.org for indexing embedded structured data
I'm having trouble getting your point here. A schema can be thought of as metadata about the data to which it is applied; so "semantics of the schema" are also implicitly "semantics of the data" in a sense. I guess I'm missing some nuance about the point you're trying to make... would you care to elaborate?
The meaning of semantics (semantics of semantics?) can get philosophical real quick. Here I'm just rolling with the OP's notion of things, not strings.
They find customers who want something specific, businesses who will sell them that, and make them meet. And then they give the people who sell very good analytics that show exactly how much value they are getting, so they can bid each other up for the ads until Google captures almost all the value. During the housing boom, some mortgage-related keywords climbed into the mid three figures. That is, each and every time someone searched the matching keywords and clicked on an ad, several hundred dollars changed hands.
I see this as a positive example of search innovation and competition. More globally relevant information, not just a tighter filter bubble.
Marie got two, one in physics and one in chemistry, the former jointly with Pierre. This is notable because it makes her one of the very few people to receive Nobels in multiple categories.
Now let's hope they'll also have an API for all the extra semantic info they collect with search.
It would also try to learn new patterns from facts it already knew; e.g., if it noticed Jobs and Apple in a sentence, it could hypothesize that the sentence's pattern is about the leader of a company.
For the life of me I cannot remember the name of the project or even the university, hopefully someone else has heard of this.
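The bootstrapping idea described above can be sketched simply: use known (leader, company) pairs to induce textual patterns, then reapply those patterns to hypothesize new pairs. Everything below (the sentences, the pairs, the word-level matching) is invented for illustration and far cruder than any real system, but it shows the loop.

```python
import re

# Seed facts we "already know" (hypothetical).
known_pairs = {("Jobs", "Apple"), ("Gates", "Microsoft")}

sentences = [
    "Jobs founded Apple in 1976",
    "Gates founded Microsoft in 1975",
    "Page founded Google in 1998",
]

# Step 1: induce connector patterns from sentences containing a known pair
# (here just the single word between the two entities, e.g. "founded").
patterns = set()
for leader, company in known_pairs:
    for s in sentences:
        m = re.search(re.escape(leader) + r"\s+(\w+)\s+" + re.escape(company), s)
        if m:
            patterns.add(m.group(1))

# Step 2: reapply the induced patterns to hypothesize new pairs.
new_pairs = set()
for s in sentences:
    for p in patterns:
        m = re.search(r"(\w+)\s+" + re.escape(p) + r"\s+(\w+)", s)
        if m and (m.group(1), m.group(2)) not in known_pairs:
            new_pairs.add((m.group(1), m.group(2)))

print(new_pairs)  # {('Page', 'Google')}
```

The new hypotheses would then feed back in as seeds for the next round, which is where the interesting (and error-compounding) behavior of such systems comes from.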
IMO the linked video does a better job at introducing the feature than the article.
Non-programmers will likely not be poking around on Google's engineering blogs, and will pick up the Press Release that will have been pushed to the likes of CNN and the BBC.
This is the central "Official Google Blog", not an engineering blog. From the other articles on the blog (most of which are written by marketing and other non-engineering disciplines -- see the last lines), it's pretty clear that the intended audience is anyone who's interested in Google, not just programmers. I know lots of non-programmers who like to read these, especially since a lot of news sites link to and quote these articles.
On the other hand, I'm glad something is coming out of the metaweb acquisition.
Search felt good enough at some point. It no longer does, at least for me. I don't know if my expectations have become too high, or if search engines have become worse, though.
Maybe it's just that other knowledge areas don't put every little thing on the web to be indexed and easily searched, or maybe I just grok programming well enough to read between the lines and follow implications, I don't know.
Finding how/why the ignition switch on a gas stove could cause a (mild) electric shock. I read the first 3-4 pages of results, got one or two very low-quality forum postings, and that is all. Somebody in the world must have written about this problem, e.g. in a manual for gas repair engineers. Google didn't find it.
Where wild ducklings sleep at night. I found lots of articles about how to look after pet ducklings, which aren't relevant. I found one article about how wild adult ducks sleep at night, although it was in my view of dubious provenance. Nothing about wild ducklings. I spent only a few minutes searching as I was using my mobile phone (in a park), so had higher needs for finding the answer in the first few results than the above.
I suspect that people get different experiences of google according to: a) what kinds of knowledge they tend to search for, b) their skill at using it.
I increasingly find myself using Quora, and asking new questions on it, for the kinds of queries above.
Also, I think these types of queries (people, places, things) rarely trigger ads. Based on the example queries from the post (Taj Mahal, Marie Curie, Matt Groening) there are no ads at all.
Fortunately, white-hat SEO is very easy to describe without mentioning search engines at all: copy editing, fact checking, designing for accessibility, and so on are valuable skills regardless of what algorithms search engines happen to use for ranking today. Write content for your users and you don't have to worry about optimizing for search engines.
Find all blogs of motorcycle journeys within 50 km of my current position.
"Standing on the shoulders of giants"
Do they still use that slogan?
(I am baffled that someone would block google docs, you have my sympathy)
Google, I expected them to look right at the camera and talk to me (for example, like Matt Cutts here: http://www.youtube.com/watch?v=ofhwPC-5Ub4).
Never mind the fact that what attracted me to Google was its sober interface, its minimalist approach to presenting results from the web (that's also why I like Hacker News). Never mind that Google tries to fit more information per square inch for no good reason and sacrifices the readability of the "normal results"; after all, they are still a search engine.
But more worrying is that they are going to make assertions without citing sources. It's a very Orwellian approach to answers, and that's something we should grow out of. There's a reason why we need sources: because what's written is just one of the ways to see an event or a person.
And Google, really, your new Blogspot is awful, useless eye candy. At least from what's shown on the blog.
Your use of 'Orwellian' indicates you are just using buzzwords.
It's legitimate. The point is that showing answers without sources makes Google the de facto "arbiter of truth". When Google's database updates its version of truth, that now becomes what is (or what has always been). There is no indication of different perspectives, or any analysis for how its version of "truth" was derived. That is very Orwellian.
It's hard to use 'Orwellian' when the entity you're accusing is entirely dependent upon other sources and exercises no editorial control.
There's a lot of trust in Google with that statement. If this changes, how would you know? That's what's Orwellian about it.
It's not hard to imagine the results being silently tweaked by Google - not to say that they will do this, but it's a real danger, because it'd be very bad and hard to detect if they did do this at some point in the future, after we'd all gotten complacent and learned to implicitly trust the results.
Of course Google could use this for political gain or some other nefarious purpose, but they rely absolutely on user trust and so it would be an incredibly risky move.
Not to mention that looking at your watch or using bing or ddg or similar tools would show you the deception. It's just silly invoking Orwell over this I think.
No, they are not.
"Google’s mission is to organize the world’s information and make it universally accessible and useful."
So far, Google has done an awfully good job of hiding everything not immediately relevant, so I'm going to give them the benefit of the doubt on this one.
Sure, you still have to find the source of the "summary" to understand it and determine how they got that information so it will make sense to you. But it's better than just giving you the search results which already have the summary! Right?
And yeah, the search results already have links to all the different kinds of [taj mahal]. Now you can filter down the top pages to a specific type instead of clicking on the link that matches what you want! It's so easy it takes an extra click!
You just don't understand, man. Google knows what you want, even if it isn't what you want. And you'll take it and like it. Psht... clutter.