

Apple's Siri wrong 38 percent of the time in test - unstoppableted
http://news.cnet.com/8301-17852_3-57464337-71/apples-siri-wrong-38-percent-of-the-time-in-test/

======
mkaltenecker
If I understand that article correctly, the headline and lead paragraph must
have been written by someone who did not really understand the article body
and who had a distinctly different opinion than its author.

First, the article body portrays the 62% success rate in a generally positive
or neutral light, while the headline and lead paragraph portray the 38% failure
rate in a completely negative light.

Second, the comparison seems to have been with normal Google search, i.e.
without any voice recognition, something the lead paragraph gets wrong.

So this seems to be the data for Siri:

- Siri understands 89% of all queries in quiet environments

- Siri understands 83% of all queries in noisy environments

- Siri answers accurately 68% of all queries in quiet environments

- Siri answers accurately 62% of all queries in noisy environments

That’s Siri. Now on to Google:

- Google understands 100% of all queries (since they are entered manually
with a keyboard)

- Google answers accurately 86% of all queries

This test pitted Siri against Google, asking how good a Google replacement
Siri can be.

Fortune’s article is much clearer:
http://tech.fortune.cnn.com/2012/06/29/minneapolis-street-test-google-gets-a-b-apples-siri-gets-a-d/

The whole method, however, seems iffy to me. I would like to know much more
about which questions were asked and how they were selected in the first
place. And even if you can find a good sample of questions, I’m not sure how
helpful this comparison is.

