This is what causes my love-hate relationship with Siri: when it works, it's a fantastic experience and gets little details like this spot-on; when it doesn't, it's off by a mile. More frustratingly, it doesn't seem to improve much between major iOS releases despite being mostly a thin client to Apple's services.
To be fair, I generally prefer obvious failure to quietly doing the wrong thing (which is probably what would have happened here), but even really simple commands like "take me home" only seem to work as expected about half the time.
(I'm ignoring situations where the voice recognition fails outright, since that's a totally different problem - this just relates to the handling of correctly interpreted commands.)
Like many others, I wonder what Apple's QA and user feedback processes look like for Siri. Unlike Maps, there's no way (AFAIK) to report a crappy Siri response, so while I'm sure they have stats on low-confidence speech-to-text results, I'm not sure how they determine "you heard me right, but you did the wrong thing" or "doing X instead of Y would have been a lot more useful". As such, I assume most of it is internal QA, and Apple's secrecy around new features (fortunately Siri no longer qualifies as such) definitely hurts QA that requires a lot of real-world usage.
If you're aware that Siri is "mostly a thin client to Apple's services", why would you expect it to update with iOS releases? If it's a thin client, it can be updated (or not updated) at any time on the server side.
That's my point - I expect it to get incrementally better between major releases[1]. That doesn't seem to happen - at least not in the response handling. Voice recognition seems to improve slightly, but that's the nature of accumulating lots of training data.
[1] For new functionality that requires resources installed on the phone (the new sports scorecard feature, for example), I understand why that only arrives with new releases. But when I say "take me home" and it only starts navigation sometimes, it clearly has the ability to start navigation based on something, so I expect that to happen more reliably.
If you are in Canada and ask Siri for something like "where's the nearest coffee shop", in iOS 5 you get "I don't support that in Canada", but in iOS 6 you get the expected results.
I'd assume the device model + iOS version + device ID are sent in the request, and Apple maps those to the corresponding Siri version/database/API. There would be no technical reason why iOS 5 can't see the same POIs as 6, only whatever business policy Apple has decided to implement, e.g. freezing updates to the iOS 5 database.
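Purely as a speculative sketch (every name, type, and URL here is hypothetical - nothing comes from Apple's actual API), that kind of per-version routing could be as simple as:

    // Hypothetical sketch of per-version routing on the server side.
    // All fields and endpoints are invented for illustration.
    struct SiriRequest {
        let deviceModel: String   // e.g. "iPhone4,1"
        let osVersion: Int        // major iOS version: 5, 6, ...
        let deviceID: String
        let utterance: String
    }

    // Route each client generation to its own backend instance/database,
    // so the iOS 5 database can be frozen while iOS 6 keeps getting updates.
    func backendEndpoint(for request: SiriRequest) -> String {
        switch request.osVersion {
        case ..<6:
            return "https://siri-legacy.example.com/v1"  // frozen POI database
        default:
            return "https://siri.example.com/v2"         // current POI database
        }
    }

Under that assumption, keeping iOS 5 on the old POI data is a one-line policy choice on the server, not a limitation of the older clients.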
I'm sure the back-end processing is the same, but because iOS 6 supports (or will support) new commands, they're running separate instances and databases, which results in varying update schedules.
As long as they don't restrict based on device model (only iOS version), I'm happy with that.