That is something I can understand - maybe there are development reasons why second language voice recognition is not possible. However, every time I update my OS, Cortana is added to my main screen again, perhaps to remind me of the wonderful times we could have together if only I changed my phone's main language.
It shouldn't be too hard to add an extra "if" somewhere in the installation process, right?
It baffles me that there are still so many services that try to 'automagically' intuit which language someone wants a service in, instead of just asking.
Google Play is a prime culprit, for example, showing a mixture of languages in Play Store content. See also subtitles, dubbed content, books…
Another one is Sony's PlayStation Store, where once you choose a country, the language (despite Sony having the assets available through their international presence) is permanently set and impossible to change.
My experience is that language/locale implementation for a big project is so complex that PMs like to handwave the language-selection part away. On a recent project I had to go back and ask, "What if an English-speaking user is traveling in Germany, do we show them the German-language site? What if a German-speaking user is traveling to France, do we show them English (since we didn't have an FR localization)?"
Confounding the issue is that many implementations combine locale with language, which can be a problem if you have locale-exclusive features. As a made-up example, maybe Cortana's contract for stock data specifies it can only be used in the US, and you're a US-based Spanish speaker. But Cortana assumes Spanish == Mexico, so if you use Spanish now stock quotes don't work. It's really attractive to just call those problems edge cases to be worked out in the next release.
(I think Etsy handles this really well, you can specify ship-to location, interface language, and currency separately. Although for some reason they auto-detected me, in the Bay Area, as UK, EN-GB, CAD which was strange. Obviously some work to be done.)
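The separation the parent comment praises can be sketched as three independent fields, with auto-detection used only to prefill them. A minimal sketch with hypothetical names, not Etsy's actual code:

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Ship-to location, interface language, and currency kept independent."""
    language: str = "en"   # UI language (BCP 47 tag)
    ship_to: str = "US"    # ISO 3166 country code
    currency: str = "USD"  # ISO 4217 code

# Auto-detection only prefills the fields; each one stays editable,
# so a Bay Area user mis-detected as GB/en-GB/CAD can fix them one by one.
prefs = UserPreferences(language="en-GB", ship_to="GB", currency="CAD")
prefs.ship_to = "US"
prefs.currency = "USD"
print(prefs)  # UserPreferences(language='en-GB', ship_to='US', currency='USD')
```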
How? Seriously, how? Assuming the browser's interface is in Chinese, the Accept-Language header should be set properly.
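For reference, the negotiation that header enables can be sketched as follows; this is a minimal parser that ignores wildcard and prefix-matching subtleties:

```python
def negotiate_language(accept_language: str, supported: list[str],
                       default: str = "en") -> str:
    """Pick the best supported language from an Accept-Language header value.

    Tags are ranked by their q-values (1.0 when omitted); the first ranked
    tag the site supports wins, otherwise the default is returned.
    """
    ranked = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        tag, _, q = piece.partition(";q=")
        try:
            weight = float(q) if q else 1.0
        except ValueError:
            weight = 0.0
        ranked.append((weight, tag.strip().lower()))
    supported_lower = {s.lower() for s in supported}
    for _, tag in sorted(ranked, key=lambda r: r[0], reverse=True):
        if tag in supported_lower:
            return tag
    return default

# A browser whose UI is set to Chinese typically sends something like this:
print(negotiate_language("zh-CN,zh;q=0.9,en;q=0.8", ["en", "zh-cn", "de"]))  # zh-cn
```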
This can be trickier than it sounds. For instance, selecting English from a Japanese UI should be "English" not 英語. Conversely, from English to select Japanese has to be 日本語 not "Japanese".
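One way to honor that rule is to label each entry with its endonym, keyed off the language code rather than the current UI language. A minimal sketch with an illustrative, incomplete table:

```python
# Each language is labeled in its own language (its endonym), never in the
# current UI language, so users stuck in the wrong language can still find
# their own. Illustrative subset only.
ENDONYMS = {
    "en": "English",
    "ja": "日本語",
    "es": "Español",
    "de": "Deutsch",
    "fr": "Français",
}

def language_menu(supported: list[str]) -> list[tuple[str, str]]:
    """Return (code, label) pairs for a language picker, falling back to
    the raw code when no endonym is on file."""
    return [(code, ENDONYMS.get(code, code)) for code in supported]

# From a Japanese UI, the English entry still reads "English":
for code, label in language_menu(["ja", "en"]):
    print(code, label)
```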
Many features and recognition roll out to English (US) first and remain unavailable to me for a long time. I'm in Canada. It's the exact same language, just with some extra "u"s scattered around in the written form.
Well, because they should and rightfully so. You shouldn't have to select a language every time you start an app. Instead it should use whatever language was set on your system. Anything else just isn't intuitive and would add more friction before a user could get anything meaningful done.
The parent comment was referring to Cortana only working with English, and thus assumes the user can't speak English because they have their system set to Spanish. That's an unfortunate edge case, but one that could be overcome with advanced settings should they want to deal with it. But again, the anecdote is an edge case and shouldn't be used to argue against localization best practices.
By all means, make a guess at what language to use, but always allow the user to choose in case you guess wrong.
The edge case of someone setting their device to a different language than the one they want to use in a specific application is not one of localization but of usability.
> By all means, make a guess at what language to use, but always allow the user to choose in case you guess wrong.
While I agree, I wouldn't call it a guess, as it's a setting the user chose when setting up their device.
Many people outside English-speaking countries know English well enough that they'd rather have a feature in English than not at all.
Also, many if not most applications and services that aren't billion-dollar businesses, or haven't had insane amounts of resources poured into them, have terrible localization. And even if the localization is good, it can be beneficial to use the English version.
English is the international language, it is taught to children all over the world, it is used in business, academia and so on, all over the world. You know that, everyone knows that, but sadly, to all-English dev teams, it is all too often an afterthought.
And that doesn't even take expats, travelers etc. into account, who might want to use their native language system-wide but need certain applications to use the local language, or vice versa.
Apple takes this approach with Siri, for example.
There's a simple explanation for that. Adding a UI may end up being much more involved than a somewhat complicated guessing infrastructure. And if that's fine for most users, an organization might not care enough to make it better.
They are damned if they do and damned if they don't. If they don't automatically intuit, users will call them lazy or declare it bad UX for not using the same language the phone is set to.
I've worked with several really smart programmers from the US during my career. However, I've noticed that their geographical knowledge of anything outside the US is extremely poor (a generalisation, I know).
I assume this is due to the education system. Could this be the reason why large global players like Facebook and Google have such a poor acceptance and understanding concerning language, location and locales? Simply because many of the programmers are American?
I often give the example that there are vastly more Catalan speakers in the world than those that speak Norwegian. However Catalan speakers are treated like second class citizens because most of them live in Spain, and programmers assume that Spain equals Spanish. And we are talking millions of people more. It isn't a small number.
I'm often reminded of the urban tale of the man who approached a well-known seller of washing-up liquid. He promised them a 25% increase in sales, and he'd receive 5% of the 25%. They agreed since they had nothing to lose, and he told them to make the hole in the squeezy bottle 25% bigger. People still squeezed the bottle for the same amount of time, and the genius went home happy.
So I say to Facebook and Google, "use the language of the user, as defined primarily in their browser language settings, but offer the user a choice to change it. I'd like 25% of the increased sales."
The user's location should never be used unless you have legal obligations that are tied to geography. In which case, don't show an ad, since they'll not only be ineffective, but they make your company appear incompetent to the end user as well. Especially when I've registered to use your service and selected a preferred language!
Unfortunately this is not always accurate. Chrome on my Mac for example uses "en-us" despite the fact that my machine is entirely configured to use British English (en-gb). For my own websites I'm forced to geoip the location to infer which subset of English to use - most annoying.
"Turn Left on A U T O B A H N TWO ONE FIVE". Thanks for spelling every character in english separately >_>
(proper would be "Turn left on A215", or even better, with German pronunciation for street names)
Take Xbox One. You cannot possibly set your location to say France and language to English. Nope. If you want your console in English, you HAVE to set your location to one of the English-speaking countries. It's really infuriating.
Google started getting this correct, and now they let me choose two languages, so I can either speak to it in Japanese or English.
Devs need to realize that there are disconnects between language and usage context for some people, and I'd think this will get worse, not better. Some apps (Amazon MP3 back in the Android 2.x era, last I tried -- I'm not sure whether they've fixed that now) went so far as to say the service was not available in your country, based solely on the choice of language.
We sound the same.
Irrespective, Microsoft has really been impressive lately. If I were in the market for a new laptop I would be strongly considering a Surface Book - I imagine Cortana on my phone would be a great addition if I had one.
I found the systems to be pretty comparable. Google Now still creeps me out a bit because it assumes I want it to do many things without asking. This may be a product of me not customizing the options, but the fact that it has pinged me while I was sleeping to tell me I was late for a flight in a different country that a family member was taking was irritating. I didn't have that kind of issue with Cortana because it seemed to only do what I directly requested of it.
The voice recognition and speed seemed a little better than Google's and Siri's at the time too. During lunch, we frequently had faceoffs between digital assistants, and Cortana won more than its share. It also seemed better at not just showing search results for a question but actually answering it like a human would. I've even found that its song recognition works faster than Shazam, although it has a higher rate of returning no results.
From my experience, unless you are in the MS ecosystem, there is probably little reason to switch from your native assistant. The one exception is if you find the data policies or certain features of your current assistant too intrusive.
Definitely a convenient feature, though I don't know if it has all the knowledge search stuff Siri has, or any of the random pointless gimmicks that iPhone users seem to mainly coo over.
I hope we drive creators like this out of business. You don't get to make decisions like that about the user and if you do you should be punished in your wallet and with public denunciation. This is the reason RMS was and is right about GPL.
I am increasingly convinced that people need to start open-sourcing their life now before it's too late and they get too locked into and used to the proprietary walled garden ecosystems.
I think if you really took the time to analyze this situation, an honest conclusion would be different than this. Personally I think the future model is going to be FOSS with an upfront cost where you get the source when you pay, or FOSS where you pay for service and support. FOSS is the future either way. I would like to emphasize, since you seem to misunderstand some fundamental principles, that the F in FOSS stands for free as in speech, not as in beer. Of course us Americans value free speech more than most, so it could also be a cultural difference.
I like it. In particular, its recording, geofencing, and surfacing of reminders is just MUCH more reliable than Google Now's, which was previously my go-to for such tasks.
Either when I've got the app open/focused, or even when the app isn't running?
Another interesting point, Cortana is itself a CLIENT of open APIs you can build on as well.
It's a pretty interesting concept. It means it's really easy to knit together a user's services. One of the most fascinating concepts is the notion, exposed by MBrace, of "just in time" and "user-specific" clusters. You can use a user's Azure environment to do distributed computation on their behalf (after getting them to agree to charges, of course) without exporting their data to a generalized cloud infrastructure.
This has a lot of potential for giving the user the best of both worlds. It's a hell of a lot more approachable than a DIY approach like Sovereign.
Bought an Nvidia Shield TV recently and while voice search is really good, it would be awesome to have wider voice control.
Maybe the Cortana servers are overloaded, but none of my questions are even being registered. All that happens is the app keeps showing recommended news articles.
Finally got it to recognize a weather question. But I have headphones plugged in and somehow the app plays through the iPhone speakers instead of my headphones. I didn't even think that was possible.
Even speaking loudly and succinctly, the voice recognition only catches parts of a sentence or starts searching after only getting a single word like "what" or "who".
Gonna pass on this for now.
After going through a frustrating sign-up with many screens and processes, I was able to use Cortana. That's when I realized that if it's not built into the phone, it's going to be hard to delight users or compete with each device's built-in assistant. I use Siri daily, and she is useful when driving for controlling what Apple Music plays and for other quick requests/actions.
I'm looking forward to trying Facebook's assistant. Hopefully it gives Siri a run for its money and more!
Because otherwise such a comparison is pointless.
(I'll get my coat.)
PlaysForSure, the original Surface, Silverlight... I could go on. MS has no stomach for long-term commitment to its non-core products, and as such I'm always counting down the days till their latest innovation/copyvation gets the ax.
But really, what do you want a company to do when it has a failing product that isn't getting traction? It's unreasonable to keep it on life support for years for absolutely no reason whatsoever.
Edit: Also, Silverlight was updated yesterday to address https://technet.microsoft.com/en-us/library/security/ms15-12...
I also had to create a Microsoft account in order to check it out for the 5 minutes I did so before it was summarily uninstalled.
Or you can keep squatting on Windows 7 until it EOLs and use a desktop OS that manages to be even more dated than Mac OS X.
The fact that I have to write a hosts file to do that makes it rather irritating, but yes, with a local account, the appropriate hosts file, and all the features turned off, it sorta works.
The concern is Windows Update will turn it back on as it has with other things in the past.
And Ubuntu actually lets you shut it off relatively easily fyi.
Cortana integrates pretty deeply into the OS for that to happen. Apple/Google would have to allow it, and THAT'S not happening :)
Regarding integration: what is integrating so deeply that it needs special handling? Google Now runs entirely on public APIs.
- Device & app history
- WiFi connection information
- Bluetooth connection information
- Device ID & call information