2. Does it cache answers or does it try to "learn" and modify/improve the response each time?
2. Watson can learn from feedback (i.e., a user grading the quality of an answer).
Until Watson is available in Bluemix, what is the best way to get access to the Q&A API for exploration?
There is no link on the webpage pointing to the access request; I had to find it in the comments.
There are no example questions other than that Michael Jackson example, and that example is not a question. Why does it reply with Michael Jackson as an answer? There is no explanation.
There should be a web access point for potential developers to play with the API. I still have no idea what questions I can ask and what answers I can expect.
I wrote a blog post about my experiences running a Hello World app using the Heroku buildpack for Clojure which you can find here:
In addition, since Bluemix natively supports Java apps, you can export a Clojure app as either an uberjar or an uberwar and run it directly on Bluemix that way. So for example, if you create an uberwar of your Clojure webapp then you can push it to Bluemix by doing:
cf push your-app-name -p target/my-app-uberwar.war
I'm happy to help if you have any questions - contact details in my HN profile.
I don't know about Clojure, I would ask that question on their forum: https://developer.ibm.com/answers?community=bluemix
Not quite what I expected. Does this mean the developers provide the data?
This is buried in the docs as a comment on this page: https://developer.ibm.com/watson/docs/developing-watson-apis...
No real support for 'playing around' with the API. Bummer.
Just went through the application process linked above. Be prepared to give info about yourself and your company and an explanation of why you want access to the Watson API, as well as what type of information you'll be working with. I stated 'just want to play around with the API'. We'll see how they react to that.
OTOH, I get the impression that Azure specifically, as a PaaS/IaaS, is quite a bit less so, at least compared to AWS.
I wouldn't choose Azure myself, and I would actively recommend against choosing it to others from what I have seen (unless you are building a solution on .NET/Windows, perhaps). I can't imagine that I'm alone.
I think this might be useful if Watson were fed a medical database. Otherwise I don't see any need for it; is there any?
edit: Watson as a legal consultant would be great. There might be a product in that, not as a replacement for a lawyer but more as a guide/search tool.
But it would be interesting to see a real application working; an online retailer, for example.
The last 20% is the hardest - and it's why Watson is so impressive (even though even Watson is probably only at 90%)
Watson could replace lawyers or doctors for people who equate Google searches with legal or medical advice. Think LegalZoom and WebMD... It absolutely seems like it could be an entertaining way for a non-lawyer or non-doctor to explore a law or medical library. The majority of my time spent with lawyers has been discussing my issue until it could be distilled down to a couple of concise legal questions. I bought a short-sale house and the seller demanded that I put a clause in the contract saying his bank couldn't issue him an i9... I have no authority over tax laws, but I also didn't want any liability or an invalid contract, nor to willingly build a bogus one. There was some real language subtlety to it all, and I didn't even know the questions to ask.
Same with doctors: pain is relative, strong pains turn lesser pains into mild discomfort, and people are insanely good at ignoring and normalizing pains away. Do most patients even know what to ask or describe?
Don't get me wrong, I'd love to have a lawyer and a doctor on my smartphone all day, every day, but it still seems a ways off. Watson really seems like a tool that cuts your legal fees because your lawyer's research time drops 90% or something. (Or rather, he makes 90% more profit from you...)
Human doctors need sleep. They get tired. They get old. You'll still need innovation in the medical field, but let's not kid ourselves that we need a gourmet chef in every McDonald's.
me: i have pain in my side
dr: 25%: something you ate 22%: appendicitis, 18%: kidney stone...
me: am i allowed to...
lawyer: 42%: Maybe, 38%: Yes, 20%: No
No offense to doctors/lawyers meant here; all human brains suck at that.
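The confidence-ranked answer style in the dialogue above can be mocked up in a few lines (a toy Python sketch; the conditions and percentages are invented for illustration, not real Watson output):

```python
# Toy illustration: rank hard-coded candidate answers by confidence
# and render them in the "dr: 25%: ..." style from the dialogue above.

def rank_answers(candidates):
    """Return (answer, confidence) pairs sorted by descending confidence."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)

def format_answers(candidates):
    """Render ranked candidates as '25%: something you ate, 22%: ...'."""
    ranked = rank_answers(candidates)
    return ", ".join(f"{round(conf * 100)}%: {answer}" for answer, conf in ranked)

# Made-up example data, echoing the dialogue above.
symptoms = [
    ("something you ate", 0.25),
    ("appendicitis", 0.22),
    ("kidney stone", 0.18),
]

print(format_answers(symptoms))
# 25%: something you ate, 22%: appendicitis, 18%: kidney stone
```

The point of the sketch is only the output shape: a list of hypotheses with confidences rather than a single verdict, which is exactly what the next comment objects to.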
The confidences here provide misinformation. This is more harmful than no information.
The doctor's job is to provide me with as much information about the objective criteria of my physical condition as possible. However, when it comes to making choices about my treatment, say in the case of accepting or rejecting an experimental drug with some potentially nasty side effects, it should be entirely my own value judgement what to do with said information.
Watson might be able to partially replace some specialists that primary care physicians use for consultations. When PCPs are unsure about a diagnosis or proper plan of care, they will often consult with a specialist for advice via phone or e-mail. So in that case the PCP has already gathered at least some preliminary data and could feed it to a computer. But even for that use case, Watson won't be able to provide the same level of back-and-forth interaction that's often necessary to achieve the correct result.
Read the entire comment.
What is holding back the killer apps for answer/computation engines?
1. "Who was the 12th president?" - Zachary Taylor
2. "What color wine is cabernet sauvignon?" - Red
3. "Is a ferret a rodent?" - The ferret is the domesticated member of the Order Carnivora, Family Mustelidae and Genus Mustela. A common misconception is that ferrets are rodents.
The real challenge is answering niche questions:
1. What size are the OEM rear wheels of a Honda S2000?
2. How can I fix MySQL error 1064?
3. How do I remove wine from a macbook?
These types of questions aren't answerable by a simple mining of Wikipedia or Encyclopedic knowledge. They represent niches within our society (S2000 owners, programmers, people who spilled wine on their macbooks). Google provides excellent links to pages that contain answers to these questions, but it cannot deduce a single answer or common response. This is why sites like Answers.com, Yahoo! Answers, StackExchange, etc. can flourish, but it's also why an NLP question and answer system is very difficult.
I've been working on a system to mine existing responses to questions - http://gotoanswer.stanford.edu - I only have a small subset of programming-related questions (~10M), but you can get an idea of what I'm trying to do by searching for "How do I remove wine from a macbook?" You'll see that there are results both for removing wine (the liquid) and for WINE (the Windows non-emulator).
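The wine/WINE collision above is easy to reproduce with naive retrieval. A toy Python sketch (the two documents and the scoring are invented for illustration) shows why bag-of-words keyword overlap alone cannot pick a single answer - both senses of "wine" match the same query:

```python
# Toy bag-of-words retrieval: both senses of "wine" score on the same query,
# illustrating why keyword overlap alone can't deduce one correct answer.

corpus = {
    "stain": "how to remove a red wine stain from a macbook keyboard",
    "emulator": "how to uninstall the wine windows compatibility layer on a macbook",
}

def overlap_score(query, doc):
    """Count distinct query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

query = "how do i remove wine from a macbook"
scores = {name: overlap_score(query, text) for name, text in corpus.items()}
print(scores)  # both documents match several query terms
```

A real system has to disambiguate the query's sense before it can even decide which community's answers to mine, which is the hard NLP part the comment above is pointing at.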
Anyway - I'm really interested in this area. I have a Q/A system built that can answer (some of) the broad-type questions you mention.
I think grouping Google/Wolfram/Watson together misses that each has their strengths and weaknesses, and that they take dramatically different approaches.
Google traditionally relies on ranking information it finds to answer questions (though the whole knowledge graph thing is moving it closer to what Watson does).
Wolfram relies on manual curation of facts and probably the best "calculation" engine of the three.
Watson relies on manual curation of sources, and automatic extraction of facts and ranking of them.
I think it's quite interesting that Google is moving to a model more similar to Watson.
Anyway - I'd love to hear about your approach and what you are doing. My contact is in my profile.
It may handle different queries with different attributes differently, such as focusing on certain portions of its corpus or changing what aspects of its search results are more heavily weighed.
A query identified as a factoid might be researched and judged very differently than something a bit more nebulous, such as a comparison, or something with more specificity like the examples you listed.
Admittedly, I am basing quite a bit on one example response given in their documentation, but it is an intriguing clue as to how Watson will handle that aspect of deciding which information to rely on.
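The routing step being speculated about above could look something like this (a purely hypothetical Python sketch - the categories and heuristics are my own invention, not Watson's actual pipeline, which isn't public in this detail):

```python
# Hypothetical sketch: tag a query as "factoid", "comparison", or "open",
# so a downstream pipeline could weight its corpus and evidence differently
# per query type. The rules below are invented for illustration only.

def classify_query(query):
    """Crude keyword/wh-word heuristic for query type."""
    q = query.lower()
    if any(marker in q for marker in (" vs ", "better than", "compare")):
        return "comparison"
    if q.split(None, 1)[0] in ("who", "when", "where", "what"):
        return "factoid"
    return "open"

print(classify_query("Who was the 12th president?"))           # factoid
print(classify_query("Is MySQL better than PostgreSQL?"))      # comparison
print(classify_query("How do I remove wine from a macbook?"))  # open
```

Even this crude version shows the idea: a "factoid" query can be answered from encyclopedic sources, while an "open" how-to query needs very different evidence, which matches the distinction drawn in the comments above.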
The Wikipedia article says it well:
A factoid is a questionable or spurious (unverified, false, or fabricated) statement presented as a fact, but without supporting evidence, although the term can have conflicting meanings.
It seems very useful for this kind of business.
If you read the documentation, you will see that preparing training data and questions is fairly straightforward.
I don't see an API for feeding it information.
And I thought, "...close enough - Watson could answer questions about Emma".