Hacker News
A professor built a chatbot to be his teaching assistant (washingtonpost.com)
382 points by dlgeek on May 13, 2016 | hide | past | favorite | 115 comments



This is definitely an impressive result, but I'm pretty sure it was made easier by the opposite pattern.

I've had a couple of TAs who I strongly suspected of being chatbots, even when I was talking to them in person. I'm still not 100% convinced they were human.


Yeah, there exist humans who fail the Turing test.

Another thing is foreign students and teachers who, even if otherwise competent, can quickly get you used to language comparable with machine output. I think I may have been guilty of that at one time :)


You know, this is very interesting. I've often wondered whether the other humans I'm interacting with were in fact sapient.


Awareness and intelligence might very well be completely orthogonal phenomena.


perhaps they were poorly programmed philosophical zombies


More human than human is our motto ...

Next step will be to replace the prof, and then have the students replaced by surprisingly attentive interrogative expert systems, and the cycle will be complete.


Reminds me of a scene in the movie "Real Genius". The class was empty of people; it was just tape recorders listening to another tape recorder set up by the professor.


Absolutely. And, once that's set up, our banking sector and government should profit from hefty educational loans made to all those student bots.


Yep, and it's not like you want to make such an accusation when you're in that position.

In fact, that's one way to pass the Turing test: any judge that suggests a subject is a bot experiences a permanent loss of social status, regardless of correctness!


I was a student in this class, and had at least one question on the forum answered (correctly) by Jill. I can see this technology being hugely useful for teachers who conduct large lecture classes in any subject on a regular basis.

That said, I'd be even more excited if Jill had the ability to synthesize new answers to questions through some type of case-based reasoning. This would require Jill receiving feedback on "her" answers, which might mean the students would have to know "who" "she" was in advance. (Sorry, got lost in the quotes.) Right now, Jill is essentially an automated FAQ-retrieval bot.


If you had some reddit/HN-style forum where the questions get replies and votes, that would be valuable information for what counts as a "good answer".


Yeah I can see that going over well in Physics McPhysicsFace 101


If I recall correctly, https://piazza.com/ does something similar to this and I believe they also have voting on a "best" answer as well.


Do you know what kind of classification method it is using at the moment?


Not exactly. I know it is built on an academic flavor of Watson, though.


For anyone interested, the class, which covers interactive intelligence and knowledge based AI, is freely available at https://www.udacity.com/course/knowledge-based-ai-cognitive-... . It's a great starting point if you're looking to automate the sort of human intelligence tasks that don't lend themselves well to the traditional searching/planning/proving/minimax/retrieval route that underpins most AI.

I took it before they rolled out the chat bot TA, unfortunately.


Also took it before the chat bot... :(

If you are thinking about a masters in CS, you must check out the OMSCS program. The cost to value ratio is completely off the charts!

"total program cost of about $6,600 over five terms"


I took that course a year ago and thoroughly enjoyed it. Writing an AI agent to take IQ tests was a rewarding experience.

I'm sure I asked/answered more than a dozen questions during that semester within the Piazza platform and likely unknowingly contributed to Jill Watson's training.


Thanks for the link! Question, though: does anybody have any good resources for doing something similar to this in JavaScript? I know a small amount of Java, no Python, and want to brush up my JS skills anyway.


IIRC there was nothing specific to Java or Python, except for using OpenCV for the last exercise to visually solve Raven's matrices. You could do the same with Node.js, I'd presume. It might even be easier, considering most of the code involved manipulating dynamic data structures.


This is the key:

> The system is only allowed to answer questions if it calculates that it is 97 percent or more confident in its answer.

Knowing when you don't know the answer is something that has been lacking from the AI I've seen. Of course, the problem is that not all AI has the option of deferring to a human.

Anyone know what percent of questions the AI answered, vs redirected?
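The 97 percent rule quoted above amounts to a one-line gate on the ranker's confidence. A minimal sketch of that idea (the function names, scores, and the answer/defer protocol here are invented for illustration; this is not Jill's actual code):

```python
# Hypothetical confidence-gated responder: the bot answers only when its
# top-ranked answer clears the threshold; otherwise it defers to a human.

CONFIDENCE_THRESHOLD = 0.97

def respond(question, ranked_answers):
    """ranked_answers: list of (answer_text, confidence) pairs,
    sorted by confidence, highest first."""
    if not ranked_answers:
        return ("defer", None)
    best_answer, confidence = ranked_answers[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("answer", best_answer)
    return ("defer", None)

# A near-exact FAQ match passes the gate; a fuzzy match is deferred.
print(respond("When is assignment 2 due?", [("It is due March 31.", 0.99)]))
print(respond("Why adiabatic?", [("See lecture 7.", 0.60)]))
```

The interesting design choice is in the second branch: a deferral is silent rather than a guess, which is exactly the "knowing when you don't know" property the parent comment describes.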


> Anyone know what percent of questions the AI answered, vs redirected?

That's actually a very good question. The article doesn't give many details other than: "There are many questions Jill can’t handle."

For comparison, here are the highlights of a 2014 study on the mobile assistants (Siri, Cortana, Google): http://searchengineland.com/google-now-beats-siri-cortana-di...

Google was able to offer "enhanced" answers (i.e. more than a web search) 58% of the time. Siri was at 29% and Cortana was at 20%. Siri improved some with iOS 9, but not enough to beat Google: http://www.recode.net/2015/9/20/11618696/how-intelligent-is-...

From the actual study: ... we took 3086 different queries and compared them across all three platforms. These were not random queries. In fact, they were picked because we felt they were likely to trigger a knowledge panel. ( https://www.stonetemple.com/great-knowledge-box-showdown/ )


> That's actually a very good question. The article doesn't give many details other than: "There are many questions Jill can’t handle."

I emailed and asked. If I get a reply I'll post it here.


It would be nice if Jill Watson could respond to your email.


I really appreciate the enhanced answers.. "OK Google" has become my go to tool for a lot of things... "OK Google... what is 5 ounces in grams", or the inverse. It's almost invaluable, and could definitely make certain classes of applications (such as a calorie tracking app) far easier to use.


So, I'm guessing right now Jill performs about as well as a human TA who is pretty new on the job.

Nothing wrong with that, of course.


> Knowing when you don't know the answer is something that has been lacking

Definitely, and I'm not sure why that space isn't getting more attention. For RL there's KWIK (knows what it knows) http://www.research.rutgers.edu/~lihong/pub/Li08Knows.pdf [pdf]. A large class of problems (the robo TA being one of them) lend themselves very well to AI with error bounds, for lack of a better term.


That constraint alone makes the AI approximately 92.3% wiser than most humans.


Anytime I see anything AI-related, it seems context is the key; otherwise answers are all over the place. It's the same in everyday life: context seems to be the driver for humans.


Rather than inventing sophisticated chat bots, couldn't he have just had a well-organised web page? If the questions are "where is assignment two" or "when is the assignment due", the examples given seem like a "high tech" (possibly unreliable) solution to a low-tech problem.


Having written documentation for end users, I can say that will work for only a portion of them. Probably most, which is good.

But there is a significant percentage of users who can't (or won't) read documentation. Even if they do manage to read it, they don't understand it.

For some reason, human interaction works better for those people. They ask a question, and someone copies the answer from the documentation, and pastes it into the chat conversation. The user then goes "Oh, wow! That's helpful!"

But ask them to read a web page, and they get lost.

I would love to know the psychology behind this phenomenon.


Similarly, I know people who are incapable of communicating information via email and insist that they must talk on the phone or in person.

Situation 1: "Hey can you call me?" Sure, what do you need to talk about? "The status of the Spark server." Okay.

Find a quiet place, place the call, debug connection problems so we can hear each other.

So what do you need? "Is the Spark server up?" Yes. "Okay, thanks, bye."

Situation 2: Email to coworker: "Hey the other day you said you said that the downloaded certificate is one factor, so with the password, that's already two factors, and the rotating token would be three. But we only need two, so that would mean we don't need the rotating token now. But the CTO is confused then -- how was the certificate securely transmitted in the first place, to the point that we can count it as a second factor?"

Two days later: Coworker approaches me: "So what was that email about?"

(Only exaggerating a little bit.)


To be fair, that is one hell of a poorly written email. HemingwayApp flags 2 of the 3 sentences as hard or very hard to read.


You honestly think it's very poorly written? You can't get any meaning from it, and believe that to be true even if you knew the context? You don't even have enough for a clarifying question?

I can show you writing a lot worse than that!


I can get some meaning out of it. If I knew the context, I could get more.

May I suggest running it through HemingwayApp[1] yourself and seeing that this particular email is objectively hard to read?

Write emails (and internet comments) like the reader is a drunk 7 year old. That's about as much attention as they're giving your email.

[1] http://www.hemingwayapp.com/


You're not answering the question. What specifically is hard to understand? Formulas are just heuristics, not some ironclad objective proof of illegibility. Ironically enough, across two comments, you haven't been able to convey such substantiation.

You also haven't explained what makes it horrible rather than so-so, which is important, since it's fairly easy to make it worse. You really can't just count words and dots and call it a day.

And if someone isn't going to read an email, but ask you to repeat it verbally, they really don't belong in an office environment. I'm sure that doesn't describe you!


Specifically these two sentences are hard to understand:

     Hey the other day you said you said that the downloaded certificate is one factor, so with the password, that's already two factors, and the rotating token would be three.

     But the CTO is confused then -- how was the certificate securely transmitted in the first place, to the point that we can count it as a second factor?
They are complex sentences. This makes them hard to understand.

The email's reading level is at grade 11 on the Flesch-Kincaid scale. [1] Anything above 9 is considered "fairly difficult to read". Best-selling authors tend to score less than 9. A lot less. [2]

You don't need to write emails like you were a best selling author of course. That's silly. But it helps if you write them for the recipient. Don't just word vomit your train of thought.

Here's how I'd write your email (from what I can guess/understand of the situation):

     Hey,

     The other day we talked about auth factors in our app. You said the certificate was one factor.
     Adding the password, that makes two. A rotating token would make three.

     Do we really need the rotating token, then?

     But the CTO is confused: how do we know the certificate transmitted securely? It can't be an auth factor if we're not sure.

     Cheers,
     ~Bla
That said, we have way over-analyzed your email :)

[1] https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabi...

[2] https://contently.com/strategist/2015/01/28/this-surprising-...
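For reference, the Flesch-Kincaid grade level mentioned above is just a formula over word, sentence, and syllable counts: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A rough sketch (the syllable counter is a crude vowel-group heuristic, so scores will differ somewhat from HemingwayApp's):

```python
import re

def syllables(word):
    # Crude heuristic: count runs of vowels; every word has at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Sentences approximated by terminal-punctuation runs; words by letter runs.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syll = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syll / len(words)) - 15.59

email = ("Hey the other day you said you said that the downloaded certificate "
         "is one factor, so with the password, that's already two factors, "
         "and the rotating token would be three.")
print(round(fk_grade(email), 1))                      # well above grade 9
print(round(fk_grade("The cat sat. The dog ran."), 1))  # far below it
```

Long sentences dominate the first term, which is why splitting the email into short sentences (as in the rewrite above) drops the score so sharply.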


I had to read the email twice to understand it. It's absolutely hard to read.

At a minimum, each sentence is doing too much.


My lead is like that. When I send him instructions on how to do something, he always asks me to sit with him and walk him through them rather than attempting to follow them and then asking questions if he gets stuck.


> I would love to know the psychology behind this phenomenon.

Laziness. It's usually faster to ask someone a question than to look up the answer. I used to do this on IRC when learning a new programming language and it saved me a lot of time. If you want an even faster answer, you guess the wrong answer and wait for someone to correct you (https://meta.wikimedia.org/wiki/Cunningham%27s_Law).


It might not apply to you, but in my experience, people who ask questions that would be answered faster by Google never actually learn the topic they're asking about. I think 90% of my knowledge was stumbled upon while looking for the answer to something else.

Sure, you might get the answer that X is the best way to do what you're trying to do, but if you read the docs, you might learn that what you're trying to do with X is already implemented, and you can skip all that and just use Y. ("You" not meaning you specifically here, just in general.)


> Laziness. It's usually faster to ask someone a question than to look up the answer.

:(

My common answer is "If you're too lazy to read the documentation, I'm too lazy to cut & paste it into the email".

Followed by them complaining about how I'm an asshole...


I did this with my brother. He would always ask me beginner programming questions instead of developing some Google-fu (while running a business making FileMaker Pro integrations, ugh).

Eventually I LMGTFY. He stopped asking.


I bit.ly my LMGTFY links. The passive-aggressive nerd's Rickrolling.


That's gold. I'm going to use that one next time.


I don't think that fully explains it.

On my school's forum, people routinely ask questions which a simple Google search would have answered.

Yet typing out a question and posting it to a forum is a lot more work than searching on Google. I'd think the lazy response would be to stick to Google.


I often suffer from this, and for me it stems from apparent time pressure and/or resentment.

The time pressure is the source of all sorts of sub-optimal issues - "I don't have time to write tests, documentation, do the right thing here ... Whatever. Including reading around the subject"

I do think modern corporate culture contributes to this - I remember being impressed by a PhD DSP guy who was in my scrum stand up who most days would report "I was reading yesterday and will be reading today". Every so often he would say "I finished reading and wrote ten lines that solved the problem"

As a contractor I felt I could not give a similar report, but I should ... or at least it should be "I will be refactoring our Dbase access code / our API / whatever, cos it sucks rocks".

But I am weak.

Oh and resentment - "I don't want to find out the details of the pointless process this pathological bureaucracy has created and I really don't want to read a badly written set of documentation that probably does not have the real answers in it because no one likes to write the honest answers"

Hmm, glad it's Friday. I am worse than usual.


In this specific instance, I wonder if some people, when panicking about their courses or under a deadline, are comforted by the notion that a TA is out there and willing to help them.


Sometimes this is about having a different learning style. Some people may absorb information better one on one. Tutoring and mentoring can be super helpful.

Another thing you might consider is that there are class and cultural factors to people's expectations about how you get along in life. Person to person conversation and personal relationships are important resources if you are working class or living in poverty.

There is a flip side to a culture that disdains just asking, which is that smart people sometimes scramble to find answers to questions independently because they don't want to reveal their ignorance on a subject. Sometimes just asking is not lazy; it is efficient and results in better collaboration, as other people hear the question and realize they don't know either, and would like to. Or, if no one knows, then you at least now have that information and can work from there.


Is the documentation going to be available via a chat bot or a web page? My guess is the second.

Screw the lazy users who won't look up documentation. If you can't read docs (and he is a computer science professor), then you really aren't in the right subject.


I would love to see email support go beyond that, and actually read the question... I hate when I get a canned response that is completely wrong, or doesn't apply.


I think you'd be surprised at how often students don't read webpages and fire off an email to the class TA or instructor. Replying with a link to the relevant part of the webpage is something I'd love to automate.

(I teach a largish introductory university class)


You can automate replying with a link to the course webpage ToC. Also, make it absolutely clear that the reply is automatic. Hopefully they'll get the hint.

Or heck, just make it a policy that if they haven't received an answer in a week, this means they need to look at the website. I don't know why some people strive to be so accommodating to laziness nowadays.


There are lots of things I could do that would be less work but would come across as "being an asshole" to many students. A smart chatbot would be less work, more helpful, and a better experience for the students.

Most technology is fundamentally about accommodating laziness. There's nothing "nowadays" about it.


Well, but you don't have this bot and instead (I suppose) you waste your private time responding with links to something they should have read.

You don't have to be an asshole, just refuse to be mistreated by assholes.

I've seen a friend of mine, who is a university teacher, respond at midnight to a stupid question about some deadline the student should have known about for two months. If your experience is anything like that, know that there are people who find it insane.


Laziness is a virtue. What you're suggesting is not all that different than creating a webpage that says "read the book" instead of adding value. If a bot can be created to make discovering answers easier why shouldn't it be done?


There are two kinds of laziness: one which makes you visit the course website, copy the teacher's email address and send them some stupid question whose answer is one Ctrl+F away on the same website, and another which makes you want to automate responding to such emails.

I'm not sure if the former is that virtuous. Those people look like they are growing completely helpless.


Because in many cases, the lazy are paying the bills.


No matter how well organized the page is, you still have to do some sifting through the material. The bot is a way to outsource that effort to silicon.


Or the engineer's solution to a social problem :)


My computer science lecturers can't even put together a webpage so yeah it is asking a lot. (They didn't close tags in their tables for some bizarre reason)


Nah. We are engineers who think in terms of man pages and documentation. The world has moved on and we are now in the "live" phase. Everything is live and interactive for the new generation.


I see it as a "high tech" interface to the same solution.


Except you have to publicly post your question before receiving search results. A search function is a lot faster and you can do a few attempts.


I could reasonably ride a horse to work every day, it's only 2 miles.


And where would you get said horse? Where would you keep it? What would you feed it? Who would take care of it?

Students should be capable of reading text. If they can't, they belong in kindergarten, not university.


You misspelled "customers".


My farm, my stable, oats, and me, because people have been riding horses places for the past 5,000+ years in my metaphor (where a car is the bot and a horse is a web page), and there's no good reason to not ride a horse (use a web page).


"Goel plans to use Jill again in a class this fall, but will likely change its name so students have the challenge of guessing which teaching assistant isn't human."

This will be a fantastic Turing test, at least as far as basic question understanding and natural response formation.


Hmm, does it count to use things that don't have much to do with the course?

For example, if a guy is able to answer questions at all times of day, really fast, he's either an AI or Jon Skeet.

Also, real TAs have a life outside of the course. What will an AI say when you ask them whether they enjoyed last night's episode of Game of Thrones?


"I am very busy, please stick to the course material."

I wonder if the professor has it configured to learn from the student's questions? Will the bot turn into a promiscuous neo-nazi a week into the class?


The software worked remarkably well in this instance because the system was well constrained, and the students weren't on the lookout for it. If they do start quizzing their new unseen TA (Dr Colin V Olutionalnet) on random trivia it could presumably deflect the query or have a few canards pre-programmed.


But likewise, if I were a TA for that course I'd probably try to act like a bot just to confuse the students.


That's stunning. I studied AI in undergrad and I just didn't give it enough of my focus at the time. Little did I know that in my lifetime I'd see real world evidence of computers passing the Turing Test with flying colors.

Of course, Turing was right. What's outside of the set of computable sets? Not much!

I recently read an article headline that talked about creeps using AI to stalk porn stars. It's interesting to think that as computation power continues to grow, there's a huge potential for evil uses of AI. All of the things we see and hear and consider "public" have little or no nefarious use to other humans. But what about a computer that has human-like capabilities and inexhaustible computation resources? The sci-fi scenarios of conscious AI making decisions to save humanity from itself are not what I fear. The evil humans sitting at the helm of powerful AI is what I fear.

A lot of society's norms are predicated on humans having private thoughts and telling small and large lies as appropriate. What happens when AI-augmented humans know when you're lying?

The future is so exciting and so terrifying.


Certainly an impressive result, and may have some practical application for easing the burden of repeatedly answering the same question asked in different ways. I'm not sure I would call this passing the Turing test though. The picture at the top of the original article[1] is very informative. The bot does a passable job of answering the initial question, but it seems like a canned response and when the student asks for a more in-depth explanation, a different human TA had to step in and take over. Then a couple other students in the same thread actually post that they have suspicions whether or not "Jill" is a real person.

[1] https://www.washingtonpost.com/news/innovations/wp/2016/05/1... (Credit to exodust for posting it in a different comment)


Also "the" Turing test requires you to be actively probing for evidence it's a bot. Though this "no suspicion" Turing Test is certainly progress.


Unfortunately, I think that future is as inevitable as the development of that kind of AI. One is almost certainly going to be the consequence of the other.


I went to exactly two of the AI classes in my college before dropping it for a different class. It was not at all what I expected; the first two classes were totally devoted to mathematical proofs of correctness and were already going over my head.

It worked out OK in the end: a spot in the Network Programming class opened up, and that's been a wildly useful skill in real life.


> What's outside of the set of computable sets?

Human intuition can jump. Mechanical deduction must expend energy.

Let me give you an example: Dmitri Mendeleev goes to bed, has a dream, and then delivers the periodic table.

Can your machine do that?


The periodic table is the elements ordered such that common features occur together. You'd think some sort of clustering algorithm would detect this.

A line of elements that almost never react with anything. A bunch of things that gain one electron, some that lose one. And so on.
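The clustering intuition above can be shown with a toy example. With a tiny, hand-picked dataset (just one made-up feature column, valence electrons; real clustering would use many more chemical properties), simple grouping already recovers the periodic table's columns:

```python
# Toy illustration: group a few elements by valence electron count.
# The dataset is hand-picked for illustration, not a real chemistry table.
from collections import defaultdict

valence = {
    "Li": 1, "Na": 1, "K": 1,    # alkali metals: readily lose one electron
    "F": 7, "Cl": 7, "Br": 7,    # halogens: readily gain one electron
    "Ne": 8, "Ar": 8, "Kr": 8,   # noble gases: almost never react
}

groups = defaultdict(list)
for element, v in valence.items():
    groups[v].append(element)

for v, members in sorted(groups.items()):
    print(v, members)
```

Mendeleev's feat, of course, was doing this with incomplete and partly wrong measurements, and predicting the gaps, which is a much harder inference than grouping clean data.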


Note: "must expend energy"

How many Joules did Mendeleev expend in his dream? How many Joules will your clustering algorithm require? In fact, how many Joules before a meta-algorithm supervisor decides on exclusively using clustering to solve the problem?

> A line of elements ...

Indeed. He arrived at the result with incomplete information.

    Rightly traced and well ordered; what of that?
    ..
    Ah, but a man's reach should exceed his grasp


Where is the jump? There was a lot of computation preceding his dream, how is that different from deduction?


For those wondering what technology is being used. It is using this Watson product: http://www.ibm.com/smarterplanet/us/en/ibmwatson/engagement_.... We are in the process of revamping it and making it a self-service API that will be released this quarter in http://ibm.com/watsondevelopercloud. Some of the core functionality is already available in http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl... and http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl... in case you don't want to wait.


Makes sense as a use case. TAs must hear the same questions over and over again from different students, both practical things like "what homework is there" and reference stuff like "What does the line above the X mean?" (No pun intended!)

I wonder how good it is when people don't know what they're asking. This used to happen all the time when I was in tutorials. "I don't get why there's two transistors in a chain". "What's the significance of the process being adiabatic?"

If you can crack that, education will be changed forever, massively.


>Goel and his teaching assistants receive more than 10,000 questions a semester from students on the course's online forum.

Is this normal?


Not 10,000 questions, 10,000 messages: http://www.news.gatech.edu/2016/05/09/artificial-intelligenc...

That article says ~300 students, 30 posts each seems pretty normal.


So the original article:

1. Didn't mention it was an online course

2. Didn't mention there were 300 students

3. Claimed it was questions instead of posts


I don't think 1 and 2 are very important for the story, and the questions/posts thing seems like an honest mistake. No harm done. The meaning is basically the same, since the bot can presumably reply to any post, not just a top level question.


No! I've done a 300 person electronics class and we've gotten about 300 questions with a handful of people who ask the bulk of our questions.


I think that figure likely includes all contributions, not just questions. I took this same class in Summer 2015, and there are 1036 posts[0] and 11074 contributions[1] on the online forum for that class. The number of students is similar (370 in my class, 302 last semester).

[0] posts are questions and notes

[1] contributions are posts, responses, edits, followups, and comments to followups (i.e., everything)


The article got the numbers wrong, but that course was especially active when I took it, and I was taking it the second time it was offered. Still the best course in the OMSCS program that I've taken. Very lively, helpful and deep discussions, some of which had individual responses going into thousands of words.

The course itself was very innovative in its directions and tests because it asked questions that compared and contrasted human and artificial intelligence. That not just got people's creative juices going, but also made it relatable. More than one person related the artificial learning we were imparting to the code with our own learning processes - or that of our children.

The instructors (Dr. Goel and Dr. Joyner) also used it as a chance to compare the online vs on-campus versions of the course, with some interesting results (tl;dr: the online version was OOM better due to student participation, which was likely due to self-selection of early adopters of the OMSCS program, but still worthwhile).


Actual URL path:

    /this-professor-stunned-his-students-when-he-revealed-the\
    -secret-identity-of-his-teaching-assistant/
HN can do better than clickbait sites like the WaPo.


I've never read much of the Washington Post, so I can't say whether there's actually been a decline since Bezos bought them. But there's been a remarkable quantity of terrible reporting from them lately, so at least we can say it's not very good under his ownership.


I can't help feeling a big FAQ page with a search function would be a lot more useful. Especially if the entries contain aliases for common search terms, like "length" on the entry of "word count", it should cover everything.

Now this is a forum where others can chip in as well, but the FAQ page could prevent you from having to post in the first place. And if you don't have to post publicly to get an answer from the chat bot (because it's a FAQ page and not a bot) you can do 20 searches before deciding it's not in there.
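The alias idea is just a layer of indirection in front of the FAQ lookup. A minimal sketch (the entries and alias terms here are invented examples):

```python
# Hypothetical FAQ store with alias terms mapping to canonical entries,
# so a search for "length" hits the "word count" answer.
FAQ = {
    "word count": "Essays must be 1500-2000 words.",
    "deadline": "Assignment 2 is due at the end of week 6.",
}
ALIASES = {
    "length": "word count",
    "how long": "word count",
    "due date": "deadline",
    "due": "deadline",
}

def lookup(query):
    q = query.lower().strip()
    key = ALIASES.get(q, q)   # follow the alias if one exists
    return FAQ.get(key)       # None means: no FAQ entry, post on the forum

print(lookup("length"))   # resolves via alias to the "word count" entry
print(lookup("grading"))  # no entry: the student falls back to posting
```

Because lookups are private and instant, a student can try twenty phrasings before deciding the answer isn't there, which is the advantage over posting publicly that the comment above describes.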


Speaking from experience from being in this class, the unfortunate truth is many people don't bother to read the documents for assignments (thoroughly, at any rate). This one wasn't as bad as another one I took, but many people will ask questions which are easily answered by looking at the materials they're given. That's essentially the area in which this AI excels.


This sounded cool until, "Now Goel is forming a business to bring the chatbot to the wider world of education." // Are all educators just trying to create companies these days?


In my honest opinion, innovating in the Educational sphere and showing value to one or more groups - educators, students, administrative tasks - should really be rewarded fiscally. An institution won't really do that, except in the long-term by way of job stability and benefits (all subject to modern fiscal pressures). So, while it might seem the case, I think this avenue is perfectly reasonable.

Now, educators who write texts in concert with publishers and then somehow coerce students into buying brand-new copies every semester, that's what I'd call profiteering. Not so highly esteemed to me. But, as noted, the profession is a tradeoff a lot of the time.


The world of education could use some real technological disruption. Giving students iPads and smartboards doesn't cut it if they're just doing the same things on a new platform.


Education could use such an overhaul to be honest. Look at internet resources like Khan Academy and how they're proving the merits of mass-distributing a well-functioning formula for teaching.

As far as this bot is concerned it seems to do one thing very well: Reducing the costs of offering a teaching method for those who require "human interaction" to learn optimally (spoonfeeding if you want).


In a way, I'm not sure why there are still no parents or students complaining that they're kind of being 'defrauded' with a chatbot while paying high tuition (cue the Discover card TV ad). One thing for sure is that there are a lot of undergrad and grad students around looking for a part-time job and teaching experience. I know this is a great achievement in AI with huge potential in open courses, and great for grant applications, but at the very other end is someone paying with an expectation of human interaction and care. For years I've seen a lot of complaints about foreign human TAs.


"Education is such a huge priority for the entire human race."

Half a century into this current iteration of the game called 'Life as a Human on planet Earth' I have come to the conclusion that our primary focus must be Education and Mental Health. Take care of these and we may yet see our full positive potential.


The next step is to build a chatbot to replace the professor at teaching. If you are not going to enjoy the fruits, why would you supply the knowledge base for a chatbot that is going to replace you?


Pretty amazing, considering that I've only really taken one online class so far, excluding tutorials. I wonder how this could be used for real responses by a real professor someday.


The site comes with an obnoxious "enter your email" overlay that won't go away and messes up scrolling when you block it. They do accept @washpost.com addresses though, so...


I wonder how many of the responses were "check the syllabus"


Anyone know whether they used watson APIs to create this bot or used the original watson codebase (with IBM collaboration)?


It's using this: http://www.ibm.com/smarterplanet/us/en/ibmwatson/engagement_... but if you are smart you can reproduce a lot of the functionality using both http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl... and http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl.... We are also coming up this quarter with a fully integrated self-service API that will make doing all this much easier (especially the dialog part). Stay posted.


I wonder what the "layers of decision making software" on top of Watson means. Are these rule engines?


Anyone have technical details about this story? What kind of bot was used, languages, algorithms, etc.?


IBM Watson

Here's a better article that was linked a few days ago: http://www.wsj.com/articles/if-your-teacher-sounds-like-a-ro...


It's using this: http://www.ibm.com/smarterplanet/us/en/ibmwatson/engagement_... but if you are smart you can reproduce a lot of the functionality using both http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl... and http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercl.... We are also coming up this quarter with a fully integrated self-service API that will make doing all this much easier (especially the dialog part). Stay posted.


Already posted in original form: https://news.ycombinator.com/item?id=11688061

I don't know why SMH would take a Washington Post story and change the title to something stupid, while keeping everything else the same, but that's what they've done.

Wait.. there's one more difference. A pathetic picture of "an AI".

"an AI"? Seriously SMH, just use the original title. Your editorial blundering is embarrassing, and could easily be replaced with "an AI" tasked with re-wording headlines.



Thanks for that, the original article was a much better read.


The original has the sample interaction with students, so it's definitely worth switching; otherwise it's the same copy apart from the butchered title.

SMH is a bottom-of-the-barrel source. Changing a story title to something more click-baity sums up the entire Fairfax network and Australian news media in general.



