I've had a couple of TAs who I strongly suspected of being chatbots, even when I was talking to them in person. I'm still not 100% convinced they were human.
Another thing is foreign students and teachers who, even if otherwise competent, quickly get you used to language comparable to machine output. I think I may have been guilty of that at one time :)
The next step will be to replace the prof, and then have the students replaced by surprisingly attentive interrogative expert systems, and the cycle will be complete.
In fact, that's one way to pass the Turing test: any judge who suggests a subject is a bot experiences a permanent loss of social status, regardless of correctness!
That said, I'd be even more excited if Jill had the ability to synthesize new answers to questions through some type of case-based reasoning. This would require Jill receiving feedback on "her" answers, which might mean the students would have to know "who" "she" was in advance. (Sorry, got lost in the quotes.) Right now, Jill is essentially an automated FAQ-retrieval bot.
I took it before they rolled out the chat bot TA, unfortunately.
If you are thinking about a masters in CS, you must check out the OMSCS program. The value-to-cost ratio is completely off the charts!
"total program cost of about $6,600 over five terms"
I'm sure I asked/answered more than a dozen questions during that semester within the Piazza platform and likely unknowingly contributed to Jill Watson's training.
> The system is only allowed to answer questions if it calculates that it is 97 percent or more confident in its answer.
Knowing when you don't know the answer is something that has been lacking from AI I've seen. Of course the problem is that not all AI has the option of deferring to a human.
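The quoted 97% rule is easy to sketch: wrap the model so it only returns an answer above a confidence threshold and otherwise defers to a human. A minimal Python sketch with a toy stand-in "model" (all names hypothetical; this says nothing about Jill Watson's actual implementation):

```python
CONFIDENCE_THRESHOLD = 0.97  # the 97% rule quoted above

def answer_or_defer(question, model):
    """Return the model's answer, or None to signal 'escalate to a human'."""
    answer, confidence = model(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return None

# Toy stand-in "model": exact FAQ lookup, confidence 1.0 on a hit, 0.0 otherwise.
FAQ = {"when is hw1 due?": "Friday at 11:59pm."}

def faq_model(question):
    q = question.strip().lower()
    if q in FAQ:
        return FAQ[q], 1.0
    return None, 0.0
```

The interesting engineering is in the confidence estimate itself, of course; a lookup hit is the trivial case.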
Anyone know what percent of questions the AI answered, vs redirected?
That's actually a very good question. The article doesn't give many details other than: "There are many questions Jill can’t handle."
For comparison, here are the highlights of a 2014 study on mobile assistants (Siri, Cortana, Google):
Google was able to offer "enhanced" answers (i.e. more than a web search) 58% of the time. Siri was at 29% and Cortana was at 20%. Siri improved some with iOS9, but not enough to beat Google: http://www.recode.net/2015/9/20/11618696/how-intelligent-is-...
From the actual study: ... we took 3086 different queries and compared them across all three platforms. These were not random queries. In fact, they were picked because we felt they were likely to trigger a knowledge panel. ( https://www.stonetemple.com/great-knowledge-box-showdown/ )
I emailed and asked. If I get a reply I'll post it here.
Nothing wrong with that, of course.
Definitely, and I'm not sure why that space isn't getting more attention. For RL there's KWIK (knows what it knows)
http://www.research.rutgers.edu/~lihong/pub/Li08Knows.pdf [pdf]. A large class of problems (the robo TA being one of them) lends itself very well to AI with error bounds, for lack of a better term.
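The textbook KWIK example from that line of work is learning a deterministic mapping by memorization: the learner either predicts correctly or explicitly outputs "I don't know", and never guesses. A toy sketch (class and method names are mine, not from the paper):

```python
class KWIKMemorizer:
    """KWIK-style learner for a deterministic input->label mapping.

    It either predicts a label it has actually observed, or returns
    None ("I don't know") - it never guesses, so it makes zero mistakes
    at the cost of abstaining on unseen inputs.
    """

    def __init__(self):
        self.memory = {}

    def predict(self, x):
        # None means "I don't know" - defer to a human / ask for a label.
        return self.memory.get(x)

    def observe(self, x, label):
        # Once seen, the deterministic mapping is known for this input.
        self.memory[x] = label
```

The robo-TA analogy: abstain on anything not clearly covered by the known Q&A pairs, and let the human TA supply the label.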
But there is a significant percentage of users who can't (or won't) read documentation. Even if they do manage to read it, they don't understand it.
For some reason, human interaction works better for those people. They ask a question, and someone copies the answer from the documentation, and pastes it into the chat conversation. The user then goes "Oh, wow! That's helpful!"
But ask them to read a web page, and they get lost.
I would love to know the psychology behind this phenomenon.
Situation 1: "Hey can you call me?" Sure, what do you need to talk about? "The status of the Spark server." Okay.
Find a quiet place, place the call, debug connection problems so we can hear each other.
So what do you need? "Is the Spark server up?" Yes. "Okay, thanks, bye."
Situation 2: Email to coworker: "Hey the other day you said you said that the downloaded certificate is one factor, so with the password, that's already two factors, and the rotating token would be three. But we only need two, so that would mean we don't need the rotating token now. But the CTO is confused then -- how was the certificate securely transmitted in the first place, to the point that we can count it as a second factor?"
Two days later: Coworker approaches me: "So what was that email about?"
(Only exaggerating a little bit.)
I can show you writing a lot worse than that!
May I suggest running it through HemingwayApp yourself and seeing that this particular email is objectively hard to read?
Write emails (and internet comments) like the reader is a drunk 7 year old. That's about as much attention as they're giving your email.
You also haven't explained what makes it horrible rather than so-so, which is important, since it's fairly easy to make it worse. You really can't just count words and dots and call it a day.
And if someone isn't going to read an email, but ask you to repeat it verbally, they really don't belong in an office environment. I'm sure that doesn't describe you!
Hey the other day you said you said that the downloaded certificate is one factor, so with the password, that's already two factors, and the rotating token would be three.
But the CTO is confused then -- how was the certificate securely transmitted in the first place, to the point that we can count it as a second factor?
The email's reading level is at grade 11 on the Flesch-Kincaid scale. Anything above 9 is considered "fairly difficult to read". Best-selling authors tend to score less than 9. A lot less.
You don't need to write emails like you were a best selling author of course. That's silly. But it helps if you write them for the recipient. Don't just word vomit your train of thought.
Here's how I'd write your email (from what I can guess/understand of the situation):
The other day we talked about auth factors in our app. You said the certificate was one factor.
Adding the password, that makes two. A rotating token would make three.
Do we really need the rotating token, then?
But the CTO is confused: how do we know the certificate transmitted securely? It can't be an auth factor if we're not sure.
At a minimum, each sentence is doing too much.
Laziness. It's usually faster to ask someone a question than to look up the answer. I used to do this on IRC when learning a new programming language and it saved me a lot of time. If you want an even faster answer, you guess the wrong answer and wait for someone to correct you (https://meta.wikimedia.org/wiki/Cunningham%27s_Law).
Sure, you might get the answer that X is the best way to do what you're trying to do, but if you read the docs, you might learn that what you're trying to do with X is already implemented and you can skip all that and just use Y. ("You" not meaning you specifically here, just in general.)
My common answer is "If you're too lazy to read the documentation, I'm too lazy to cut & paste it into the email".
Followed by them complaining about how I'm an asshole...
Eventually I LMGTFY'd him. He stopped asking.
On my school's forum, people routinely ask questions which a simple Google search would have answered.
Yet typing out a question and posting it to a forum is a lot more work than searching on Google. I'd think the lazy response would be to stick to Google.
The time pressure is the source of all sorts of sub-optimal issues - "I don't have time to write tests, documentation, do the right thing here ... Whatever. Including reading around the subject"
I do think modern corporate culture contributes to this - I remember being impressed by a PhD DSP guy who was in my scrum stand up who most days would report "I was reading yesterday and will be reading today". Every so often he would say "I finished reading and wrote ten lines that solved the problem"
As a contractor I felt I could not give a similar report, but I should ... or at least it should be "I will be refactoring our Dbase access code / our API / whatever cos it sucks rocks"
But I am weak.
Oh and resentment - "I don't want to find out the details of the pointless process this pathological bureaucracy has created and I really don't want to read a badly written set of documentation that probably does not have the real answers in it because no one likes to write the honest answers"
Hmm, glad it's Friday. I am worse than usual
Another thing you might consider is that there are class and cultural factors to people's expectations about how you get along in life. Person to person conversation and personal relationships are important resources if you are working class or living in poverty.
There is a flip side to a culture that disdains just asking, which is that smart people sometimes scramble to find answers independently because they don't want to reveal their ignorance on a subject. Sometimes just asking is not lazy; it is efficient and results in better collaboration, as other people hear the question and realize they don't know either, and would like to. Or, if no one knows, then you at least have that information and can work from there.
Screw the lazy users who won't look up documentation. If you can't read docs (he is a computer science professor), then you really aren't in the right subject.
(I teach a largish introductory university class)
Or heck, just make it a policy that if they haven't received an answer in a week, this means they need to look at the website. I don't know why some people strive to be so accommodating to laziness nowadays.
Most technology is fundamentally about accommodating laziness. There's nothing "nowadays" about it.
You don't have to be an asshole, just refuse to be mistreated by assholes.
I've seen a friend of mine, who is a university teacher, respond at midnight to a stupid question about some deadline the student should have known about for two months. If your experience is anything like that, know that there are people who find it insane.
I'm not sure if the former is that virtuous. Those people look like they are growing completely helpless.
Students should be capable of reading text. If they can't, they belong in kindergarten, not university.
This will be a fantastic Turing test, at least as far as basic question understanding and natural response formation.
For example, if a guy is able to answer questions at all times of day, really fast, they're either an AI or Jon Skeet.
Also, real TAs have a life outside of the course. What will an AI say when you ask them whether they enjoyed last night's episode of Game of Thrones?
I wonder if the professor has it configured to learn from the students' questions? Will the bot turn into a promiscuous neo-nazi a week into the class?
Of course, Turing was right. What's outside of the set of computable sets? Not much!
I recently read an article headline that talked about creeps using AI to stalk porn stars. It's interesting to think that as computation power continues to grow, there's a huge potential for evil uses of AI. All of the things we see and hear and consider "public" have little or no nefarious uses to other humans. But what about a computer that has human-like capabilities and inexhaustible computation resources? The sci-fi scenarios of conscious AI making decisions to save humanity from itself are not what I fear. The evil humans sitting at the helm of powerful AI is what I fear.
A lot of society's norms are predicated on humans having private thoughts and telling small and large lies as appropriate. What happens when AI-augmented humans know when you're lying?
The future is so exciting and so terrifying.
 https://www.washingtonpost.com/news/innovations/wp/2016/05/1... (Credit to exodust for posting it in a different comment)
It worked out ok in the end, the spot in the Network Programming class opened up and that's been a wildly useful skill in real life.
Human intuition can jump. Mechanical deduction must expend energy.
Let me give you an example: Dmitri Mendeleev goes to bed, has a dream, and then delivers the periodic table.
Can your machine do that?
A line of elements that almost never react with anything. A bunch of things that gain one electron, some that lose one. And so on.
How many Joules did Mendeleev expend in his dream? How many Joules will your clustering algorithm require? In fact, how many Joules before a meta-algorithm supervisor decides on exclusively using clustering to solve the problem?
> A line of elements ...
Indeed. He arrived at the result with incomplete information.
Rightly traced and well ordered; what of that?
Ah, but a man's reach should exceed his grasp
I wonder how good it is when people don't know what they're asking. This used to happen all the time when I was in tutorials. "I don't get why there's two transistors in a chain". "What's the significance of the process being adiabatic?"
If you can crack that, education will be changed forever, massively.
Is this normal?
That article says ~300 students; 30 posts each seems pretty normal.
1. Didn't mention it was an online course
2. Didn't mention there were 300 students
3. Claimed it was questions instead of posts
 posts are questions and notes
 contributions are posts, responses, edits, followups, and comments to followups (i.e., everything)
The course itself was very innovative in its directions and tests because it asked questions that compared and contrasted human and artificial intelligence. That not only got people's creative juices flowing, but also made it relatable. More than one person compared the artificial learning we were imparting to the code with our own learning processes - or those of our children.
The instructors (Dr. Goel and Dr. Joyner) also used it as a chance to compare the online vs on-campus versions of the course, with some interesting results (tldr: the online version was an order of magnitude better due to student participation, which was likely due to self-selection of early adopters of the OMSCS program, but still worthwhile).
Now this is a forum where others can chip in as well, but the FAQ page could prevent you from having to post in the first place. And if you don't have to post publicly to get an answer from the chat bot (because it's a FAQ page and not a bot) you can do 20 searches before deciding it's not in there.
Now, educators who write texts in concert with publishers and then somehow coerce students into buying brand new copies every semester, that's what I'd call profiteering. Not so highly esteemed to me. But, as noted, the profession is a tradeoff a lot of times.
As far as this bot is concerned it seems to do one thing very well: Reducing the costs of offering a teaching method for those who require "human interaction" to learn optimally (spoonfeeding if you want).
Half a century into this current iteration of the game called 'Life as a Human on planet Earth' I have come to the conclusion that our primary focus must be Education and Mental Health. Take care of these and we may yet see our full positive potential.
Here's a better article that was linked a few days ago:
I don't know why SMH would take a Washington Post story and change the title to something stupid, while keeping everything else the same, but that's what they've done.
Wait... there's one more difference. A pathetic picture of "an AI".
"an AI"? Seriously SMH, just use the original title. Your editorial blundering is embarrassing, and could easily be replaced with "an AI" tasked with re-wording headlines.
SMH is a bottom-of-the-barrel source. Changing a story title to something more click-baity sums up the entire Fairfax network and Australian news media in general.