Why Did the Robot Do That? Increasing Trust in Autonomous Robots (cmu.edu)
70 points by heidibrayer on Dec 6, 2016 | 12 comments

"I hope the lifter pilot doesn't get too bored." Jarvis is all chummy again.

"There is no pilot. It's a smart gel."

"Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"

Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."

Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.

"These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."

"Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."

Peter Watts, Starfish

It's not just a question of how to make a robot explain itself and its decisions. Even in plain old English there are levels of understanding. Heck, even explaining some algorithms to someone in IT isn't easy when they come from another field of knowledge.

When the AI (robot) decision resembles an everyday human thought, the difficulty is low. But even humans do things they can't explain, and come up with an after-the-fact explanation that doesn't match what actually happened. When an AI finds spurious correlations, translating them into human understanding (not just human language) will be very hard.
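To make the clock story above concrete, here is a minimal, hypothetical sketch (mine, not from the article or the thread) of a model latching onto a spurious feature, and of a standard explanation tool surfacing that dependency. It assumes numpy and scikit-learn are available; the feature names just echo the Watts excerpt.

    # Sketch only: a model that learns a spurious "clock" feature, plus
    # permutation importance to reveal which inputs it actually relies on.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    n = 2000

    # Ground truth: ventilate when a train arrives (signalled by CO2 / motion).
    train_arrives = rng.integers(0, 2, n)
    co2 = train_arrives + rng.normal(0, 0.3, n)        # noisy causal signal
    motion = train_arrives + rng.normal(0, 0.3, n)     # noisy causal signal
    clock_pattern = train_arrives.astype(float)        # spurious: perfectly correlated in training

    X_train = np.column_stack([co2, motion, clock_pattern])
    y_train = train_arrives

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Explanation step: permutation importance shows the model leans on the
    # clock, not on CO2 or motion.
    imp = permutation_importance(model, X_train, y_train, n_repeats=10, random_state=0)
    for name, score in zip(["co2", "motion", "clock_pattern"], imp.importances_mean):
        print(f"{name:14s} importance: {score:.3f}")

    # "Vandals smash the clock": the spurious feature stops tracking reality.
    clock_broken = np.zeros(n)
    X_broken = np.column_stack([co2, motion, clock_broken])
    print("accuracy with broken clock:", model.score(X_broken, y_train))

The hard part the parent comment points at is the step after this: turning "importance 0.5 on clock_pattern" into an explanation a subway operator would actually act on.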

> But even humans do things that they can't explain

That is actually very interesting. I would even extend it: sometimes humans know the decision they are making is stupid/not the right one, but the reward for taking that bad decision is worth it to them. Translating that flawed behavior into an AI must be terribly hard.

Moreover, often humans seemingly "can't explain" something - or invent nonsense explanations - because the act of actually explaining their reasoning could expose them to bad consequences, often social ones.

Question: why didn't you tell me you were going to a meetup after work?

Provided explanation: because I was so busy I forgot about it.

Real explanation: because I didn't want to have a conversation about it with you.

"Analysis." - Westworld


Smirks, puts on tinfoil hat

"Doesn't look like anything to me?"

And when it says "these violent delights have violent ends" you start running...

This is really interesting research. I'm curious to see how it develops.

Building trust with users is going to be one of the most important product design challenges for all of these new "AI"-fueled products. A self-driving car obviously requires a huge amount of trust from passengers. But even a personal assistant bot needs its users to trust it before they let it reach out to their contacts to schedule meetings or ask basic questions. There may well be overarching strategies that could be useful in either case, like the transparency into the machine's thinking described by the author.

There is an area of research that addresses this specifically: how an AI must be able to explain its decisions. If I find the paper again in my bookmarks I'll edit in a link.

Related, the work Sussman is doing:


At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction-y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."

The article has links to the work he's doing in that area.
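For what "accountable" could look like mechanically, here is a minimal, hypothetical sketch (not Sussman's actual system; DecisionRecord and explain are made-up names): record, for every action, the evidence it was based on, the expected outcome, and the observed outcome, so the three questions in the quote can be answered after the fact.

    # Sketch only: a per-decision record covering "why it did it",
    # "what it thought would happen", and "what happened instead".
    from dataclasses import dataclass


    @dataclass
    class DecisionRecord:
        action: str           # what the system did
        evidence: list[str]   # inputs/rules the decision was based on
        expected: str         # what it thought would happen
        observed: str = ""    # filled in after the fact

        def explain(self) -> str:
            return (f"I did '{self.action}' because {'; '.join(self.evidence)}. "
                    f"I expected: {self.expected}. "
                    f"What happened: {self.observed or 'not yet observed'}.")


    # Usage: log the decision when it is made, attach the outcome later.
    record = DecisionRecord(
        action="swerve left",
        evidence=["obstacle detected 12 m ahead", "left lane reported clear"],
        expected="clear the obstacle with >1 m margin",
    )
    record.observed = "left the roadway"   # the part you'd want to interrogate
    print(record.explain())

The hard problem, of course, is getting a learned system to produce honest values for the evidence and expected fields rather than post-hoc rationalizations.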

It was on here a while ago. Is this the one you're thinking of? http://www.darpa.mil/program/explainable-artificial-intellig...

This is a really interesting and (I think) first-of-its-kind attempt at something like this.

One of the most important things for any relationship to work is trust, and the same goes for the human-robot relationship.

