"There is no pilot. It's a smart gel."
"Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"
Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."
Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.
"These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."
"Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."
Peter Watts, Starfish
When an AI (or robot) decision resembles everyday human thinking, explaining it is easy. But even humans do things they can't explain, and then offer after-the-fact explanations that don't match what actually happened. When an AI latches onto spurious correlations, translating them into human understanding (not just human language) will be very hard.
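The clock anecdote above can be reproduced in miniature. The sketch below is purely illustrative (invented feature names, a toy one-feature learner, not from the article): a greedy learner prefers a spuriously perfect cue (the clock) over the noisy real cause (CO2), then fails the moment the spurious cue breaks.

```python
import random

random.seed(0)

def make_data(n, clock_works=True):
    """Simulated station data: label is whether a train is arriving."""
    rows = []
    for _ in range(n):
        arriving = random.random() < 0.5
        # real cause, but a noisy sensor: right only 90% of the time
        co2 = arriving if random.random() < 0.9 else not arriving
        # spurious cue: the clock pattern coincides with arrivals perfectly,
        # until the clock is smashed
        clock = arriving if clock_works else False
        rows.append(({"co2": co2, "clock": clock}, arriving))
    return rows

def best_single_feature(data):
    """Greedy learner: keep whichever single feature predicts the label best."""
    best, best_acc = None, 0.0
    for feat in ("co2", "clock"):
        acc = sum(x[feat] == y for x, y in data) / len(data)
        if acc > best_acc:
            best, best_acc = feat, acc
    return best

train = make_data(1000, clock_works=True)
chosen = best_single_feature(train)        # picks "clock": flawless in training

test = make_data(1000, clock_works=False)  # vandals smash the clock
acc = sum(x[chosen] == y for x, y in test) / len(test)
print(chosen, acc)                         # near coin-flip accuracy at test time
```

Nothing in the learned rule says "clock" is the wrong cue; the only evidence is the collapse after deployment, which is exactly the translation problem the comment describes.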
That is actually very interesting. I would even extend it: sometimes humans know the decision they are making is stupid, or not the right one, but the reward for taking that bad decision is worth it to them. Translating that flawed behavior into an AI must be terribly hard.
Question: why didn't you tell me you're going to a meetup after work?
Provided explanation: because I was so busy I forgot about it.
Real explanation: because I didn't want to have a conversation about it with you.
Smirks, puts on tinfoil hat
"Doesn't look like anything to me?"
Building trust with users is going to be one of the most important product design challenges for all of these new "AI"-fueled products. A self-driving car obviously requires a huge amount of trust from passengers. But even a personal assistant bot needs its users to trust it before they let it reach out to their contacts to schedule meetings or ask basic questions.
There may well be overarching strategies that could be useful in either case, like the transparency into the machine's thinking described by the author.
At some point Sussman expressed how he thought AI was on the wrong track. He explained that most AI directions were not interesting to him, because they were about building up a solid AI foundation and then running the AI system as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fictiony, along the lines of, "If an AI-driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
The article has links to the work he's doing in that area.
One of the most important things for any relationship to work is trust, and the same goes for the human-robot relationship.