
Geoff Hinton Dismissed the Need for Explainable AI - sonabinu
https://www.forbes.com/sites/cognitiveworld/2018/12/20/geoff-hinton-dismissed-the-need-for-explainable-ai-8-experts-explain-why-hes-wrong/
======
gqcwwjtg
Split-brain patients will often have one half of their brain invent
explanations for why the other half did something. The explaining half
believes these explanations to be true. This suggests pretty heavily that
even the decisions humans make aren't as explainable as they seem.

Imo, this whole explainability thing is an incorrect human intuition that
we're imposing on AI. If you want unbiased AI, explainability isn't the way to
do it. Instead judge its performance against explicit criteria for what counts
as bias. If those criteria don't exist or can't be agreed on then the problem
is hopeless from the outset.

~~~
sharemywin
I think it depends on the area. Criminal judgment probably shouldn't be
a "black box": you're going to jail. Why? The machine said so.

Mathematical proofs: my algorithm solved the xyz conjecture. It's true. How? I
don't know, it's a black box.

Also, it can be a problem for diagnosing/debugging. Oh no, there was
a huge mistake and the system just catastrophically failed. How do we fix it?
Don't know. Should we keep using it? Don't know.

Not saying there aren't places where a black box is perfectly fine.

------
dang
This is cribbed from
[https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/](https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/).

