

Explainable AI is an emerging field; I hear about this need especially in NLP and law. We expect to understand how a decision was reached, and we'll never accept a computer-generated decision without an explanation of each logical step. Just dumping the millions of weights of each neuron won't give us that, because we can't reconstruct the decision from those parameters alone.

We know that AI is a bunch of probabilities, weights, and relations in n dimensions. Our rational brain can know that too, but it can't feel it.

That's why you use interpretability tools like LIME.
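The core LIME idea is simple enough to sketch by hand: perturb the input (here, by dropping words), query the black-box model on each perturbation, and fit a weighted linear surrogate whose coefficients act as per-word "explanations". This is a minimal, self-contained sketch of that idea, not the actual `lime` library API; the toy classifier `black_box` is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge

def black_box(texts):
    """Toy classifier (an assumption for illustration):
    P(positive) grows with occurrences of the word 'great'."""
    return np.array([min(1.0, 0.2 + 0.4 * t.split().count("great"))
                     for t in texts])

def lime_text(text, model, n_samples=500, seed=0):
    """LIME-style local explanation for a single text instance."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks: 1 keeps a word, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # include the unperturbed instance
    perturbed = [" ".join(w for w, m in zip(words, row) if m)
                 for row in masks]
    preds = model(perturbed)
    # Weight each sample by closeness to the original
    # (fraction of words kept), so nearby perturbations matter more.
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    # Each coefficient says how much keeping that word pushes the score.
    return dict(zip(words, surrogate.coef_))

scores = lime_text("the movie was great fun", black_box)
# 'great' should receive the largest weight, since the toy model keys on it.
```

The real `lime` package does the same thing with a smarter distance kernel and feature selection, but the coefficients-of-a-local-surrogate mechanism is identical.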

An example of this in practice: https://github.com/Hellisotherpeople/Active-Explainable-Clas...
