
You are certainly right to question the reasoning behind AI decisions. I'm not sure neural-network-based AI can provide that, though.

For reference see an older thread on this very topic: https://news.ycombinator.com/item?id=10388795



> I'm not sure that neural network based AI can do that though.

Generating human-readable explanations from an arbitrary machine-learning algorithm or neural network is probably impossible in full generality.

However, for many particular systems, doing so should be feasible.
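For instance, here is a minimal sketch of what that could look like for one particular system: a hypothetical logistic-regression credit model (the feature names and weights below are invented purely for illustration). Because the score is a weighted sum, each feature's signed contribution is itself a human-readable explanation:

    import numpy as np

    # Hypothetical trained model: names, coefficients, and bias are made up.
    feature_names = ["income", "debt_ratio", "late_payments"]
    weights = np.array([0.8, -1.5, -2.0])
    bias = 0.5

    def explain(x):
        """Return the decision plus each feature's signed contribution to it."""
        contributions = weights * x
        score = contributions.sum() + bias
        decision = "approve" if score > 0 else "deny"
        reasons = [f"{name}: {c:+.2f}"
                   for name, c in zip(feature_names, contributions)]
        return decision, reasons

    decision, reasons = explain(np.array([1.2, 0.4, 2.0]))
    print(decision)            # "deny"
    print("\n".join(reasons))  # largest-magnitude contribution = main driver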


Explaining how neural network models generate their output is possible; see:

http://cs231n.github.io/understanding-cnn/

https://auduno.github.io/2016/06/18/peeking-inside-convnets/

https://courses.cs.washington.edu/courses/cse590v/14au/cse59...
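One of the simplest techniques covered in material like the above is a vanilla gradient saliency map: differentiate the predicted class score with respect to the input pixels and see which pixels move it most. A hedged sketch in PyTorch, with a toy CNN standing in for a real trained model:

    import torch
    import torch.nn as nn

    # Toy CNN used only as a stand-in; any differentiable classifier works.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 10),
    )
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input
    score = model(image)[0].max()  # score of the predicted class
    score.backward()               # d(score)/d(pixel) for every pixel

    # Saliency: gradient magnitude, maxed over color channels.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (32, 32)
    print(saliency.shape)  # bright pixels = most influential on the decision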

It should also be possible to train the NN model itself to explain its decisions, even in plain language.
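As a rough illustration of that idea (a sketch of the concept, not any specific published method), a classifier can be given a second head that scores the importance of each input token and is trained jointly with the classification objective; the highest-weighted tokens then serve as the model's own explanation:

    import torch
    import torch.nn as nn

    class SelfExplainingClassifier(nn.Module):
        def __init__(self, vocab_size=1000, dim=32, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.relevance = nn.Linear(dim, 1)  # per-token importance head
            self.classify = nn.Linear(dim, num_classes)

        def forward(self, tokens):
            h = self.embed(tokens)                      # (batch, seq, dim)
            weights = self.relevance(h).softmax(dim=1)  # (batch, seq, 1)
            pooled = (weights * h).sum(dim=1)           # importance-weighted pool
            return self.classify(pooled), weights.squeeze(-1)

    model = SelfExplainingClassifier()
    tokens = torch.randint(0, 1000, (1, 12))  # placeholder token ids
    logits, importance = model(tokens)
    # The "explanation": the tokens the model itself weighted most heavily.
    print(importance.topk(3).indices)  # positions of the top-3 tokens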

I think the "right to explanation" law is pretty sensible: people should have a right to question algorithmic judgments that impact their lives. It will be somewhat inconvenient for big companies applying ML to make these decisions, but it won't ruin their business.



