
I think it's a little better than that.

An example from class: suppose you are building an ML decider that can stop a production line if it sees defective products. If you are choosing between a decision tree and a neural net, one thing to consider is that with a DT you can look at the tree the model comes up with and say, okay, if the mass of the widget is low, we reject.

With a NN, you can't see why things are rejected in the same way.
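As a rough sketch of what that inspection looks like in practice (scikit-learn here; the feature names and numbers are invented for illustration, not from any real line):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical widget measurements: [mass_grams, width_mm]
    X = [[9.5, 30.1], [10.2, 29.8], [7.1, 30.0],
         [6.8, 29.5], [10.0, 30.2], [6.5, 29.9]]
    y = [0, 0, 1, 1, 0, 1]  # 0 = keep, 1 = reject

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Prints the learned rules as readable if/else splits,
    # e.g. "mass <= 8.30 -> reject" -- the kind of rule you
    # can't read off a neural net's weights.
    print(export_text(tree, feature_names=["mass", "width"]))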

Some tasks benefit from having more explainable models; for others it doesn't matter. But I don't think it's just a buzzword or an attempt to enforce political control.



> But I don't think it's just a buzzword

I don't think it's just a buzzword either, in the same sense that 'AI' is not just a buzzword. But both "AI" and "explainable" are also buzzwords, or at least often used as such.

I have no objection to the example you gave; some models are "obviously" more explainable than others. I'm simply refuting the claim that some models are inherently more explainable, because the entire concept starts falling apart when you take it out of its mathematical context. For example, it's easy to see what a DT does when it's small, but larger DTs are as "unexplainable" as a NN.
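To illustrate that last point (synthetic data, sizes are purely illustrative): an unconstrained tree on even a modest dataset grows to hundreds of nodes, which nobody is going to audit by eye any more than they would a net's weights.

    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    # Modest synthetic dataset; no depth limit on the tree.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    print("nodes:", tree.tree_.node_count)  # typically several hundred
    print("depth:", tree.get_depth())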

> trying to enforce political control

I'm not against political control per se; some political control is well justified. But I find it pretty suspicious that everyone is jumping on this train when there are more glaring issues (for example, if and when a branch of the government or a government-controlled agency decides to use an ML model, it should be required to declare that it is using one and to make the source code public so it can be cross-examined), and my guess is that it's because "explainability" provides a nice narrative, unlike other concerns.



