
> I've accepted that even though the models are pretty much semantically meaningless, they work.

Until they don't, which can easily happen the first time you deploy a model.

My personal view is that (at this moment) ML is mostly correlation detection and pattern recognition, but has little to do with intelligence.




The point is that we don't have the mental capacity to understand this stuff. Nobody has any clue how to interpret millions of dimensions, or some non-linear manifold in that space, and how to translate it into something humans are capable of understanding. These things might be done automatically by our brains at a subconscious level in a similar fashion (or not), but at the conscious level we are completely clueless and basically throw darts to see which ones turn out somewhat useful.

I think you object to the lack of "mathematical beauty", but my point is "who cares?". Not sure why reality should conform to some mental model we find "appealing" for whatever reason. Deep learning is similar to experimental physics.


This.

Explainable AI is an emerging field; I hear about this need especially in NLP and law. We expect to understand how some decision was reached, and we'll never accept a computer-generated decision if it isn't explained how each logical step was taken. And just handing us the millions of weights of each neuron won't give us that, because we won't be able to reach the same decision from those parameters alone.

We know that AI is a bunch of probabilities, weights and relations in n dimensions. Our rational brain can know that too, but can't feel it.


That's why you use interpretability tools like LIME.

An example of this would be here: https://github.com/Hellisotherpeople/Active-Explainable-Clas...
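
A minimal sketch of the LIME idea (not taken from the linked repo; the iris data and random forest are just stand-ins), assuming scikit-learn and the `lime` package are installed:

    # Explain one prediction of a black-box model with LIME.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # Which features pushed this one sample toward class 0?
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4, labels=(0,)
    )
    print(exp.as_list(label=0))  # list of (feature condition, weight) pairs

Instead of handing you millions of weights, it fits a small local surrogate around one prediction and reports a handful of human-readable feature contributions.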


Wait... That's what dimensionality reduction is for. I can interpret 3 PCA dimensions pretty damn well, since I can see how much of the variance in my original dimensions each component explains.
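
A minimal sketch of that with scikit-learn's PCA (the data here is synthetic, just to show the attributes involved):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 50))        # 500 samples, 50 original features

    pca = PCA(n_components=3).fit(X)
    print(pca.explained_variance_ratio_)  # variance explained by each of the 3 components
    print(pca.components_[0])             # loadings: how much each original feature
                                          # contributes to the first component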


Yeah, but your accuracy also drops. You might end up with an interpretable but underwhelming solution instead of a non-interpretable SOTA one.



