
I've worked in a lot of AI-related projects and was around when the AI winter arrived.

These various techniques that currently work by training, either supervised or self-training, can have fatal flaws.

Take, for example, some high-tech camera technology. Use it on a drone to take pictures of warships from thousands of angles. You take pictures of U.S. warships, Russian warships, and Chinese warships. You achieve 100% accuracy in identifying each ship using some Neural Net technology.

One day this net decides that an approaching warship is Chinese and sinks it. But it turns out to be a U.S. warship. Clearly a mistake was made. Deep investigation reveals that what the Neural Network "learned" happened to be related to the area of the ocean, based on sunlight details, rather than the shape of the warship or other features. Since the Chinese ships were photographed in Chinese waters, and the U.S. warship that was sunk was IN those waters at the moment, the Neural Net worked perfectly.

Recognition and action are only part of the intelligence problem. Analysis is also needed.

This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision.
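One simple such technique is occlusion sensitivity: mask out patches of the input and see how much the model's score drops. Here is a minimal sketch with a hypothetical stand-in "model" (a fixed weight map rather than a trained network) so the effect is easy to see:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: it scores an image with a
# fixed weight map that only "looks at" the right half of the image.
weights = np.zeros((8, 8))
weights[:, 4:] = 1.0

def score(image):
    return float((image * weights).sum())

image = rng.random((8, 8))
base = score(image)

# Occlusion sensitivity: zero out each 2x2 patch and record the score drop.
saliency = np.zeros((8, 8))
for i in range(0, 8, 2):
    for j in range(0, 8, 2):
        occluded = image.copy()
        occluded[i:i + 2, j:j + 2] = 0.0
        saliency[i:i + 2, j:j + 2] = base - score(occluded)

# Patches the model actually uses (right half) dominate the saliency map.
print(saliency[:, :4].sum(), saliency[:, 4:].sum())
```

In the warship example, the same procedure would reveal that occluding the water changes the prediction more than occluding the ship.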

Another solution would be to use autoencoders or GANs to create a latent code from the input image. By construction, these codes need to carry the most important features about the input, because otherwise they couldn't reconstruct it.
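The simplest instance of this is a linear autoencoder, which has a closed-form solution via the SVD (equivalent to PCA; a nonlinear autoencoder would learn the encoder/decoder by gradient descent instead). A sketch on toy data that really lives on a low-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 samples of 10-D data generated from a 2-D latent factor plus small
# noise, so a 2-D latent code can capture almost all of the structure.
latent_true = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 10))
X = latent_true @ mixing + 0.01 * rng.standard_normal((200, 10))
X -= X.mean(axis=0)

# Linear autoencoder in closed form: encoder/decoder are the top-k
# right singular vectors of the data matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
encode = lambda x: x @ Vt[:k].T   # 10-D input  -> 2-D latent code
decode = lambda z: z @ Vt[:k]     # 2-D code    -> 10-D reconstruction

Z = encode(X)
X_hat = decode(Z)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error from 2-D codes: {err:.4f}")
```

The tiny reconstruction error shows the 2-D codes carry essentially all the information in the 10-D input, which is the "by construction" argument above.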

And regarding analysis - a lot of groups are attempting the leap from mapping "X -> y" to reasoning based on typed entities and relations. Reasoning would be more like a simulator coupled with an MCMC system that tries out various scenarios in its 'imagination' before acting.

There are many formulations: relational neural nets, graph-based convolutional networks, physical simulators based on neural nets, text reasoning tasks based on multiple attention heads and/or memory. It's very exciting, we're closing in on reasoning.
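The "try out scenarios in imagination before acting" idea can be sketched crudely with random-shooting Monte Carlo planning over a toy simulator. (This is a stand-in for illustration - a real system would use a learned model and something closer to MCMC over scenario space.)

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "simulator": the state is a position on a line, the goal is 5.0.
def step(state, action):
    return state + action + 0.1 * rng.standard_normal()

def reward(state):
    return -abs(state - 5.0)

# Before acting, imagine many random action sequences in the simulator
# and commit only to the first action of the best imagined sequence.
def plan(state, horizon=5, n_rollouts=200):
    best_return, best_action = -np.inf, 0.0
    for _ in range(n_rollouts):
        actions = rng.uniform(-1, 1, horizon)
        s, total = state, 0.0
        for a in actions:
            s = step(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

# Act greedily using imagined rollouts; the agent moves toward the goal.
state = 0.0
for _ in range(10):
    state = step(state, plan(state))
print(f"final position (goal is 5.0): {state:.2f}")
```

The key property is that all the trial-and-error happens inside the simulator, not in the world - which is exactly what you want before, say, firing on a warship.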

> This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision.

That's a new area of research, actually.

New areas of research become "well studied" in a year or two in AI. GANs are considered both new and well studied, for example.

Machine Learning works on a different timescale from everything else.

> Machine Learning works on a different timescale from everything else.

To piggyback on your comment: I think this is the real major cause for concern. AI disruption occurs at an exponentially higher rate than human learning rates, which affects things like career transitions.

No one may have been ill-intentioned when they applied AI and invented a way to eliminate anyone having to manually do job X, but nonetheless all those who do job X are now stuck, and even if they retrain (optimistically in 2-3 years) it will in all likelihood disrupt them again.

As technologists we may be biased: we haven't had our careers significantly disrupted by technological progress, so we can't fully see what it's like for those who aren't riding the wave but being swamped by it.

"This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision."

Would you mind providing some reference here? I am interested and not familiar with any such way.

While I wouldn't say that the problem has been "well studied", the research community has been paying attention, and some progress has been made - most notably, as pointed out by benjaminjackman, LIME [1]. Roughly speaking, LIME learns a locally approximate model which can be interpreted. It will work with any black-box model, not just neural networks.

[1] https://arxiv.org/abs/1602.04938
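The core recipe behind LIME can be sketched in a few lines (this is an illustration of the idea, not the lime library's actual API): perturb the input around the instance you want explained, weight the perturbed samples by proximity, and fit an interpretable weighted linear surrogate whose coefficients are the explanation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Black-box model: nonlinear overall, but roughly linear near any point.
def black_box(X):
    return np.tanh(3 * X[:, 0]) + 0.1 * X[:, 1] ** 2

x0 = np.array([0.1, 0.5])  # the instance to explain

# 1. Sample perturbations around x0 and query the black box.
samples = x0 + 0.3 * rng.standard_normal((500, 2))
preds = black_box(samples)

# 2. Weight samples by proximity to x0 (Gaussian kernel).
dists = np.linalg.norm(samples - x0, axis=1)
w = np.exp(-(dists ** 2) / (2 * 0.3 ** 2))

# 3. Fit a weighted linear surrogate; its coefficients are the explanation.
A = np.hstack([samples, np.ones((500, 1))])  # features + intercept
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], preds * sw, rcond=None)

print("local feature importances:", coef[:2])
```

Near x0 the model is dominated by feature 0, and the surrogate's coefficients reflect that - which is the kind of signal that would expose an "ocean, not ship" classifier.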

DARPA recently launched a new program to do just that - force ML models to "explain" their predictions. It's called "Explainable Artificial Intelligence" and the goal is to increase trust in autonomous systems.

[1] https://www.darpa.mil/program/explainable-artificial-intelli...

Check out LIME: https://github.com/marcotcr/lime - presumably in this example, when reviewing the test dataset, you'd see the ocean light up more than the warships.

E.g. here is an example of a wolf (ship) detection system that is actually detecting snow (area of the ocean): https://youtu.be/hUnRCxnydCc?t=58

In that scenario, sinking a Chinese ship in Chinese waters is still very problematic.

Not quite. Another popular example is a CNN learning to distinguish wolves from non-wolves: because the wolf was always photographed on snow and the other animals weren't, it just learnt to distinguish snow vs. non-snow instead of wolf vs. non-wolf. It then failed catastrophically when you put a non-wolf on snow.

There's no leakage in either case. The model is learning one thing and you think it's learning another.

No, it's called insufficient feature engineering. Data leakage is when your test data contaminates your training data.

Actually, this is a clear case of overfitting leading to a high false-positive rate, resulting in the misidentification of a US ship as Chinese.

I don't think that's a clear case of overfitting. You could have used a subset of the original data for training and the rest for validation and it would have generalised pretty well.

It doesn't generalise when the US ship is in Chinese waters, but that's because the system was never "learning" to recognize ships in the first place.
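This failure mode is easy to demonstrate on toy data: a model that latches onto a spurious "water region" feature passes a random train/validation split with flying colors, then breaks under distribution shift. A sketch with hypothetical features (feature 0 = noisy hull shape, feature 1 = clean water region):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_data(n, region_matches_hull=True):
    # Label/feature 0: hull type (0 = US, 1 = Chinese), observed noisily.
    # Feature 1: water region (0 = US waters, 1 = Chinese waters), observed
    # cleanly - a spuriously "better" signal than the hull itself.
    hull = rng.integers(0, 2, n)
    region = hull if region_matches_hull else 1 - hull
    X = np.stack([hull + 0.5 * rng.standard_normal(n),
                  region + 0.1 * rng.standard_normal(n)], axis=1)
    return X, hull

X, y = make_data(1000)
X_train, y_train = X[:800], y[:800]
X_val, y_val = X[800:], y[800:]

# Simple least-squares linear classifier.
A = np.hstack([X_train, np.ones((800, 1))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
predict = lambda X: (np.hstack([X, np.ones((len(X), 1))]) @ w) > 0.5

val_acc = (predict(X_val) == y_val.astype(bool)).mean()

# Deployment shift: US hulls now appear in Chinese waters and vice versa.
X_shift, y_shift = make_data(1000, region_matches_hull=False)
shift_acc = (predict(X_shift) == y_shift.astype(bool)).mean()

print(f"held-out accuracy: {val_acc:.2f}, shifted-waters accuracy: {shift_acc:.2f}")
```

The held-out accuracy looks excellent because the validation set has the same hull/region correlation as the training set; only data from outside that correlation reveals what was actually learned.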

In your example, was the neural net trained with examples of US ships in Chinese waters?

I assume it would have to be trained with existing material, which would mostly consist of US ships in US waters and Chinese ships in Chinese waters.

You are implying that the problem would have been in the set that was used as input, but my understanding is that in many cases you would only realize that mistake when it's already too late.
