These various techniques, which currently work by training (either supervised training or self-training),
can have fatal flaws.
Take, for example, some high-tech camera technology. Mount it on a drone and photograph warships from thousands of angles: U.S. warships, Russian warships, and Chinese warships. Using some neural-net technology, you achieve 100% accuracy in identifying each ship.
One day this net decides that an approaching warship is Chinese and sinks it. But it turns out to be a U.S. warship. Clearly a mistake was made. Deep investigation reveals that what the neural network "learned" happened to be the area of the ocean, inferred from sunlight details, rather than the shape of the warship or other features. Since the Chinese ships were photographed in Chinese waters, and the U.S. warship that was sunk was in those waters at the moment, the neural net worked perfectly.
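This kind of shortcut learning is easy to reproduce in miniature. Here's a toy sketch (my own construction, not anyone's actual system): a logistic-regression "classifier" trained on two hypothetical features, where "shape" carries no signal at all and "water brightness" perfectly tracks the label. The model happily latches onto the brightness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical training set: "shape" is pure noise here, while "water
# brightness" perfectly tracks the label -- all Chinese ships (label 1)
# were photographed in bright Chinese waters, U.S. ships (0) elsewhere.
shape = rng.normal(0.0, 1.0, n)
label = (rng.random(n) < 0.5).astype(float)
brightness = label + rng.normal(0.0, 0.05, n)

X = np.column_stack([shape, brightness])
w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression by gradient descent.
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - label) / n
    grad_b = np.mean(p - label)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# A "U.S. ship" (arbitrary shape) sailing in bright Chinese waters:
us_ship_in_chinese_waters = np.array([rng.normal(), 1.0])
p_chinese = sigmoid(us_ship_in_chinese_waters @ w + b)
print(f"P(Chinese) = {p_chinese:.2f}")  # confidently "Chinese": the model learned the water, not the ship
```

The model achieves perfect training accuracy and still sinks the wrong ship, because nothing in the training objective ever required it to look at the hull.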
Recognition and action are only part of the intelligence problem. Analysis is also needed.
Another solution would be to use autoencoders or GANs to derive a latent code from the input image. By construction, these codes have to carry the most important features of the input, because otherwise the network couldn't reconstruct it.
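As a minimal sketch of that idea, here's a linear autoencoder in plain numpy (a stand-in for the deep versions the comment means): 5-D inputs that secretly live near a 2-D subspace get squeezed through a 2-D latent code, and reconstruction only works because the code captures the dominant structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 5-D inputs that secretly lie near a 2-D subspace,
# so a 2-D latent code *can* carry the important structure.
basis = rng.normal(size=(2, 5))
Z_true = rng.normal(size=(500, 2))
X = Z_true @ basis + 0.01 * rng.normal(size=(500, 5))

# Linear autoencoder: encode 5 -> 2, decode 2 -> 5.
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))

def mse(A, B):
    return float(np.mean((A - B) ** 2))

err_before = mse(X, (X @ W_enc) @ W_dec)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc          # latent codes
    X_hat = Z @ W_dec      # reconstruction
    G = 2.0 * (X_hat - X) / X.shape[0]
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

err_after = mse(X, (X @ W_enc) @ W_dec)
print(f"reconstruction MSE: {err_before:.3f} -> {err_after:.3f}")
```

The caveat, given the warship story above, is that "most important for reconstruction" still isn't the same as "most important for the downstream decision": a latent code can faithfully encode the water and the lighting too.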
And regarding analysis: a lot of groups are attempting the leap from mapping "X -> y" to reasoning over typed entities and relations. Reasoning would look more like a simulator coupled with an MCMC system that tries out various scenarios in its 'imagination' before acting them out.
There are many formulations: relational neural nets, graph-based convolutional networks, physical simulators based on neural nets, text-reasoning tasks based on multiple attention heads and/or memory. It's very exciting; we're closing in on reasoning.
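The "simulator plus search in imagination" idea can be sketched very simply. This toy (entirely hypothetical, and using plain random-shooting sampling rather than a full MCMC) rolls candidate action sequences through a dynamics model, scores them in imagination, and only then acts in the "real" world:

```python
import numpy as np

rng = np.random.default_rng(2)

GOAL = 10.0

def model(state, action):
    """Hypothetical learned dynamics model: a 1-D point that moves by `action`."""
    return state + action

def imagined_return(state, actions):
    """Roll a candidate plan through the model, purely in 'imagination'."""
    total = 0.0
    for a in actions:
        state = model(state, a)
        total -= abs(GOAL - state)   # reward: negative distance to goal
    return total

def plan(state, horizon=5, n_candidates=300):
    """Sample candidate plans, score each in imagination, keep the best."""
    candidates = rng.uniform(-2.0, 2.0, size=(n_candidates, horizon))
    scores = [imagined_return(state, c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Act in the real world: re-plan each step and execute only the
# first action of the best imagined plan (MPC-style).
state = 0.0
for _ in range(15):
    state = model(state, plan(state)[0])

print(f"final state: {state:.2f} (goal {GOAL})")
```

Swap the uniform sampling for an MCMC proposal over action sequences and the toy `model` for a learned neural simulator, and you have the shape of the systems those groups are building.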
That's a new area of research, actually.
Machine Learning works on a different timescale from everything else.
To piggyback on your comment: I think this is the real major cause for concern. AI disruption occurs at an exponentially higher rate than human learning rates, which hits things like career transitions hard.
No one may have been ill-intentioned when they applied AI and invented a way to eliminate anyone having to manually do job X, but nonetheless all those who do job X are now stuck, and even if they retrain (optimistically, in 2-3 years) it will in all likelihood disrupt them again.
As technologists, maybe we face a bias: we have not been significantly career-disrupted by technological progress, so we can't fully see what it's like for those who aren't riding the wave but being swamped by it.
Would you mind providing some reference here? I am interested and not familiar with any such way.
E.g. Here is an example of a wolf (ship) detection system that is actually detecting snow (area of the ocean) https://youtu.be/hUnRCxnydCc?t=58
There's no leakage in either case. The model is learning one thing and you think it's learning another.
It doesn't generalise when the US ship is in Chinese waters, but that's because the system was never "learning" to recognize ships in the first place.
You are implying that the problem would have been in the data set used as input, but my understanding is that in many cases you would only realize that mistake when it's already too late.