- Search engines use algorithms, not neural nets.
- The most popular algorithm on Kaggle (data-analysis competitions) is random forests.
- Google's self-driving car uses statistical-based methods
I can't imagine commercial aircraft would use a neural net. What would happen if one crashed? They would analyze the data and ask questions like, Q: "What happened?" A: "I don't know." Q: "Can we fix it so it doesn't happen again?" A: "I don't know."
Definitely image recognition: http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf
Speech recognition: http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf
Natural language processing: http://www.socher.org/index.php/DeepLearningTutorial/DeepLea..., http://aclweb.org/anthology/N/N13/N13-1090.pdf
If you're into kaggle competitions: http://blog.kaggle.com/2012/11/01/deep-learning-how-i-did-it...
I don't think there are going to be any further major advances in, e.g., SVMs or random forests (famous last words, maybe...). Neural nets, on the other hand, are just scratching the surface of what's possible. So right now they are state of the art in some historically very difficult areas. But these are still early days.
As to the GP: Geoff Hinton (probably the best-known neural-networks researcher) said in his Coursera course that neural networks thrive on problems with a lot of structure to encode, while simpler models like SVMs or Gaussian processes might be better for problems without as much deep structure to discover.
Also, a lot of current neural-network research involves using neural networks to learn better representations of data. These cleaner representations (which can be thought of as a sort of semantic PCA) often make classification far easier, which explains the great results. Learning representations also makes transfer learning (transferring knowledge from one domain to another) much easier/more feasible.
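To make "learning a representation" concrete, here's a minimal numpy sketch (toy data and layer sizes of my own invention, not from any of the papers above): a tiny autoencoder compresses 10-D inputs, which secretly live on a 2-D manifold, down to a 2-D code that a simpler classifier could then use as features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples in 10 dimensions that really live on a 2-D manifold.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
X = np.tanh(latent @ mixing)  # shape (100, 10)

# One-hidden-layer autoencoder, 10 -> 2 -> 10, trained by plain gradient descent.
W1 = rng.normal(scale=0.1, size=(10, 2))  # encoder
W2 = rng.normal(scale=0.1, size=(2, 10))  # linear decoder
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1)       # the learned 2-D "representation"
    X_hat = H @ W2            # reconstruction of the input
    err = X_hat - X
    # Backprop through decoder, then encoder.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

codes = np.tanh(X @ W1)  # 2-D features for a downstream classifier
```

The point is just that the hidden layer ends up being a compact, cleaner description of the data; real representation-learning setups are of course far deeper and trained on far more data.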
Recent papers show that NNs are coming back, but I think most speech recognition out there is still Hidden Markov Models, and most image recognition is definitely based on hand-tailored detectors & descriptors. This is especially true when you run these things on users' mobile devices.
We already do. The pilot and co-pilot(s). Though I get what you're saying, I just find it humorous that we would worry about it in such manner.
AI neural nets aren't quite there yet. But we attribute that great unknown to the vague catch-all "human error" all the time. I suspect one day we will simply attribute AI screw-ups to "computer error" or something like that.
Search engines could use neural networks (and might get better search results), but the relevant research is only a few years old (i.e., 3-4 years), and search engines are much older than that.
As for the commercial-aircraft point: I'm trying to find a YouTube video I've seen about this. It shows a plane simulation where the plane has lost both wing flaps (or some other pair of important control surfaces); a person was unable to stabilize the plane, but a neural network could. This doesn't mean a neural network should be used for autopilot all the time, but they can be useful in certain situations.
Edit: found the video, https://www.youtube.com/watch?v=aObBHXsc_iw&t=3m50s
The system is fed the actual transcripts for some videos, so it can learn over time and minimize error.
The overall system is incredibly useful, as you can search a TV channel's video stream for a specific keyword.
I have also used self-organizing maps before for building a recommendation engine.
I think it works better than typical recommendation engines because you can feed the network features based on multiple criteria (in addition to taxonomies, you can also feed in user geographic location, click patterns, etc.).
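A self-organizing map is simple enough to sketch in a few lines of numpy. This is a toy version with made-up dimensions and grid size (not the actual engine described above): user feature vectors get mapped onto a 4x4 grid, and users landing on the same or nearby units can be treated as similar for recommendations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "user" vectors: imagine taxonomy weights + location + click stats, 5-D.
users = rng.random((200, 5))

# 4x4 self-organizing map: 16 units, each with a 5-D weight vector.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
weights = rng.random((16, 5))

lr, sigma, steps = 0.5, 1.5, 1000
for t in range(steps):
    x = users[rng.integers(len(users))]
    # Best-matching unit: closest weight vector to this sample.
    bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
    # Neighbourhood function: units near the BMU on the grid move more.
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    decay = 1 - t / steps  # learning rate decays over time
    weights += (lr * decay) * h[:, None] * (x - weights)

def unit_of(x):
    """Map a feature vector to its grid unit; same unit => similar users."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

Real SOM libraries also shrink the neighbourhood radius over time; this sketch keeps it fixed for brevity.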
The thing is that (at least several kinds of) neural networks and support vector machines are equivalent in their operation. SVMs themselves seem to be something like a general form of binary regression (a clever way to separate two complex sets with a complex curve in a high-dimensional space). So the choice of a neural network, a support vector machine, or some other statistical device seems to come down to which problem is easier to formulate how, which tricks are available for which problem, etc.
For example: http://blenny.ncl.ac.uk/peter.andras/PAnpl2002.pdf (that's what I googled just now; I think there are results on "deep" networks as well).
It's just a particular way of describing a mixture of nonlinear regression models. There are some nice, although very abstract analogies to biology, which is both a blessing and a curse.
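The "nonlinear regression" framing is easy to demonstrate: here's a minimal numpy sketch (toy target and layer sizes of my own choosing) where a one-hidden-layer net fit to sin(x) by gradient descent is literally a learned weighted sum of tanh basis functions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonlinear regression target: y = sin(x) on [-3, 3].
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 10 tanh units; the output is a sum of 10 "basis" curves.
W1, b1 = rng.normal(size=(1, 10)), np.zeros(10)
W2, b2 = rng.normal(scale=0.1, size=(10, 1)), np.zeros(1)

lr = 0.05
for _ in range(10000):
    H = np.tanh(x @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    # Backprop: output layer, then hidden layer.
    dW2 = H.T @ err / len(x); db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = x.T @ dH / len(x); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Nothing mystical happens here: it's least-squares curve fitting where the basis functions themselves are adjustable, which is exactly the "mixture of nonlinear regression models" view.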
"...there's nothing mystical about neural networks", yes, there's nothing "not statistical" in the reality of the construct (the NN classes are often given without reference to stat).
But in the way people present or think about them, there is a tendency for them to be seen or used as "black boxes" (per the GP) or mystical constructs.
A NN might indeed be a good fit for a given class of problems, and its mechanism might be revealing for those problems. But the external impression that what they do is a "thinking process" is kind of a weakness for the field.
Here is a test with a model of a robotic arm:
One thing I've noticed though, is that img 10 of chapter 1 is missing.
I think that's so amazingly awesome, that it can evolve as a living document in this way.
(Rereading my original comment, I think I described what I was seeing incorrectly, but this new reply correctly explains what I was originally seeing.)