That paper, discussing the Schmidhuber team's work on MNIST and traffic sign recognition, dates to 2012 - the same time that Hinton's team (Alex Krizhevsky, Ilya Sutskever - now of OpenAI) was working on ImageNet, about to win that year's ImageNet competition.
The scale of ImageNet (1000 categories, 1.2 million 256x256+ images) is far more demanding than something like MNIST (10 categories, 60K low-res 28x28 images), both in terms of compute power and network architecture. Remember this was done before cuDNN or any NN framework software existed, and the advance of AlexNet over what had come before, such as Schmidhuber's DanNet (which the AlexNet paper cites), was exactly in all the architectural tweaks and optimizations (incl. handwritten GPU kernels) needed to get a CNN to work at this scale.
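For a rough sense of the gap, here's a back-of-envelope comparison of raw pixel counts - just a sketch using the commonly cited image counts and sizes (28x28 grayscale vs 256x256 RGB), not exact on-disk figures:

    # Back-of-envelope: raw training-data scale, MNIST vs ImageNet (ILSVRC-2012).
    # Image counts/sizes are the commonly cited ones, not exact byte counts.
    mnist_px    = 60_000 * 28 * 28 * 1        # 60K grayscale images -> ~47 Mpx
    imagenet_px = 1_200_000 * 256 * 256 * 3   # ~1.2M RGB images     -> ~236 Gpx
    print(f"raw data ratio: ~{imagenet_px / mnist_px:,.0f}x")  # ~5,000x

That's roughly three and a half orders of magnitude more raw data, before even counting the much deeper network and larger parameter count.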
The introduction to the AlexNet paper clearly sets out what prior work existed and what their own contributions were in taking it to this level.
https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6...
There's nothing wrong with Schmidhuber expecting prior work to get recognition, but his own work built on that of others, just as later work built on his. I'm sure he'd have loved to enter ImageNet in 2012, but Hinton's team beat him to it and opened everyone's eyes to the possibility of training neural nets at that scale.