Deep neural networks aren't aware of changes in their own performance, because they cannot remember their past performance or compare it to their current performance.
Thus humans, at the very least, cannot be a pure DNN ML algorithm.
RNNs have error propagation through time, but the changes to connection weights aren't represented as neuron activations and therefore aren't accessible to the network itself. That is, RNNs aren't aware that they learn either.
Thus humans cannot be a pure RNN ML algorithm.
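To make this concrete, here is a minimal sketch of a single RNN training step (assuming PyTorch; all names and sizes are illustrative, not taken from anything above). The weight update computed by backpropagation through time lives entirely in the training harness, and neither the loss history nor the weight deltas ever appear as activations the network could read:

```python
import torch
import torch.nn as nn

# Toy RNN and readout; dimensions are arbitrary illustrative choices.
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 1)
optimizer = torch.optim.SGD(
    list(rnn.parameters()) + list(readout.parameters()), lr=0.01
)
loss_fn = nn.MSELoss()

x = torch.randn(4, 10, 8)   # (batch, time, features): random toy data
y = torch.randn(4, 1)

out, h_n = rnn(x)                # activations: the only thing the network "sees"
pred = readout(out[:, -1, :])    # prediction from the last time step
loss = loss_fn(pred, y)

loss.backward()       # backpropagation through time happens here...
optimizer.step()      # ...and the weight change is applied here, in the harness.
optimizer.zero_grad()
# Neither the loss nor the weight deltas are fed back in as inputs or
# activations on the next step, which is the point being made above.
```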
Naturally, humans implement some learning algorithm. The question is which algorithms are capable of which aspects of thinking.
> Deep neural networks aren't aware of changes in their own performance, because they cannot remember their past performance or compare it to their current performance.
What are you talking about? Awareness of the changes is the point of such techniques as momentum or early stopping.
The part of the system which monitors these changes (well, it mostly monitors a single number, the performance on a test set) isn't represented in the network itself.
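For instance, a typical early-stopping loop looks roughly like the sketch below (train_one_epoch(), evaluate(), model, train_loader and val_loader are hypothetical placeholders for an ordinary training harness, not code from this thread). The part that notices the plateau is a few plain Python variables in the harness, not anything the network represents:

```python
# Rough early-stopping sketch; train_one_epoch() and evaluate() are
# hypothetical helpers standing in for an ordinary training harness.
best_val_loss = float("inf")
patience, bad_epochs = 5, 0

for epoch in range(100):
    train_one_epoch(model, train_loader)     # updates the weights
    val_loss = evaluate(model, val_loader)   # measures performance

    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            # The "awareness" that learning has plateaued lives in these
            # few harness variables, not in the network's activations.
            break
```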
You can train a dialogue NLP system as much as you wish, but you'll never get answers to the question "Have you learned this training set already?" that correlate with whether performance on the test set has plateaued.
I wouldn't call it awareness. We are certainly aware that one part of the system monitors another part; the system itself isn't.
I'm still not following you. What does the question "have you learned this training set already?" even mean? You measure training set accuracy and compare it against the corresponding test set accuracy. It's trivial to add code that monitors this gap to your model. Then you can even have the model make adjustments to the learning process based on what it observes (e.g. deploy more regularization to reduce overfitting, etc).
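Something along these lines, roughly (a sketch assuming PyTorch; accuracy() and train_one_epoch() are hypothetical helpers, and the gap threshold and the weight-decay bump are arbitrary illustrative choices):

```python
# Harness that watches the train/validation accuracy gap and reacts to
# overfitting. accuracy() and train_one_epoch() are hypothetical helpers;
# the 0.05 threshold and the 1e-4 increment are arbitrary.
for epoch in range(100):
    train_one_epoch(model, train_loader)
    train_acc = accuracy(model, train_loader)
    val_acc = accuracy(model, val_loader)

    if train_acc - val_acc > 0.05:               # overfitting gap detected
        for group in optimizer.param_groups:     # standard PyTorch optimizer knob
            group["weight_decay"] += 1e-4        # "deploy more regularization"
```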
I was trying to show that humans can't be relying purely on certain kinds of ML algorithms, because those algorithms can't exhibit properties that we observe in our own thought processes.
> It's trivial to add the code that monitors this correlation to your model.
Yes, but as long as that code isn't there, we can't say that the model is aware of its own learning process. The question serves as an experimental test of a system's awareness of its own learning.
All in all, I tried to make poorly defined terms like "awareness" and "the way humans think" a little bit more technical.