If you're referring to backprop, no. It's an open research question to characterize what/how a neural net is learning during backprop (aside from trivial statements like "optimizing a nonconvex function" etc.)
No, I never studied neural nets, so backprop is a fuzzy notion to me. I take it to be a feedback loop that drives convergence; am I completely off?
By backward I meant the images produced by the DeepDream program. It feeds 'dog' signals backward (hence the 'back') into the image, fractally. That's a pretty amazing primitive to me: recognizing 'concepts' even through that much distortion.
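To make that concrete, here's a minimal sketch of the core DeepDream loop: gradient ascent on the *input* pixels to make a chosen activation fire harder, while the weights stay frozen. The real thing uses a deep conv net (Inception); the linear "dog detector" below is a hypothetical stand-in just to show the mechanic.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # frozen weights of a toy "dog detector"
x = rng.normal(size=16)      # the "image" we will distort

def activation(img):
    # how strongly the detector fires on this input
    return float(w @ img)

before = activation(x)
for _ in range(50):
    grad = w                 # d(w.x)/dx = w for a linear unit
    x += 0.1 * grad          # ascend: change the image, not the weights
after = activation(x)
```

Each step nudges the image toward whatever the detector responds to, so `after > before`; in a real conv net the same loop, run at multiple scales, is what grows those fractal dog faces.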