
If you're referring to backprop, no. It's an open research question to characterize what and how a neural net learns during backprop (beyond trivial statements like "it's optimizing a nonconvex function").
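The mechanics themselves are simple, though: backprop is just the chain-rule gradient step inside a training loop. A minimal numpy sketch of a two-layer net (toy data and layer sizes are my own arbitrary choices, not any framework's real API):

    import numpy as np

    # Two-layer net trained by backprop: forward pass -> loss ->
    # chain-rule gradients flowing backward -> weight update,
    # repeated until the loss (hopefully) converges.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # toy inputs
    y = np.sin(X @ np.array([1.0, -2.0, 0.5]))     # toy targets

    W1 = rng.normal(scale=0.5, size=(3, 8))
    W2 = rng.normal(scale=0.5, size=(8, 1))
    lr = 0.05
    for step in range(1000):
        h = np.tanh(X @ W1)                        # forward: hidden layer
        pred = h @ W2                              # forward: output
        err = pred - y[:, None]                    # dLoss/dpred for squared error
        gW2 = h.T @ err / len(X)                   # backward: output weights
        gh = err @ W2.T * (1 - h**2)               # backward: through tanh
        gW1 = X.T @ gh / len(X)                    # backward: input weights
        W2 -= lr * gW2                             # update step
        W1 -= lr * gW1

What's open is characterizing what the net has actually learned from all that, not how the updates are computed.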


No, I never studied NNs, so backprop is a fuzzy notion to me. I take it as a convergence feedback loop; am I completely off?

By "backward" I meant the images produced by the DeepDream program. It feeds 'dog' signals back into the image (hence the "back"), fractally. That's a pretty amazing primitive to me: trying to recognize 'concepts' even through that much distortion.
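If it helps, the core of that trick is gradient ascent on the input image rather than on the weights: pick a layer, then change the pixels to amplify its activations. A sketch of the idea in PyTorch (the VGG16 model, layer cut-off, and step count here are arbitrary choices of mine, not what DeepDream actually used):

    import torch
    from torchvision import models

    # DeepDream-style loop: the weights stay frozen and the *image*
    # is optimized so a chosen layer's activations get louder.
    model = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()
    for p in model.parameters():
        p.requires_grad_(False)        # only the image changes

    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise, or a photo
    opt = torch.optim.Adam([img], lr=0.05)
    for step in range(100):
        opt.zero_grad()
        acts = model(img)
        (-acts.norm()).backward()      # gradient *ascent*: amplify activations
        opt.step()
        with torch.no_grad():
            img.clamp_(0, 1)           # keep pixel values in a valid range

Iterating this, often across several image scales, is what produces the fractal dog-face imagery.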


The proof of the pudding is in the eating: whether it is useful.


People aren't useful. Hammers are useful. It seems hard to go from useful -> intelligent, and hard to go from intelligent -> conscious.



