
I guess you could say I don't know RWP from Adam! :D

My og comment wasn't meant to accurately explain gradient-based optimization; I was just expressing a sentiment, not especially aimed at experts and not especially requiring details.

Though I'm afraid I subjected you to the same "cringe" I experience when I read pop sci/tech articles describing deep learning optimization as "the algorithm" being "rewarded" or "punished," haha.




No worries, we're all friends here!

It's just that you happened to accidentally describe the idea behind RWP (random weight perturbation), which is a gradient-free optimization method, so I thought I should point it out.
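
For anyone curious, here's a minimal sketch of the idea, assuming RWP means random weight perturbation; the toy loss function, sigma, and step count are made up for illustration, not from any particular paper:

    import numpy as np

    def loss(w):
        # Toy objective: squared distance to a fixed target vector.
        target = np.array([1.0, -2.0, 0.5])
        return np.sum((w - target) ** 2)

    def rwp_optimize(w, steps=1000, sigma=0.1, seed=0):
        """Hill-climb by random weight perturbation: sample Gaussian
        noise, keep the perturbed weights only if the loss improves.
        No gradients are ever computed."""
        rng = np.random.default_rng(seed)
        best = loss(w)
        for _ in range(steps):
            candidate = w + sigma * rng.standard_normal(w.shape)
            c_loss = loss(candidate)
            if c_loss < best:
                w, best = candidate, c_loss
        return w, best

    w, final_loss = rwp_optimize(np.zeros(3))
    print(w, final_loss)

No backprop anywhere, which is exactly why it reads like "try random tweaks, keep what works."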



