Hacker News

I think that's the reason it's a dead end.

However, if this really is the biological analogue of credit assignment, it might scale better than training LLMs from scratch every time. Even if, say, it could only approximate the gradients to a certain degree for a new network, normal backprop could then fine-tune for a few epochs, dramatically reducing overall training cost.
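The two-phase idea above can be sketched on a toy problem. This is a minimal illustration, not any specific biological algorithm: the "approximate credit assignment" here is just a hypothetical stand-in (the exact gradient corrupted with noise), used for cheap pretraining before a few exact gradient steps finish the job.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: recover w_true from linear data.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def loss(w):
    r = X @ w - y
    return float(r @ r) / len(y)

def exact_grad(w):
    # The "backprop" gradient: exact for this quadratic loss.
    return 2 * X.T @ (X @ w - y) / len(y)

def approx_grad(w, rel_noise=0.5):
    # Hypothetical stand-in for a cheap/biological gradient estimate:
    # the true gradient plus noise at rel_noise of its magnitude.
    g = exact_grad(w)
    return g + rel_noise * np.linalg.norm(g) * rng.normal(size=g.shape) / np.sqrt(g.size)

w = np.zeros(10)

# Phase 1: pretrain with the approximate gradient only.
for _ in range(200):
    w -= 0.05 * approx_grad(w)
pre_loss = loss(w)

# Phase 2: a few steps of exact gradient descent to fine-tune.
for _ in range(20):
    w -= 0.05 * exact_grad(w)

print(pre_loss, loss(w))
```

The point of the sketch is only the division of labor: the noisy estimator does most of the descent, and exact gradients close the remaining gap in a handful of steps.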



