The adapter and LoRA have drastically fewer parameters, so one might expect forward + backward to be roughly 2x the cost of the forward pass alone: with the base weights frozen, the backward pass skips the weight-gradient computation for them and mostly just propagates activation gradients, which costs about as much as another forward pass.

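A back-of-the-envelope sketch of the parameter gap (the sizes are made up: a 4096-wide linear layer with rank-8 LoRA factors), which is why the remaining weight-gradient work is negligible:

    import torch
    import torch.nn as nn

    d_model, rank = 4096, 8                        # hypothetical sizes

    base = nn.Linear(d_model, d_model, bias=False)
    base.weight.requires_grad_(False)              # frozen pretrained weight

    # LoRA adds a low-rank update B @ A; only A and B receive gradients.
    lora_A = nn.Parameter(torch.randn(rank, d_model) * 0.01)
    lora_B = nn.Parameter(torch.zeros(d_model, rank))

    full_params = base.weight.numel()               # 4096 * 4096 ~= 16.8M
    lora_params = lora_A.numel() + lora_B.numel()   # 2 * 8 * 4096 = 65,536
    print(full_params // lora_params)               # ~256x fewer trainable params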
Then (as far as I know), in contrast to generation, training computes the loss over all token positions of the full input in a single forward pass rather than serially, token by token (in the RNN days this was called teacher forcing), so that can give you a significant boost in tokens per second over generation.

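To make the contrast concrete, here is a toy sketch; `model` stands in for any causal LM that maps (batch, seq_len) token ids to (batch, seq_len, vocab) logits, not a specific library API:

    import torch
    import torch.nn.functional as F

    def training_step(model, input_ids):
        # Teacher forcing: one forward pass scores every position at once,
        # each position t predicted from the ground-truth tokens 0..t-1.
        logits = model(input_ids)                          # (B, T, V)
        loss = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),   # predictions
            input_ids[:, 1:].reshape(-1),                  # shifted targets
        )
        return loss

    def generate(model, input_ids, n_new_tokens):
        # Generation is serial: one forward pass per new token, since each
        # step's input depends on the previous step's output.
        for _ in range(n_new_tokens):
            logits = model(input_ids)                      # (B, T, V)
            next_tok = logits[:, -1].argmax(-1, keepdim=True)
            input_ids = torch.cat([input_ids, next_tok], dim=-1)
        return input_ids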

