You seem quite upset. Can you explain exactly which quantities don't make sense?


No, it's not.


There is a mistake in the article, right? Multi-headed attention doesn't average together multiple attention heads. Rather, it concatenates them and then right-multiplies by a learned matrix so that the output dimension matches the input dimension. That matrix might learn to average, but averaging isn't built in a priori.
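For concreteness, a minimal numpy sketch of the concat-then-project step (the dimensions, seq length, and variable names are illustrative assumptions, not from the article):

    import numpy as np

    num_heads, d_head, d_model = 8, 64, 512  # hypothetical sizes
    seq_len = 10

    # Per-head outputs, e.g. from scaled dot-product attention
    head_outputs = [np.random.randn(seq_len, d_head) for _ in range(num_heads)]

    # Concatenate along the feature axis -> (seq_len, num_heads * d_head)
    concat = np.concatenate(head_outputs, axis=-1)

    # Learned output projection W_O -> (num_heads * d_head, d_model)
    W_O = np.random.randn(num_heads * d_head, d_model) / np.sqrt(num_heads * d_head)

    # Right-multiply so the output dimension matches the input dimension
    output = concat @ W_O
    assert output.shape == (seq_len, d_model)

Averaging is only a special case here: W_O would have to learn stacked (1/num_heads)-scaled identity blocks, and nothing in the architecture enforces that.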

