Show HN: GPT Circuits – Mapping the inner workings of simple LLMs (peterlai.github.io)
6 points by peterlai 87 days ago
I've built an app that extracts interpretable 'circuits' from models based on the GPT-2 architecture. These circuits reveal how specific inputs influence the model's next-token probabilities.

While some tutorials present theoretical examples of how feedforward layers and attention heads might produce predictions, this app provides concrete examples of how information flows through an LLM. You can see, for example, features forming that detect simple grammatical patterns, and trace how those features are constructed from more primitive ones.
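To make the idea of tracing influence concrete, here is a minimal sketch (not from the author's app) of a single causal attention head in numpy with random placeholder weights. The final attention row shows how strongly each earlier token position feeds the prediction at the last position, which is a crude, toy version of the kind of information-flow tracing described above.

```python
import numpy as np

# Toy single-head causal attention. All weights are random placeholders,
# not a trained GPT-2; this only illustrates how attention weights expose
# which input positions influence the final position's prediction.
rng = np.random.default_rng(0)
d = 8   # embedding width
T = 4   # sequence length
x = rng.normal(size=(T, d))                      # stand-in token embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ W_q, x @ W_k, x @ W_v
scores = q @ k.T / np.sqrt(d)
# Causal mask: position t may only attend to positions <= t.
scores += np.triu(np.full((T, T), -np.inf), k=1)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

# Row T-1: per-position influence on the last token's output.
influence = attn[-1]
print("per-position influence on last token:", influence.round(3))
```

A real circuit analysis would go further, attributing logit changes through value vectors and MLP layers across many heads, but the masked attention pattern above is the basic quantity such traces start from.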

Feel free to reach out with feedback. I'd love to work with others on scaling up the size of the models from which circuits can be extracted.

