Show HN: We've open-sourced our LLM attention visualization library (github.com/labmlai)
197 points by lakshith-403 5 months ago | 15 comments
Inspectus allows you to create interactive visualizations of attention matrices with just a few lines of Python code. It’s designed to run smoothly in Jupyter notebooks through an easy-to-use Python API. Inspectus provides multiple views to help you understand language model behaviors. If you have any questions, feel free to ask!
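Here's roughly what that looks like in a notebook with a Hugging Face GPT-2 model; the `inspectus.attention(...)` call follows the README example, but treat the exact signature as an assumption:

    import torch
    import inspectus
    from transformers import AutoTokenizer, GPT2LMHeadModel

    tokenizer = AutoTokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')

    inputs = tokenizer('The quick brown fox jumps over the lazy dog', return_tensors='pt')
    with torch.no_grad():
        # output_attentions=True returns one [batch, heads, seq, seq] tensor per layer
        out = model(**inputs, output_attentions=True)

    tokens = [tokenizer.decode(t) for t in inputs['input_ids'][0]]
    # Renders the interactive attention-matrix view inline in the notebook
    inspectus.attention(out.attentions, tokens)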



On a related note: I recently released a visualization of all the MLP neurons inside the llama3 8B model. Here is an example "derivative" neuron, which fires when the text talks about derivatives.

https://neuralblog.github.io/llama3-neurons/neuron_viewer.ht...
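If you're curious how the per-token activations behind a viewer like this can be collected, here's a rough sketch using a PyTorch forward hook (GPT-2 as a small stand-in for llama3 8B; the layer and neuron indices are arbitrary, not the actual "derivative" neuron):

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained('gpt2')
    model = AutoModelForCausalLM.from_pretrained('gpt2')

    acts = {}
    def hook(module, inputs, output):
        acts['mlp'] = output.detach()  # post-GELU activations, [batch, seq, d_mlp]

    layer, neuron = 6, 123  # hypothetical indices, for illustration only
    model.transformer.h[layer].mlp.act.register_forward_hook(hook)

    ids = tokenizer('The derivative of x^2 is 2x', return_tensors='pt')
    with torch.no_grad():
        model(**ids)

    # Activation of one neuron at every token position
    per_token = acts['mlp'][0, :, neuron]
    for tok, a in zip(tokenizer.convert_ids_to_tokens(ids['input_ids'][0].tolist()), per_token):
        print(f'{tok!r}: {a.item():.2f}')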


This is insanely fun to just flip through. I found a "sex" neuron. https://neuralblog.github.io/llama3-neurons/neuron_viewer.ht...


Pretty cool. The tokens are highlighted based on the activation?


Yes, you're correct. The tokens are highlighted based on the neuron activation value, which is scaled to a range of 0 to 10.
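Something along these lines (a min-max sketch; the viewer's exact normalization may differ):

    import numpy as np

    def highlight_levels(acts: np.ndarray) -> np.ndarray:
        # Map raw neuron activations onto integer highlight levels in [0, 10]
        lo, hi = acts.min(), acts.max()
        if hi == lo:
            return np.zeros_like(acts, dtype=int)
        return np.round((acts - lo) / (hi - lo) * 10).astype(int)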


This seems to be what Anthropic and OpenAI did in their research:

Golden Gate Claude - https://news.ycombinator.com/item?id=40459543 (60 comments, 16 days ago)

Extracting Concepts from GPT-4 - https://news.ycombinator.com/item?id=40599749 (144 comments, 2 days ago)


Interesting. I think OpenAI here uses sparse autoencoders to map out sparse activation patterns in networks, comparing them to how a real person reasons about a situation.

Inspectus, on the other hand, is a general tool to visualize how transformer models pay attention to different parts of the data they process.


That OpenAI work is more elaborate: it trains an additional network that encodes what GPT is doing, in terms of activations, in a (hopefully) more interpretable way. Here, as far as I can tell, the activations of the attention layers are visualized directly.
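The core of the sparse-autoencoder idea is small enough to sketch; this is the general technique, not OpenAI's actual code:

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)  # d_hidden >> d_model (overcomplete)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, x):
            feats = torch.relu(self.encoder(x))  # sparse, more interpretable features
            return self.decoder(feats), feats

    sae = SparseAutoencoder(d_model=768, d_hidden=768 * 8)
    acts = torch.randn(32, 768)  # stand-in for activations collected from GPT
    recon, feats = sae(acts)
    # MSE keeps the reconstruction faithful; the L1 term encourages sparsity
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    loss.backward()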


Sounds great. Non-engineer, but curious. Is there a walkthrough blog post or video that can help someone appreciate/understand this easily?


Attention in transformers, visually explained | Chapter 6, Deep Learning - 3Blue1Brown: https://www.youtube.com/watch?v=eMlx5fFNoYc&t=


Thank you


Loosely related, but also a great read: https://distill.pub/2020/circuits/zoom-in/


This looks cool, but can you explain how to make it useful?


I'm not a primary user; I just cleaned up the existing codebase to make it open source. But you could use this to visualise attention maps and debug the model.

For example, if you're working on a Q&A model, you can check which tokens in the prompt contributed to the output. That makes it possible to detect issues like the output not paying attention to any important part of the prompt.
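A rough sketch of that check, assuming the full prompt-plus-answer sequence went through a Hugging Face model with output_attentions=True (the function name is just for illustration):

    import torch

    def prompt_attention_share(attentions, prompt_len):
        # attentions: tuple of [batch, heads, seq, seq] tensors, one per layer
        stacked = torch.stack(attentions)  # [layers, batch, heads, seq, seq]
        avg = stacked.mean(dim=(0, 2))     # average over layers and heads
        # Attention from generated positions (rows) back onto prompt tokens (columns)
        return avg[0, prompt_len:, :prompt_len].mean(dim=0)

    # Near-zero entries flag prompt tokens the answer never attended to, e.g.:
    # share = prompt_attention_share(out.attentions, prompt_len=12)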


This kind of ambiguity about usage plagues lots of OSS projects. Guides and tutorials always help drive adoption; just look at the usage of GPT-3 vs ChatGPT (which is GPT-3.5 with a web UI slapped on top of it).


Hey! This is pretty neat; it reminds me of the graphs made by transformer_lens. Cool to see all of these visualization libraries popping up!



