We've open-sourced Klarity - a tool for analyzing uncertainty and decision-making in LLM token generation. It provides structured insights into how models choose tokens and where they show uncertainty.
What Klarity does:
- Real-time analysis of model uncertainty during generation
- Dual analysis combining log probabilities and semantic understanding
- Structured JSON output with actionable insights
- Fully self-hostable with customizable analysis models
The tool analyzes each step of text generation and returns structured JSON (illustrated in the sketch after this list):
- uncertainty_points: array of {step, entropy, options[], type}
- high_confidence: array of {step, probability, token, context}
- risk_areas: array of {type, steps[], motivation}
- suggestions: array of {issue, improvement}
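To make the shape concrete, here is a minimal sketch of what output with these fields might look like when parsed in Python. The field names follow the list above, but the concrete values are made up for illustration:

```python
import json

# Hypothetical Klarity-style output: field names match the structure described
# above, but every value here is illustrative only.
raw_output = """
{
  "uncertainty_points": [
    {"step": 12, "entropy": 2.31, "options": ["Paris", "Lyon", "Nice"], "type": "high_entropy"}
  ],
  "high_confidence": [
    {"step": 3, "probability": 0.97, "token": "France", "context": "capital of"}
  ],
  "risk_areas": [
    {"type": "factual_uncertainty", "steps": [12, 13], "motivation": "several near-equal candidates"}
  ],
  "suggestions": [
    {"issue": "ambiguous entity", "improvement": "add disambiguating context to the prompt"}
  ]
}
"""

analysis = json.loads(raw_output)

# Flag generation steps where the model was notably unsure.
for point in analysis["uncertainty_points"]:
    print(f"step {point['step']}: entropy={point['entropy']:.2f}, candidates={point['options']}")
```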
Currently supports Hugging Face Transformers (more frameworks coming). We tested extensively with Qwen2.5 (0.5B-7B) models, but it should work with most HF LLMs.
Installation is simple:
`pip install git+https://github.com/klara-research/klarity.git`
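This is not Klarity's API, but for context, here is a minimal sketch of the kind of signal it analyzes: per-step next-token entropy pulled from a Hugging Face causal LM. The model name is just an example from the sizes mentioned above:

```python
# Sketch only (not Klarity's API): extract per-step next-token entropy from a
# Hugging Face causal LM during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
    )

# Per-step entropy over the next-token distribution: the raw material an
# uncertainty analyzer can aggregate into structured insights.
prompt_len = inputs["input_ids"].shape[1]
for step, scores in enumerate(out.scores):
    probs = torch.softmax(scores[0], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    token = tokenizer.decode(out.sequences[0, prompt_len + step])
    print(f"step {step}: token={token!r} entropy={entropy:.3f}")
```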
We are building open-source interpretability/explainability tools to visualize and analyze attention maps, saliency maps, etc., and we want to understand your pain points with LLM behavior. What insights would actually help you debug these black-box systems?
Links:
- Repo: https://github.com/klara-research/klarity
- Our website: https://klaralabs.com
This creates a gap between the mechanical measurement of certainty and true understanding, much like mistaking the map for the territory or confusing the finger pointing at the moon with the moon itself.
I've done some work in this space before, trying to come up with different useful measures from the logprobs, such as Shannon entropy over a sliding window, or even bzip compression ratio as a proxy for information density. But I didn't find anything semantically useful or reliable to exploit.
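For concreteness, a minimal sketch of the sliding-window entropy idea; the window size and the toy distributions are arbitrary, and it assumes you already have the per-step next-token probability distributions:

```python
import math

def step_entropy(probs):
    """Shannon entropy (in bits) of one next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def sliding_window_entropy(step_distributions, window=8):
    """Mean per-step entropy over a sliding window of generation steps."""
    entropies = [step_entropy(p) for p in step_distributions]
    return [
        sum(entropies[i:i + window]) / min(window, len(entropies) - i)
        for i in range(len(entropies))
    ]

# Toy example: three generation steps with made-up distributions.
distributions = [
    [0.9, 0.05, 0.05],   # confident step
    [0.4, 0.35, 0.25],   # uncertain step
    [0.6, 0.3, 0.1],
]
print(sliding_window_entropy(distributions, window=2))
```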
The best approach I found was just multiple-choice questions: "Does X entail Y? Please output [A] True or [B] False." Then measure the logprobs of the next token, which should be `[A` (90%) or `[B` (10%). Then we can make a statement like: the LLM thinks there is a 90% probability that X entails Y.
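A rough sketch of that probing setup with a Hugging Face causal LM; the model name and entailment pair are examples, and `[A` / `[B` may tokenize differently depending on the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Does 'the cat sat on the mat' entail 'an animal is on the mat'? "
    "Please output [A] True or [B] False.\nAnswer: "
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits
probs = torch.softmax(logits, dim=-1)

# IDs of the first token of each choice; check how "[A" / "[B" actually split
# for your tokenizer before trusting the numbers.
id_a = tokenizer.encode("[A", add_special_tokens=False)[0]
id_b = tokenizer.encode("[B", add_special_tokens=False)[0]

p_a, p_b = probs[id_a].item(), probs[id_b].item()
# Renormalize over the two choices to read it as the model's probability of entailment.
print(f"P(entails) ~ {p_a / (p_a + p_b):.2f}")
```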