Hey HN, we're super excited to share something we've been working on: OpenLIT. After an engaging preview that some of you might recall, we are now proudly announcing our first stable release!
*What's OpenLIT?*
Simply put, OpenLIT is an open-source tool designed to make monitoring your Large Language Model (LLM) applications straightforward. It’s built on OpenTelemetry, aiming to reduce the complexities that come with observing the behavior and usage of your LLM stack.
*Beyond Basic Text Generation:*
OpenLIT isn’t restricted to just text and chatbot outputs. It now includes automatic monitoring for GPT-4 Vision, DALL·E, and OpenAI Audio, so we can support your multi-modal LLM projects from a single platform. And we're not stopping here; more updates and model support are on their way!
*Key Features:*
- *Instant Alerts:* Get immediate insight into cost and token usage, in-depth usage analysis, and latency metrics.
- *Comprehensive Coverage:* Supports a range of LLM providers, vector DBs, and frameworks - everything from OpenAI and Anthropic to ChromaDB, Pinecone, and LangChain.
- *Aligned with Standards:* OpenLIT follows the OpenTelemetry Semantic Conventions for GenAI, ensuring your monitoring efforts meet the community's best practices.
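To give a flavor of what following those conventions means in practice, here is a sketch of the kind of span attributes the OpenTelemetry GenAI semantic conventions define for an LLM call. The attribute names below come from the semconv spec (exact names can vary between semconv versions), and the values are made-up illustrations:

```python
# Illustrative span attributes following the OpenTelemetry GenAI semantic
# conventions; an instrumentation layer sets these on each LLM call's span.
# Values here are invented for the example.
span_attributes = {
    "gen_ai.system": "openai",          # which LLM provider handled the call
    "gen_ai.request.model": "gpt-4",    # model the application requested
    "gen_ai.request.temperature": 0.7,  # sampling parameter sent with the request
    "gen_ai.usage.input_tokens": 120,   # prompt tokens consumed
    "gen_ai.usage.output_tokens": 45,   # completion tokens generated
}

# Because the keys are standardized, a cost/usage dashboard can aggregate
# on these well-known names instead of provider-specific ones.
total_tokens = (span_attributes["gen_ai.usage.input_tokens"]
                + span_attributes["gen_ai.usage.output_tokens"])
print(total_tokens)  # 165
```

The payoff is portability: any backend that understands the conventions can compute token and cost metrics without knowing which provider or framework produced the span.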
*Wide Integration Compatibility:*
For those already utilizing observability tools, OpenLIT integrates with various telemetry destinations, including OpenTelemetry Collector, Jaeger, Grafana Cloud, and more, expanding your data’s reach and utility.
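Because the export path is standard OTLP, pointing an instrumented app at any of those backends is typically just the usual OpenTelemetry exporter settings. The environment variable names below are from the OpenTelemetry specification (not OpenLIT-specific), and the endpoint is a placeholder for wherever your Collector, Jaeger, or Grafana Cloud OTLP receiver lives:

```python
import os

# Standard OpenTelemetry exporter configuration; any OTLP-speaking backend
# (OpenTelemetry Collector, Jaeger, Grafana Cloud, ...) can receive the data.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"  # placeholder OTLP/HTTP receiver
os.environ["OTEL_SERVICE_NAME"] = "my-llm-app"                       # how the service appears in traces

print(os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"])
```

Swapping backends is then a configuration change, not a code change.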
*Getting Started:*
Check our quickstart guide and explore how OpenLIT can enhance your LLM project monitoring: https://docs.openlit.io/latest/quickstart
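For the impatient, the setup sketched in the quickstart is roughly a one-liner. The function name and behavior below are assumptions drawn from the docs (verify against docs.openlit.io); the guard just keeps the sketch runnable if the package isn't installed:

```python
# Hypothetical minimal setup sketched from the quickstart; exact API may
# differ - check docs.openlit.io for the authoritative version.
try:
    import openlit
    openlit.init()  # auto-instruments supported LLM/vector-DB clients in this process
    status = "instrumented"
except Exception:
    status = "openlit not available; run: pip install openlit"

print(status)
```

Your existing OpenAI or LangChain calls are then traced without further code changes (per the docs; again, an assumption worth verifying for your stack).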
We genuinely believe OpenLIT can change the game in how LLM projects are monitored and managed. Feedback from this community could be invaluable as we continue to improve and expand. So, if you have thoughts, suggestions, or questions, we’re all ears.
Let’s push the boundaries of LLM observability together.
Check out OpenLIT here: https://github.com/openlit/openlit
Thanks for checking it out!
From looking at the screenshots, it looks like it can monitor the number of tokens, which seems useful, but I'm not clear on why that needed a whole big project.
I feel like the stuff you actually want to monitor in prod for ML, beyond what you get from infra monitoring, is a sense of how well the ML components are working, which is generally pretty application-specific and not trivial to drop in. Having a general framework for that seems useful, but that's not really what we have here, at least for the moment.
Also, it just seems a bit weird for this to have its own UI. Part of the point of OTEL is that you can send all your metrics to one place. That's not always totally possible, and turning metrics into dashboards takes time, but the point of OTEL seems to be to separate those concerns.