Show HN: OpenLIT – Open-Source LLM Observability with OpenTelemetry (github.com/openlit)
1 point by patcher99 6 months ago
Hey Everyone! I'm super excited to share something my friend and I have been working on: OpenLIT. Following the preview that some of you might recall, we're now proud to announce our first stable release!

*What's OpenLIT?* Simply put, OpenLIT is an open-source tool designed to make monitoring your Large Language Model (LLM) applications straightforward. It’s built on OpenTelemetry, aiming to reduce the complexities that come with observing the behavior and usage of your LLM stack.
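To give you a feel for it, here's a rough sketch of what instrumenting an OpenAI call looks like with the Python SDK (simplified from the quickstart; check the docs for the exact parameters and defaults):

    # Sketch based on the quickstart; parameter names may differ slightly.
    import openlit
    from openai import OpenAI

    # Start auto-instrumentation and point it at an OTLP endpoint
    # (4318 is the standard OTLP/HTTP port).
    openlit.init(otlp_endpoint="http://127.0.0.1:4318")

    client = OpenAI()

    # No other code changes: this call is traced automatically, including
    # model, latency, token usage, and estimated cost.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello, OpenLIT!"}],
    )
    print(response.choices[0].message.content)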

*Beyond Basic Text Generation:* OpenLIT isn't restricted to just text and chatbot outputs. It now includes automatic monitoring for GPT-4 Vision, DALL·E, and OpenAI Audio, so you can monitor multi-modal LLM projects through a single platform. And we're not stopping here; more updates and model support are on the way!
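For example, once openlit.init() has been called as above, an image-generation call is captured the same way as a chat call (illustrative; the exact span contents are described in the docs):

    from openai import OpenAI

    client = OpenAI()

    # DALL-E request -- traced alongside your text-generation calls,
    # with model, prompt, image size/count, and estimated cost recorded.
    image = client.images.generate(
        model="dall-e-3",
        prompt="A lighthouse on a cliff at sunset",
        size="1024x1024",
        n=1,
    )
    print(image.data[0].url)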

*Key Features:*

- *Instant Alerts:* Immediate insight into cost and token usage, in-depth usage analysis, and latency metrics.

- *Comprehensive Coverage:* Supports a range of LLM providers, vector DBs, and frameworks - everything from OpenAI and Anthropic to ChromaDB, Pinecone, and LangChain.

- *Aligned with Standards:* OpenLIT follows the OpenTelemetry Semantic Conventions for GenAI, so your monitoring data matches the community's best practices (see the example span below).
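To make the standards point concrete, here's roughly what a captured chat-completion span carries under the GenAI semantic conventions (attribute names depend on the semconv version in use, so treat these keys as illustrative):

    # Illustrative attribute set for one traced chat completion.
    example_span_attributes = {
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "gen_ai.usage.prompt_tokens": 42,
        "gen_ai.usage.completion_tokens": 128,
    }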

*Wide Integration Compatibility:* For those already utilizing observability tools, OpenLIT integrates with various telemetry destinations, including OpenTelemetry Collector, Jaeger, Grafana Cloud, and more, expanding your data’s reach and utility.
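If you already run a collector, exporting there is mostly a matter of pointing the SDK at it. Here's a sketch using the standard OTLP environment variables (whether you set these or pass an explicit endpoint/headers to openlit.init() is covered in the docs):

    import os
    import openlit

    # Standard OpenTelemetry exporter settings -- here, an existing collector.
    os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://otel-collector:4318"
    # Hosted backends (e.g. Grafana Cloud) usually also need an auth header:
    # os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Basic <token>"

    openlit.init()  # traces and metrics now flow to the collector above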

*Getting Started:* Check our quickstart guide and explore how OpenLIT can enhance your LLM project monitoring: https://docs.openlit.io/latest/quickstart

We genuinely believe OpenLIT can change the game in how LLM projects are monitored and managed. Feedback from this community could be invaluable as we continue to improve and expand. So, if you have thoughts, suggestions, or questions, we’re all ears.

Let’s push the boundaries of LLM observability together.

Check out OpenLIT here: https://github.com/openlit/openlit

Thanks for checking it out!



