Hi — I'm Raphael Mansuy. I built edgequake-litellm to provide a low-latency, Rust-backed drop-in replacement for LiteLLM. It exposes the same Python API (`completion()`, `acompletion()`, `stream()`, `embedding()`), supports provider/model routing (OpenAI, Anthropic, Gemini, Mistral, xAI, OpenRouter, Ollama, LM Studio, etc.), and ships as a single ABI3 wheel with zero Python runtime deps.
Quick migration:
```python
import edgequake_litellm as litellm  # drop-in alias
```
Why build it? LiteLLM is excellent, but its pure-Python HTTP layer adds SDK overhead. I moved the core into Rust (edgequake-llm) and wrapped it with PyO3 to cut latency and provide a robust, multi-arch wheel. This is v0.1 — P0 compatibility is in place, but I'd love feedback on priorities: provider coverage, proxy features, billing/budgets, or tool-calling parity.
QuantaLogic: A ReAct Framework for Building Advanced AI Agents
QuantaLogic is a Python-based ReAct (Reasoning & Action) framework bridging advanced AI models with practical business applications. It integrates LLMs with a robust tool system, enabling AI agents to understand, reason about, and execute complex tasks through natural language.
Key Features:
- Universal LLM Support: OpenAI, Anthropic, DeepSeek, etc. via LiteLLM
- Secure Tool System: Docker-based code execution and file manipulation
- Real-time Monitoring: Web interface with SSE events
- Enterprise Ready: Comprehensive logging, validation, and error handling
- Extensible: Custom tools and agents via Python SDK
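To make the ReAct pattern concrete, here is a toy reason-then-act loop — this is NOT QuantaLogic's actual API, just a sketch of the core idea: the model alternates thought/action steps, each action is dispatched to a registered tool, and the observation is fed back until a final answer is emitted. The `TOOLS` registry and the scripted stand-in LLM are illustrative inventions.

```python
# Toy ReAct loop (sketch of the pattern, not QuantaLogic's real interface).
from typing import Callable

# Tool registry: name -> function taking a string argument (illustrative).
TOOLS: dict[str, Callable[[str], str]] = {
    "add": lambda arg: str(sum(int(x) for x in arg.split())),
}

def react_loop(llm: Callable[[str], str], question: str, max_steps: int = 5) -> str:
    """Alternate model steps and tool observations until a 'Final:' answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)              # model proposes the next step
        transcript += "\n" + step
        if step.startswith("Final:"):       # model is done reasoning
            return step.removeprefix("Final:").strip()
        if step.startswith("Action:"):      # e.g. "Action: add 2 3"
            name, _, arg = step.removeprefix("Action:").strip().partition(" ")
            transcript += f"\nObservation: {TOOLS[name](arg)}"
    return "no answer"

# Scripted stand-in for an LLM, purely for demonstration:
script = iter(["Action: add 2 3", "Final: 5"])
answer = react_loop(lambda _transcript: next(script), "What is 2 + 3?")
```

A real agent replaces the scripted callable with an LLM call and parses its output more defensively; the control flow stays the same.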
Install:
```bash
pip install edgequake-litellm
```
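Since the package also exposes `stream()`, here is a hedged sketch of consuming a token stream. I'm assuming OpenAI-style chunks (`chunk.choices[0].delta.content`), which is the LiteLLM convention; the `join_deltas` helper is my own illustration and simply concatenates the text deltas.

```python
# Hypothetical streaming sketch. Assumes stream() yields OpenAI-style chunks;
# the helper below is illustrative and just joins non-empty text deltas.

def join_deltas(deltas) -> str:
    """Concatenate the text deltas from a stream of chat chunks."""
    return "".join(d for d in deltas if d)

# With the package installed (model name is a placeholder):
# import edgequake_litellm as litellm
# parts = (chunk.choices[0].delta.content or ""
#          for chunk in litellm.stream(model="gpt-4o-mini",
#                                      messages=[{"role": "user", "content": "Hi"}]))
# print(join_deltas(parts))
```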
Repo: https://github.com/raphaelmansuy/edgequake-llm
If you try it, please star the repo and open issues for features you want most — I'm actively iterating. Happy to answer technical questions here.