All we wanted was our prompts in our codebase, not on some website. Then just to see the prompts before we ran the code. Then comments inside our prompts. Then just fewer strings everywhere, and more type safety...
At some point, it became BAML: a type-safe, self-contained way to call LLMs from Python and/or TypeScript.
BAML encapsulates all the boilerplate for:
- flexible parsing of LLM responses into your exact data model
- streaming LLM responses as partial JSON
- wrapping LLM calls with retries and fallback strategies
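To make that concrete, here's a rough sketch of what a BAML file can look like. The names (`Sentiment`, `Tweet`, `ClassifyTweet`, `GPT4`, `Exponential`) are made up for illustration and the exact syntax may differ slightly from the docs, but the shape of it is: declare the data model, the function, and the client, and BAML takes care of turning the LLM's output into that model and retrying on failure.

```baml
// Illustrative sketch: names and exact syntax are approximate.

// The exact shape you want back from the LLM.
enum Sentiment {
  Positive
  Negative
  Neutral
}

class Tweet {
  text string
  sentiment Sentiment
}

// A typed LLM "function": a string in, a Tweet out.
function ClassifyTweet(tweet: string) -> Tweet {
  client GPT4
  prompt #"
    Classify the sentiment of this tweet:
    {{ tweet }}

    {{ ctx.output_format }}
  "#
}

// Client config, including a retry policy.
client<llm> GPT4 {
  provider openai
  retry_policy Exponential
  options {
    model gpt-4
    api_key env.OPENAI_API_KEY
  }
}

retry_policy Exponential {
  max_retries 3
}
```

From a definition like this, BAML generates a typed client, so calling the LLM from Python or TypeScript is just calling `ClassifyTweet(...)` and getting a `Tweet` back (or a stream of partial `Tweet`s).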
Our VSCode extension provides:
- real-time prompt previews,
- an LLM testing playground, and
- syntax highlighting (of course)
We also have a bunch of cool features in the works: conditionals and loops in our prompt templates, image support, and more powerful types.
We're still pretty early and would love to hear your feedback. To get started: