By default, the SDKs use our API endpoints, where we run a combination of models to maximize accuracy and reliability. This also enables us to provide logging with screenshots and reasoning to help with debugging.
That said, we're currently experimenting with a few customers who run our tooling against their own hosted models. It's not publicly available yet, but we may introduce that option in the future.
We'd love to hear more about your use case: is a fully self-hosted setup relevant for you, or would using your own LLM tokens be enough?
How do your AI features work when running tests locally with your SDK? Do I need to provide my own token for an LLM provider?