
Oh, okay, this is cool. I thought it wasn't possible (it isn't mentioned on your front page at all).

How do your AI features work when running tests locally using your SDK? Do I need to provide my own token for some LLM provider?


By default, the SDKs use our API endpoints, where we run a combination of models to maximize accuracy and reliability. This also enables us to provide logging with screenshots and reasoning to help with debugging.

That said, we're currently experimenting with a few customers who run our tooling against their own hosted models. While it's not publicly available yet, we might introduce that option going forward.

Would love to hear more about your use case: is a self-hosted setup relevant, or just the ability to use your own LLM tokens?



