
I'm not quite understanding how needing different prompts for different models reduces the attractiveness of a framework. A framework could, in theory, ship an LLM evals package that continuously runs experiments with every prompt across every model.
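A minimal sketch of what that evals loop might look like; the model names, prompt registry, and `score` metric here are all hypothetical placeholders, not any real framework's API:

```python
from itertools import product

MODELS = ["model-a", "model-b"]                    # hypothetical model identifiers
PROMPTS = {"summarize": "Summarize: {text}"}       # hypothetical prompt registry

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real provider call; here it just echoes the prompt.
    return f"[{model}] {prompt}"

def score(output: str) -> float:
    # Placeholder metric: could be exact match, length, or LLM-as-judge.
    return float(len(output))

def run_evals(text: str = "hello world") -> dict:
    # Run every prompt against every model and collect scores.
    results = {}
    for model, (name, template) in product(MODELS, PROMPTS.items()):
        output = call_model(model, template.format(text=text))
        results[(model, name)] = score(output)
    return results
```

The point is only that the cross-product of prompts and models is mechanical, so continuous evals are a natural framework feature rather than an argument against frameworks.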

Also in theory, an LLM framework could estimate costs, count tokens, offer a variety of chunking strategies, and unify the more sophisticated APIs (like tools or agents), all of which vary from provider to provider.
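As a rough illustration of the cost-estimation side, here is a sketch using made-up per-1K-token prices and a crude character-count heuristic; a real framework would use each provider's actual tokenizer and current pricing:

```python
# Hypothetical USD prices per 1,000 tokens; real rates differ by provider.
PRICES_PER_1K = {"provider-a": 0.01, "provider-b": 0.002}

def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 characters per token); real frameworks would call
    # the provider's own tokenizer instead.
    return max(1, len(text) // 4)

def estimate_cost(text: str, provider: str) -> float:
    # Scale the token estimate by that provider's per-1K rate.
    tokens = rough_token_count(text)
    return tokens / 1000 * PRICES_PER_1K[provider]
```

Even this toy version shows why unifying it in one place is useful: the same call works against any provider the framework knows about.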

Admittedly, this view comes just from early product explorations, but a framework was helpful for most of the reasons above (I didn't find an evals framework that I liked).

You mentioned not having this problem yet. What kinds of problems have you been running into? I'm wondering if I'm missing some other context.



