Arch-Function, our fast, open-source LLM, does most of the heavy lifting in extracting parameter values from a user prompt, gathering missing information from the user, and determining the right set of downstream functions to call. It is designed for smarter RAG scenarios and agentic workflows (like filing an insurance claim through prompts). While you can use the model on its own, archgw offers a framework around it to detect hallucinations and re-prompt the LLM when token log probabilities fall below a confidence threshold.
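To make the logprob-based guard concrete, here is a minimal sketch of how you might implement that check yourself against an OpenAI-compatible endpoint. The `base_url`, model id, and threshold are illustrative assumptions, not archgw's actual API surface:

```python
# Sketch of a logprob-based hallucination guard. The endpoint, model id,
# and threshold below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12000/v1", api_key="n/a")  # hypothetical local gateway

LOGPROB_THRESHOLD = -1.5  # assumed cutoff; tune per deployment

def call_with_confidence_check(messages, retries=1):
    """Call the model, re-prompting once if any token looks low-confidence."""
    for _ in range(retries + 1):
        resp = client.chat.completions.create(
            model="Arch-Function",  # placeholder model id
            messages=messages,
            logprobs=True,
        )
        token_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]
        if min(token_logprobs, default=0.0) >= LOGPROB_THRESHOLD:
            return resp.choices[0].message.content
        # Low-confidence tokens may indicate a guessed parameter value:
        # re-prompt with an instruction to ask the user instead of guessing.
        messages = messages + [
            {"role": "system",
             "content": "If any parameter value is uncertain, ask a clarifying question."}
        ]
    return resp.choices[0].message.content
```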
The same model is currently being updated to handle complex multi-turn intent and parameter extraction, so that the dreaded clarifying follow-up question in RAG can be handled effortlessly by developers without resorting to complex LLM pre-processing. In essence, if the user's follow-up is "remove X", their RAG endpoint receives structured information about the prompt along with refined parameters, against which developers simply retrieve the right chunks for summarization, as sketched below.
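For a sense of what "structured information about the prompt" could look like on the receiving side, here is a hypothetical handler. The payload shape, field names, and route are assumptions for illustration, not archgw's wire format:

```python
# Hypothetical RAG endpoint receiving refined, structured parameters after
# a follow-up like "remove X". Field names and route are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RefinedQuery(BaseModel):
    intent: str              # e.g. "search_docs", carried over from the first turn
    query: str               # the resolved, standalone question
    filters: dict[str, str]  # refined parameters, with "X" already removed upstream

@app.post("/rag/retrieve")
def retrieve(params: RefinedQuery):
    # Intent and parameters are already resolved upstream, so the handler
    # only needs to fetch the matching chunks for summarization.
    chunks = vector_store_lookup(params.query, params.filters)
    return {"chunks": chunks}

def vector_store_lookup(query: str, filters: dict[str, str]) -> list[str]:
    ...  # plug in your own retrieval layer here
```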