After many months of hard work trying to understand the challenges around AI and data, we’re excited to share PromptQL (https://promptql.hasura.io), a new tool for building assistants that talk to your data.
Try it by building a GitHub issues assistant: https://promptql.hasura.io/docs/example-github
You may know Hasura as “the GraphQL on Postgres” tool, but the product has been significantly generalized since those early days, and last year we launched Hasura DDN (https://hasura.io/blog/launching-hasura-ddn), the third major iteration of our data API product. Briefly, DDN was a major step forward in Hasura’s expressivity: where Hasura v2 supported a handful of major relational database vendors, DDN was built on the premise that you should be able to fetch any data, in any format, from anywhere, and bring it into a unified graph.
This metadata-driven approach to building APIs is a great fit for the current needs of LLMs, and lets AI application developers build quickly. Good AI needs great data, and data that can be queried in a standardized way - which is exactly what Hasura enables for humans (using GraphQL or REST), and now for LLMs too.
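As a concrete illustration of "queried in a standardized way", here is a minimal sketch of the kind of GraphQL request a human or an LLM could issue against a Hasura-style endpoint. The endpoint URL, field names, and variables below are invented for illustration and are not taken from the docs:

```python
import json

# Hypothetical GraphQL query against a Hasura-style API.
# Field names (orders, status, customer) are illustrative only.
query = """
query RecentOrders($limit: Int!) {
  orders(limit: $limit, order_by: { created_at: desc }) {
    id
    status
    customer { name }
  }
}
"""

# Standard GraphQL-over-HTTP request body: a query plus its variables.
payload = {"query": query, "variables": {"limit": 5}}

# A client (human- or LLM-written) would POST this as JSON to the
# GraphQL endpoint, e.g. https://<your-project>/graphql.
body = json.dumps(payload)
print(len(body) > 0)
```

The point is that one uniform request shape works across every source in the graph, which is what makes the data easy for an LLM to consume programmatically.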
The problem we see: Our most important realization was that good AI architecture flows from good data architecture, so we wanted to give our users the tools to build their data architecture without boundaries: bringing in any data source and restructuring it for efficient, purpose-built data access. Want to store data somewhere else? No problem. Need to bring in a custom vector database with a specialized search mechanism? A fine-tuned LLM? We can support that too.
We also realized that existing AIs perform surprisingly poorly on real problems. Because of this, users simply don’t trust their assistant’s answers. An AI might give a poor answer, or no answer at all, and require a lot of human intervention and iteration to reach the right answer. This has led to the adoption of architectures like RAG, and to purpose-built, one-size-fits-none tools.
Our solution: Hasura takes a different approach from others; our solution starts from two major observations:
1. As others have realized, assistants do better when asked to write programs to solve problems, instead of reasoning through problems directly.
2. In order to move data out of the LLM’s context and reduce the risk of hallucination, we can give the LLM an external memory in the form of stored artifacts.
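A toy sketch of how these two ideas combine. Everything here is hypothetical (the names `ArtifactStore`, `store`, and `get` are invented, not PromptQL's actual API): instead of pasting full query results into the LLM's context, a generated program stores them as named artifacts and only small summaries flow back to the model.

```python
# Toy illustration of "external memory" via stored artifacts.
# All names here are invented for this sketch, not PromptQL's real API.

class ArtifactStore:
    """Holds full result sets outside the LLM's context window."""

    def __init__(self):
        self._artifacts = {}

    def store(self, name, rows):
        """Save rows under a name; return only a small summary."""
        self._artifacts[name] = rows
        return {"artifact": name, "row_count": len(rows)}

    def get(self, name):
        """Programs reference data by name instead of inlining it."""
        return self._artifacts[name]


# A program the LLM might write: fetch data, store it, then operate
# on artifact references rather than raw rows in context.
store = ArtifactStore()
rows = [{"id": i, "status": "open" if i % 2 else "closed"} for i in range(100)]
summary = store.store("github_issues", rows)  # the LLM sees only this summary
open_issues = [r for r in store.get("github_issues") if r["status"] == "open"]
result = store.store("open_issues", open_issues)
print(summary["row_count"], result["row_count"])  # prints: 100 50
```

The key property is that the 100 raw rows never enter the model's context: the program manipulates them by name, and the model reasons over the small summaries, which is where the hallucination-risk reduction comes from.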
PromptQL is a tool and runtime for LLMs which directly implements these ideas, by building a programmable API on top of the graph you already built with Hasura.
In addition, we’ve put together the Agentic Data Access benchmark (https://github.com/hasura/agentic-data-access-benchmark) to illustrate the problem, and to help compare solutions.
Happy to answer any questions about choice of tech stack, challenges, possible applications, comparisons with other approaches, etc.
Please try it out and let us know what you think!
LLMs, and a framework that’s structured and clear enough for LLMs to understand (if you have used Payload CMS and built blocks, fields, or similar, that’s another example), have driven a lot of growth in my company. People who used to take more than six months to become productive in our codebases are getting there much sooner -- and this is not only because Sonnet and 4o are so good, but because we also use tools that put effort into making sure LLMs understand them, and Hasura is one of them.