We created RadPod to bring the latest in LLM reasoning agents (think Gemini/OpenAI "Deep Research") to your data. You upload datasets to Pods, which can be much larger than most other products handle, providing context for the LLM agent, and start chatting. Under the hood, we use on-the-fly code generation and execution to answer arbitrarily hard questions, with in-context citations that make answers verifiable and trustworthy. This is far more powerful than common Retrieval-Augmented Generation (RAG) approaches.
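To give a flavor of what "code generation and execution" means here, below is a minimal sketch of such a loop, not RadPod's actual implementation: the prompts, the `call_llm` placeholder, the `documents` dict, and the citation format are all illustrative assumptions. The idea is that the model writes analysis code over the uploaded documents, the system runs it, and the results feed an answer with citations back to the source documents.

```python
from typing import Callable

def answer_question(
    question: str,
    documents: dict[str, str],          # doc_id -> full text (assumed format)
    call_llm: Callable[[str], str],     # placeholder for any chat-completion API
) -> str:
    # 1. Ask the model to write analysis code over the uploaded documents.
    code_prompt = (
        "Write Python that inspects the dict `documents` (doc_id -> text) and "
        "stores evidence for answering this question in `findings`, a list of "
        f"(doc_id, snippet) tuples.\nQuestion: {question}\nReturn only code."
    )
    generated_code = call_llm(code_prompt)

    # 2. Execute the generated code in a throwaway namespace. A production
    #    system would use a real sandbox with resource and time limits.
    namespace = {"documents": documents, "findings": []}
    exec(generated_code, namespace)
    findings = namespace.get("findings", [])

    # 3. Ask the model to compose an answer, citing the documents it used.
    evidence = "\n".join(f"[{doc_id}] {snippet}" for doc_id, snippet in findings)
    answer_prompt = (
        f"Question: {question}\nEvidence (cite by [doc_id]):\n{evidence}\n"
        "Answer concisely with in-context citations."
    )
    return call_llm(answer_prompt)
```

Because the model can write and run arbitrary analysis code rather than only retrieve similar passages, it can aggregate, count, and cross-reference across the whole dataset, which is what separates this approach from plain RAG.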
In this example Pod, we uploaded 227 of Paul Graham's essays, which are inspirational to us as founders, and showcase some example threads (questions + responses). Among other things, it was able to find that:
- The most common topics are, unsurprisingly, Startups, Programming, and Investing
- Lisp is his favorite programming language
- Mark Zuckerberg and Larry Page are mentioned many times in the essays, but not as often as Jessica Livingston, whom he really admires :)
- He wrote most frequently during the period 2005-2009
It's fun to try, so ask away! Check out some other cool Pods at https://radpod.ai/spotlight-pods. We find RadPod works well across many domains, such as financial and legal documents.