You're right, but it's not because of a weakness in AI; it's because most AI slide-generation apps are built by coders who only know .md files. They don't know what consultant-level slides look like.
See this demo: in the middle I show some top-level strategic business slides (all generated by AI):
https://youtu.be/Sq3a5qxsLwM
If you want to pull knowledge from a KB within 10 seconds, traditional RAG (embedding + vector based) is still the most efficient way -- probably more so in consumer-facing search/chat.
If you want the most precise and useful information from a KB and can wait minutes, then Agentic RAG is the key.
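To make the contrast concrete, here is a minimal sketch of what "embedding + vector based" retrieval boils down to. The document texts and the 3-d vectors are hand-made stand-ins (a real system uses a learned embedding model and a vector store), but the mechanics -- embed once, rank by cosine similarity, return top-k in one shot -- are the same.

```python
import math

# Toy in-memory "vector store": doc text -> pretend embedding.
# These 3-d vectors are fabricated for illustration only.
DOCS = {
    "Q3 revenue grew 12% year over year": [0.9, 0.1, 0.2],
    "Onboarding flow redesign shipped in May": [0.1, 0.8, 0.3],
    "Churn is concentrated in the SMB segment": [0.7, 0.2, 0.6],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """One-shot retrieval: rank every doc by similarity, return top-k."""
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

# A query "embedding" that sits close to the revenue doc.
print(retrieve([0.85, 0.15, 0.25]))
```

That single nearest-neighbor lookup is why it answers in seconds; the trade-off is that it never re-queries or reasons about what it found.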
I believe a multimodal KB plus Agentic RAG is a good fit for a personal KB. Imagine you have tons of office docs and want to dig into complex topics within them. You could try
https://github.com/JetXu-LLM/DocMason
It fully retrieves all diagram and chart info from PPT and Excel files, and then leverages native AI agents (e.g. Codex) to conduct Agentic RAG.
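For a feel of what "agentic" adds on top of plain retrieval, here is an illustrative sketch (not DocMason's actual code): instead of one embedding lookup, an agent runs several targeted searches over the extracted chart/table text, accumulates evidence, and only then answers. The snippets, the `search` helper, and the fixed query plan are all hypothetical; a real agent would generate the sub-queries with an LLM.

```python
EXTRACTED = [  # pretend output of the PPT/Excel extraction step
    "chart: SMB churn by quarter, Q1 5%, Q2 7%, Q3 9%",
    "chart: Enterprise churn flat at 2%",
    "table: support tickets per SMB account doubled in Q2",
]

def search(keyword):
    """Plain substring search over the extracted snippets."""
    return [s for s in EXTRACTED if keyword.lower() in s.lower()]

def agentic_answer(question, plan):
    """Run a query plan (here fixed; a real agent derives it with an LLM),
    accumulating deduplicated evidence across sub-queries."""
    evidence = []
    for keyword in plan:
        for hit in search(keyword):
            if hit not in evidence:
                evidence.append(hit)
    return {"question": question, "evidence": evidence}

result = agentic_answer(
    "Why is SMB churn rising?",
    plan=["SMB churn", "support tickets"],  # hypothetical agent-derived sub-queries
)
print(result["evidence"])
```

The multi-step loop is why it takes minutes rather than seconds, but it can connect evidence (rising churn plus rising support load) that a single top-k lookup would miss.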
A few weeks ago, I shared LlamaPReview (an AI PR reviewer) here on HN and received great feedback [1]. Now I'm trying to understand how experienced developers prioritize different aspects of code review to make the tool more effective.
When you open a PR, what's the first thing you check? Is it:
Current results show an interesting split between "Detailed Technical Analysis" and "Critical Findings", but I'd love to hear HN's perspective:
1. What makes you trust/distrust a PR at first glance?
2. How do you balance between architectural concerns and implementation details?
3. What information do you wish was always prominently displayed?
Your insights will directly influence how we structure AI code review to match real developers' thought processes.