Hacker News
Ask HN: How are you structuring Markdown-based context for AI coding agents?
3 points by lepuski 13 days ago | hide | past | favorite | 2 comments
I’ve recently transitioned from using LLMs in-browser to a local agentic workflow in VS Code (Gemini Code Assist). I can approve/disapprove changes, which is nice, but I’ve hit a wall regarding context management. Initially, I provided the whole repo as context to the non-agentic version of Gemini Code Assist and it performed well.

I read that the agentic mode is "better", so to keep the agent aligned with my project's architecture, I’ve manually built 7 dense Markdown files that serve as the system instructions for the project. I require Gemini to update these files as we implement features.

- gemini.md (instructs Gemini to read the other .md files and handle updating them)
- project_overview.md, architecture.md, features.md, database.md, api.md, security.md
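For illustration, a minimal gemini.md along these lines might look like the sketch below. The file names match the setup above; the instruction wording itself is hypothetical:

```markdown
# gemini.md: entry point for the agent

Before making changes, read these files in order:

1. project_overview.md: what the product does
2. architecture.md: module boundaries and data flow
3. features.md / database.md / api.md / security.md:
   read only the ones relevant to the current task

After implementing a feature, update the affected file(s)
above so they stay in sync with the code.
```

The "read only the relevant ones" line is the part doing the token-budget work; pointing the agent at everything on every task is what makes 7 dense files expensive.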

Each file is between 500–1,500 words, so I’m concerned whether this is the right way to go. There seems to be no consensus on context-file best practices. I’m seeing strong arguments for both minimalist, lean instructions and dense, project-wide specs. Honestly, proper usage/prompting patterns for LLMs seem comparable to reading horoscopes: everyone goes by gut feeling, and the most-cited source of truth is confirmation bias.

How are you using .md context files in your workflow?




  We ran into this exact problem and ended up formalizing what we were doing into a small convention called HADS. Four
  block types in plain Markdown — [SPEC] for authoritative facts, [NOTE] for context, [BUG] for known failures, [?] for
  unverified — plus an AI manifest at the top that tells the model what to read and skip. No tooling, just annotation.

  In practice it cut per-query token load ~70% on our longer docs. Small models (7B) handle it well because the tags
  remove the structural reasoning problem entirely.

  Spec and examples: https://github.com/catcam/hads
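For anyone curious what that looks like inline, here is a rough sketch of a HADS-annotated doc. The four tags and the manifest idea are from the description above; the manifest wording and all block contents are made up for illustration — see the repo for the actual spec:

```markdown
<!-- AI-MANIFEST: read [SPEC] blocks first; skip [NOTE] unless
     debugging; treat [?] as unverified and do not rely on it. -->

[SPEC] The sync endpoint is idempotent; retrying a failed call is safe.

[NOTE] Idempotency was added after a duplicate-write incident in prod.

[BUG] Batches over 1000 rows are silently truncated by the importer.

[?] The rate limit may be per-token rather than per-IP.
```

The point of the tags is that the model no longer has to infer which sentences are authoritative — it can filter by prefix instead of reasoning about the prose.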

It's still very experimental - the best thing to do is keep trying different things and see what sticks.

There are great videos on Skills, Subagents, etc. I'd give them a watch.

Context locality is a big one https://bradystroud.dev/blogs/context-locality

Here's another tip: https://bradystroud.dev/blogs/show-ai-the-bug




