Liad from the team here - yes, it provides a full context from the docs to the LLMs, so they will be able to correctly use the components when generating code.
It can be copy-pasted directly to the prompt, or downloaded and attached as a file.
That sounds really interesting! What got us into this project is the problem of feeding the LLM a large llms-full.txt file as context, for example. We wanted to give agents an easy way to get the documentation for every repo (be it llms.txt, readme, etc.) - but also to search chunks of it using semantic search.
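To make the chunk-search idea concrete, here's a toy sketch. The real project presumably uses learned embeddings; this stand-in uses bag-of-words cosine similarity purely to illustrate "split the docs into chunks, then rank chunks against the query" - the function names and scoring are my assumptions, not the project's API.

```python
# Toy sketch of chunk-based doc search (bag-of-words cosine similarity
# as a stand-in for real embeddings; illustrative only).
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(chunks: list[str], query: str) -> str:
    """Return the chunk that best matches the query."""
    q = Counter(query.lower().split())
    return max(chunks, key=lambda c: cosine(Counter(c.lower().split()), q))

docs = [
    "Install the package with pip install langgraph.",
    "StateGraph lets you define nodes and edges.",
]
print(search(docs, "how do I install"))  # -> the install chunk
```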
Happy to chat more if you like - sounds like we could benefit from bouncing ideas and notes around.
Yes, this is a fully remote MCP server, so the need for SSE support makes the implementation quite complex. The MCP spec was updated to use streamable HTTP, but clients don't support it yet.
If you're in Elixir land, Hermes MCP[0] is a fantastic library for building out MCP clients/servers with both SSE/HTTP support. It'll also be quite robust, given the scalability and fault tolerance of the BEAM.
ooh cool. Sadly I am far from Elixir land. MCP starting out as largely STDIO definitely has made things harder for server-side engineers. I expect this will sort itself out this year though.
It's very helpful when you're working with a specific technology/library and want to access the project's llms.txt or readme, search the docs, etc., from within the IDE using the MCP client.
Check it out, for example, with the LangGraph docs: https://gitmcp.io/#github-pages-demo
It really improves the development experience.
We built an open source remote MCP server that can automatically serve documentation from every GitHub project.
Simply replace github.com with gitmcp.io in the repo URL, and you get a remote MCP server that serves and searches the documentation from that repo (llms.txt, llms-full.txt, readme.md, etc.). Works with github.io as well.
Repo here: https://github.com/idosal/git-mcp
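The host substitution described above can be sketched in a couple of lines (the function name is mine; the post only specifies the github.com case, so the github.io variant is left out here):

```python
def to_gitmcp(repo_url: str) -> str:
    """Apply the substitution from the post: swap the github.com host
    for gitmcp.io, keeping the owner/repo path intact."""
    return repo_url.replace("github.com", "gitmcp.io", 1)

print(to_gitmcp("https://github.com/idosal/git-mcp"))
# -> https://gitmcp.io/idosal/git-mcp
```

The resulting URL is what you'd hand to your MCP client as the remote server endpoint.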