I got tired of my AI agent hitting rate limits right when I was actually getting work done. I’d be using an MCP server for docs, and suddenly the assistant would start hallucinating old API patterns because the cloud service I was using hit its cap or was lagging.
It felt kind of ridiculous that we’re paying monthly subscriptions and dealing with network latency just to query markdown files. These docs don't change every five minutes—they change per version.
So I spent the last week building a local-first version called Context.
The idea is pretty simple: you "build" a library's docs into a local SQLite file once. From then on, your AI can query it in under 10ms with zero internet.
A few things I realized while building it:
- FTS5 is underrated: Everyone wants to jump straight to Vector DBs and embeddings, but for docs, simple full-text search with BM25 ranking is incredible. I weighted headings higher than body text, and it's been snappier and more accurate for me than the cloud RAG stuff I was using before.
- The "Build Once" approach: Since the output is just a `.db` file, you can actually share it. I’ve started just sending the database file to my teammates so they don't have to clone or index anything themselves.
- Parsing is the hard part: Getting the chunking right (especially stripping out MDX-specific junk and keeping code blocks together) took way more effort than the actual search engine part.
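The heading-weighting trick above maps directly onto FTS5's `bm25()` auxiliary function, which accepts per-column weights. Here is a minimal sketch of the idea (the table and documents are made up for illustration, and it assumes your SQLite build includes FTS5, which most do):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table with separate columns for heading and body text
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(heading, body)")
conn.executemany(
    "INSERT INTO chunks (heading, body) VALUES (?, ?)",
    [
        ("Installing the CLI", "Run the installer and verify the version."),
        ("Configuration", "The CLI reads settings from a config file."),
    ],
)

# bm25() takes one weight per column: a heading match here counts
# five times as much as a body match. Scores are negative, and more
# negative means more relevant, so ascending ORDER BY ranks best-first.
rows = conn.execute(
    """
    SELECT heading, bm25(chunks, 5.0, 1.0) AS score
    FROM chunks
    WHERE chunks MATCH ?
    ORDER BY score
    LIMIT 3
    """,
    ("cli",),
).fetchall()

# Both rows contain "cli", but the heading match ranks first
print(rows[0][0])
```

Since everything lives in one `.db` file, the same query works identically whether the index was built locally or copied from a teammate.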
I built the whole thing using Claude Code as a partner. It’s definitely not perfect, but it’s been a massive quality-of-life upgrade for my own workflow.
I wanted to share something I’ve been working on: Remention, a platform designed to help SaaS companies grow by rewarding users for actions that drive engagement, referrals, and reviews.
Here’s how it works:
1. Customizable in-app widget: The widget integrates directly into your product and can be styled to match your brand seamlessly.
2. Targeted rewards: You can offer rewards like credits, cash, or perks at just the right moments — for example, when a user completes onboarding or refers a friend.
3. Seamless integrations: We work with Stripe, PayPal, Venmo, Amazon Gift Cards, and platforms like Capterra and G2 to make setup easy.
4. Analytics and insights: Track user actions and reward effectiveness to fine-tune your campaigns and get the most out of your efforts.
The idea is to help SaaS businesses capture high-quality users faster and turn them into loyal advocates who bring in more signups, reviews, and revenue. I’d love for you to check it out and let me know what you think: remention.co
See the README for example schema files.
Using the schema, you can define which states are possible for each entity and which mutations are allowed in each state. For example, a "publish" mutation is allowed only on draft posts.
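The core idea can be sketched as a state-to-mutations table (this is a conceptual illustration in Python, not Neuledge's actual schema syntax; the states and mutation names are hypothetical):

```python
# Which mutations each state permits (hypothetical rules, for illustration)
ALLOWED = {
    "draft": {"publish", "edit", "delete"},
    "published": {"archive", "edit"},
    "archived": set(),
}

# Mutations that move the entity into a new state
TRANSITIONS = {"publish": "published", "archive": "archived"}


def apply_mutation(post: dict, mutation: str) -> dict:
    """Reject any mutation not allowed in the post's current state."""
    state = post["state"]
    if mutation not in ALLOWED.get(state, set()):
        raise ValueError(f"'{mutation}' is not allowed in state '{state}'")
    if mutation in TRANSITIONS:
        post["state"] = TRANSITIONS[mutation]
    return post


post = {"title": "Hello", "state": "draft"}
apply_mutation(post, "publish")  # allowed: draft -> published
```

Because every state's rows live in the same table, the validation happens at the schema layer rather than being scattered across application code.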
Is Neuledge just another competitor for Prisma or a new approach to enforcing database integrity?
Neuledge is a powerful new tool for enforcing database integrity that allows you to validate each state of an entity (such as draft, published, or archived) while storing all the states under the same table or collection. It can be a game-changer for developers who want to maintain strict data consistency without sacrificing flexibility.
What are your thoughts on Neuledge and its potential impact on database management?
If you're sick of rate limits or just want your agent to stop lagging, check it out: https://github.com/neuledge/context