zanfiel's comments | Hacker News

Quick update since this got some traction — shipped a few significant things:

*v5.4:* Fixed a privilege escalation bug where rate-limited API keys were silently promoted to admin. Also added RBAC (admin/writer/reader), full audit log, and proper security headers (CSP, HSTS).
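For the curious, the role check is conceptually just a rank comparison — this is an illustrative sketch (names like `RANK` and `can` are mine, not Engram's actual code):

```typescript
// Illustrative sketch of an RBAC check, not Engram's actual code.
type Role = "reader" | "writer" | "admin";

// Each role implies every capability of the roles ranked below it.
const RANK: Record<Role, number> = { reader: 0, writer: 1, admin: 2 };

function can(keyRole: Role, required: Role): boolean {
  return RANK[keyRole] >= RANK[required];
}

// The v5.4 bug class this guards against: a rate-limiting code path
// should only throttle a key — it must never reassign its role.
```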

*v5.5:* Intelligence layer — server now extracts structured facts, user preferences, and current state from freeform content into dedicated tables. `/context` endpoint does 5-layer retrieval packed to a token budget. More useful for RAG/agent workflows.
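The budget-packing part is the simplest piece to show. A minimal sketch, assuming layers arrive in priority order and approximating tokens as chars/4 (the real thing would use an actual tokenizer; `packContext` and `Snippet` are illustrative names, not the real API):

```typescript
// Illustrative sketch of packing retrieval layers into a token budget.
interface Snippet { text: string; score: number }

// Crude token estimate for the sketch; a real system would tokenize.
const approxTokens = (s: string): number => Math.ceil(s.length / 4);

// Layers come in priority order (e.g. facts, preferences, state,
// recent, semantic matches). Within each layer, take the best-scoring
// snippets that still fit the remaining budget.
function packContext(layers: Snippet[][], budget: number): string[] {
  const out: string[] = [];
  let used = 0;
  for (const layer of layers) {
    for (const s of [...layer].sort((a, b) => b.score - a.score)) {
      const cost = approxTokens(s.text);
      if (used + cost > budget) continue; // too big, try smaller ones
      used += cost;
      out.push(s.text);
    }
  }
  return out;
}
```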

*v5.6:* Graphology integration — memories are graph nodes with typed relationship edges. LLM infers "depends_on", "causes", "related_to" etc. You can run centrality, community detection, shortest paths on your memory graph. Also rewrote the MCP server (529 → 168 lines) and added 76 tests.
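Graphology does the heavy lifting for centrality and community detection; to show just the shape of the data, here's a standalone sketch of typed edges plus a BFS shortest path (all names illustrative, not Engram's code):

```typescript
// Standalone sketch of the memory-graph shape. The real implementation
// uses graphology; edge types are inferred by the LLM.
type EdgeType = "depends_on" | "causes" | "related_to";
interface Edge { to: string; type: EdgeType }

// Adjacency map: memory id -> outgoing typed edges.
type MemoryGraph = Map<string, Edge[]>;

// BFS shortest path between two memories (edge type ignored here).
function shortestPath(g: MemoryGraph, from: string, to: string): string[] | null {
  const prev = new Map<string, string>([[from, ""]]);
  const queue: string[] = [from];
  while (queue.length) {
    const cur = queue.shift()!;
    if (cur === to) {
      const path: string[] = [];
      for (let n = cur; n !== ""; n = prev.get(n)!) path.unshift(n);
      return path;
    }
    for (const e of g.get(cur) ?? []) {
      if (!prev.has(e.to)) { prev.set(e.to, cur); queue.push(e.to); }
    }
  }
  return null; // unreachable
}
```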

The codebase was also split from a 5700-line monolith into proper modules, with TypeScript strict mode enabled.


Engram already has namespace isolation — API keys scope memory per-agent, spaces partition further within a user, and key scopes can be set to read-only. One agent's memories don't surface in another's recall unless you deliberately share a key. The point about prompt injection via recalled content is fair, but that's true of any retrieval system feeding an LLM. The memory layer stores and retrieves text — sanitizing what goes into the context window is the agent framework's job, for the same reason you don't expect a database to prevent SQL injection at the storage layer. Always interested in adversarial testing though, feel free to share.
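The isolation check itself is boring on purpose — roughly this shape (an illustrative sketch, not the actual Engram code; field names are mine):

```typescript
// Illustrative sketch of per-key namespace isolation, not Engram's code.
interface ApiKey { agentId: string; space: string; scope: "read" | "readwrite" }
interface Memory { agentId: string; space: string; text: string }

// A key only ever sees memories in its own agent + space partition.
function visible(key: ApiKey, m: Memory): boolean {
  return m.agentId === key.agentId && m.space === key.space;
}

// Read-only keys can recall but never store.
function canWrite(key: ApiKey): boolean {
  return key.scope === "readwrite";
}
```

The important property is that the filter is applied at the storage boundary on every query, not left to the caller.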
