Hacker News | SimplyLiz's comments

Thanks! For multi-repo, check out the federation features (--preset federation). It handles cross-repo symbol resolution and blast radius across service boundaries.

See docs: https://codeknowledge.dev/docs/Federation

On dead code detection: CKB has two modes:

1. Static analysis (findDeadCode tool, v7.6+) - requires zero instrumentation. Uses the SCIP index to find symbols with no inbound references in the codebase. Good for finding obviously dead exports, unused internal functions, etc. No telemetry needed.

2. Telemetry-enhanced (findDeadCodeCandidates, v6.4+) - ingests runtime call data to find code that exists but is never executed in production. This is where APM integration comes in.
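The static mode boils down to a reachability check over the index. Here's a minimal sketch of the idea, assuming a symbol-to-references mapping; the names and structure are illustrative, not CKB's actual API:

```python
# Hypothetical sketch: given a map from each symbol to the symbols it
# references, report symbols with no inbound references anywhere.
def find_dead_code(references):
    """references: dict mapping symbol -> set of symbols it refers to."""
    all_symbols = set(references)
    referenced = set()
    for targets in references.values():
        referenced |= targets
    return sorted(all_symbols - referenced)

index = {
    "app.main":        {"app.helpers.fmt", "app.db.query"},
    "app.helpers.fmt": set(),
    "app.db.query":    set(),
    "app.legacy.old":  {"app.helpers.fmt"},  # nothing references this
}
print(find_dead_code(index))  # ['app.legacy.old', 'app.main']
```

Note that entry points like app.main also have no inbound references, which is why real tools restrict this to non-entry exports.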

For the telemetry integration: It hooks into any OTEL-compatible collector. No custom instrumentation is required; it parses standard OTLP metrics:

- span.calls, http.server.request.count, rpc.server.duration_count, grpc.server.duration_count
- Extracts function/namespace/file from span attributes (configurable via telemetry.attributes.functionKeys, etc.)
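The attribute extraction with configurable keys can be sketched like this; the key config structure is an assumption on my part, though code.function, code.namespace, and code.filepath are standard OTEL semantic-convention attributes:

```python
# Illustrative sketch of pulling function/namespace/file out of span
# attributes using a configurable, ordered list of candidate keys
# (mirroring the idea behind telemetry.attributes.functionKeys).
def extract(attrs, key_config):
    out = {}
    for field, candidate_keys in key_config.items():
        for key in candidate_keys:
            if key in attrs:
                out[field] = attrs[key]
                break  # first matching key wins
    return out

config = {
    "function":  ["code.function", "function.name"],
    "namespace": ["code.namespace"],
    "file":      ["code.filepath"],
}
span_attrs = {"code.function": "handle_request", "code.filepath": "api/server.py"}
print(extract(span_attrs, config))
# {'function': 'handle_request', 'file': 'api/server.py'}
```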

You'd configure a pipeline from your APM (Datadog, Honeycomb, Jaeger, whatever) to forward aggregated call counts to CKB's ingest endpoint. The matcher then correlates runtime function names to SCIP symbol IDs with confidence scoring (exact: file+function+line, strong: file+function, weak: namespace+function only).
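The confidence tiers above can be sketched as a simple cascade; field names are assumptions for illustration, not CKB's internal schema:

```python
# Sketch of the described tiers: exact (file+function+line),
# strong (file+function), weak (namespace+function only).
def match_confidence(runtime, symbol):
    if (runtime.get("file") == symbol.get("file")
            and runtime.get("function") == symbol.get("function")):
        if runtime.get("line") == symbol.get("line"):
            return "exact"
        return "strong"
    if (runtime.get("namespace") == symbol.get("namespace")
            and runtime.get("function") == symbol.get("function")):
        return "weak"
    return None  # no usable correlation

sym = {"file": "api/server.py", "function": "handle", "line": 42, "namespace": "api"}
print(match_confidence({"file": "api/server.py", "function": "handle", "line": 42}, sym))  # exact
print(match_confidence({"file": "api/server.py", "function": "handle"}, sym))              # strong
print(match_confidence({"namespace": "api", "function": "handle"}, sym))                   # weak
```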

Full setup: https://codeknowledge.dev/docs/Telemetry

The static analysis mode is probably enough to start with. Telemetry integration is for when you want "this code hasn't been called in 90 days" confidence rather than "this code has no static references."


The architecture separates tool availability from result depth, which addresses exactly that concern.

Presets control tool availability, not output truncation. The core preset exposes 19 tools (~12k tokens for definitions) vs full with 50+ tools. This affects what the AI can ask for, not what it gets back. The AI can dynamically call expandToolset mid-session to unlock additional tools when needed.

Depth parameters control which analyses run, not result pruning. For compound tools like explore:

- shallow: 5 key symbols, skips dependency/change/hotspot analysis entirely
- standard: 10 key symbols, includes deps + recent changes, parallel execution
- deep: 20 key symbols, full analysis including hotspots and coupling
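To make "analysis selection, not pruning" concrete, here's a hypothetical sketch of that tiering (numbers from the list above; the structure is illustrative, not CKB's internals):

```python
# Each depth tier selects which analyses run; results are never trimmed
# after the fact. Values mirror the shallow/standard/deep tiers described.
DEPTH_PLANS = {
    "shallow":  {"key_symbols": 5,  "analyses": []},
    "standard": {"key_symbols": 10, "analyses": ["dependencies", "recent_changes"]},
    "deep":     {"key_symbols": 20, "analyses": ["dependencies", "recent_changes",
                                                 "hotspots", "coupling"]},
}

def plan_explore(depth):
    plan = DEPTH_PLANS[depth]
    return plan["key_symbols"], list(plan["analyses"])

print(plan_explore("standard"))  # (10, ['dependencies', 'recent_changes'])
```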

This is additive query selection. The call graph depth (1-4 levels) is passed through unchanged to the underlying traversal—if you ask for depth 3, you get full depth 3, not a truncated version.
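A pass-through depth parameter is just a bounded graph traversal. A minimal sketch, with a toy edge map standing in for the real index:

```python
from collections import deque

# BFS a call graph to exactly the requested depth: every node reachable
# within max_depth hops is returned, nothing deeper, nothing dropped.
def call_graph(edges, root, max_depth):
    seen = {root: 0}
    queue = deque([(root, 0)])
    while queue:
        node, d = queue.popleft()
        if d == max_depth:
            continue  # depth bound reached; don't expand further
        for callee in edges.get(node, []):
            if callee not in seen:
                seen[callee] = d + 1
                queue.append((callee, d + 1))
    return seen

edges = {"a": ["b", "c"], "b": ["d"], "d": ["e"]}
print(call_graph(edges, "a", 2))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```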

On token optimization specifically: CKB tracks token usage at the response level using WideResultMetrics (measures JSON size, estimates tokens at ~4 bytes/token for structured data). When truncation does occur (explicit limits like maxReferences), responses include transparent TruncationInfo metadata with reason, originalCount, returnedCount, and droppedCount. The AI sees exactly what was cut and why.
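As a rough sketch of that truncation metadata (field names from the comment above; the helper itself is hypothetical):

```python
import json

def truncate(items, limit, reason="maxReferences"):
    """Return (kept, truncation_info); info is None when nothing was cut."""
    if len(items) <= limit:
        return items, None
    kept = items[:limit]
    info = {
        "reason": reason,
        "originalCount": len(items),
        "returnedCount": len(kept),
        "droppedCount": len(items) - len(kept),
    }
    return kept, info

kept, info = truncate(list(range(25)), 10)
print(info)  # shows reason plus 25 original / 10 returned / 15 dropped
# Token estimate at ~4 bytes per token for structured JSON:
est_tokens = len(json.dumps(kept)) // 4
```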

The compound tools (explore, understand, prepareChange) reduce tool calls by 60-70% by aggregating what would be sequential queries into parallel internal execution. This preserves reasoning depth while cutting round-trip overhead. The AI can always fall back to granular tools (getCallGraph, findReferences) when it needs explicit control over traversal parameters.
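The aggregation pattern is essentially fan-out/fan-in: one compound call runs what would otherwise be sequential round trips in parallel. A sketch under assumed sub-query names (the stubs below are placeholders, not CKB functions):

```python
import asyncio

# Placeholder sub-queries; in practice these would hit the index/VCS.
async def get_definitions(target):    return {"definitions": [target]}
async def get_references(target):     return {"references": []}
async def get_recent_changes(target): return {"changes": []}

async def explore(target):
    # Fan out the sub-queries concurrently, then merge one response.
    results = await asyncio.gather(
        get_definitions(target),
        get_references(target),
        get_recent_changes(target),
    )
    merged = {}
    for r in results:
        merged.update(r)
    return merged

print(asyncio.run(explore("api/server.py")))
```

Three tool calls collapse into one, while each sub-result stays intact, which is the "fewer round trips without shallower reasoning" trade-off described above.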


This is really well thought out. The git-like versioning approach for memory artifacts is something I’ve been advocating for after spending way too much time debugging agent state issues.

I’ve been working on AI memory backends and context management myself and the core insight here — that context needs to be versionable and inspectable, not just a growing blob — is spot on.

Tried UltraContext in my project TruthKeeper and it clicked immediately. Being able to trace back why an agent “remembered” something wrong is a game changer for production debugging.

One thing I’d love to see: any thoughts on compression strategies for long-running agents? I’ve been experimenting with semantic compression to keep context windows manageable without losing critical information. Great work, will be following this closely.


For compression and long-running agents, may I suggest https://memtree.dev. We offer a simple API that compresses messages asynchronously, giving you instant responses and a small context, which leads to much higher-quality generations. We're about to release a dashboard that will show you what each compressed request looked like: the token distribution between system, memory, and tool messages, along with memory retrievals, and so on. Is this the type of thing you're looking for?


Something like this needs to be open-sourced. You're going to have a hell of a time trying to get enough trust from people to run all of their prompts through your servers.


For now, I’m intentionally keeping UltraContext as unopinionated as possible, since every application has different constraints and failure modes.

The goal is to focus on the core building blocks that enable more sophisticated use cases. Compression, compaction, and offloading strategies should be straightforward to build on top of UC rather than baked in at the core layer.


Idk, I got this link from someone on our team and felt like signing up here and commenting :)


Welcome—we're glad to have you! (I'm an admin here.)

Your comments were affected by a software filter. Sorry about that—those are a bit harder on new accounts. Fortunately a bunch of users vouched for your comments, so they weren't affected for long. In the meantime we've marked your account legit so this won't happen again.


Cytopia will be available for mobile devices too, when it makes sense. Right now there’s an internal build for Android on our GitHub, but that’s considered a proof of concept at this time :)


Great to hear that. Step by step!


Quite the opposite. We have quite a big community on Discord, and most devs, including me, are online a lot. I'm not on Reddit or Twitter, but there are team members posting stuff from time to time.


Visit our itch.io page. It’s linked on GitHub.


hi SimplyLiz, on Linux it seems it's not possible to run your itch.io executables because they require a libnoise version that's completely outdated. Any idea how to fix it?


We've created an issue on GitHub for this; we're looking to integrate a fix into our build process to avoid this problem in the future.

https://github.com/CytopiaTeam/Cytopia/issues/870


I created a symlink in /usr/lib and it seemed to work okay:

    sudo ln -s libnoise.so.1.0.0 libnoise.so.0


Out of curiosity, what distro are you running?


We use Conan as a package manager.


I chose this art style for Cytopia because I wanted to create a game in the spirit of SC2000. It's still my favorite city builder, but it's way outdated in terms of gameplay. And I love pixel art.


If you’re talking about the release on GitHub, it’s well out of date. We always push the latest version of Cytopia to itch.io, and that works fine. I’m Cytopia’s lead dev, and I’m on a MacBook M1 with the latest macOS.


The build from itch.io (cytopia-osx-ci.zip) doesn't run on OS X either :)

    You can't use this version of the application "Cytopia" with this version of macOS.
    You have macOS 10.15.7. The application requires macOS 11.6 or later.
The "X" in OS X means version 10.


Oh, sorry, older version of OS X. We can look at supporting older platforms later, when the project is more mature. Meanwhile you could compile it yourself.


It's all good. I'm just poking fun at the name of the binary, but it doesn't seem to have come across. macOS 11 and above is not "OS X"; OS X is OS 10. The X is the Roman numeral for the version number 10.

There's no real issue, sorry for the confusion and good luck with your project!


Sadly, pedantic sarcasm is the most misunderstood of the humors.


Slightly off-topic (sorry!) but have you considered testing/porting to the BSDs, especially since you have a macOS port? I'm particularly interested in an OpenBSD port. I may try cloning the repo and building it myself; SDL2 and CMake are available on that platform.


I got it from there and it crashed when launching. M1.


Can you submit a bug report on our GitHub page? I’ll gladly take a look at this later.

