Benchmark methodology (all runs reproducible with bench.js):
Node v22, Intel Xeon Platinum 8370C
15 runs, reporting median
Deterministic PRNG → identical ops across runs
Mixed workload: 40% reads, 20% updates, 20% simple queries, 20% compound
--expose-gc for accurate memory measurement
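To make "identical ops across runs" concrete, here is a minimal sketch of the seeded-PRNG approach, assuming a mulberry32-style generator (the function names and seed are illustrative, not necessarily what bench.js uses):

```javascript
// Seeded mulberry32 PRNG: the same seed replays the exact same value
// sequence, so every benchmark run executes an identical op stream.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Map the deterministic stream onto the 40/20/20/20 workload mix.
const rand = mulberry32(42);
function nextOp() {
  const r = rand();
  if (r < 0.4) return 'read';
  if (r < 0.6) return 'update';
  if (r < 0.8) return 'simpleQuery';
  return 'compoundQuery';
}
```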
Key design decisions that drive performance:
Memory: ~667 bytes/entity core, +81% with journaling (tinyop+)
Spatial: Grid + type indexes → filters type first, then grid (90% fewer distance calcs)
Cache: 128-entry LRU with per-type invalidation (enemy writes don't flush player queries)
Views (v3.4): O(1) after first eval, auto-update on changes
Persistence: BYO – dump() to JSON, IndexedDB, or your backend
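A rough sketch of what per-type invalidation means in practice (class and method names are my own illustration, not tinyop's internals): each cached result remembers which entity type it came from, so a write only evicts that type's entries.

```javascript
// LRU query cache keyed on insertion order of a Map (oldest entry first),
// with invalidation scoped to a single entity type.
class TypedQueryCache {
  constructor(limit = 128) {
    this.limit = limit;
    this.map = new Map(); // key -> { type, result }
  }
  get(key) {
    const hit = this.map.get(key);
    if (!hit) return undefined;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, hit);
    return hit.result;
  }
  set(key, type, result) {
    if (this.map.size >= this.limit) {
      // Map iterates in insertion order, so the first key is the LRU entry.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, { type, result });
  }
  invalidateType(type) {
    for (const [key, entry] of this.map) {
      if (entry.type === type) this.map.delete(key);
    }
  }
}
```

With this shape, a write to an enemy calls invalidateType('enemy') and every cached player query stays warm, which is the "enemy writes don't flush player queries" property above.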
Reproduce on your hardware: node --expose-gc bench.js (the flag has to come before the script name, or Node passes it to the script instead)
The part I'm most interested in feedback on is the query cache design.
Compound predicates like where.and(where.eq('status', 'active'), where.gt('signal', 0)) look simple but were a cache miss on every call in early versions.
Each call constructs a new function object, so the cache had no way to recognise that it had seen the same query before, even when the predicate was semantically identical to one it had just run.
The fix was tagging each where.* predicate with a stable string key at construction time (eq:status:active, gt:signal:0) and recursively composing them for and/or (and(eq:status:active,gt:signal:0)).
Two separate calls to where.and(where.eq('status', 'active'), where.gt('signal', 0)) now produce the same cache key even though they're different function objects.
Inline predicates (e => e.signal > 0) fall through to reference-identity keying, which is correct: two closures that look the same but close over different variables shouldn't share a cache entry.
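The tagging scheme described above can be sketched like this (the `tag`/`cacheKey` helpers and the `__cacheKey` property are assumptions for illustration; only the key format comes from the post):

```javascript
// Attach a stable string key to a predicate at construction time.
function tag(fn, key) { fn.__cacheKey = key; return fn; }

// Tagged predicates yield their string key; untagged inline closures fall
// back to reference identity (the function object itself is the key).
function cacheKey(pred) { return pred.__cacheKey ?? pred; }

const where = {
  eq: (field, value) => tag(e => e[field] === value, `eq:${field}:${value}`),
  gt: (field, value) => tag(e => e[field] > value, `gt:${field}:${value}`),
  // Combinators compose child keys recursively.
  and: (...preds) => tag(e => preds.every(p => p(e)),
    `and(${preds.map(cacheKey).join(',')})`),
  or: (...preds) => tag(e => preds.some(p => p(e)),
    `or(${preds.map(cacheKey).join(',')})`),
};

// Two distinct function objects, one shared cache key.
const a = where.and(where.eq('status', 'active'), where.gt('signal', 0));
const b = where.and(where.eq('status', 'active'), where.gt('signal', 0));
// cacheKey(a) === cacheKey(b) === 'and(eq:status:active,gt:signal:0)'
```

Note that if an inline closure sneaks into a compound, its function object ends up stringified into the composed key, which degrades to per-call keys for that branch only; the tagged branches still compose stably.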
That one change is what flipped the mixed workload benchmark from LokiJS leading by ~20% to tinyop leading by ~32%. LokiJS has a native B-tree index on every field; tinyop was losing specifically because compound queries couldn't be cached and had to scan the full type set on every call.
Once they could be cached, the hot tier returns them in under 0.01ms. For comparison, LokiJS's native indexed path measures 0.09ms for simple queries and 0.72ms for compound ones.