I use it for a specialized time-series storage / messaging layer. We receive stock market data directly, normalize it into JSON, and PUBLISH the resulting objects via Redis to consumers (generally connected through a custom WebSocket gateway). We basically turn the whole US stock market into an in-memory sea of JSON, optimized for browser-based visualization.
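
To give a flavor of the normalize-and-publish step, here's a minimal sketch using the hiredis C client. The field names and per-symbol channel scheme are invented for illustration, not our actual schema:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

/* Hypothetical normalize-and-publish step: field names and the
 * per-symbol channel naming are illustrative. */
void publish_trade(redisContext *rc, const char *sym, double px, long qty) {
    char json[256];
    snprintf(json, sizeof(json),
             "{\"sym\":\"%s\",\"px\":%.4f,\"qty\":%ld}", sym, px, qty);
    /* Consumers subscribe per symbol through the WebSocket gateway. */
    redisReply *r = redisCommand(rc, "PUBLISH trades.%s %s", sym, json);
    freeReplyObject(r);
}
```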

Redis is great because of its multiple data structures. Depending on their "kind", these JSON objects are either `APPEND`ed onto Redis Strings (e.g. time & sales or order history), written into Hashes via `HSET` (e.g. opening/closing trade), or added to Sorted Sets via `ZADD` (e.g. open order book).
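
A rough sketch of that routing, with invented key names and "kind" values:

```c
#include <string.h>
#include <hiredis/hiredis.h>

/* Illustrative routing by "kind"; the key naming is made up. */
void store(redisContext *rc, const char *kind, const char *sym,
           double score, const char *json) {
    redisReply *r;
    if (strcmp(kind, "trade") == 0)
        /* time & sales: append-only log of JSON objects */
        r = redisCommand(rc, "APPEND ts.%s %s", sym, json);
    else if (strcmp(kind, "session") == 0)
        /* opening/closing trade: one Hash field per session event */
        r = redisCommand(rc, "HSET session.%s open %s", sym, json);
    else
        /* open order book: Sorted Set ordered by price level */
        r = redisCommand(rc, "ZADD book.%s %f %s", sym, score, json);
    freeReplyObject(r);
}
```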

Sometimes an object transitions from a Sorted Set to a String. We used to handle this with `MULTI`/`EXEC` transactions, but now we use custom modules that do it in a single command (`ZREM` + `APPEND` + `PUBLISH`) with much better performance.
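
A minimal sketch of what one of these module commands can look like, using the Redis Modules API. The command name and argument layout here are hypothetical, not our actual module:

```c
#include "redismodule.h"

/* Hypothetical command: FEED.PROMOTE <zset> <member> <string-key> <channel>
 * Atomically removes <member> from the Sorted Set, appends it to a
 * String, and publishes it -- replacing the old MULTI/EXEC dance. */
int Promote_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 5) return RedisModule_WrongArity(ctx);
    RedisModuleCallReply *r;

    r = RedisModule_Call(ctx, "ZREM", "ss", argv[1], argv[2]);
    if (r) RedisModule_FreeCallReply(r);
    r = RedisModule_Call(ctx, "APPEND", "ss", argv[3], argv[2]);
    if (r) RedisModule_FreeCallReply(r);
    r = RedisModule_Call(ctx, "PUBLISH", "ss", argv[4], argv[2]);
    if (r) RedisModule_FreeCallReply(r);

    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "feed", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* keys are argv[1] and argv[3]: firstkey=1, lastkey=3, keystep=2 */
    if (RedisModule_CreateCommand(ctx, "feed.promote", Promote_RedisCommand,
                                  "write", 1, 3, 2) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```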

We run these Redis/feed-processor pairs in containers pinned to cores on shared NUMA nodes, with kernel-bypass networking (OpenOnload) so they talk over shared-memory queues. This setup can sustain very high throughput (>100k of these multi-ops per second) with low, consistent latency. (If you search HN, you'll see that I've approached 1M insert ops/sec using this kind of setup.)
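
The pinning itself is plain Linux CPU affinity; a minimal sketch of the underlying mechanism (container runtimes expose the same thing via cpusets, e.g. `docker run --cpuset-cpus`):

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to a single core. Container runtimes do the
 * equivalent via cpusets; this is just the raw syscall view of it. */
int pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(set), &set); /* 0 = current process */
}
```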

We have a hybrid between this high-performance ingestion and long-term storage. To reduce memory pressure (and since we don't have 20 TB of memory), we harvest these Redis Strings into object storage (both NAS and S3 endpoints), with Postgres storing the metadata so the archived data stays queryable.
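
A toy sketch of one harvest step, using hiredis and libpq. The mount path, table, and column names are invented; the real thing batches, handles errors, and targets S3 as well as NAS:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>
#include <libpq-fe.h>

/* Toy harvester: move one Redis String to a file on an (assumed) NAS
 * mount, record its location in Postgres, then free the Redis memory. */
int harvest_key(redisContext *rc, PGconn *pg, const char *key) {
    redisReply *r = redisCommand(rc, "GET %s", key);
    if (!r) return -1;
    if (r->type != REDIS_REPLY_STRING) { freeReplyObject(r); return -1; }

    char path[512];
    snprintf(path, sizeof(path), "/mnt/archive/%s.json", key);
    FILE *f = fopen(path, "wb");
    if (!f) { freeReplyObject(r); return -1; }
    fwrite(r->str, 1, r->len, f);
    fclose(f);
    freeReplyObject(r);

    /* Metadata row makes the archived object findable by key later. */
    const char *params[2] = { key, path };
    PQclear(PQexecParams(pg,
        "INSERT INTO archived_objects (redis_key, object_path) VALUES ($1, $2)",
        2, NULL, params, NULL, NULL, 0));

    /* Only after the object is safely persisted do we drop it from RAM. */
    freeReplyObject(redisCommand(rc, "DEL %s", key));
    return 0;
}
```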

We also use it for mundane things like auto-complete, a ticker database, caching, etc.
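
One common way to do auto-complete on top of Redis (not necessarily exactly what we do) is the classic sorted-set lex-range pattern; a sketch with an invented key name:

```c
#include <hiredis/hiredis.h>

/* Classic sorted-set autocomplete: every symbol gets score 0, so
 * ZRANGEBYLEX returns lexicographic ranges. Key name is invented. */
void index_ticker(redisContext *rc, const char *symbol) {
    freeReplyObject(redisCommand(rc, "ZADD tickers 0 %s", symbol));
}

redisReply *complete_prefix(redisContext *rc, const char *prefix) {
    /* [PRE .. [PRE\xff covers every member starting with PRE. */
    return redisCommand(rc, "ZRANGEBYLEX tickers [%s [%s\xff LIMIT 0 10",
                        prefix, prefix);
}
```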

I love this tech! It's extremely easy to hack on Redis itself, and now with modules you often don't even need to.
