
Ask HN: Trading system design question – need ideas - ice_blended
Hi,

Consider this situation:

---

You are building a trade engine and are currently limited in the number of
trades and orders per second you can process because of the database write
limit. To speed things up, you hold data in memory, but since data in a trade
engine is incredibly sensitive (real-money trades), how do you handle a server
going down and the data in memory being lost? The system runs on AWS using
Elasticsearch, the DB is Postgres, and background processing is done via
Sidekiq.

---

Any ideas on how to better design this?

Thank you.
======
streetcat1
Yes. Replicate to another in-memory instance. Network throughput is much
higher than disk I/O. Look at VoltDB (I think).
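A toy sketch of the idea, in Python: the primary only acknowledges a trade
after a replica confirms it, so an acknowledged trade survives the primary
crashing. In production the replica is on another host over the network (as
in VoltDB's k-safety); here both "nodes" are in-process objects just to keep
the example runnable, and all class and method names are made up.

```python
class Replica:
    """Stand-in for an in-memory replica on another host."""

    def __init__(self):
        self.store = {}

    def apply(self, trade_id, trade):
        # In real life this is a network hop; a True ack means the
        # trade now survives a primary crash.
        self.store[trade_id] = trade
        return True


class Primary:
    """Accepts trades, but only acks after replication succeeds."""

    def __init__(self, replicas):
        self.store = {}
        self.replicas = replicas

    def write_trade(self, trade_id, trade):
        # Replicate first: if any replica fails to ack, refuse the
        # write rather than risk losing an acknowledged trade.
        if not all(r.apply(trade_id, trade) for r in self.replicas):
            raise RuntimeError("replication failed; trade not accepted")
        self.store[trade_id] = trade
        return "ack"


replica = Replica()
primary = Primary([replica])
primary.write_trade(1, {"sym": "AAPL", "qty": 100, "px": 187.5})
```

The key property is the ordering: the caller never sees an ack for a trade
that exists in only one machine's RAM.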

I would also think about Kafka to record your trades rather than Postgres.
Have another service feed the DB from the event stream.
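The "log first, database later" pattern above can be sketched like this: a
trade is acknowledged as soon as it lands in an append-only log (Kafka in
production, with durability coming from the log's own replication, e.g.
`acks=all`), and a separate consumer trails behind, replaying the log into
the relational DB at whatever pace the DB can sustain. This is a toy:
the list standing in for a Kafka topic and both function names are
invented for illustration.

```python
import json

event_log = []   # stand-in for a replicated Kafka topic
db_rows = []     # stand-in for the Postgres table

def accept_trade(trade):
    # Fast path: append to the log and ack immediately. The DB write
    # is not on the critical path of the trade.
    event_log.append(json.dumps(trade))
    return "ack"

def db_feeder(offset):
    # Slow path: a background service replays the log into the DB from
    # its last committed offset. It can batch, retry, and fall behind
    # without ever blocking trading.
    for record in event_log[offset:]:
        db_rows.append(json.loads(record))
        offset += 1
    return offset

accept_trade({"id": 1, "side": "buy", "qty": 10})
accept_trade({"id": 2, "side": "sell", "qty": 5})
offset = db_feeder(0)
```

The feeder tracks its offset so it can resume after a crash without
double-writing rows, which is essentially what Kafka consumer groups give
you for free.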

------
nostrademons
A startup I once worked at (in the days before AWS or cloud) faced this exact
problem - we were building a trade engine, trades had to be durable, and they
came in faster than the disk rotated, so it was physically impossible to
write them out and verify them fast enough. We solved it with an i-RAM, a
battery-backed RAM disk with its own UPS. It worked pretty nicely at the
time, but it was a pre-AWS-era solution, and the manufacturer has since
stopped making them.

Today I would use streetcat1's in-memory replication solution, but don't be
afraid to think out of the box or consider hardware-based solutions.

