
Ask HN: Scaling to 100k outgoing transfers per account per hour - fintechguy
We're growing quickly and are running into scaling issues. We'd love to get some advice from the Hacker News community on our biggest problem:

We're a payments startup that provides APIs to send and receive money. When our customers send an instruction to disburse funds through our API, a few things happen:

1. Acquire lock on customer account

2. Check if customer's balance is sufficient

3. Deduct customer balance

4. Release lock on customer account

5. Disburse the funds through our banking channels

As we continue to grow, this process of checking and deducting the customer balance is becoming our main bottleneck. We've already implemented parallel processing for transactions in different accounts, but the speed of steps 1-4 is impacting performance.

We are now exploring solutions that can combine steps 1-4 together, ideally using a database like MySQL that already has these capabilities.

The constraints we're working under are:

1. Transactions for each account must be processed sequentially, in the same order that they arrived in

2. Transactions for different accounts should be fully parallelized

3. We want to be able to process at least 100k transactions per account per hour

4. We want to be able to process transactions in-band with the HTTP request, with latency <10ms

5. ACID

6. Auditable

We think that some flavor of SQL is the right direction but would appreciate your feedback/advice on what you think the best solution to our problem might be.
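For illustration, steps 1-4 can collapse into a single conditional UPDATE, which most SQL engines (including MySQL/InnoDB) execute atomically with an implicit row lock. This is only a sketch of that idea; the table and column names here are assumptions, and it's shown with SQLite purely so it runs self-contained:

```python
import sqlite3

# Sketch: combine lock / check / deduct / unlock into one atomic
# conditional UPDATE. Schema is hypothetical; with MySQL (InnoDB) the
# same statement takes the row lock for the duration of the update.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 500)")
conn.commit()

def debit(conn, account_id, amount):
    """Atomically deduct `amount` if the balance covers it.

    Returns True on success, False on insufficient funds.
    """
    cur = conn.execute(
        "UPDATE accounts SET balance = balance - ? "
        "WHERE id = ? AND balance >= ?",
        (amount, account_id, amount),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated => insufficient balance

print(debit(conn, 1, 300))  # True  (500 -> 200)
print(debit(conn, 1, 300))  # False (only 200 left)
```

The check and the deduction happen in one statement, so there is no window between them for another writer to race through; ordering per account then comes down to how requests are queued before they hit the database.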
======
fefb
I don't know what tech stack you are using, but I would suggest looking into:

* Google Spanner (relational) or Datastore (NoSQL with ACID transactions)

* A Node.js app running on an instance as a gateway in front of Google Cloud Functions, or just Cloud Functions on their own.

