Short answer: 3 n1-highcpu-16 GCE VMs with Local SSDs attached. We're working on a complete disclosure document with comprehensive reproduction steps to replicate all our numbers. It should be out in a couple of weeks. We want to walk you through, command by command, how to reproduce these numbers and verify the results for yourself.
Thanks for the short answer. Would be good to know how many local SSDs are attached for the 850 warehouse scenario, though. The TPC-C documentation says each warehouse maintains a stock of 100,000 items, but I can't surmise from that how much storage is required to hold 850 warehouses' worth of data. I'm impatient though, so let me try to work through the #s myself. I'm using GCP's monthly reserved pricing in the US-Iowa region as a reference, as of today's pricing.
An n1-highcpu-16 GCE VM costs $289.84/month. Local SSDs are added at 375GB per drive, and at $0.08 per GB they cost $30/month each. I highly doubt you could fit the ~1250 warehouses (what got you the peak tpmC) on a single 375GB local SSD, but I have to make assumptions here! So, now you're paying $319.84 per instance per month, or $959.52 for 3 of these instances.
At 16,150 tpmC, you're paying roughly $0.06 per tpmC, or, looking at it the other way, you're getting 16.83 tpmC per dollar spent each month. Is that good? I don't know!
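To keep the arithmetic honest, here's the cost-per-throughput math above as a short script. The prices and the 16,150 tpmC figure are the ones quoted in this thread, not independently verified:

```python
# Back-of-envelope cost per tpmC, using the GCP Iowa monthly
# reserved prices quoted in the thread.
VM_MONTHLY = 289.84        # n1-highcpu-16, $/month
SSD_GB = 375               # one local SSD drive
SSD_PRICE_PER_GB = 0.08    # $/GB/month
NODES = 3
TPMC = 16_150              # peak throughput reported in the thread

node_cost = VM_MONTHLY + SSD_GB * SSD_PRICE_PER_GB  # 289.84 + 30.00
cluster_cost = NODES * node_cost

print(f"per node: ${node_cost:.2f}/month")
print(f"cluster:  ${cluster_cost:.2f}/month")
print(f"${cluster_cost / TPMC:.4f} per tpmC")
print(f"{TPMC / cluster_cost:.2f} tpmC per dollar")
```

This is assuming exactly one 375GB local SSD per node; each additional drive adds another $30/month per node and shifts the ratio accordingly.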
Now, the really interesting question is: is that tpmC/$ on CRDB 2.0 actually better than tpmC/$ on CRDB 1.1? The answer lies in how many local SSDs you have to provision to reach that peak throughput. Peak is at ~1300 warehouses on CRDB 2.0, and ~800 warehouses on CRDB 1.1.
Does anyone with more knowledge here know how much storage you need per warehouse in the TPC-C test?
Each warehouse requires about 80 megabytes of storage, unreplicated. 1250 warehouses * 80 MB * 3-way replication = 300 GB, which comfortably fits in a 3-node cluster with 1 local SSD each.
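The sizing claim above works out as follows (a sketch using the ~80 MB/warehouse figure stated here; actual on-disk size will vary with compression and write amplification):

```python
# Storage sizing: does 1250 warehouses fit on one 375GB local SSD per node?
MB_PER_WAREHOUSE = 80      # approximate unreplicated size, per the thread
WAREHOUSES = 1250
REPLICATION = 3            # CockroachDB default 3-way replication
NODES = 3
SSD_GB = 375               # one local SSD per node

total_gb = WAREHOUSES * MB_PER_WAREHOUSE * REPLICATION / 1000
per_node_gb = total_gb / NODES  # replicas spread evenly across 3 nodes

print(f"total replicated data: {total_gb:.0f} GB")
print(f"per node: {per_node_gb:.0f} GB (vs {SSD_GB} GB local SSD)")
```

With 300 GB total, each node holds about 100 GB, well under the 375 GB drive, so the single-SSD assumption in the pricing above holds.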