edit: I see now - the big difference is that Google is offering P100s (and pretty cheaply), which are not available on AWS.
That's for 1xV100.
The interesting action is in running multiple GPUs at once, e.g. for deep learning. The spot market is fairly non-linear in pricing, so an 8xV100 instance costs more than 8x the 1xV100 price, while Preemptible pricing is strictly linear.
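To make the pricing-shape difference concrete, here's a tiny sketch with made-up hourly prices (the real spot/preemptible rates fluctuate constantly; every number below is a hypothetical placeholder, not an actual quote):

```python
# Hypothetical hourly prices chosen only to illustrate the two pricing shapes.
PREEMPTIBLE_PER_GPU = 0.74   # assumed: GCE preemptible price per V100
SPOT_1X_V100 = 0.90          # assumed: AWS spot price for a 1-GPU instance
SPOT_8X_V100 = 9.00          # assumed: AWS spot price for an 8-GPU instance

def preemptible_cost(n_gpus, per_gpu=PREEMPTIBLE_PER_GPU):
    """Linear pricing: n GPUs always cost exactly n times one GPU."""
    return n_gpus * per_gpu

# Linear: 8 preemptible GPUs cost exactly 8x one.
assert preemptible_cost(8) == 8 * preemptible_cost(1)

# Non-linear spot market: the 8-GPU instance can cost more than 8x
# the 1-GPU instance, because demand for big instances prices differently.
assert SPOT_8X_V100 > 8 * SPOT_1X_V100
```

The point is just that with linear pricing you can size purely to your workload, while a non-linear spot market penalizes the large multi-GPU configurations where deep learning actually lives.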
Similarly, on the p3s you have to attach a lot of cores (Broadwell only) and RAM, which is often unnecessary for GPU applications: you usually want host RAM at 1-2x the total GPU memory, depending on your application, but you rarely need 64 vCPUs to go with it.
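That rule of thumb is easy to turn into a quick sizing helper. A minimal sketch (the function name and the 16 GB-per-V100 default are my assumptions for illustration; the 1-2x factor comes from the comment above):

```python
def suggest_host_ram_gb(num_gpus, gpu_mem_gb=16, factor=1.5):
    """Rule of thumb: host RAM should be ~1-2x total GPU memory.

    num_gpus   -- number of attached GPUs
    gpu_mem_gb -- memory per GPU (16 GB for a V100 here, assumed default)
    factor     -- where in the 1-2x range you want to land
    """
    return num_gpus * gpu_mem_gb * factor

# 8 V100s at 16 GB each, mid-range factor -> 192 GB of host RAM,
# far less than what a fixed 8-GPU instance shape forces you to buy.
print(suggest_host_ram_gb(8))
```

With flexible instance shapes you can attach roughly that much RAM and a handful of vCPUs, rather than the fixed core/RAM bundle the p3 sizes impose.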
Additionally, if for some reason you want super fast local SSD to go with your GPUs (say a GPU accelerated database), you can do so on GCE.
tl;dr: linear, flexible pricing that usually wins (plus Skylake, local SSD and more).
It's there, it's just slathered in marketing-speak. :)
... then V100 would win every time. It offers:
- A proven, familiar platform in CUDA.
- No cloud-provider lock-in.
- Similar or better $/performance at FP16 while keeping FP32/64 as an option.
And the HN discussion of that benchmark, for context: https://news.ycombinator.com/item?id=16931394