Nvidia Tesla V100 on Google Cloud Platform (googleblog.com)
102 points by deesix 9 months ago | 16 comments



I'm sure the extra customizability is great and all, but isn't this less price-competitive than Amazon's P3 instances? For spot pricing at least, they're at $1.2/hr [1], whereas here it's $1.34/hr [2]. It's not a big difference, but is there a benefit Google provides that I'm missing?

edit: I see now - the big difference is that Google is offering P100s (and pretty cheaply), which are not available on AWS.

[1]: https://aws.amazon.com/ec2/spot/pricing/

[2]: https://cloud.google.com/products/calculator/#id=56e6b672-18...


Disclaimer: I work on Google Cloud (and launched preemptible VMs as well as contributed to our GPU efforts).

That's for 1xV100.

The interesting action is in multiple GPUs at once, for, say, deep learning. Spot-market pricing is fairly non-linear, so an 8xV100 instance costs more than 8x as much as a single GPU, while it's strictly linear for Preemptible (rough arithmetic in the sketch below).

Similarly, you have to attach a lot of cores (Broadwell only) and RAM to the P3s, which is often unnecessary for GPU applications (you often want host RAM at 1-2x the total GPU memory, depending on your application, but rarely do you need 64 vCPUs to go with it).

Additionally, if for some reason you want super-fast local SSD to go with your GPUs (say, for a GPU-accelerated database), you can do so on GCE.

tl;dr: linear, flexible pricing that usually wins (plus Skylake, local SSD and more).
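
To make the "strictly linear" point concrete, here is a toy sketch of the preemptible math. The $1.24/GPU/hr figure comes up further down the thread; the ~$0.10/hr custom-VM figure is an assumption inferred from the $1.34 total in [2], so treat all of the numbers as illustrative, not quoted prices.

    # Toy sketch of GCE preemptible GPU pricing, which scales linearly per GPU.
    # $1.24/GPU/hr is the preemptible V100 price mentioned in this thread; the
    # ~$0.10/hr custom-VM figure is an assumption inferred from the $1.34 total
    # in the calculator link, so both are illustrative only.
    V100_PREEMPTIBLE_PER_HR = 1.24   # $/GPU/hr
    CUSTOM_VM_PER_HR = 0.10          # $/hr, assumed 8 vCPU / 52GB preemptible VM

    def preemptible_cost_per_hr(num_gpus: int) -> float:
        """N GPUs cost N times one GPU, plus whatever custom VM you attach."""
        return num_gpus * V100_PREEMPTIBLE_PER_HR + CUSTOM_VM_PER_HR

    for n in (1, 2, 4, 8):
        print(f"{n}x V100 preemptible: ~${preemptible_cost_per_hr(n):.2f}/hr")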


Those are all good points - I was focusing on my use case (i.e., small deep learning jobs), but locking into pre-built configurations can definitely be wasteful. I'll likely use GC anyway, but I appreciate the extra reasons given.


Either way, I'd encourage trying out multi-GPU training. The latest NVLink is pretty impressive, and reducing your iteration time for free (thanks to linear pricing) is pure goodness.
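
If you want to kick the tires, here is a minimal multi-GPU sketch using PyTorch's nn.DataParallel; the model and data are placeholders, and real speedups depend on the model, batch size, and interconnect (which is where NVLink helps).

    import torch
    import torch.nn as nn

    # Placeholder model; DataParallel splits each input batch across all
    # visible GPUs and gathers the outputs back on the default device.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    model = model.cuda()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):                            # placeholder training loop
        x = torch.randn(512, 1024, device="cuda")   # fake batch
        y = torch.randint(0, 10, (512,), device="cuda")
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()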


Typo? The page says $1.24 preemptible for GCP, not $1.34.


That's just the GPU, though, right? You still need to attach something with cores and RAM to it. The pricing calculator link adds a custom machine type with 8 cores and 52GB of RAM.


Ah! Thanks. Yes, indeed.


On the other hand, your comment shows how the GPU is ~10x as expensive as the VM it is attached to, which is an interesting data point...


Because, AFAIK, the GPUs can't be shared across multiple VMs, unlike CPU and RAM.


Unfortunately, these are the older generation with 16GB of RAM, rather than the 32GB version that is now available. https://www.nvidia.com/en-us/data-center/tesla-v100/


The real plus of V100/Volta is better performance for half-precision/FP16 training of deep learning models, which surprisingly isn't mentioned in this post. (More info on NVIDIA's site: https://devblogs.nvidia.com/inside-volta/)


"receiving up to 1 petaflop of mixed precision hardware acceleration performance"

It's there, it's just slathered in marketing-speak. :)


Fair. :P


This is interesting, as it seems to compete with their Cloud TPU offering. If the benchmarks against the TPU hold up as they looked here: https://www.forbes.com/sites/moorinsights/2018/02/13/google-...

... then the V100 would win every time. It offers:

  - A proven/familiar platform in CUDA
  - No cloud-provider-specific lock-in
  - Similar/better $/performance on FP16 while keeping FP32 as an option

What am I missing?


The more recent RiseML benchmark: https://blog.riseml.com/comparing-google-tpuv2-against-nvidi...

And the HN discussion of that benchmark, for context: https://news.ycombinator.com/item?id=16931394


Both require model tuning before you can get a speed-up from mixed-precision training.
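
For anyone curious what that tuning looks like in practice, here is a generic mixed-precision sketch (not specific to either platform) using PyTorch's torch.cuda.amp: an autocast region runs eligible ops in FP16, and a GradScaler scales the loss so small FP16 gradients don't underflow. Model and data are placeholders.

    import torch
    import torch.nn as nn

    # Generic mixed-precision sketch: autocast runs eligible ops in FP16,
    # GradScaler scales the loss so small FP16 gradients don't underflow.
    # The model and data below are placeholders.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(100):                            # placeholder loop
        x = torch.randn(512, 1024, device="cuda")
        y = torch.randint(0, 10, (512,), device="cuda")
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():             # FP16 where it's safe
            loss = loss_fn(model(x), y)
        scaler.scale(loss).backward()               # backward on the scaled loss
        scaler.step(optimizer)                      # unscales grads, then steps
        scaler.update()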



