I am in the market for a cloud GPU offering, and I have to say the big cloud providers are very uncompetitive here, only offering these old, slow GPUs.
For display it's still a fine part. The P100 is also a beast, so it's overkill for most people just doing Remote Desktop. So perhaps the M60 (as with Azure) fills this market segment for them, and they don't mind the hardware diversity.
[Edit: Too sleepy. A post down below reminds us that these are G-series and G is for Graphics. So yeah, I assume they just didn't want to wait for enough P4 parts in volume or will quickly make another such announcement about the Pascals].
Disclosure: I work on Google Cloud.
Out of curiosity, why OpenCL on AMD rather than OpenCL on, say, a K80 or AWS's M60s or upcoming P100s? (That is, are you blocked on having an AMD part, or do you have a great reason to prefer AMD for compute?)
Disclosure: I work on Google Cloud (and want to sell you GPUs!)
Overall, though, I would basically consider OpenCL "AMD's thing".
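That said, OpenCL itself will enumerate any vendor's devices, so it's easy to check what a given box exposes for compute. A minimal sketch in Python, assuming the pyopencl package is installed:

    import pyopencl as cl

    # List every OpenCL platform (NVIDIA, AMD, Intel, ...) and its devices,
    # to see which vendors' parts are actually usable for compute here.
    for platform in cl.get_platforms():
        print(platform.name)
        for device in platform.get_devices():
            print("   ", device.name, "-", cl.device_type.to_string(device.type))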
((Next Generation) (GPU EC2 Instances)) rather than ((Next Generation GPU) (EC2 Instances)) :)
It's one thing if one of them is like that, but if all of them are like that, maybe it's not because of the cloud providers?
The last link has a list of vendors. The one I used gave you a bare-bones machine running some Linux distro, with no platform bullshit around it. Exactly what you need for deep learning work.
Yes. Nvidia has real competition from AMD in consumer graphics but no competition in enterprise. Their consumer cards are sold at a competitive price while their enterprise cards are marked way up.
If cloud providers were really smart, they'd release cheaper, just-as-fast instances with consumer cards, but Nvidia probably doesn't want to do that deal; they'd prefer to push Teslas as the standard for GPGPU, even if they have to mark them down for cloud providers.
> It's one thing if one of them is like that, but if all of them are like that, maybe it's not because of the cloud providers?
I'm suggesting Nvidia is offering great deals on Teslas to cloud providers but not offering those same discounts on 1080s. They're doing that because it lends support to their pitch to enterprises that they need Teslas instead of the competitively priced consumer line; if everyone were paying list price, cloud providers would be offering 1080s.
Even with all that, cloud GPU is not cost-effective, and end users are better off buying their own "consumer grade" stuff.
Disclosure: I own HPC for AWS (among other things) and used to own instances.
They didn't say newest generation, only next, and "next" may in this case refer to
- "next after Kepler" (the parent's assumption), or
- "next after current" (other commenters' assumption).
When I'm using cloud GPUs it's pretty much a batch job, and latency is the last thing I care about.
I'm not aware of any DL projects in Australia on health images, which may have legislative requirements about keeping data onshore.
More selfishly, I game using parsec.tv and would love a new rig, and it's pretty latency sensitive! :)
How is the quality compared to x264 with the default settings (preset medium, crf 23)?
Here's a comparison video: https://www.youtube.com/watch?v=BV5btdqQfu4
According to this, it's almost equivalent when you compare 720p@5Mbps and 1080p@12Mbps, which is way more than most streaming sites will do: http://on-demand.gputechconf.com/gtc/2014/presentations/S464...
In our use case (sports broadcasting, 720p), we found that at reasonable bitrates (>1 Mbps) NVIDIA's HQ preset was virtually the same quality as x264 at the faster preset.
(NVENC got amazingly better in newer versions over the last couple of years.)
When the bandwidth drops, you start to see x264's advantage.
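For anyone wanting to reproduce this kind of comparison, here's a minimal sketch driving ffmpeg from Python. It assumes an ffmpeg build with both the libx264 and h264_nvenc encoders; the file names and the 5M bitrate target are placeholders:

    import subprocess

    SOURCE = "input.mp4"  # placeholder clip

    # x264 at its defaults (preset medium, CRF 23), as discussed above.
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-c:v", "libx264", "-preset", "medium", "-crf", "23",
        "out_x264.mp4",
    ], check=True)

    # NVENC at a fixed bitrate, since hardware encoders are usually
    # driven by a rate target rather than CRF.
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-c:v", "h264_nvenc", "-b:v", "5M",
        "out_nvenc.mp4",
    ], check=True)

Then compare the two outputs side by side (or with a metric like PSNR/SSIM) at the bitrates you actually stream at.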
As a general rule, for every 100 USD spent on this hardware you'd only mine about 50 USD's worth of cryptocurrency.
Which is not surprising, since on these products you get a fancy motherboard + high-end Intel CPUs + boatloads of RAM.
These are of little to no use when mining, and account for about half of the cost of this hardware. Also, the local cost of electricity is not the lowest in the world (China has some of the lowest).
While the price of hardware is fixed, cryptocurrencies possess a difficulty-adjustment mechanism. This gives the whole system an upper bound on mining profitability, and that bound converges on the profitability of the best-yielding hardware, which would be something to the tune of this. Note that while it has 6 GPUs, this system has only 8 GB of RAM and an Intel Celeron.
EDIT: We're talking about IO-bandwidth-bound cryptocurrencies here, like all the ones based on Ethash, Ethereum being one of them. Bitcoin's upper bound on profitability is set by the best SHA-256 ASICs.
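To put rough numbers on that rule of thumb, here's a back-of-the-envelope sketch; every input is an illustrative assumption, not market data:

    # Rough mining yield per 100 USD of hardware. All inputs are
    # illustrative assumptions, not current market figures.
    hardware_cost_usd = 100.0       # spent on the rig
    daily_revenue_usd = 0.30        # coins mined per day at current difficulty
    daily_power_cost_usd = 0.12     # electricity; varies hugely by region
    useful_life_days = 365          # before difficulty erodes the yield

    lifetime_yield = (daily_revenue_usd - daily_power_cost_usd) * useful_life_days
    print(f"Mined per {hardware_cost_usd:.0f} USD of hardware: ~{lifetime_yield:.0f} USD")
    # With these inputs: ~66 USD mined per 100 USD spent, the same order
    # of magnitude as the ~50 USD rule of thumb above.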
If this is true:
> it was as low as losing 2 cents per hour
then it seems someone should be able to negotiate with you (even if not Amazon), especially if you're willing to use 1,000 or 10,000 instances - and don't need memory, etc.
It seems 2¢ should be within what you might be able to shave off somewhere...
maybe not, though.
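For scale, here's the quick arithmetic on what a 2¢-per-instance-hour margin amounts to across a fleet (instance counts taken from the hypothetical above):

    # Annualized cost of a 2-cent-per-hour loss at fleet scale.
    loss_per_instance_hour = 0.02  # USD, from the quoted figure
    hours_per_year = 24 * 365

    for instances in (1_000, 10_000):
        annual = loss_per_instance_hour * instances * hours_per_year
        print(f"{instances:>6} instances: ~${annual:,.0f}/year")
    # 1,000 instances -> ~$175,200/year; 10,000 -> ~$1,752,000/year.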
Love to hear your thoughts on this!
Although, since an ASIC is application specific, couldn't you pack more cores (since you can ignore the circuitry that you don't need) or otherwise optimise for the application? If you can do more work, or do the same amount of work with less energy, then it could still be worth it.
I am obviously ignoring a rather important factor: it's unlikely a "small" player could make a more performant ASIC than a "big" player like the GPU vendors, who have invested billions into building high-performance hardware in a cost-effective way. I'm not really asking "why don't people do this", but rather asking whether it's possible at all given a big enough budget.
G instances are optimized for graphics-intensive applications (and have 1, 2, or 4 GPUs, less RAM, and more CPUs) - you might use these for design work, gaming, etc.
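As a concrete example of spinning one of these up, here's a minimal boto3 sketch; the AMI ID is a placeholder and g3.4xlarge is just one assumed G-family size, so check the current offerings:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single graphics-optimized (G-family) instance.
    # ImageId is a placeholder: pick an AMI with the GPU drivers you need.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",     # placeholder AMI
        InstanceType="g3.4xlarge",  # assumed G-family size with one GPU
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])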