
Azure and AWS's 'GPU general availability' lies - jph00
http://www.fast.ai/2016/12/19/gpu-lies/
======
xyzzy123
It seems likely their policies are a response to the risk that abusers will
mine cryptocurrency, then skip out without paying the bill. Register multiple
accounts, repeat ad nauseam.

Even requiring a credit card isn't too helpful, because cryptocurrency can be
cashed out right away, while credit card transactions can be reversed.

Also, for new accounts that haven't been billed yet, there is a lot of
uncertainty about whether the account was really registered by the cardholder.

This is a nontrivial fraud problem, and the cloud providers' response is a first
approximation to a solution. I would expect that as they engineer better fraud
signals and risk scoring, they'll eventually be able to offer GPUs to new
accounts.
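
The "fraud signals and risk scoring" idea above can be sketched as a toy gate on GPU quota. Everything here — the signal names, weights, and threshold — is invented purely for illustration; a real provider's fraud model would be far more involved.

```python
# Toy illustration of risk-scoring new accounts before granting GPU quota.
# All signals, weights, and the threshold are made up for illustration.

def gpu_risk_score(account_age_days, billed_months, card_verified, chargebacks):
    """Higher score = riskier account."""
    score = 0.0
    if account_age_days < 30:    # brand-new account
        score += 0.4
    if billed_months == 0:       # never successfully billed
        score += 0.3
    if not card_verified:        # card ownership not confirmed
        score += 0.2
    score += 0.5 * chargebacks   # each past chargeback is a strong signal
    return score

def allow_gpu_quota(score, threshold=0.5):
    """Grant a nonzero GPU instance limit only to low-risk accounts."""
    return score < threshold

# A fresh, never-billed account scores high and is blocked;
# an old account with a billing history is allowed.
new_acct = allow_gpu_quota(gpu_risk_score(3, 0, True, 0))     # blocked
old_acct = allow_gpu_quota(gpu_risk_score(400, 12, True, 0))  # allowed
```

The point of the sketch: a hard "no GPUs for new accounts" rule is just this gate with a very crude score, and better signals let the threshold do finer-grained work.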

~~~
thatrascaltiger
Exactly which cryptocurrency is actually feasible to GPU mine? Bitcoin hasn't
been feasible to GPU mine for years.

~~~
JorgeGT
Not feasible if you have to foot the bill. But if you don't...

~~~
bduerst
Yeah, this seems like a scam that would be easy to automate too, for just
about any rewards-based mining cryptocurrency.

------
pmalynin
Hmm, I created a new account and didn't have much trouble requesting a service
limit increase for p2 instances.

At my day job, we also don't have many issues accessing clusters of up to 20
p2 instances.

~~~
jph00
Once you know the limit is there, you know to request the increase. For my
students, this was less than obvious.

Furthermore, decisions about who was accepted and who was rejected were really
wacky. For instance, my co-instructor (who included a link to the course and
her LinkedIn in her request, has a Duke math PhD, worked as a quant, and was a
data scientist at Uber) was denied!

------
abuqutaita
Hey guys, I'm on the Azure team. I responded to Jeremy here:
[https://twitter.com/tmohammed/status/810982532925640704](https://twitter.com/tmohammed/status/810982532925640704).

We have some folks looking through his post and the comments on this thread to
help make cases like this a bit clearer. Thanks!

~~~
jph00
Thanks for looking into it. I just added a couple more concerns to the post:

* The totally bizarre responses that requests received. For instance, my co-instructor (who included a link to the course and her LinkedIn in her request, has a Duke math PhD, worked as a quant, and was a data scientist at Uber) was denied, whereas some students who provided no justification at all were accepted, on the same day!

* Why some of our students, who were fully paid-up, suddenly found their access cut off in the middle of the course.

------
n00b101
According to the documentation, the default Service Limit for g2.2xlarge is 5
instances:
[https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_ru...](https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2)

Do you see something different in the EC2 Console?
[https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reso...](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html)

Anyway, the solution to the problem is to request a Limit Increase using the
relevant form:
[https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reso...](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html#request-increase)

~~~
garethsaxby
g2 instances are the EC2 instances designed for graphics usage; they're
typically aimed at GPU video encoding rather than machine learning, which is
what the blog poster is after.

Not to say they couldn't cover either, but the GPU classes in the two instance
types are very different: g2 is GRID-based, while p2 uses Tesla GPUs. I don't
think the GRID GPUs have RDMA, although not being into machine learning I
don't know how important that would be.

The real problem is that the documentation you've linked to (The EC2 FAQ)
actually shows that the p2 instances should have a spot limit of 1, but when I
check my account, it's actually 0 for all sizes of p2 instances.

It doesn't seem to matter what region I choose, it just doesn't match the FAQ;
p2 is on request only.

~~~
jph00
> _The real problem is that the documentation you've linked to (The EC2 FAQ)
> actually shows that the p2 instances should have a spot limit of 1, but when
> I check my account, it's actually 0 for all sizes of p2 instances._

Exactly. The missing (and in places plainly wrong) communication to customers
and support staff is the biggest problem here.

------
Cacti
It is pretty obnoxious, but I think most of us understand why they are doing
things this way; it's just a shame it's not more obvious up front.

FWIW, it only took me about a week to get approved, and there was no
difficulty in it; I just put in two tickets and that was it.

------
dogma1138
DigitalOcean wanted me to pay $100 upfront before they let me create any
droplets; it's a pretty common thing with credit cards.

That said, if you have an Amazon or a Microsoft account that was billed in the
past, it's likely to count.

------
tomc1985
If you're going to build your house (or business) out of clouds, don't be
surprised to find that the floors, walls, and everything else are vapor

------
moonbug2
This guy's whining about nothing. The concurrency limits are there for
capacity planning and to limit the possibility of accidental overuse. Raising
a support ticket is all it takes to get the limits raised.

~~~
brd529
His point is that he runs a MOOC and his students have their own AWS accounts.
Since they are students and don't have established histories with Amazon, they
presumably aren't able to get these limits increased, and so can't do the labs
for the course, which require a GPU.

His biggest objection was that this wasn't documented anywhere, so he built a
course and sold it to students on the promise of an on-demand GPU for the
labs, but they can't actually participate because they don't have the history
required to get their GPU limit raised above zero.

~~~
jph00
I wish I'd explained this as clearly as you just did in my post... Thanks!

------
jph00
Just wanted to update this thread to let folks know that AWS reached out and
helped us find a solution for our MOOC students. I've updated the post with
this information.

------
vvladymyrov
The blog post mentions a Deep Learning MOOC. I'd love to hear more about it.

~~~
jph00
It'll be online tomorrow. Keep an eye on
[http://www.fast.ai](http://www.fast.ai) for details. There's lots of info
there on the original in-person course that it's based on
([https://www.usfca.edu/data-institute/certificates/deep-learn...](https://www.usfca.edu/data-institute/certificates/deep-learning-part-one)).

~~~
vvladymyrov
Thanks. Looking forward to the course starting.

