
New Amazon EC2 GPU Instance Type - isb
http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1872208&highlight=
======
jeffbarr
There's more technical info in my post at
[http://aws.typepad.com/aws/2013/11/build-3d-streaming-applic...](http://aws.typepad.com/aws/2013/11/build-3d-streaming-applications-with-ec2s-new-g2-instance-type.html)

------
ck2
Very weird they do not use the amazon domain for that and yet it looks exactly
like amazon.

Teaching consumers bad habits.

~~~
jeffbarr
That's a really good point, and one that I will be bringing up with my manager
immediately after re:Invent (I am part of the AWS team).

~~~
chimeracoder
If I remember correctly, it has to do with disclosure requirements.

Corporate-IR (or whoever runs that domain) meets the criteria/is authorized
for disclosure of information to investors.

Other companies use them too, for example NVIDIA: [http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-ir...](http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-irhome)

~~~
smackfu
Yeah, think of something like earnings reports. It's very important that no
one gets early access, and very useful to have a third party handle it so you
can prove that no one has early access. And if the third party screws it up,
they get investigated by the SEC, not you.

------
thenomad
How do the GPUs on this compare with NVidia desktop GPUs? Anyone know?

Also, very exciting that they're supporting GPU cloud rendering - that's going
to be big for 3D.

~~~
wmf
Via Jeff Barr: NVIDIA GRID™ (GK104 "Kepler") GPU (Graphics Processing Unit),
1,536 CUDA cores and 4 GB of video (frame buffer) RAM.

~~~
thenomad
Thanks!

My experience is that graphics card stats are a decidedly slippery fish as far
as comparison goes.

However, a quick bit of Googling implies that this is almost identical, at
least on paper, to a GeForce 770 or 680.

[http://www.geforce.co.uk/whats-new/articles/introducing-the-...](http://www.geforce.co.uk/whats-new/articles/introducing-the-geforce-gtx-770)

Unfortunately, without knowing more details (clock speed, memory bandwidth)
it's hard to say more.

Guess someone (possibly me) needs to benchmark 'em. :)

UPDATE - excellent info further down this thread:
[https://news.ycombinator.com/item?id=6678744](https://news.ycombinator.com/item?id=6678744)

------
HeXetic
> making it ideally suited for video creation services, 3D visualizations,
> streaming graphics-intensive applications ...

And, presumably, cracking hashes!

~~~
earlz
I've never used Amazon EC2, but with this kind of application, I might have to
give it a try. Buying a $300 graphics card just to try some GPU programming is
ridiculous.

~~~
profquail
You don't need to buy a $300 graphics card to experiment with GPU programming.

The current and previous generation Intel CPUs (Haswell and Ivy Bridge,
respectively) have on-die GPUs which support OpenCL:
[http://software.intel.com/en-us/articles/intel-sdk-for-openc...](http://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-2013-release-notes)

AMD's APUs are quite cheap (~$100) CPU+GPU designs similar to those in the
upcoming PS4 and Xbox One (though the retail APUs are somewhat less powerful).
They've been more-or-less designed _specifically_ around the needs of a
heterogeneous OpenCL application.

Finally, the last several generations of NVidia cards all support both CUDA
and OpenCL; the newer cards do support additional features though. You should
be able to pick up a low-end, recent-edition Nvidia GPU for roughly $100.

The new g2.2xlarge instances are $0.650/hour, and the existing cg1.4xlarge are
$2.100/hour; so it may make sense to experiment on AWS a bit, then buy your
own card for long-term use if you decide to spend more time doing GPU
programming.
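
As a rough sketch of that trade-off (assuming the $0.650/hr on-demand rate above and a hypothetical $100 low-end card; electricity and spot pricing ignored):

```python
G2_HOURLY = 0.65    # g2.2xlarge on-demand rate, USD/hour
CARD_COST = 100.0   # hypothetical low-end NVidia card, USD

# Hours of g2.2xlarge time that cost as much as buying the card outright
break_even_hours = CARD_COST / G2_HOURLY
print(round(break_even_hours, 1))  # prints 153.8
```

So past roughly 150 hours of experimentation, owning a cheap card starts to win on raw cost.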

~~~
idupree
Sadly, Intel's integrated-GPU OpenCL still doesn't support Linux, and only
just started supporting OS X in 10.9 Mavericks[1]. Usually Intel's Linux GPU
support is great; I don't know why this is different.

(Intel do have a Linux OpenCL implementation for Xeon CPU cores and Xeon Phi
coprocessor[2], which doesn't help me much. On-CPU OpenCL is fine but hardly
faster than regular CPU code, and Phi coprocessors aren't very common
currently.)

[1]
[http://forums.macrumors.com/showthread.php?t=1620203](http://forums.macrumors.com/showthread.php?t=1620203)
[2] [http://software.intel.com/en-us/vcsource/tools/opencl](http://software.intel.com/en-us/vcsource/tools/opencl)

------
beamatronic
I'm trying Folding@Home on it now. Looks like it might not recognize the GPU.

    22:42:58:WU02:FS00:0x15:GPU memtest failure
    22:42:58:WU02:FS00:0x15:
    22:42:58:WU02:FS00:0x15:Folding@home Core Shutdown: GPU_MEMTEST_ERROR
    22:42:58:WU02:FS00:0x15:Starting GUI Server
    22:42:59:WARNING:WU02:FS00:FahCore returned: GPU_MEMTEST_ERROR (124 = 0x7c)

~~~
jeffbarr
If you are confident that this should be working, post a note to the EC2 forum
so that we can investigate.

------
lelf
[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using_clu...](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using_cluster_computing.html)

------
kayoone
With stuff like this it looks like the devices we use could become mere
streaming clients in the future, needing little processing power but excellent
network connectivity.

That goes a bit against the trend in web development of moving much of the
processing to the client side, so I wonder where this will go.

Really high-performance streaming of apps/games could reverse the trend of
making everything browser-based in favor of streamed native apps.

------
jewel
I work on some OpenGL software that renders slideshows, and this is precisely
what we need. We've used the bigger cg1.4xlarge nodes in the past but they are
very expensive for what we're doing. The lower price on this (65¢/hr instead
of $2.10) is going to be much more manageable for us.

~~~
jeffbarr
Sounds awesome. Send me a link when you have it working!

------
dsugarman
This is huge beyond graphics: new levels of performance can be achieved with
GPGPU for data-intensive startups. I would love to see someone build a company
around this.

~~~
nivertech
Unfortunately, the GPU used in g2.2xlarge instances isn't good for double-
precision calculations.

~~~
dsugarman
why is that?

~~~
nivertech
It's optimized for 3D graphics and CAD rather than HPC: the GK104's double-
precision throughput is only 1/24 of its single-precision rate.
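
For a feel of why that matters, here's a minimal pure-Python illustration of my own (not tied to the K520) of single-precision accumulation error, using `struct` to round each step to IEEE-754 float32:

```python
import struct

def to_f32(x):
    """Round a Python float (which is float64) to the nearest float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Naively sum 0.1 a million times in single precision.
acc = 0.0
for _ in range(1_000_000):
    acc = to_f32(acc + to_f32(0.1))

print(acc)  # noticeably off from the expected 100000.0
```

In double precision the same naive loop stays within rounding noise of 100000, which is why HPC codes care so much about FP64 throughput.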

------
warrenmiller
Any idea whether this would make a decent bitcoin miner?

~~~
varelse
I'd guess it'd hit roughly 75% of a GTX 680 at this task.

~~~
thenomad
What makes you think that? Specs look pretty similar to a 680.

~~~
varelse
800 MHz core clock of each K520 GPU versus 1058 MHz boost clock of GTX 680...

[http://www.nvidia.com/object/cloud-gaming-gpu-boards.html](http://www.nvidia.com/object/cloud-gaming-gpu-boards.html)
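
The ~75% figure is just the clock ratio (assuming performance scales linearly with core clock, since both parts have 1,536 CUDA cores):

```python
K520_CORE_CLOCK = 800      # MHz, per-GPU core clock on the GRID K520 board
GTX680_BOOST_CLOCK = 1058  # MHz, GTX 680 boost clock

ratio = K520_CORE_CLOCK / GTX680_BOOST_CLOCK
print(f"{ratio:.0%}")  # prints 76%
```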

~~~
thenomad
Fantastic - that's exactly the info I've been looking for. Thanks.

------
dsugarman
Can they preload [http://wiki.postgresql.org/wiki/PGStrom](http://wiki.postgresql.org/wiki/PGStrom)?

