
AMD Epyc processors coming to Azure virtual machines - rbanffy
https://arstechnica.com/gadgets/2017/12/amd-epyc-processors-coming-to-azure-virtual-machines/
======
arcanus
This, and the fact that AMD now offers TensorFlow support for its GPUs:
https://www.anandtech.com/show/12032/amd-announces-wider-epyc-availability-and-rocm-17-with-tensorflow-support

mean there is finally some competition in data centers.

~~~
dragontamer
ROCm looks good, but so did OpenCL, and I'm not very convinced that OpenCL is a
good platform to build code on. ROCm addresses a lot of my concerns on paper,
but there isn't much buzz around ROCm... specifically HCC.

It looks like ROCm builds on Microsoft's C++ AMP specification (which also
looked like a great idea, but that project unfortunately appears dead). So it
provides "unified" C++ source code that can compile down into x86 or GPU
assembly (GCN in the case of AMD).

Developing in OpenCL requires a fair amount of "copy/paste" whenever you move
data processing between the CPU and the GPU, because the OpenCL "world" is
completely separate from the host language.

It all looks good, but it doesn't seem like anyone has really put ROCm through
its paces. There are some benchmarks over at Phoronix testing the OpenCL
implementation on ROCm (and apparently, ROCm's OpenCL uses a different
compiler from AMD's older stack).

--------

It's kind of hard for me to see where AMD's "focus" is. If I had to bet, ROCm
HCC seems to have the most effort behind it right now. In the past two years,
ROCm HCC has seen multiple releases from 1.0 through 1.7, and its GitHub page
shows a lot of activity. Way more activity than AMD's OpenCL forums, for
example.

But there are no community forums for ROCm / HCC yet! I think most discussions
happen in GitHub issues, which is... a subpar community solution.

AMD has been working on a lot of technologies that seem redundant: Vulkan
Compute Shaders, OpenCL Compute, and now ROCm / HCC Compute. In addition,
there's Microsoft C++ AMP and Microsoft DirectCompute.

NVidia has done a good job of communicating that CUDA is the #1 platform of
choice on NVidia systems. But I'm not entirely sure which technology AMD is
focusing on right now. If I had to guess, they're all-in on ROCm / HCC. But
clarification would be nice.

I do realize that AMD also offers OpenCL support on ROCm. But features like
packed FP16 are better supported in ROCm HCC, and I don't think packed FP16 is
supported in ROCm OpenCL yet. This suggests that AMD's #1 effort is HCC,
despite the relative silence around the HCC platform.

See this ROCm issue for example:

[https://github.com/RadeonOpenCompute/ROCm/issues/219](https://github.com/RadeonOpenCompute/ROCm/issues/219)

------
c2h5oh
I'm happy to see non-Intel solutions in the cloud and I'm looking forward to
seeing more - AMD currently has a superior product at any price point (except
for AVX2 workloads).

------
asdgkknio
What sort of virtualization performance and features does Epyc have? I haven't
seen much good information.

EDIT: Actually, yes I have. I know that Epyc allows encryption of every VM
with a key the host can't access, which is wicked. But I haven't seen
benchmarks or in-depth analysis.

------
eklavya
It would be awesome if Apple came out with an AMD Mac mini.

~~~
pjmlp
It would be awesome if Apple came out with a new Mac mini at all.

------
api
Hooray for AMD! Breaking into cloud hosting is really what they need to do.
I'm seeing them show up on bare metal box providers like Hetzner and OVH too.

------
mtgx
But will they make use of AMD's Secure Memory Encryption and Secure Encrypted
Virtualization?

https://semiaccurate.com/2017/06/22/amds-epyc-major-advance-security/

~~~
lefty2
No. "We asked several questions about the deployment, such as the Azure
locations that will be EPYC enabled as well as which EPYC-specific security
features are in use on the Azure platforms. Deployments in specific
datacenters are not being discussed at this time, and the SME features of EPYC
are not being used in Lv2." https://www.anandtech.com/show/12116/amd-and-microsoft-announce-azure-vms-with-32core-epyc-cpus

------
moreless
This is great news; I hope they make it in the cloud! It will be interesting
to see how this plays out... There are many optimizations for Intel CPUs in
kernels and VM hosts; hopefully this will not be too big an obstacle for AMD.

------
dis-sys
So? Checked lots of retailers/vendors in China, they all told me the same
story - no Epyc in stock, but Intel's latest servers/processors can be shipped
in hours. In fact, I ordered several Xeon processors last Sunday and they
showed up on my doorstep yesterday. For Epyc, I need to pay the full price and
wait for 4 weeks to get anything with the Epyc logo on it.

When searching for the keyword Xeon on taobao.com, I got over 100 pages of
results, from $130 second-hand E5-2670s to the stupidly expensive Platinum
8180. Searching for Epyc gave me a blank page with one and only one seller,
promising to ship in 4 weeks.

AMD is not ready for such a battle with Intel, and I am not going to waste my
time on such a half-baked product, which is not even really available to
consumers months after its paper launch.

Posted from my new dual-Xeon workstation.

~~~
sp332
That's because Epyc is a new name, while Xeon is up to what, E7? I bet you're
not getting so many hits for the latest Kaby Lake Xeons. And you'd get better
results looking for comparable Opterons for older systems.

~~~
dragontamer
E7 Xeon is the old naming scheme. Xeons used to roughly follow E3 / E5 / E7
(just like i3 / i5 / i7 in the consumer brands).

The naming scheme for the latest Xeons (Skylake and newer) is Xeon Platinum /
Xeon Gold / Xeon Silver.

~~~
sp332
Thanks. I probably knew that at some point, but the x299 chipset was so crazy
it just pushed that knowledge right out of my head.

------
swarnie_
Can we get some more capacity in the EU West region first please?

------
rdlecler1
Is MS doing this just to put pricing pressure on Intel or is this a legitimate
threat to Intel?

