
AMD lands Google, Twitter as customers with newest server chip - jonbaer
https://www.reuters.com/article/us-amd-alphabet/amd-lands-google-twitter-as-customers-with-newest-server-chip-idUSKCN1UX2KL
======
AareyBaba
It is worth pointing out the David versus two Goliaths situation here.

AMD: 10K employees and a $1 billion annual R&D budget.

Intel: 100K employees (including 10K just in software) and a $20 billion R&D budget.

Nvidia: 11K employees and a $2 billion R&D budget.

AMD CPUs now surpass Intel's, and their GPUs are competitive with Nvidia's
except at the high end.

~~~
kev009
AMD's software ecosystem is pretty bad. They fired many of their Linux platform
devs in ~2013 and never recovered. There is nothing like Intel's VTune, and
even basic functions like performance counters, MCA support, and ECC event
reporting are janky on AMD CPUs. Intel does a pretty good job all around there.
Nvidia is leagues ahead on GPGPU with CUDA and general Linux driver
stability. Sure, it's a blob, but if it works well in a commercial setting,
nobody cares.

Rome is a nice chip nonetheless. I hope they can rehire a big enough Linux
team to get over these humps; it isn't that much work on the CPU/platform side.

~~~
justsid
I work on game performance (or rather, I work on the underlying engines with
an eye on performance), and that is the biggest reason why my computer has an
Intel CPU. I would love the raw power of a Zen 2 machine; the compile time
improvement alone would make it worth every cent. Unfortunately I use VTune a
lot for work and there just isn't any alternative, so I'm stuck with whatever
Intel gives me. Which is doubly sad, because I end up optimizing for Intel's
microarchitecture and can only hope that it runs decently enough on AMD's
CPUs.

On the GPU side though, at least with the modern APIs (Vulkan, D3D12 and
Metal), AMD isn't too far behind Nvidia. I actually prefer RenderDoc to
Nvidia's Nsight because I can capture multiple frames instead of pausing the
application to look at one frame at a time. That being said, the OpenGL
tooling for AMD is abysmal and so is their OpenGL driver. All other things
being equal, just swapping an AMD card for an Nvidia card gives around a
40-50% speedup when running OpenGL, purely by getting rid of the driver
overhead. With Vulkan, though, AMD and Nvidia are actually on par, so at least
that's solving itself for us.

~~~
uep
I feel like this is a safe assumption: you do most of your performance work on
Windows? But since you focus on OpenGL at all, I have to assume that you at
least spend some of your time developing on Linux.

Have you tried the AMD mesa drivers? The open source driver performance
doesn't look terrible, and in my own experience, it tends to work far better
for OpenGL than AMD's proprietary GPU drivers.

[https://www.phoronix.com/scan.php?page=article&item=rx-5700-...](https://www.phoronix.com/scan.php?page=article&item=rx-5700-july&num=2)

~~~
ahartmetz
Probably Windows, because on Linux you have the AMD Mesa driver, plus perf,
valgrind's callgrind, and kcachegrind, which are all pretty good. For OpenGL
there are RenderDoc and apitrace. Unfortunately, the rr reverse debugger does
indeed not work with Ryzen, because the performance counters are too fuzzy, as
mentioned in another comment.

I have been working with performance tools on Linux for over 10 years and
occasionally debugged graphics issues.

~~~
blattimwind
kcachegrind is okay, but VTune is a lot better.

------
koolba
I hope this puts AMD on stable footing for many years to come. Having newer
and faster chips is great, but what's even better is long-term competition in
this space.

~~~
samfisher83
Opteron was crushing it back in the day. I think tech is just cyclical. It
seemed like AMD was in a lull for a while, which basically let Intel not
really compete. Hopefully this forces Intel to step up its game. Competition
is always good for the consumer.

~~~
uncletaco
Was AMD in a lull, or just making long-term bets on technology?

I've only followed casually over the last decade-plus, but I was under the
impression that the ATI merger was meant to support a long-term bet on better
chips by AMD, and we're starting to see the fruit of that labor.

~~~
faitswulff
As a layperson, I never understood that merger. How does buying a graphics
card company make for better chips in general?

~~~
dtech
Some advantages:

* Integrated graphics in their CPUs are important for lower-end systems and APUs/SoCs. Intel had an integrated GPU at the time; AMD didn't.

* It allowed them to provide the CPU+GPU for every Xbox and PlayStation console since the acquisition.

* It might have gotten them better deals with chip fabs, because their volume increased.

~~~
rch
It was a bad bet on netbooks that didn't fully anticipate the dominance of
smartphones.

~~~
philjohn
Their low-end Ryzen APUs game better than low-end Intel chips running on
Intel's IGP.

------
mktmkr
The historical context here is that AMD once had a monopoly inside Google's
datacenters and pissed it away by shipping the horribly broken Barcelona,
followed by the not-very-broken but also not-very-fast Istanbul. This is a
return to form for them, after a decade of poor form. The important thing for
an operator like Google or Amazon is pricing power. As long as they can
brandish a competing platform under the nose of Intel sales reps, they can get
a better deal from Intel regardless of which they really prefer. You may have
noted that a few years ago Google was showing a POWER platform at trade shows.
That has the same purpose: putting Intel (and AMD) on notice that they have
the capability to port their whole world to a different architecture if
needed.

~~~
bhouston
AMD is not just competitive; it is better than Intel. Thus Google should adopt
it and roll it out faster than any other cloud provider. This will win them
customers. I want 256 threads per machine at competitive prices.

~~~
jrockway
AWS has AMD machines. Amazon claims they perform worse than the Intel machines
they use, and they are priced lower as a result.

~~~
philliphaydon
T3 instances are Intel, newer than the T2, and perform worse than the T2. T3a
are AMD and perform on par with or slightly better than T3, for less cost.
(From my own testing; not a claim I can back up, just my observation.)

------
barkingcat
The article is missing the biggest elephant in the room for datacenter CPUs:
the security issues that have lain latent in Intel chips for the last decade,
and the constant patching and BIOS updates (and re-updates, as researchers
discover new attacks) that are needed to make datacenter use sane.

Intel really messed up, and has no one to blame but themselves.

~~~
sq_
Does anyone know of any benchmarks that go through the full impact of all of
Intel's hardware-level security issues?

I remember seeing the Linux kernel devs discussing some massive 10+%
performance hits back around Meltdown/Spectre patch time, and I’m now
wondering what the final impact has been.

~~~
mrep
10 percent? My team's RDS CPU usage spiked by 40 percent after the initial
patch [0].

[0]: [https://m.imgur.com/a/khGxU](https://m.imgur.com/a/khGxU)

~~~
msh
What is RDS short for?

~~~
mrep
Amazon web services relational database service:
[https://aws.amazon.com/rds/](https://aws.amazon.com/rds/)

------
ChuckMcM
This is a big win for AMD, and for me it reconfirms that their strategy of
pushing into the mainstream features that Intel tries to hold hostage for the
"high end" is a good one. Back when AMD first introduced the 64-bit extensions
to the x86 architecture, it directly challenged an Intel that was selling 64
bits as a "high end" feature in its Itanium line; commoditizing 64-bit
processors was a place Intel was unwilling to go.

That proved pretty successful for them. Now they have done it again by
commoditizing "high core count" processors.

Each time they do this, I wonder if Intel will ever learn that you can't "get
away" forever with selling something for a lot of money that can be made more
cheaply. Processors are not a Veblen good.

~~~
Tuna-Fish
> That proved pretty successful for them. Now they have done it again by
> commoditizing "high core count" processors.

They've done much more than that. Intel's current server CPU lineup is tightly
siloed into different segments, limiting every feature that some customers
would pay more for to its own line, priced to match. That's why they currently
have Xeon Scalable {Bronze, Silver, Gold, Platinum} and Xeon {D, W, E} lines,
with 402 different Xeon CPUs actively being sold.

In contrast, AMD has two EPYC lines, P and non-P, differing only in that P is
for 1-socket servers. The models within these lines differ only in core count
and clocks; all the features that Intel gates and segments by are found in
every AMD CPU.

------
Datenstrom
> new Intel chip features for machine learning tasks and new Intel memory
> technology being [used] with customers such as German software firm SAP SE
> (SAPG.DE) could give Intel an advantage in those areas.

I hope AMD turns their attention to machine learning soon, not just against
Intel but against Nvidia as well. The new Titan RTX GPUs, with their extra
memory and NVLink, allow some really awesome tricks to speed up training
dramatically, but Nvidia nerfed the card by selling it only without a
blower-style fan, making it useless for multi-GPU setups. So the only option
is to get the Titan RTX rebranded as a Quadro RTX 6000 with a blower-style
fan, at a $2,000 markup. $2,000 for a fan.

The only way to stop things like this will be competition in the space.

~~~
opencl
Their GPUs already have really good performance for machine learning tasks,
but it doesn't seem like there's a whole lot more they can do about people
overwhelmingly using CUDA. Their ROCm software stack has gotten reasonably
good, but getting developers to buy in is hard given how much inertia
NVIDIA/CUDA has.

~~~
changoplatanero
Why can't AMD make their GPUs CUDA compatible?

~~~
yalue
I'd say it's likely because NVIDIA's CUDA compiler produces code for NVIDIA
GPUs, and AMD would have a lot of (questionably legal) reverse engineering
ahead of them in order to support the same code on their own GPUs.

If you're asking why AMD doesn't make a compiler for CUDA source code that
targets their own GPUs: that's basically what ROCm currently does. They're
pushing their CUDA alternative, called "HIP", which is essentially just CUDA
code with a find-and-replace of "cuda" with "hip" (and other similarly minor
changes). They have an open source "hipify" program that does this
automatically
([https://github.com/ROCm-Developer-Tools/HIP/tree/master/hipify-clang/](https://github.com/ROCm-Developer-Tools/HIP/tree/master/hipify-clang/)).

So, basically, AMD GPUs are already sort of CUDA compatible: just run your
CUDA code through hipify, then compile it with the HIP compiler, and run it on
a ROCm-supported system (which, for now, is the spottiest of these steps,
IMO).
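
To make the find-and-replace point concrete, here is a toy sketch of my own
(not taken from the HIP repo) of what a hipified CUDA program looks like. The
"was:" comments show the original CUDA spellings; error checking is omitted
for brevity:

    // A trivial vector-scale program after a hipify pass. The kernel body
    // is untouched; only the header and runtime API calls were renamed.
    #include <hip/hip_runtime.h>  // was: #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float* x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1024;
        float h[n];
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        float* d = nullptr;
        hipMalloc(&d, n * sizeof(float));          // was: cudaMalloc
        hipMemcpy(d, h, n * sizeof(float),
                  hipMemcpyHostToDevice);          // was: cudaMemcpy(..., cudaMemcpyHostToDevice)
        hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                           d, 2.0f, n);            // was: scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n)
        hipMemcpy(h, d, n * sizeof(float),
                  hipMemcpyDeviceToHost);          // was: cudaMemcpy(..., cudaMemcpyDeviceToHost)
        hipFree(d);                                // was: cudaFree

        printf("x[0] = %.1f\n", h[0]);             // prints 2.0 if the kernel ran
        return 0;
    }

The same source builds with hipcc for AMD GPUs under ROCm, and on NVIDIA
hardware the hip* calls compile down to thin wrappers over the corresponding
cuda* ones, which is the whole pitch of HIP as a portability layer.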

------
rohan1024
This must be one of the reasons:
[https://i.redd.it/181enxirv6f31.jpg](https://i.redd.it/181enxirv6f31.jpg)

Edit: source:
[https://www.phoronix.com/scan.php?page=article&item=amd-epyc-7502-7742&num=4](https://www.phoronix.com/scan.php?page=article&item=amd-epyc-7502-7742&num=4)

~~~
phkahler
Beating Intel using their own codec! That's gotta hurt.

~~~
Teknoman117
Looks like the dual-socket systems don't really fare well in that benchmark.
Either way, it's really funny that we need a state-of-the-art 64-core CPU to
exceed 60 fps in AV1 encoding.

Jokes aside, I know H.264 was at this same point in the past; I just wonder
how long it will be before we see AV1 hardware encoders that produce
good-quality video (and, hell, hardware decoders as well).

~~~
notathing
> _Either way, it's really funny that we need a state-of-the-art 64-core CPU
> to exceed 60 fps in AV1 encoding._

When H.264 was introduced, you needed a state of the art CPU just to play it
back... You can imagine how slow encoding it was.

~~~
rasz
It wasn't that bad. H.264 was ratified in 2003, yet even the first Xbox,
released in 2001 (roughly a 733MHz Celeron), could play H.264 480p at 2.5Mbit
(DVD resolution, equal in quality to 10Mbit MPEG-2) with ease, as could budget
desktop processors released in 2001 (Celeron 800/Duron 800, both with SSE).

Higher resolutions were another matter. Smooth H.264 720p 5Mbit playback
needed at least a 2GHz Athlon or a 2.8GHz P4: either top-of-the-line 2003
CPUs, or 2006-2007 budget ones.

H.264 1080p 30Mbit Blu-ray, introduced in 2006, could be decoded purely in
software on top-of-the-line 2006-2007 CPUs (dual-core P4, A64 X2).

------
stupidcar
I'm curious, since I don't follow hardware that closely: is the current battle
really best framed as a fight between Intel and AMD, or between Intel and
TSMC? I.e., is AMD's recent resurgence due to better chip design, or because
Intel's fabrication is struggling whereas TSMC's isn't?

~~~
ksec
It isn't just both; it is all _four_ of those areas, i.e. Intel's design and
fabrication _and_ AMD's design and fabrication.

It was basically a perfect storm that would have been unthinkable a few years
ago. (To me it still feels unreal, even with today's announcement.) Intel's
10nm couldn't be fixed in time (in fact, for 24 months they just kept lying,
both publicly and in investor meetings); that is Intel's fab problem. And Ice
Lake couldn't arrive on time because of 10nm: its design could not be adapted
to 14nm or another node, so it was stuck on 10nm. Compare that to Apple and
AMD, which have adopted the _train_ development model, where designs are less
fixated on a node schedule.

And AMD managed to execute to _perfection_. Naples set the tone for the
industry, and Rome (Zen 2) was a huge leap in performance. The chiplet design
gave AMD an advantage in cost (smaller dies, higher yields, mass manufacturing
in volume), so while it is priced lower than Intel, they are not hurting their
margins to fight this battle, which is very important for AMD's long-term
survival. All of this ran in perfect harmony with TSMC's 7nm. Not to mention
that TSMC was willing to fight to secure 7nm capacity for AMD, along with
risking more CapEx and building more fabs.

And to add a fifth thing to all of this, Intel had major security problems
just months before AMD's Zen 2 launch.

It is as if the whole thing were scripted to play out as the perfect
counterattack by AMD and TSMC. But no, it is the hard work and dedication of
AMD and TSMC, the will to fight and deliver against all odds; compare that to
Intel over the past 4-5 years.

So if you loathe Intel, AMD is now not only an _alternative_ but possibly the
best option for servers.

And if you love Intel, you should buy AMD to teach them a lesson for milking
customers and sitting on their butt not innovating.

~~~
hajile
Don't forget that the IO die (which is actually much larger than the CPU dies)
is still on GlobalFoundries 14nm, so by area most of these chips are more 14nm
than 7nm. That increases the number of 7nm chiplets AMD can get, which
definitely helps them keep up with the (seemingly) massive demand.

~~~
noir_lord
There is real demand. I'm building a Zen 2 based system for a co-worker, and
the other day I had to try three different online stores to get my hands on a
3600.

Damn thing is faster than my 2700X (which I'll be upgrading to a 3900X when I
get back from holiday).

AMD is straight _killing it_ at the moment.

------
localhost
It's not just Google and Twitter; Azure is in as well:
[https://azure.microsoft.com/en-us/blog/announcing-new-amd-epyc-based-azure-virtual-machines/](https://azure.microsoft.com/en-us/blog/announcing-new-amd-epyc-based-azure-virtual-machines/)

------
walrus01
Reminds me of many years ago, when the absolute best dollar-per-performance
setup was a rackmount dual-socket Opteron, right when the Opteron first became
a thing.

------
drewg123
They need to build a server ecosystem. I'm hoping that this success will help
with that so that things are better positioned for the next new server CPU
launch.

We've had engineering samples of Rome for quite a while. However, there are
very few boards with PCIe 4.0 available right now. The one we've tried (under
NDA) has a busted BMC that won't accept network settings, which has really
hampered Rome testing. We've actually done most of our Rome testing on
1st-generation boards, with slower RAM and PCIe Gen3.

------
ps101
Genuinely asking, since I have little to no concept of this space - how does
the prevalence of either Intel or AMD affect developers?

~~~
robmccoll
For most developers, it doesn't. They're both making x86_64 chips with
coherent caches of similar size, similar core and thread counts, and a lot of
overlap in instruction set extensions. For people doing very high-performance
work (heavy data processing, simulations, etc.) or performance-critical work
(think hand-coding optimized crypto library routines in assembly), the subtle
differences in those categories might affect how they lay out and access data,
how many threads they use and how they use them, and their preference for
instruction set extensions specific to their kind of workload (vector
processing extensions, native crypto operations, unusual bit-twiddling
patterns).
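
To give a flavor of what that last category looks like, here is a minimal
runtime-dispatch sketch of my own (GCC/Clang on x86_64; the function names are
illustrative): one binary that uses AVX2 where the CPU reports it and falls
back to scalar code elsewhere. Both vendors expose features through CPUID, so
the check itself is vendor-neutral.

    #include <immintrin.h>
    #include <cstdio>

    // AVX2 path: sum 8 floats per iteration. The target attribute lets us
    // use AVX2 intrinsics here without compiling the whole file with -mavx2.
    __attribute__((target("avx2")))
    static float sum_avx2(const float* x, int n) {
        __m256 acc = _mm256_setzero_ps();
        int i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
        float lanes[8];
        _mm256_storeu_ps(lanes, acc);
        float s = 0.0f;
        for (int k = 0; k < 8; ++k) s += lanes[k];
        for (; i < n; ++i) s += x[i];  // scalar tail
        return s;
    }

    // Portable fallback for CPUs without AVX2.
    static float sum_scalar(const float* x, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += x[i];
        return s;
    }

    float sum(const float* x, int n) {
        // __builtin_cpu_supports reads CPUID feature bits at runtime;
        // real code would cache the choice instead of re-checking per call.
        if (__builtin_cpu_supports("avx2"))
            return sum_avx2(x, n);
        return sum_scalar(x, n);
    }

    int main() {
        float x[100];
        for (int i = 0; i < 100; ++i) x[i] = 1.0f;
        printf("%.1f\n", sum(x, 100));  // 100.0 on either path
        return 0;
    }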

~~~
sanxiyn
Another difference is that Intel's and AMD's hardware virtualization
extensions (VMX and SVM) are incompatible with each other.
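
As a quick Linux-only illustration (a sketch of my own, not from the thread):
the two extensions show up as different CPUID feature flags, which is also why
KVM ships separate kvm-intel and kvm-amd backends.

    // Report which hardware virtualization extension the host CPU advertises,
    // by scanning the space-separated flags line in /proc/cpuinfo.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream cpuinfo("/proc/cpuinfo");
        std::string line;
        while (std::getline(cpuinfo, line)) {
            if (line.compare(0, 5, "flags") != 0) continue;
            std::istringstream tokens(line.substr(line.find(':') + 1));
            std::string flag;
            while (tokens >> flag) {
                if (flag == "vmx") { std::cout << "Intel VT-x (vmx)\n"; return 0; }
                if (flag == "svm") { std::cout << "AMD-V (svm)\n"; return 0; }
            }
            std::cout << "no hardware virtualization flag found\n";
            return 0;
        }
        std::cerr << "could not read /proc/cpuinfo\n";
        return 1;
    }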

------
jacquesm
Intel stock has dropped quite a bit in the last month.

~~~
jdhn
I remember when AMD's stock was a running joke on WallStreetBets. Doesn't seem
like much of a joke now.

~~~
one2zero
Would love to see someone who YOLO'd @ $2.00 and HODL'd till now...what a bet
that would've been.

~~~
phkahler
I seriously considered it just under $2. If I had believed everything I read
about Zen, it would have been a sure thing to double or triple my money on an
AMD comeback. But I couldn't tell if that was my intuition or just hope, so no
bet. If I had bought, there's no way I would have held all this time either
(maybe some shares, but not all). But with what's happening now, it looks like
$40 is entirely possible. That would be 20x in just a few years, but I didn't
play...

------
sunseb
I love that! This AMD/Intel competition is bringing us faster and cheaper CPUs!

------
bob1029
How long do you guys think it will take Intel to come up with an Infinity
Fabric equivalent? I feel like Intel wasn't working on one until recently, and
it might actually be a few years before they can produce competitive silicon,
especially accounting for the possibility that AMD keeps releasing on this
yearly cadence while simultaneously leveraging full-node improvements at
TSMC/Samsung...

------
agumonkey
Weird sensation. I remember AMD's struggling days, and now that they're
(plausibly) about to become very successful... I'm almost tempted to donate to
them. Go AMD.

~~~
theandrewbailey
Donate? Buy some of their products.

~~~
agumonkey
I wasn't imagining donating $100 or anything near the price of one of their
CPUs. It's just that they broke through after years of hard work, and that's
great, IMO.

~~~
BubRoss
Are you actually talking about giving away your money to a multi-billion-dollar
public company?

Buy some stock instead; don't just throw your money away. How would you even
'donate' to them?

~~~
agumonkey
It's just an emotion toward a company that did business in a way I value.
Maybe buying stock is how you express "emotional" appreciation in the business
world.

------
john37386
Maybe it has something to do with all the recent speculative side channel
attacks. Intel seems to have more hardware vulnerabilities than AMD.

------
nafizh
If only I could now use AMD GPUs for training neural networks. Nvidia badly
needs some competition.

------
schainker
I sit sometimes wondering how much energy is wasted from the intel security
fixes and what that translates to in real dollars. IMO the market should gut
punch Intel for causing such a wasteful problem if competitors bring the
figurative heat.

------
trilila
One product I've been waiting for is the upcoming Mac Pro. But given that it
has an Intel CPU, is overpriced, and is less powerful than the AMD option, I
am going to delay that purchase for the foreseeable future.

------
dyeje
What does Twitter have to do with any of this? Do they run their own data
center?

~~~
pavanky
Yes, more than one, in fact.

