Is it still the case that an AMD core is less powerful than an Intel core for the same frequency? I understand that AMD is making it up with more cores but in a cloud you get charged per core. Can a cloud substitute an Intel for an AMD cpu?
That hasn't been true since the release of Ryzen in 2017. Where AMD lags in single-core performance, it's because Intel CPUs can be clocked higher, often to 5GHz, whereas AMD usually only boosts to somewhere around 4.4GHz. Gamers care about that roughly 14% difference; servers usually don't even go beyond 3GHz.
IPC varies with the specific task, but Ryzen 1xxx and 2xxx have always had IPC comparable, on average, to Broadwell CPUs (excluding AVX workloads). So Intel has held a slight lead there from Skylake onwards.
From what we're seeing, the situation appears to be reversed with the 3xxx series, where AMD seems to have a small but significant lead; we'll have to wait for independent benchmarks to be sure.
That's not a fair comparison. Servers run at lower speeds because they have far more cores. AMD's desktop processors have more cores than Intel's, so you'd expect their clocks to be lower too.
Servers run at lower speeds because server owners at anything beyond a pretty trivial scale generally care more about perf/watt than about the number of servers they have, and the slower frequencies hit that sweet spot. There are some places where individual node perf matters, and you'll see lots of cores _and_ high clock speeds there. Borg scheduler nodes come to mind.
No, servers are lower speed because they have more cores and larger caches. Both of those take up more die space, which makes routing higher-speed clocks harder/impossible.
This is easy to demonstrate. The highest-clocked Xeons you'll find are a special SKU exclusive to AWS. Sure enough, they have far fewer cores than instances with lower clocks.
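As an aside, on a Linux instance you can inspect this cores-vs-clock tradeoff yourself from /proc/cpuinfo. A minimal sketch (field names assume a typical x86 Linux kernel; not part of any post above):

```python
from collections import Counter

def cpu_summary(path="/proc/cpuinfo"):
    """Count logical CPUs and distinct model-name strings from /proc/cpuinfo."""
    models = Counter()
    logical = 0
    with open(path) as f:
        for line in f:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("model name"):
                models[line.split(":", 1)[1].strip()] += 1
    return logical, models

if __name__ == "__main__":
    count, models = cpu_summary()
    print(f"{count} logical CPUs")
    for name, n in models.most_common():
        print(f"  {n}x {name}")
```

The model-name string usually embeds the base frequency (e.g. "@ 3.00GHz"), so one run shows both the core count and the advertised clock.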
The cores have independent clock trees and PLLs. Half the point of going multi core instead of giant single core in the first place is so that you don't have to route clock lines all over the place.
What you're seeing isn't routing issues, but the fact that their newer process isn't up to snuff, and they don't have the proper yields on larger die sizes.
Like, I've shipped RTL and know pretty well how this stuff works.
From the (possibly cherry-picked) benchmarks in the original announcement [0], it looks like single-thread performance is on par or slightly better even with a clock disadvantage.
Not sure how much overall effect it will have, but Windows also recently released an update to perform better with Zen, and these benchmarks don't include that update. I'm not sure what the story was with CPU vulnerability mitigations, but if those were turned off then Zen 2 could handily be beating Intel.
It certainly looks promising, but I'll still hold my excitement until we get some 3rd party benchmarks.
I believe the update optimizes where threads are scheduled across CCXs; it shouldn't affect a single-threaded benchmark.
I'm also skeptical of first-party benchmarks, but I'm already pretty excited that I finally might be able to justify an upgrade from my old Haswell setup.
IIRC, the benchmarks also don't include the Spectre/Meltdown patches for Intel. AMD apparently worked very hard to give a worst-case comparison, for the most part.
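For anyone who wants to check rather than guess: Linux kernels since 4.15 report per-vulnerability mitigation status under sysfs. A small sketch (my own illustration, not from the thread), assuming that sysfs interface is present:

```python
from pathlib import Path

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Map each CPU vulnerability the kernel knows about to its reported status."""
    root = Path(base)
    if not root.is_dir():  # older kernels don't expose these files
        return {}
    return {p.name: p.read_text().strip() for p in sorted(root.iterdir())}

if __name__ == "__main__":
    for vuln, status in mitigation_status().items():
        print(f"{vuln:24s} {status}")
```

Entries like `spectre_v2` will read "Mitigation: ...", "Vulnerable", or "Not affected", which is how you'd tell whether a benchmark machine had mitigations enabled.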
Yeah, it depends on what they deem equivalent. AWS already offers AMD cores, and they're about 10% cheaper than Intel cores - https://aws.amazon.com/ec2/amd/
AWS mostly did away with that years ago; AFAIK first-tier cloud providers all promise a specific CPU model. And yes, an AMD vCPU is cheaper than an Intel vCPU.
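If you want to confirm which silicon a cloud VM actually landed on, the vendor string in /proc/cpuinfo is enough on Linux. A minimal sketch (illustrative only; the "unknown" fallback covers non-x86 or non-Linux hosts):

```python
def cpu_vendor(path="/proc/cpuinfo"):
    """Return the x86 vendor string from /proc/cpuinfo, or "unknown"."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("vendor_id"):
                    return line.split(":", 1)[1].strip()
    except OSError:  # no /proc/cpuinfo on non-Linux systems
        pass
    return "unknown"

if __name__ == "__main__":
    # "GenuineIntel" on Intel instances, "AuthenticAMD" on AMD ones (e.g. m5a)
    print(cpu_vendor())
```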