Hacker News

> ... for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform for them. The change here comes for the Speculative Store Bypass, known as Spectre v4, which AMD now has additional hardware to work in conjunction with the OS or virtual memory managers such as hypervisors in order to control. AMD doesn’t expect any performance change from these updates.
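For context, the "work in conjunction with the OS" part is exposed on Linux through the per-process speculation-control `prctl` interface. A minimal sketch querying a process's Speculative Store Bypass status via `ctypes` (the constants are from `<linux/prctl.h>`; the function simply returns None on non-Linux hosts or old kernels):

```python
import ctypes
import ctypes.util
import sys

PR_GET_SPECULATION_CTRL = 52  # from <linux/prctl.h>
PR_SPEC_STORE_BYPASS = 0      # the Spectre v4 speculation misfeature

def ssb_status():
    """Query the kernel's Speculative Store Bypass control for this process.

    Returns the raw prctl bitmask (PR_SPEC_* flags), or None when the
    platform or kernel doesn't support the interface.
    """
    if not sys.platform.startswith("linux"):
        return None
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    res = libc.prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0)
    return None if res < 0 else res
```

The corresponding `PR_SET_SPECULATION_CTRL` call is how a process (or a hypervisor, at a different layer) opts in or out of the mitigation per task.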

Which Intel CPU generation will have hardware fixes for these Spectre variants?

I guess it will score AMD a lot of datacentre business. Quite a few big "cloud" providers still run completely bare with regard to microcode patches, and some very likely do so intentionally.

Is it still the case that an AMD core is less powerful than an Intel core for the same frequency? I understand that AMD is making it up with more cores but in a cloud you get charged per core. Can a cloud substitute an Intel for an AMD cpu?

That hasn't been true since the release of Ryzen in 2017. The reason AMD lags behind in single-core performance is that Intel CPUs can be clocked higher, often to 5GHz, whereas AMD usually only boosts to somewhere around 4.4GHz. Gamers care about a 12% difference; servers usually don't even go beyond 3GHz.

IPC varies according to the specific task, but Ryzen 1xxx and 2xxx have always had IPC on average comparable to Broadwell CPUs (excluding AVX workloads). So Intel has had a slight lead there from Skylake onwards.

According to what we're seeing, the situation seems to be reversed with the 3xxx series, where AMD seems to have a small but significant lead; we'll have to wait for independent benchmarks.
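As a rough sanity check on these comparisons: single-thread throughput is approximately IPC × clock, so a clock deficit can be offset by an IPC lead. The figures below are purely hypothetical, just to illustrate the trade-off:

```python
def single_thread_perf(ipc, clock_ghz):
    """Crude model: instructions retired per second ~ IPC * clock."""
    return ipc * clock_ghz * 1e9

# Hypothetical numbers, only to show how the two factors interact:
# a 10% IPC lead at 4.4GHz vs. baseline IPC at 5GHz.
high_clock = single_thread_perf(ipc=1.0, clock_ghz=5.0)
high_ipc = single_thread_perf(ipc=1.1, clock_ghz=4.4)
print(f"ratio high-IPC / high-clock: {high_ipc / high_clock:.3f}")  # ~0.97
```

In other words, with those made-up inputs a ~12% clock advantage still narrowly wins over a 10% IPC advantage; real results depend entirely on the actual IPC and boost behavior per workload.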

Regardless of who comes out ahead on the benchmarks, competition is always a good thing!

It kinda matters because AMD hasn't been able to outperform Intel at single threaded tasks for like 5+ years.

They aren't competition if they fall off the map. So a slight edge would be great because it at least puts them back in the game.

That's not a fair comparison. Servers run at lower speeds because they have far more cores. AMD's desktop processors have more cores than Intel's, so you'd expect the clocks to be lower.

Servers run at lower speeds because server owners beyond a pretty trivial scale generally care more about perf/watt than raw per-node performance, and the slower frequencies hit that sweet spot. There are some places where individual node perf does matter, and you'll see lots of cores _and_ high clock speeds there. Borg scheduler nodes come to mind.

No, servers are lower speed because they have more cores and larger caches. Both of those take up more die space, which makes routing higher-speed clocks harder/impossible.

This is easy to prove: the highest-clocked Xeons you'll find are a special SKU exclusive to AWS. Sure enough, they have far fewer cores than instances with lower clocks.

The cores have independent clock trees and PLLs. Half the point of going multi core instead of giant single core in the first place is so that you don't have to route clock lines all over the place.

What you're seeing isn't routing issues, but the fact that their newer process isn't up to snuff, and they don't have the proper yields on larger die sizes.

Like, I've shipped RTL and know pretty well how this stuff works.

All evidence hints at the opposite. The IPC gain from all the improvements in Zen 2 means AMD should be equal or faster at the same frequency.

From the (possibly cherry-picked) benchmarks in the original announcement [0], it looks like single-thread performance is on par or slightly better even with a clock disadvantage.

[0] https://images.anandtech.com/doci/14407/COMPUTEX_KEYNOTE_DRA...

Not sure how much overall effect it will have, but Windows also recently released an update to perform better with Zen, and these benchmarks don't include that update. I'm not sure what the story was with CPU vulnerability mitigations, but if those were turned off then Zen 2 could handily be beating Intel.

It certainly looks promising, but I'll still hold my excitement until we get some 3rd party benchmarks.

I believe the update is to optimize where threads are scheduled across CCXs. Shouldn't affect a single-threaded benchmark.
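(For what it's worth, the usual way to take the scheduler out of the picture in a single-threaded benchmark is to pin the process to one core so it can't migrate across CCXs. A small sketch using Python's stdlib, which silently no-ops on platforms that don't expose `sched_setaffinity`:)

```python
import os

def pin_to_cpus(cpus):
    """Pin the current process to the given set of logical CPUs.

    Returns the resulting affinity set, or None on platforms (e.g. macOS)
    where os.sched_setaffinity isn't available.
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(cpus))  # 0 = this process
        return os.sched_getaffinity(0)
    return None

# e.g. pin the benchmark to logical CPU 0 before timing it:
# pin_to_cpus([0])
```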

I'm also skeptical of first-party benchmarks, but I'm already pretty excited that I finally might be able to justify an upgrade from my old Haswell setup.

IIRC, the benchmarks also don't include the spectre/meltdown patches for Intel. AMD, apparently, worked very hard to give a worst-case comparison for the most part.

> in a cloud you get charged per core. Can a cloud substitute an Intel for an AMD cpu?

In a cloud you typically pay for cores from a specific CPU type. Presumably any clouds that offer AMD CPUs will price them competitively.

I thought they had a “cpu x or equivalent” kind of language, like rental cars.

Yeah, it depends on what they deem to be equivalent. AWS already offers AMD cores, and they're 10% cheaper than Intel cores - https://aws.amazon.com/ec2/amd/

AWS mostly did away with that years ago; AFAIK first-tier cloud providers all promise a specific CPU model. And yes, an AMD vCPU is cheaper than an Intel vCPU.

It used to be the case, but it looks like that gap is almost nothing with Zen 2. Clouds also often have different pricing for AMD cores.

What do you mean by "run completely bare with regards to microcode patches", that they don't apply microcode patches & errata?

Yes, that they did not apply the microcode patches that cover some of the Spectre/Meltdown family of bugs.
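One quick way to see what a given Linux host reports for these mitigations is the kernel's sysfs vulnerabilities directory. A small helper (returns an empty dict when the path doesn't exist, e.g. on non-Linux systems or old kernels):

```python
from pathlib import Path

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Map vulnerability name -> kernel status line ('Mitigation: ...',
    'Vulnerable', 'Not affected', ...); empty dict if the path is absent."""
    root = Path(base)
    if not root.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(root.iterdir())}

for name, status in mitigation_status().items():
    print(f"{name}: {status}")
```

Entries like `spec_store_bypass` or `spectre_v2` showing "Vulnerable" on a host with up-to-date kernels usually indicate missing microcode.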

Do you have a source for that?

GCP was "fully fixed before it was known" according to the engineers I know there. I find it /highly/ unlikely that they don't have patched microcode.

I mean, the cloud business is the place with the most to lose from these kinds of issues, I am incredibly suspicious of the claim that cloud providers aren't patching their microcode.

Whether it's Intel's or their own modified variant of the microcode, I would fully expect them to be patched in some way.

Google's researchers played a big part in discovering / classifying / mitigating the vulnerabilities. They also developed the retpoline pattern. It is very likely that GCP was "fixed before it was known."

Indeed, this is why it's unlikely that they have the patched microcode.

How do you figure that?

They have much to lose from not applying these mitigations, especially if they're the people spending a fortune to find them.

If the grandparent's claim holds, they wouldn't have needed it because they already implemented a workaround themselves.

I honestly doubt that claim however and haven't heard it before this thread.

