Hacker News

That's not a fair comparison. Servers run at lower speeds because they have far more cores. AMD's desktop processors have more cores than Intel's, so you'd expect their clocks to be lower.

Servers run at lower speeds because, beyond a pretty trivial scale, server owners generally care more about perf/watt than about the number of servers they have, and slower frequencies hit that sweet spot. There are some places where individual node perf matters, and there you'll see lots of cores _and_ high clock speeds. Borg scheduler nodes come to mind.
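The perf/watt argument can be made concrete with a toy model. This is a sketch under textbook assumptions, not vendor data: dynamic power scales roughly as P ∝ f·V², and the voltage needed to sustain a frequency rises roughly linearly near the top of the curve, so power grows close to cubically while performance grows only linearly. The base point and voltage slope below are made-up illustrative numbers.

```python
def relative_perf_per_watt(freq_ghz, base_ghz=2.0, base_v=0.8, v_per_ghz=0.15):
    """Perf/watt relative to the base point, under the toy model above.

    Assumptions (illustrative, not measured): V rises linearly with f,
    dynamic power P ~ f * V^2, and perf scales linearly with f.
    """
    v = base_v + v_per_ghz * (freq_ghz - base_ghz)
    power = freq_ghz * v * v                 # P ~ f * V^2
    base_power = base_ghz * base_v * base_v
    perf = freq_ghz / base_ghz               # perf ~ f (no memory stalls)
    return perf / (power / base_power)

for f in (2.0, 2.5, 3.0, 3.5, 4.0):
    print(f"{f:.1f} GHz -> {relative_perf_per_watt(f):.2f}x perf/watt")
```

Under these assumptions, perf/watt falls steadily as the clock climbs, which is why a fleet operator paying the power bill prefers many slower cores.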

No, servers run at lower speeds because they have more cores and larger caches. Both of those take up more die space, which makes routing high-speed clocks across the die harder or impossible.

This is easy to check. The highest-clocked Xeons you'll find are a special SKU exclusive to AWS. Sure enough, they have far fewer cores than the instances with lower clocks.

The cores have independent clock trees and PLLs. Half the point of going multi-core instead of building a giant single core in the first place is that you don't have to route clock lines all over the place.
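One observable consequence of per-core clock domains: on a Linux machine, each core can report a different instantaneous frequency. A minimal sketch, assuming the standard `/proc/cpuinfo` format with one "cpu MHz" line per core; the parsing is split out so it can be exercised without the real file.

```python
def parse_core_mhz(cpuinfo_text):
    """Return the 'cpu MHz' reading for each core, in listed order."""
    mhz = []
    for line in cpuinfo_text.splitlines():
        if line.startswith("cpu MHz"):
            mhz.append(float(line.split(":")[1]))
    return mhz

if __name__ == "__main__":
    # On a chip with independent clock domains these typically differ
    # core to core at any given instant.
    with open("/proc/cpuinfo") as f:
        for core, mhz in enumerate(parse_core_mhz(f.read())):
            print(f"cpu{core}: {mhz:.0f} MHz")
```

The kernel's cpufreq sysfs files (`scaling_cur_freq` per CPU) expose the same per-core readings if `/proc/cpuinfo` reports a static value on your platform.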

What you're seeing isn't a routing issue; it's that their newer process isn't up to snuff, and they don't get the proper yields on larger die sizes.

Like, I've shipped RTL and know pretty well how this stuff works.

