- 6 memory channels (instead of 4)
- new CPU cache architecture that should show big gains for things like databases
- new vector ISA (AVX-512) that is significantly more useful than AVX2, in addition to being twice as wide
The first two should be instant wins for workloads like databases. AVX-512 isn't going to be used in much software yet, but it is arguably the first broadly usable vector ISA that Intel has produced. This should enable significant performance gains in the future as code is rewritten to take advantage of it. (Not idle speculation on the latter: we're queued up to get some of this hardware for exactly this purpose. Previously, vectorizing wasn't worth the effort outside of narrow special cases, but AVX-512 appears to change that.)
The cache structure more closely mirrors the data locality intrinsic to recent high-performance database engines, which don't share data pages across cores and which now commonly use page sizes (256k) that don't fit in L2. The increase in L2 size from 256k to 1M is particularly important because it allows multiple pages to be resident in L2 at once, which should make a number of multi-page operations, and complex queries against a single page, significantly more efficient. Should be great for join kernels. Similarly, the non-inclusive and smaller shared L3 makes sense because the amount of state actually shared across cores is pretty small. In short, it redistributes cache resources in a way that is very useful to the way database engines are currently designed.