
The Supermicro H11DSi Motherboard Mini-Review: The Sole Dual EPYC Solution - rubyn00bie
https://www.anandtech.com/show/15783/the-supermicro-h11dsi-motherboard-review
======
nullc
I have this board with 7742s. I like it generally, though most of my affection
is due to the CPUs. I wish it had PCI-E 4.

I've found it to be unreliable with memory running faster than 2933MHz but
that might be the memory I was using. When I did have memory corruption
problems running at higher memory speeds I didn't get a single ECC error,
which I found surprising and concerning.

I also had issues with Kernel 5.4 not booting at all (
[https://bugzilla.redhat.com/show_bug.cgi?id=1783733](https://bugzilla.redhat.com/show_bug.cgi?id=1783733)
) which I worked around by adding edac_report=off. (I'm not sure whether
later kernels still have this issue, because I've left the setting in; taking
it out and rebooting to test is a bit of a pain. As is typical of server
boards with lots of cores and RAM, it takes many minutes to POST.)
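For anyone hitting the same bug, the workaround is a kernel command-line parameter; on a GRUB-based distro (like the Fedora system in that bugzilla report) it would look roughly like this. Treat it as a sketch: paths and tool names vary by distro, and the placeholder stands for whatever parameters you already have.

```shell
# /etc/default/grub -- add edac_report=off to the existing parameter list
GRUB_CMDLINE_LINUX="<your existing parameters> edac_report=off"

# Regenerate the GRUB config (BIOS-install path shown; UEFI paths differ):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

edac_report= is a documented kernel parameter (on/off/force); turning it off suppresses the EDAC reporting path that the linked bug implicates.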

~~~
arminiusreturns
Having built a few quad/dual cpu AMD Opteron systems since ~2013 with
supermicro boards, I can tell you they are _extremely_ sensitive to RAM
brands/models and variations. Sometimes even RAM in the verified supermicro
list would have issues.

On a side note, it was around then that I foresaw AMD's move toward high core
counts, and I love what they have been doing both on the desktop and in the
server space. Honestly, though, if I were building a newer AMD server I might
push for a Threadripper instead of an EPYC, provided the power profile fit
the DC/colo, mostly because of the lanes and because anything I'm doing on a
system that powerful is going to have a 40Gb+ Mellanox NIC, often in
combination with GPUs.

~~~
nullc
The single socket epyc boards expose a lot more PCIe than threadripper boards,
from what I can tell.

$/core-GHz is much better on the TR parts, though, if you're paying retail.
EPYC chips sometimes show up on secondary markets at much better prices than
retail (e.g., for my 7742s I basically paid the same $/core-GHz as the
current top-end Threadripper parts).
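The metric is just price divided by (cores × base clock). A quick sketch; the core counts and base clocks below are the published specs, but the EPYC dollar figure is a placeholder for illustration, not real market data:

```python
def dollars_per_core_ghz(price_usd: float, cores: int, base_clock_ghz: float) -> float:
    """Price normalized by aggregate compute: $ / (cores * GHz)."""
    return price_usd / (cores * base_clock_ghz)

# EPYC 7742: 64 cores @ 2.25 GHz base; secondhand price below is HYPOTHETICAL.
secondhand_epyc_7742 = dollars_per_core_ghz(4500, 64, 2.25)
# Threadripper 3990X: 64 cores @ 2.9 GHz base, $3990 launch MSRP.
retail_tr_3990x = dollars_per_core_ghz(3990, 64, 2.9)

print(f"EPYC 7742 (hypothetical secondhand): ${secondhand_epyc_7742:.2f}/core-GHz")
print(f"TR 3990X (launch MSRP):              ${retail_tr_3990x:.2f}/core-GHz")
```

With numbers in that ballpark the two land within a few dollars per core-GHz of each other, which is the comparison being made above.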

~~~
arminiusreturns
Yeah, it's generally a tradeoff of PCIe lanes vs. clock speed between the
two. I would go for the clock speeds.

------
tw04
It's extremely frustrating that none of the vendors are taking advantage of
the built-in 10GbE functionality of the new AMD chips. I've seen the same
with the embedded options that compete with the Xeon-D. If anyone happens to
work in that specific portion of the industry and can comment, I'd be curious
whether there's a reason beyond motherboard manufacturers just not wanting to
invest the time and effort, so they slap a 1GbE Intel NIC on the mobo and
call it a day.

Little stuff like that makes the AMD option (which SHOULD be cheaper) not
cheaper for a lot of applications, because now you've got server + NIC vs.
just the server from Intel.

~~~
kentonv
I recently researched EPYC 7002 boards available via retail channels and
found four options: Supermicro, ASRock, Tyan, and Gigabyte. Of these, ASRock
and Tyan both offer 10GbE options. I ended up buying the ASRock board with
10GbE from Newegg, but haven't hooked it up yet.

[https://www.asrockrack.com/general/productdetail.asp?Model=E...](https://www.asrockrack.com/general/productdetail.asp?Model=EPYCD8-2T)

[https://www.tyan.com/Motherboards_S8030_S8030GM4NE-2T](https://www.tyan.com/Motherboards_S8030_S8030GM4NE-2T)

Though now that I look at it, the ASRock board is advertising "10G base-T by
Intel® X550" -- is this different from the "built-in" functionality you
mention?

~~~
tw04
Yup. AMD has an on-die 10GbE controller that would show up as AMD. Anything
that says Intel/Broadcom/Realtek/etc. is off-chip, something the manufacturer
put on the motherboard, which adds cost.

~~~
kentonv
Hmm. I'm speculating, but I'd guess that tossing on the same external
controller that they've put on a million other motherboards in the past
probably actually saves cost, vs. designing something unique to expose the on-
die controller.

I wonder what the performance impact is, though. I'd guess (very naively) that
an on-die controller would perform better.

~~~
wtallis
My guess is that the Intel Ethernet drivers are seen by the customers as very
tried-and-true, and how to tune them is well-documented. AMD's Ethernet
drivers are relatively unknown and unproven, and it's easy to imagine that
they could perform worse. That could easily make up the difference in cost
between an external PHY vs an external MAC+PHY.

------
metaphor
First impression was single onboard VGA output and DE-9 serial port...typical
enterprise.

Second impression was damn does that first socket look like a peripheral hog.

EDIT: Removed remark that erroneously referenced this comment[1] as a 2U
chassis being applicable to the motherboard in question.

[1] [https://www.anandtech.com/comments/15783/the-supermicro-h11d...](https://www.anandtech.com/comments/15783/the-supermicro-h11dsi-motherboard-review/700552)

~~~
robin_reala
That’s a completely different chassis, as seen by the “Integrated Board: Super
H12DST-B” photo. The review is of the H11DSi, which is designed for (big)
tower cases.

~~~
metaphor
Thanks for pointing out that missed detail on my part.

------
lazyjones
Not even a memory bandwidth benchmark? Disappointing... It would be
interesting to see how 2 × N cores cope with only 16 DIMM slots, and how the
bandwidth of the low-core vs. the high-core CPU compares.

------
xVedun
So I was looking at the H12DSU-iN
([https://www.supermicro.com/en/products/motherboard/H12DSU-iN](https://www.supermicro.com/en/products/motherboard/H12DSU-iN)),
which would seem to be another option. But the article implies that this
board is only available by buying an entire server with it, though it looks
like they might sell it separately if you ask. The server they are going to
release is here:
[https://www.supermicro.com/en/Aplus/system/1U/1024/AS-1024US...](https://www.supermicro.com/en/Aplus/system/1U/1024/AS-1024US-TRT.cfm)

------
greendave
This feels a bit like they just took an existing Intel design and tried to
adapt it for AMD. Nothing wrong with that per se - by all accounts it's a good
solid board - but it leads to weird choices like not using 56 of 64 PCIe lanes
on the second CPU.

~~~
rumanator
> (...) but it leads to weird choices like not using 56 of 64 PCIe lanes on
> the second CPU.

The article mentions that out of a total of 128 PCIe lanes, 64 are used for
CPU-to-CPU communication and only 8 are used for an external PCIe device.

To be fair, a user who feels the need to cram two EPYC processors onto the
same motherboard is someone who is willing to pay a premium for CPU-bound
computations.
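As a back-of-the-envelope sketch of that lane budget (numbers taken from the comment above; on some dual-socket boards the xGMI/PCIe split is configurable, so this is illustrative):

```python
# Per-socket lane budget on this board, per the article's numbers.
LANES_PER_SOCKET = 128    # each EPYC 7002 exposes 128 SerDes lanes
XGMI_LANES = 64           # repurposed as the socket-to-socket xGMI link
SLOT_LANES_FROM_CPU2 = 8  # lanes from the second CPU actually wired to a slot

available = LANES_PER_SOCKET - XGMI_LANES   # lanes CPU 2 could still expose
unused = available - SLOT_LANES_FROM_CPU2   # lanes left stranded
print(f"CPU 2: {available} lanes available, {unused} unused")
```

Which is exactly the "56 of 64 lanes unused on the second CPU" figure from the parent comment.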

------
ShamelessC
I've admittedly never actually considered the dis/advantages of running two
CPUs on one board.

Is it correct that you can run two different processors with these? How much
of a speedup does it give you? Is the operating system designed to take full
advantage of both by default?

Why don't gaming builds use these? Too much cost?

~~~
posix_me_less
The same processors should be used; I would be surprised if different ones
worked together. The processor model has to support two-socket operation, and
those models are more expensive. Speedup happens only if the running workload
benefits from more cores, and the expected speedup is <2x. Operating systems
handle this, but Windows may have some weird problems with many-core
processors.

Gaming builds usually cannot take advantage of so many cores. Even if the
machine is running the game and streaming, a single 6-core or 8-core
processor is enough.

~~~
rbanffy
> The same processors should be used

This is an interesting thing. It _should_ be possible to be smarter about
different timings. ARM has had very mature asymmetric-core support for quite
some time now, and Intel will soon launch their own in the mobile space.

> Gaming builds usually cannot take advantage of so many cores

Indeed. Games have long been limited in their support for multiple cores.
It's a bit surprising, because consumer Windows, while it took its time to
support multiple sockets, has been able to use them for a very long time now.

~~~
rcxdude
It's mostly that the kind of CPU work games do is quite hard to parallelise
(physics simulations, NPC 'AI', loading and decompressing model data). It's a
lot of fairly complex code, and getting a synchronisation strategy that
actually produces a speedup is hard. Even when it is done, it's usually still
limited to, e.g., one thread doing physics and one or two threads doing
rendering (it took a while for graphics APIs to support any kind of
multi-threaded rendering), so it only scales to 3-4 cores max (and there are
games which do this, so core count is not completely irrelevant).
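The "one thread per subsystem" split described above can be sketched as a toy example; this illustrates the pattern, not how any particular engine implements it. Each subsystem runs on its own thread, and all of them meet at a barrier once per frame:

```python
import threading

FRAMES = 3
# Physics and render threads plus the main (game logic) thread sync here.
frame_barrier = threading.Barrier(3)
log = []  # list.append is atomic in CPython, so no extra lock needed here

def subsystem(name: str) -> None:
    for frame in range(FRAMES):
        log.append((frame, name))  # stand-in for the real per-frame work
        frame_barrier.wait()       # wait until everyone finished this frame

physics = threading.Thread(target=subsystem, args=("physics",))
render = threading.Thread(target=subsystem, args=("render",))
physics.start()
render.start()

for frame in range(FRAMES):
    log.append((frame, "logic"))   # the main thread plays the game-logic role
    frame_barrier.wait()

physics.join()
render.join()

# Each frame produced exactly one entry per subsystem.
assert sorted(log) == sorted((f, s) for f in range(FRAMES)
                             for s in ("logic", "physics", "render"))
```

Note the scaling ceiling is structural: three subsystems means three busy threads no matter how many cores you have, which is the 3-4 core limit described above. Real engines layer job systems and work stealing on top to do better.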

~~~
raphaelj
Could this explain why we are seeing more improvements in rendering quality
than in physics and AI?

As you said, AI and physics are inherently not very parallelizable, while
rendering is. AI and physics from the Athlon 64/Pentium 4 era (e.g.
Half-Life 2, Crysis) are similar to what we get now, while visual quality has
greatly improved (better textures, more detailed objects, longer visibility
range).

~~~
rcxdude
I think for AI the core reason is that more sophisticated AI isn't
particularly great from a game-design POV. In a game, predictability is key,
both because a) it allows the game designers to actually change the AI
behaviour to balance it and incorporate it into their game design, and b) if
the players can't predict the AI they can't plan around it, and the ability
to make a plan around a game's systems and try to execute it is a key part of
non-frustrating gameplay in most games. The more state-of-the-art AI systems
tend to be less predictable, and for the most part that makes the games less
fun.

In terms of physics, the issue is usually that a rich physics environment
gets more fragile: it's easier for players to break it, intentionally or
unintentionally. I've played some games with physics systems far ahead, in
terms of number of objects, of what could be achieved previously, and they
are all pretty much just 'buggy'. Not necessarily in terms of coding quality;
rigid-body physics is just really prone to edge cases that create massive
forces, velocities, and/or accelerations from apparently benign arrangements
of objects.

~~~
leetcrew
I think you're right; at least it jibes with everything I've read about game
dev. a common design goal in modern games is to never allow the game to fall
into a state where forward progress is impossible due to a player action
(excepting stuff like dying, of course). realistic physics simulation adds a
lot of weird edge cases that need to be tested or hacked around. unlike cs:go,
cs:source had lots of collidable physics props in the maps. they looked benign
at first, but malicious players could use them to block important passages,
make it impossible to defuse the bomb, or arrange them in ways that other
players would get stuck in them.

here's a video of hl2 devs reacting to a speedrun of the game using many
exploits:
[https://www.youtube.com/watch?v=sK_PdwL5Y8g](https://www.youtube.com/watch?v=sK_PdwL5Y8g)

kind of a long video, but there's a lot of interesting commentary on how much
work it was to make the physics puzzles robust against unexpected player
actions.

------
chx
Anandtech.

I called this guy out on Twitter, eventually challenging him to show me any
article of theirs (in this PC area) in which I couldn't find mistakes. I
offered my editorial services for one dollar. Long ago, I was one of the
editors of the largest computer monthly in Hungary and sat on the editorial
board. This wouldn't have been my first rodeo.

In this particular article there isn't a lot of technical explanation, so
it's not easy to find a mistake, but it's not that hard either: the GIGABYTE
MZ31-AR0 is seriously outdated (they now sell the MZ32-AR0, which is PCIe
4.0), and they missed the Tomcat HX S8030 from the list, which is also PCIe
4.0. They did not, in fact, list any PCIe 4.0 boards. It's just sloppy.

~~~
wtallis
> I called this guy out on Twitter eventually challenging him to show me any
> article (in this PC area) of theirs and I will find mistakes. I offered my
> editorial services for one dollar. I've been, long ago, one of the editors
> of the largest computer monthly in Hungary and was on the editorial board.
> This wouldn't have been my first rodeo.

Do you honestly think anyone would hire you with an approach like that? You'd
have to _pay_ people to put up with that attitude.

~~~
chx
Sigh. People missed the word "eventually". The discussion started out more
pleasant. I had pointed out a mistake in an article politely and offered to
help. He was quite adamant that they needed no help and was, in fact, quite
arrogant, and that's when I told him that a) everyone knows Anandtech is
sloppy, actually linking an HN comment from a few months back, and b) I
challenged him to show me anything that I can't find a mistake in.

