Users looking to build their own dual-socket EPYC workstation or system from completely off-the-shelf components do not have a lot of options. Most of the CPUs can be bought at retail or from OEMs, as can memory, a chassis, power supplies, coolers, and add-in cards. The one item with very little competition for these sorts of builds is the motherboard. Unless you go down the route of buying a server on rails with a motherboard already fitted, there are very limited dual EPYC motherboard options available for individual purchase. So few, in fact, that there are only two, both from Supermicro, and both called the H11DSi: one variant has gigabit Ethernet, the other has 10GBase-T.

Looking For a Forest, Only Seeing a Tree

Non-proprietary motherboard options for building a single-socket EPYC system are fairly numerous – there’s the Supermicro H11SSL, the ASRock Rack EPYCD8-2T (read our review here), the GIGABYTE MZ31-AR0 (read our review here), or the ASUS KNPA-U16, all varying in feature set and starting from $380. For the dual-socket space, however, there is only one option: the Supermicro H11DSi (along with the H11DSi-NT and other minor variants), which can be found at standard retailers from around $560-$670 and up, depending on source and additional features. All other solutions we found were part of a pre-built server or system, often using non-standard form factors due to the requests of the customer those systems were built for. As the only ‘consumer’ focused motherboard, the H11DSi has a lot to live up to.

As with other EPYC boards in this space, users have to know which revision of the board they are getting – it is the second revision that supports both Naples and Rome processors. One of the early issues with the single-socket models was that some were not capable of supporting Rome, even with an updated BIOS. Note that because the H11DSi was built with Naples in mind to begin with, we are limited to PCIe 3.0 here, not the PCIe 4.0 that Rome supports. As a result, we suspect this motherboard is better suited to users looking to extract compute from the Rome platform rather than expanded PCIe functionality. Unfortunately, it also means that there are no commercially available dual-socket EPYC motherboards with PCIe 4.0 support at the time of writing.
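To put the Gen3 limitation in numbers, here is a quick back-of-the-envelope sketch (purely illustrative) of peak one-direction link bandwidth. Both PCIe 3.0 and 4.0 use 128b/130b encoding; Gen4 simply doubles the transfer rate:

```python
# Peak one-direction PCIe bandwidth: transfer rate per lane (GT/s),
# scaled by 128b/130b encoding efficiency, divided by 8 bits per byte.
ENCODING = 128 / 130  # both PCIe 3.0 and 4.0 use 128b/130b line coding

def pcie_gb_s(gt_per_s: float, lanes: int) -> float:
    """Theoretical peak bandwidth of a PCIe link, in GB/s."""
    return gt_per_s * ENCODING / 8 * lanes

gen3_x16 = pcie_gb_s(8.0, 16)   # PCIe 3.0 x16: ~15.75 GB/s
gen4_x16 = pcie_gb_s(16.0, 16)  # PCIe 4.0 x16: ~31.51 GB/s
print(round(gen3_x16, 2), round(gen4_x16, 2))
```

In other words, staying on PCIe 3.0 halves the peak bandwidth available to each slot compared to what Rome silicon can deliver.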

The H11DSi is part E-ATX and part SSI-CEB, so suitable cases should support both standards in order to provide the required mounting holes. With its dual-socket layout, the board is a lot longer than most regular PC users are used to: physically it is one square foot. The board supports all eight memory channels per socket in a one-DIMM-per-channel configuration, at up to DDR4-3200 for the Revision 2 models. We successfully placed 2 TB of LRDIMMs (16 × 128 GB) in the system without issues.
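As a sanity check on that configuration, the capacity and theoretical memory bandwidth work out as follows (a back-of-the-envelope sketch; the DIMM size is the one from our test fit, and the bandwidth figure is the DDR4-3200 theoretical peak, not a measurement):

```python
DIMM_CAPACITY_GB = 128      # LRDIMM size used in our test configuration
SOCKETS = 2
CHANNELS_PER_SOCKET = 8     # 1 DIMM per channel on the H11DSi
TRANSFER_RATE_MT_S = 3200   # DDR4-3200 (Revision 2 boards)
BUS_WIDTH_BYTES = 8         # 64-bit data channel, ECC bits excluded

total_dimms = SOCKETS * CHANNELS_PER_SOCKET
total_capacity_gb = total_dimms * DIMM_CAPACITY_GB
# Theoretical peak bandwidth per socket, in GB/s (decimal)
bandwidth_gb_s = CHANNELS_PER_SOCKET * TRANSFER_RATE_MT_S * BUS_WIDTH_BYTES / 1000

print(total_dimms, total_capacity_gb, bandwidth_gb_s)
# 16 DIMMs, 2048 GB (2 TB) total, 204.8 GB/s theoretical peak per socket
```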

As with almost all server motherboards, there is a baseboard management controller (BMC) in play here – the ASPEED AST2500, which has become a standard in recent years. This enables IPMI management, allowing users to log in to a Supermicro web interface over the dedicated Ethernet connection, and it also provides a 2D video output. We’ll cover the interface on the next page.

Ethernet connectivity depends on which variant of the H11DSi you look for: the base model has two gigabit ports driven by an Intel i350-AM2 controller, while the H11DSi-NT has two 10GBase-T ports from an onboard Intel X550-AT2. Because the 10G controller has a higher TDP than the gigabit controller, it gets an additional heatsink next to the PCIe slots.

The board has a total of ten SATA ports: two SATA-DOM ports, plus four SATA ports from each CPU exposed through two Mini-SAS connectors. It is worth noting that the two banks of four ports come from different CPUs, so any software RAID spanning both banks will incur a socket-to-socket performance penalty. In a similar vein, the PCIe slots also come from different CPUs: the top slot is a PCIe 3.0 x8 from CPU 2, whereas the other slots (PCIe 3.0 x16/x8/x16/x8) all come from CPU 1. This means that CPU 2 leaves most of its PCIe lanes unused.
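On Linux, one way to verify which socket a given controller or slot hangs off is to read the device's `numa_node` attribute in sysfs. The helper below is a minimal sketch (the function name and the example device address are our own, and the sysfs root is parameterised so the lookup can be exercised against a mock tree):

```python
from pathlib import Path

def pci_numa_node(bdf: str, sysfs: str = "/sys/bus/pci/devices") -> int:
    """Return the NUMA node (i.e. socket) a PCI device is attached to.

    `bdf` is the domain:bus:device.function address, e.g. "0000:41:00.0".
    The kernel reports -1 when the topology is unknown.
    """
    return int((Path(sysfs) / bdf / "numa_node").read_text().strip())

# Example (device address is illustrative):
#   pci_numa_node("0000:41:00.0")
```

Building a software RAID array only from ports that report the same node avoids the cross-socket hop.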

Also on the storage front is an M.2 slot with a PCIe x2 link, which supports both PCIe and SATA drives with Naples but only PCIe with Rome. The power connectors are all in the top right of the motherboard: the 24-pin main motherboard power plus two 12V 8-pin connectors, one for each CPU. Each socket is backed by a 5-phase server-grade VRM, and the motherboard has eight 4-pin fan headers for plenty of cooling. The VRM sits under a central heatsink designed to take advantage of cross-board airflow, which will be a critical part of any system built with this board.

We tested the motherboard with both EPYC 7642 (Rome, 48 core) processors and the latest EPYC 7F52 (Rome, 16 core high frequency) processors without issue. 

Comments

  • eek2121 - Wednesday, May 13, 2020 - link

    A thought I had: it would be nice if PCIe latency could be measured going forward.
  • headeffects - Wednesday, May 13, 2020 - link

    Can you explain a bit why ECC is less useful now than in the past? I’m curious.
  • AntonErtl - Thursday, May 14, 2020 - link

    All sTRX4 (Threadripper 3000 socket) boards listed on https://geizhals.at/?cat=mbstrx4 are listed as supporting ECC. Even a number of mainstream desktop boards (primarily from Asrock and ASUS) support ECC and we have built servers with them, and we have tested that ECC works.
  • fakemoth - Friday, May 15, 2020 - link

    Long time lurker here, love Anandtech. I had to register an account for this article in order to join the disappointment gang: there is a ridiculously low number of ATX format options for AMD EPYC, and when talking Rome and PCIe Gen4, even fewer. Found this out the hard way: I barely managed to get a Gigabyte MZ32-AR0 after months of waiting in vain for Supermicro to release some standard ATX/eATX board with PCIe Gen4. Nowadays they seem to have some H12 models out, but of course those are nowhere to be found. If you want to buy one, that is.

    Problem is: now there is no new dual socket option! Exactly when I bought a 7402 EPYC, not a P part as usual... Supermicro is simply missing the server CPU party of the decade. We hope that the press can push things in the right direction, as it is not the time for big manufacturers to fly the Intel fanboy flags. They did it for a very boringly long time; I just can't believe there isn't the slightest interest in the Rome platform. Because that's what it is: an astounding lack of interest and a very obtuse technology angle, that awful "partnership" inertia that plagues the server/workstation market. It seems that being future proof is a crime in this area.

    It speaks for itself: we are reading here a review of a board that is 2 years old and got a revision half a year ago... Yeap, the cheapo BIOS revision, that one.

    One thing about the article: the BIOS can't usually be updated via IPMI for Supermicro boards without a license. Only the firmware. Is this still the case or not?

    Thank you Anandtech for reviewing enterprise, but standard formats (there shouldn't exist anything else, but that's just me)! Supermicro makes cool tech and I own a bunch, but sometimes, man...
  • JustTheInductions - Friday, May 15, 2020 - link

    PCIe Gen 4. Supporting the feature set of the expensive CPUs you plan on utilizing is a necessity. Support a competitor to SuperMicro to get the board manufacturers to provide more support, AMD. We all know competition engenders motivation to get off one's arse . . .
  • Deicidium369 - Monday, May 18, 2020 - link

    Which competitor is that?
  • JustTheInductions - Friday, May 29, 2020 - link

    Probably ASRock Rack.
  • kwinz - Saturday, May 16, 2020 - link

    E-ATX is just a painful form factor for a dual socket EPYC. Think of all the PCIe lanes that you can never use.
  • Deicidium369 - Monday, May 18, 2020 - link

    Even if all the slots were available - you still would not have a use for 128+ lanes. I know the big number is enticing - but in reality - like the 16C desktop CPUs - it's just a marketing gimmick. It's like having a car with 3000HP and it only gets driven in Manhattan - Cool that you have 3000HP, but in reality, not much use.
