System Benchmarks

On a system like this, there's not a whole lot to emphasize through benchmarking, so we focus on three things: wall power, POST time, and DPC latency.


Wall Power

For our power testing, we boot into Windows and let the system idle in the operating system for 5 minutes, then take a reading of the power at the wall. We then fire up Cinebench R20, which loads all the threads in our dual 7F52 setup, and take the peak power from the benchmark. For this review, we've also tested a series of DRAM setups, from a minimum of 2 x 8 GB RDIMMs (one channel populated) up to a maximum of 16 x 128 GB LRDIMMs (all eight channels populated).
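The procedure amounts to a settle-then-sample step for idle, plus peak tracking under load. Below is a minimal sketch in Python; read_watts() is a hypothetical stand-in for whatever interface a given wall meter exposes, and none of this is the tooling used for this review.

    import time

    def read_watts() -> float:
        """Hypothetical: query the wall meter for its current reading in watts."""
        raise NotImplementedError("replace with your meter's interface")

    def idle_power(settle_minutes: int = 5) -> float:
        """Let the OS sit idle, then take a single spot reading at the wall."""
        time.sleep(settle_minutes * 60)
        return read_watts()

    def peak_power_under_load(duration_s: int = 600, interval_s: float = 1.0) -> float:
        """Poll the meter while the benchmark runs and keep the highest value seen."""
        peak = 0.0
        end = time.time() + duration_s
        while time.time() < end:
            peak = max(peak, read_watts())
            time.sleep(interval_s)
        return peak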

Idle System Power Consumption

For idle power, our minimal RDIMM arrangement doesn't add much to the total draw. With the LRDIMMs, we're looking at an extra 2 W per module at idle.

CB20 System Power Consumption

For full load, again the 8 GB DIMMs only draw fractions of a watt apiece. Moving up to the large modules, we're realistically seeing another 7 W per module on average. Comparing the minimum and maximum configurations, there's an extra 100 W dedicated just to the memory here.
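As a quick back-of-the-envelope check (illustrative arithmetic only, using the per-module figures above), the fully populated configuration lands in the same ballpark as that min/max gap:

    # Rough sanity check on the memory power deltas quoted above, assuming
    # 16 LRDIMMs at ~2 W extra each at idle and ~7 W each under full load.
    modules = 16
    idle_delta_w = 2   # extra watts per LRDIMM at idle
    load_delta_w = 7   # extra watts per LRDIMM under load

    print(f"Idle memory overhead: ~{modules * idle_delta_w} W")   # ~32 W
    print(f"Load memory overhead: ~{modules * load_delta_w} W")   # ~112 W, close to the ~100 W gap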


Warm POST Test

For our POST test, we take a system that has previously been booted, shut it down and wait until all the fans stop spinning, and then initiate a power-on through the BMC. The time recorded covers the initial BIOS procedure up until the OS starts loading. Again for this test, we've gone through the different DRAM configurations.
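One crude way to automate a similar measurement from another machine is to issue the power-on over IPMI and time how long the host takes to start responding. The sketch below does this with ipmitool and a ping check; the hostnames and credentials are placeholders, and a ping-based endpoint will read a little longer than the BIOS-to-OS-handoff figure we report.

    import subprocess
    import time

    BMC_HOST = "bmc.example.local"    # placeholder BMC address
    OS_HOST = "host.example.local"    # placeholder OS-side address
    BMC_USER, BMC_PASS = "admin", "password"

    def time_warm_post() -> float:
        # Initiate the power-on through the BMC.
        subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
             "-U", BMC_USER, "-P", BMC_PASS, "chassis", "power", "on"],
            check=True,
        )
        start = time.time()
        # Poll until the OS answers a ping, as a rough proxy for boot progress.
        while subprocess.run(
            ["ping", "-c", "1", "-W", "1", OS_HOST],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode != 0:
            time.sleep(1)
        return time.time() - start

    if __name__ == "__main__":
        print(f"Power-on to first ping reply: {time_warm_post():.0f} s")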

Warm POST Time

More memory means more training is required to ensure that each module will operate within a given set of sub-timings. The more capacity in play, and the more channels populated, the more time is required. Our quickest POST took 50 seconds, while our longest recorded POST was over two minutes.

DPC Latency

Deferred Procedure Calls (DPCs) are part of how Windows handles interrupt servicing. While waiting for the processor to acknowledge a request, the system queues all interrupt requests by priority: critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer can be filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. That can leave the audio buffer empty, producing the characteristic audible pauses, pops, and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation; the lower the value, the better the audio transfer at smaller buffer sizes. Results are measured in microseconds.
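For a sense of scale (illustrative numbers, not measurements from this review), the headroom an audio buffer provides is simply its length divided by the sample rate:

    # How long an audio buffer can cover before it runs dry: N samples at a
    # given sample rate buys N / rate seconds. A DPC stall longer than that
    # produces an audible dropout. Figures are illustrative only.
    sample_rate = 48_000  # Hz

    for buffer_samples in (64, 128, 256, 512):
        headroom_us = buffer_samples / sample_rate * 1_000_000
        print(f"{buffer_samples:>3}-sample buffer = {headroom_us:6.0f} us of headroom")

Even a small 64-sample buffer at 48 kHz provides roughly 1,300 microseconds of headroom, which is why results well under the 200 microsecond mark below are considered comfortable.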

Deferred Procedure Call Latency

The DPC values for the Supermicro board are very impressive. Normally we consider anything under 200 microseconds a successful result, and a fresh system on the Supermicro goes well below that.

Comments

  • 1_rick - Wednesday, May 13, 2020

    Yeah, "numerous" was the correct word here.
  • peevee - Thursday, May 14, 2020

    Nope. 1 is not numerous.
  • heavysoil - Friday, May 15, 2020

    He's talking about the options for single socket, and lists three - numerous compared to the single available option for dual socket.
  • Guspaz - Wednesday, May 13, 2020

    $600 enterprise board supporting up to 256 threads, and it's still just using one-gigabit NICs?
  • Sivar - Wednesday, May 13, 2020

    "Don't worry, widespread 10-gigabit is just around the corner." --2006
  • Holliday75 - Wednesday, May 13, 2020

    1gb is pennies. 10gb costs a bit more. If you plan on using a different solution you have the option to get the cheaper board and install it. Save the 1gb for management duties or not at all.
  • DigitalFreak - Wednesday, May 13, 2020

    Why waste the money on onboard 10 gig NICs when most buyers are going to throw in their own NIC anyway?
  • AdditionalPylons - Thursday, May 14, 2020

    Exactly. This way the user is free to choose from 10/25/100 GbE or even Infiniband or something more exotic if they wish. I would personally go for a 25 GbE card (about $100 used).
  • heavysoil - Friday, May 15, 2020

    There's one model with gigabit NICs, and one with 10 gigabit NICs. That covers what most people would want, and PCIe NICs for SFP+ and/or 25/40/100 gigabit cover most everyone else.

    I can see this with the 1 gigabit NICs for monitoring/management and a 25 gigabit PCIe card for the VMs to use, for example.
  • eek2121 - Wednesday, May 13, 2020

    I wish AMD would restructure their lineup a bit next gen.

    - Their HEDT offerings are decently priced, but the boards are not.
    - All of the HEDT boards I’ve seen are gimmicky, not supporting features like ECC, and are focused on gaming and the like.
    - HEDT does not support a dual socket config, so you would naturally want to step up to EPYC. However, EPYC is honestly complete overkill, and the boards are typically cut down server variants.
    - For those that don’t need HEDT, but need more IO, they don’t have an offering at all.

    I would love to see future iterations of Zen support an optional quad channel mode or higher, ECC standardized across the board (though if people realized how little ECC matters in modern systems...), and more PCIe lanes for everything.
