Ever since GIGABYTE's Server team and I first started discussing reviews, it has been interesting to see what a purely B2B (business to business) unit could do. Since then, GIGABYTE Server has expanded, catering to both the B2B and B2C (business to consumer) markets and selling direct to end users. With the release of Haswell-EP we reported on GIGABYTE's large launch at the time, and the company sent us the MD60-SC0 for review.

GIGABYTE Server MD60-SC0 Overview

For those not versed in 2P workstation and server culture, the MD60-SC0 looks a little different from consumer-line products. The DRAM and CPU sockets are aligned for airflow: air first passes over the power delivery, then the socket, then out the rear panel, all in one straight line, with the DRAM acting as a baffle to channel the air. This means that the PCIe slots sit in a somewhat awkward position in the middle of the motherboard, limiting the length of PCIe devices; without riser cables, the platform is focused more on CPU power and storage than on being a GPU powerhouse.

The sockets might also confuse some users. Unlike the standard square LGA2011-3 sockets we see on most Haswell-E or Haswell-EP motherboards, the MD60-SC0 uses the narrow (Narrow ILM) socket configuration. This requires different coolers due to the different screw-hole placement, and these coolers are typically not sold at retail, coming OEM only. GIGABYTE supplied us with a pair of Dynatron R14 heatsinks for our review, which certainly looked the part, although the noise under high load is something I wouldn't wish on anyone. This system combination belongs in a server room, for sure.

The system is based on the C612 chipset, which is similar to the consumer X99 but with 2P-related features, such as MCTP (Management Component Transport Protocol) over PCIe. We were also supplied with a Type-T LSI RAID mezzanine card to enable some onboard SAS ports. So while the ports are part of the motherboard, the RAID card is a separate purchase that leverages the Type-T slot configuration, which also allows for a series of potential RAID card upgrades over time if needed.

As with most server motherboards, the MD60-SC0 does not have any form of onboard audio but does offer two gigabit network ports alongside an Aspeed management interface. The board also comes with a QSFP+ port for added network connectivity.
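
The Aspeed controller means the system can be administered headlessly over the management port. As a rough illustration – assuming the BMC is configured for standard IPMI-over-LAN, that ipmitool is installed on the admin machine, and with placeholder address and credentials – basic health queries from Python might look like this:

```python
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BMC_HOST = "192.168.0.120"
BMC_USER = "admin"
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over IPMI-over-LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))  # power state, last restart cause
    print(ipmi("sdr", "list"))        # temperatures, voltages, fan speeds
```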

Benchmark-wise, as this is the first Haswell-EP motherboard we have tested, it is a little hard to place. As usual for motherboards with management controllers, POST times are long and power consumption is in the upper echelons. DPC latency with the E5-2697 v3 CPUs was quite reasonable, though, despite the other two CPU combinations giving much higher peaks.

One of the main crux points of server design is the orientation and size of the board, so the MD60-SC0 has to fit into your design paradigm to make the short list. With its narrow sockets and PCIe orientation, it is clearly aimed at OEMs and server builders more than at consumers.

Visual Inspection

Two things strike you as the motherboard is taken out of the box: the size of the sockets, and the fact that the PCB is not a simple rectangle but has a cutout. The sockets stand out merely because I had not encountered the Narrow ILM LGA2011 socket before, let alone its LGA2011-3 iteration. Using the socket is easy enough, although instead of the hooks we get with the larger socket, the narrow socket has a flattened lever on one side and a raised lever on the other. The usage is exactly the same.

Similar to the narrow socket, Type-T mezzanine connectors are a new concept for this reviewer. The gap in the PCB is sized so that the add-in card can fit, as the card is built to a certain 1U/2U standard. I would assume that non-rectangular PCBs requiring a cutout are harder to manufacture, but then again all the motherboards we have reviewed at AnandTech have cutouts for screw holes, so this is probably not the oddest thing we have encountered.

Each socket uses all four memory channels at two DIMMs per channel. Combine this with the narrow sockets and there is space left for SATA ports on the edge of the board. Either way you cut it, it is a very tight squeeze, and as such the narrow CPU coolers have to conform to Intel specifications exactly. The 16 DRAM slots will accept up to 128GB of UDIMMs, 512GB of RDIMMs or 1024GB of LRDIMMs. The thought of 1TB of DRAM in a single system is mind-boggling.
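
As a quick sanity check, those capacity figures fall straight out of the slot topology. A minimal sketch in Python, assuming the largest per-module sizes implied by the numbers above (8GB UDIMMs, 32GB RDIMMs, 64GB LRDIMMs – my inference, not a GIGABYTE-documented figure):

```python
# Slot topology of the MD60-SC0: 2 sockets x 4 channels x 2 DIMMs/channel.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4
DIMMS_PER_CHANNEL = 2

slots = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL  # 16 slots total

# Assumed largest per-module sizes for each DIMM type.
for kind, size_gb in (("UDIMM", 8), ("RDIMM", 32), ("LRDIMM", 64)):
    print(f"{kind:>6}: {slots} x {size_gb:>2} GB = {slots * size_gb} GB")

# UDIMM : 16 x  8 GB = 128 GB
# RDIMM : 16 x 32 GB = 512 GB
# LRDIMM: 16 x 64 GB = 1024 GB
```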

Each socket is supplied with six-phase power and an applicable heatsink. For those more accustomed to the mid-to-high-end consumer market, this combination might not look like enough, especially when the motherboard has to deal with 160W CPUs. One thing C612 motherboards have in their favor is a lack of overclocking, meaning that power draw is a known quantity. Also, with the sockets and DRAM aligned with the rear panel, the board is aimed at systems with high-pressure fans blowing in a single direction across the motherboard, which aids cooling, especially at full tilt.

The Type-T PCIe mezzanine arrangement is a full PCIe 3.0 x8 affair, with each side of the connector dealing with power and data. One important thing to note with Type-T is the ability to cope with height restrictions. PCIe devices are notoriously tall, and are rarely mounted upright in server systems below 4U. Type-T allows for lower-height arrangements, and as this motherboard is geared more towards storage, with SATA breakout connectors and the eight SAS RAID ports, Type-T is a good fit for low-height systems. It is worth noting that the red SATA ports on the motherboard are the basic SATA 6 Gbps ports, for testing the system or for OS/storage when the breakout cables are not in use.
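
For anyone wiring up the breakout and SAS connectors, it is worth confirming which transport each drive actually enumerated on once an OS is installed. A minimal sketch for a Linux host, assuming a JSON-capable lsblk from util-linux; nothing in it is specific to this board:

```python
import json
import subprocess

# List whole disks with their transport (sata vs sas), size and model.
out = subprocess.run(
    ["lsblk", "-J", "-d", "-o", "NAME,TRAN,SIZE,MODEL"],
    capture_output=True, text=True, check=True,
).stdout

for dev in json.loads(out)["blockdevices"]:
    # "sata" indicates the chipset ports; "sas" indicates drives behind
    # the mezzanine card's connectors.
    print(f'{dev["name"]}: tran={dev["tran"]}, '
          f'size={dev["size"]}, model={dev["model"]}')
```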

With our memory right up against the SAS ports, there might be a slight conflict if locking cables are used here, especially at the edges near the DRAM latches. One would imagine, though, that in a server the cables are fixed and only the drives are moved when they need replacing.

This area of the motherboard is also where we find the fan header arrangement. Beside each socket is a four-pin fan header, although these are SYS headers. There are five SYS headers on board – four on the right-hand side of the board and one at the top. The two CPU fan headers are found to the left of the sockets.

The PCIe slots afford two possibilities: either an x8/x16/x8/x8 arrangement with the Type-T at PCIe 3.0 x8, or x8/x16/-/x16, again with the Type-T at PCIe 3.0 x8. Either way, due to the location of the sockets, large PCIe co-processors can only be used with a riser card or cable. We typically equip test systems with a GTX 770 Lightning; that was not possible here, so we used an R7 240 instead and were unable to perform our normal GPU-based testing.
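
When populating the slots, it is also worth verifying from the OS that each device trained at the expected link width, since risers and cabling can silently drop lanes. A minimal sketch for a Linux host with pciutils installed, run as root for the full capability dump:

```python
import re
import subprocess

# Dump PCI capabilities and report the trained link state per device.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

for block in out.split("\n\n"):
    # LnkSta lines look like: "LnkSta: Speed 8GT/s, Width x8, ..."
    m = re.search(r"LnkSta:\s*Speed\s+([\d.]+GT/s).*?Width\s+(x\d+)", block)
    if m:
        device = block.splitlines()[0]
        print(f"{device}\n    trained at {m.group(1)}, width {m.group(2)}")
```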

Above the PCIe slots is the meat of the IO and control, with the Aspeed management chip paired with 256MB of Samsung flash, and the Intel 82599ES controller close by.

Also in this area of the motherboard is a USB 3.0 header, a TPM header, a COM header and a Thunderbolt header (for use with a Thunderbolt card). The QSFP+ Ethernet controller requires its own heatsink, and the port extends some way onto the motherboard.

Perhaps a little surprising are the power connectors. Bonus points for their location on the edge of the motherboard, although we typically see them, especially the 8-pin connectors, much nearer the CPUs. This might have implications for power arrangement and delivery through the PCB, although given how the board squeezes in two sockets with 2DPC DRAM support, it seems a reasonable compromise.

The rear panel is networking focused, giving the QSFP+ port alongside two Intel I350 ports and the server management port. All six USB ports on the rear are USB 3.0 standard, with a combination PS/2 port, a COM port and a VGA port (from the Aspeed) also in the mix.
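
Once an OS is installed, the rear-panel networking is easy to verify without vendor tools. A minimal sketch for a Linux host that reads link state and negotiated speed straight from sysfs; the interface names it prints vary per system:

```python
from pathlib import Path

# Report link state and negotiated speed for every NIC via sysfs (Linux).
for iface in sorted(Path("/sys/class/net").iterdir()):
    if iface.name == "lo":  # skip loopback
        continue
    state = (iface / "operstate").read_text().strip()
    try:
        speed = (iface / "speed").read_text().strip() + " Mb/s"
    except OSError:  # the speed attribute is unreadable while link is down
        speed = "n/a"
    print(f"{iface.name}: {state}, {speed}")
```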

Board Features

GIGABYTE MD60-SC0
Price: US (Newegg)
Size: SSI EEB
CPU Interface: LGA2011-3, Narrow ILM
Chipset: Intel C612
Memory Slots: Sixteen DDR4 DIMM slots
  Up to 128GB UDIMM, 512GB RDIMM, 1024GB LRDIMM
  Up to quad channel, 2133 MHz
Video Outputs: VGA (via Aspeed)
Network Connectivity: Intel 82599ES (QSFP+)
  2 x Intel I350
  10/100 Management Port
Onboard Audio: None
Expansion Slots: 2 x PCIe 3.0 x16
  2 x PCIe 3.0 x8
  1 x PCIe 3.0 x8 Type-T
Onboard Storage: 2 x SATA 6 Gbps, RAID 0/1/5/10
  4 x SATA 6 Gbps via mini-SAS
  4 x S_SATA 6 Gbps, no RAID, via mini-SAS
  8 x SAS/SATA via Type-T RAID card
USB 3.0: 6 x USB 3.0 via Rear Panel
  2 x USB 3.0 via Onboard Header
Onboard: 2 x SATA 6 Gbps
  2 x mini-SAS Breakout Connectors
  8 x SAS RAID Ports
  1 x USB 3.0 Header
  1 x COM Header
  1 x TPM Header
  7 x Fan Headers
  1 x Thunderbolt Header
  Front Panel Server Header
Power Connectors: 1 x 24-pin ATX
  2 x 8-pin CPU
Fan Headers: 2 x CPU (4-pin)
  5 x SYS (4-pin)
IO Panel: 1 x Combination PS/2 Port
  6 x USB 3.0 Ports
  1 x COM Port
  1 x VGA Port
  1 x QSFP+ Port (via Intel 82599ES)
  2 x 1Gbit RJ-45 Ports (via Intel I350)
  1 x 10/100 Network Management Port (via Aspeed)
Warranty Period: 3 Years
Product Page: Link

The big cost here will be that QSFP+ port, although we are not sure of the exact cost between manufacturer and end user – it could be in the region of $50 to $300. The narrow LGA2011-3 sockets will also require different CPU coolers from normal. The Type-T mezzanine arrangement and the RAID ports likewise need an added purchase to get them working.

Comments

  • PCTC2 - Wednesday, December 3, 2014 - link

    Coming from the HPC space, seeing 512GB-1TB of RAM was pretty regular, while 1.5TB-2TB was rare but did occur. However, that systems can now have 6TB of RAM in a single 4U rack server (4P servers with 96 DIMMs, Intel E7 v2 support) is pretty incredible.

    However, there are a few odd things about this board. For one, the QSFP+ is totally unnecessary, as it only supports 2x10GbE, and is neither 1) Infiniband nor 2) 40GbE. Sure, with LACP you could have bonded 20GbE, but you either need a splitter cable (QSFP+ to 4x SFP+, with 2 SFP+ unusable) or a switch that supports multiple links over QSFP+ (a 40GbE switch with 10GbE breakout capabilities). Also, the decision to use the SFF-8087 connectors for the SATA and individual ports for SAS confounds me, as you lose the sideband support with individual cables, and onboard SATA doesn't support the sideband, thus losing some functionality with some backplanes. Also, the card Gigabyte advertises with this board is an LSI 2308, an HBA and not a full hardware RAID card.

    Some of Gigabyte's B2B systems have intrigued me, especially their 8x Tesla/Phi system in 2U, but this board just doesn't seem completely thought out.
  • jhh - Wednesday, December 3, 2014 - link

    I suspect the QSFP+ was designed to support a Fortville controller, but they didn't get it qualified in time. That would get them a true 40 Gig port, or 4x10G.
  • fackamato - Friday, December 5, 2014 - link

    What's Fortville?
  • Cstefan - Friday, December 5, 2014 - link

    Intel 40GBE QSFP+
    Nothing the consumer need worry over for a long time yet.
  • Klimax - Sunday, December 7, 2014 - link

    With some results already available:
    http://www.tweaktown.com/reviews/6857/supermicro-a...
  • Cstefan - Friday, December 5, 2014 - link

    I run multiple database servers with 2TB of ram. My next round is slated for 4TB. And absolutely no joke, they reversed the SAS and SATA connectors in a monumentally stupid move.
  • ddriver - Wednesday, December 3, 2014 - link

    Well, surprisingly no gaming benchmarks this time, but what's with the "professional performance" benches? How many professionals out there make their money on running cinebench? How about some real workstation workloads for a change?
  • JeffFlanagan - Wednesday, December 3, 2014 - link

    This isn't a workstation, or a gaming machine.
  • ddriver - Wednesday, December 3, 2014 - link

    I actually applauded the absence of gaming benchmarks this time. As for whether this is for workstation machines, I'd say it is far more suited to a workstation than to running WinRAR and image-viewing software.

    And just to note, this "review" of a "server" motherboard doesn't have a single server benchmark whatsoever...
  • mpbrede - Wednesday, December 3, 2014 - link

    My usual gripe applies about acronyms that are not accompanied by an explanation when the term is first used. This time it is aggravated by a typo, I'm sure.

    "The system is based on the C612 chipset, which is similar to the consumer based X99 but with 2P related features, such as MTCP over PCIe."

    I'm pretty sure you meant to type MCTP (Management Component Transport Protocol) and not mTCP (microTCP?) or MTCP (Malaysian Technical Cooperation Programme, or something to do with Transport Layer Support for Highly Available Network Services).
