Alongside their line of channel and ROG motherboards, ASUS also has business (B/Q chipset) and Workstation (WS) lines for professional markets.  The goal of these products is compatibility and stability – to be a rock solid platform in the face of any computational conundrum.  Today we are reviewing what is hopefully the first of many ASUS WS motherboards – the P9X79-E WS, for the socket 2011 / performance Xeon market.  This is an upgrade over the P9X79 WS, using PLX chips to provide seven full-length PCIe slots.

The goal of the P9X79-E WS is to be able to tackle anything a user throws at it: in order to ensure this, ASUS try to validate as many RAID cards, 10 GbE cards, FPGAs and PCIe devices as possible.  The aim is to be the final frontier in single socket performance, suggesting that a 12 core Xeon E5-2697 v2 and several of the latest Xeon Phi cards is just a walk in the park, as is any consumer level CPU.  If a user needs to run seven RAID cards, that should not be a problem here.

Several of the main features of workstation motherboards are hard to test from a review point of view.  Compatibility is wholly taken from the QVL: either a device works or it does not – if I find a device that does not and tell ASUS, chances are it will be working in the next BIOS update.  Stability and longevity are hard to test as well – these motherboards are built to withstand several years at full throttle in high ambient temperatures, so whether a single sample survives or fails such a test says little on its own, and a proper answer would require a statistical look at MTBF (mean time between failures) – a test not within my remit, and one that could take a while to perform!  Feature comparisons and performance are thus vital to our testing – the aesthetics prized in gaming motherboard evaluations are not required here.  It needs to work, ideally out of the box, and work well.

ASUS P9X79-E WS Overview

After reviewing ASUS’ X79, Z87 and ROG ranges, it seems almost nostalgic to start looking at the blue and black of ASUS again.  The P9X79-E WS has actually been on the market for a number of months, and the first feature to note is the seven full length PCIe slots.  The P9X79-E WS uses two PLX PEX 8747 chips to increase the number of PCIe 3.0 lanes on the motherboard from 40 to 72 – if a user needed to, this motherboard will support PCIe 3.0 at x16/x16/x16/x16 or x16/x8/x8/x8/x16/x8/x8.  We covered the operation of the PLX 8747 chip in a previous review – each of these chips requires ~7W of power, hence the extended heatsinks around the motherboard.
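For those who like to see the arithmetic laid out, below is a minimal sketch (in Python, purely illustrative) of the lane budget: 40 lanes from the CPU, with each PEX 8747 consuming an x16 uplink and exposing 32 downstream lanes.

```python
# Lane budget for socket 2011 plus PLX PEX 8747 switches (illustrative only).
CPU_LANES = 40        # PCIe 3.0 lanes direct from the LGA-2011 CPU
PLX_UPLINK = 16       # each PEX 8747 consumes an x16 uplink from the CPU
PLX_DOWNSTREAM = 32   # ...and exposes 32 downstream lanes to the slots

def total_slot_lanes(num_plx_chips: int) -> int:
    """Lanes available to the slots once the uplinks are routed through the switches."""
    return CPU_LANES + num_plx_chips * (PLX_DOWNSTREAM - PLX_UPLINK)

print(total_slot_lanes(1))  # 56
print(total_slot_lanes(2))  # 72, matching x16/x8/x8/x8/x16/x8/x8
```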

Along with the two PLX chips, the P9X79-E WS also uses a Marvell 9230 PCIe controller for four more SATA 6 Gbps ports (on top of the standard two SATA 6 Gbps and four SATA 3 Gbps), an ASMedia controller for two eSATA 6 Gbps ports with port multiplier support, two Intel I210 NICs, the Realtek ALC1150 audio codec, a VIA 6315N controller for FireWire/IEEE 1394 support and an ASMedia controller for four USB 3.0 ports.  All these chips require heat removal, again a reason for the large extended heatsink array.

Being a Workstation board, the P9X79-E WS is designed to accept any socket 2011 Xeon, as well as ECC memory – up to 64GB is listed on the specification sheet.  ASUS also suggest using the P9X79-E WS in a high airflow environment – I found the system hot to the touch, as there is no active fan on the motherboard for all the extra controllers.

Like the ASUS Rampage IV Black Edition we reviewed previously, ASUS are moving as many current chipsets as possible to the new BIOS layout, involving a My Favorites screen, Quick Note, Last Modified, and plenty of CPU/memory calibration options very similar to the ROG range.  One thing worth noting is the lack of the SSD Secure Erase tool we see on the ROG boards: I would have thought it appropriate to include on the workstation range as well.  One new feature in the BIOS, ASUS Ratio Boost, implements MultiCore Turbo for Xeon CPUs.

The software for the P9X79-E WS is the older AI Suite II, in line with the X79 launch, with features such as TurboV Evo for non-Xeon overclocking, EPU for power conservation, Fan Xpert+ for tuning fans, Dr. Power for monitoring the power supply, USB 3.0 Boost, AI Charger, SSD Caching and ASUS Update.

In terms of performance, not much differentiates the P9X79-E WS from its X79 counterparts.  Despite the workstation focus, the system even implements MultiCore Turbo when XMP is enabled, giving our i7-4960X sample its full turbo speed (4.0 GHz) no matter the loading.  The difference with this motherboard is all going to be in the functionality it provides and the QVL of extra PCIe products rather than raw performance.  To that end, we at least hit the limit of our CPU overclock at 4.5 GHz without too much difficulty, although the platform and heatsink arrangement did get relatively warm.

POST time is reasonable in our standard test (dual GPUs) at just under fifteen seconds, although idle power usage is higher than most due to the extra controllers on board.  DPC Latency is low (115 microseconds), which is always a good thing.

For a workstation board, price is always going to be a factor.  ASUS covers the P9X79-E WS with a 3 year warranty and provides the ASUS Premium Service for North American customers in case of any issues.  The only issue we really had was that USB 3.0 Boost would not disable UASP on our drives, although this was fixed with an update to the latest software.  ASUS (and others) need to get into the groove of providing an update tool for software and drivers in the OS that works well, in order to avoid future issues.

Visual Inspection

The first image of the P9X79-E WS is in striking contrast to the Rampage IV Black Edition I reviewed previously.  We are back to the old blue, white and black styling that ASUS will look to keep for their workstation line (at least for the time being), but again, this being a WS motherboard, functionality is more important than looks.

A full bodied X79 motherboard comes equipped with eight DRAM slots, and ASUS have again used single-sided latches so as not to encroach on any PCIe device in the first slot.  This means users should make sure that all DRAM is firmly inserted.  The power delivery, as on the RIVBE, is above the socket, although this time ASUS has an extended heatsink arrangement running down the side of the motherboard and onto a very large chipset heatsink.  As mentioned in the overview, this is due to the sizable number of controllers (at least four, plus the chipset) requiring heat removal, as well as the VRMs, which need to handle a 150W Intel Xeon processor (the E5-2687W v2, the high frequency 8-core option) if a user specifies a high end build.

There are six 4-pin fan headers on board in total, five of which are within easy reach of the socket: two 4-pin CPU fan headers to the bottom right of the socket area, a 4-pin to the left of the DRAM slots, a 4-pin at the top right of the motherboard next to the power/reset buttons and a fifth 4-pin just underneath the 24-pin ATX power connector.  The final 4-pin header is on the bottom of the board, and all can be controlled via the BIOS and OS.

As overclocking is not the focus of a Xeon based system, only a single 8-pin CPU 12V EPS power connector is provided.  This should cover 150W for any CPU, over and above anything the 24-pin ATX power connector can provide – there is no need for an 8+4 or dual 8-pin arrangement here.  There is a 6-pin PCIe power connector below the socket, although this is for the VGA slots.  That connector is in a slightly awkward position (between the DRAM, the PLX heatsink and the PCIe slots), and as the only one for the PCIe slots it could create power issues (more on that later).

Moving clockwise around the motherboard, the top right contains the common power and reset buttons (a must for almost any system I find, especially the more expensive models), a ‘Dr. Power’ switch which enables the PSU monitoring circuits, a MemOK! button (for recovering from a bad memory overclock) and the 24-pin ATX power connector.  Due to the heatsink in this area of the motherboard, it does look quite cramped up against the edge of what is already an E-ATX (or technically, CEB) sized motherboard.  Further on is one of the fan headers, an EPU switch (for power saving modes) and a USB 3.0 header, powered by an ASMedia USB 3.0 controller.

The P9X79-E WS features 10 SATA ports in total: the six from the PCH (two SATA 6 Gbps, four SATA 3 Gbps) and four SATA 6 Gbps from a Marvell 9230 PCIe controller.  Although the Marvell is used in order to enable SSD caching via software, this is one of the reasons X79 needs an update: Intel needs to push for more SATA 6 Gbps ports so we can move away from extra controllers altogether.  Technically X79 has the silicon for six SAS/SATA ports, but Intel requested that manufacturers did not use them (the X79R-AX we reviewed decided to anyway) unless they employed the validated enterprise C6xx versions of the chipset (like the X79S-UP5).  The only difference between the X79 and C606 chipsets, as far as I can tell, is cost and the SAS connectivity, and thus ASUS went for the X79 chipset here either because they did not want SAS or to reduce cost.

Along the bottom of the motherboard, from right to left, we have a two-digit debug LED, the front panel headers, an internal USB 2.0 port (useful for servers that require license dongles), a TPM header, two USB 2.0 headers, a fan header, a COM port header (often advised in WS builds in case a user requires one), a TPU switch (for one-touch CPU overclocking) and a front panel audio connector.  What surprised me about the specification list for the P9X79-E WS is that for this front panel audio, ASUS went for the Realtek ALC1150 codec rather than a cheaper option.  The ALC1150 is rated for superior SNR on one of its outputs, but less on the others, and unlike the SupremeFX variants on the consumer ROG motherboards, there is no PCB separation or headphone amplifier here.  We still managed 105 dB in our audio test, however.

So while the PCIe layout is a full array of PCIe 2.0/3.0 x16 slots, the way that it is all wired up via PLX switches to the CPU is actually rather interesting (from my point of view).  A socket 2011 CPU offers 40 PCIe lanes directly, and ASUS list a total of 72 lanes via the specification of x16/x8/x8/x8/x16/x8/x8 when all the slots are populated.  This requires a PCIe switch, such as a PLX chip, to be used.  The most common one in use is the PLX PEX 8747, which takes x8 or x16 lanes from the CPU and provides 32 lanes as output.  We covered the workings of the PLX chip in a previous review.  However, 40 – 8 + 32 = 64, or 40 – 16 + 32 = 56, and thus either ASUS are using two PLX 8747 chips (40 – 16 + 32 – 16 + 32 = 72) or a different PLX chip, such as the 8780, which takes 16 lanes and creates 48 (40 – 16 + 48 = 72), like we saw on the Galaxy Z87 HOF at Computex.  To put this into perspective, manufacturers like ASUS use PLX 8747 switches in enough volume to get a nice discount (~$20 each), whereas the PLX 8780 is rare enough to add $100 to the price.  In this situation, much as with SLI/CFX, one PLX chip uses less power (and requires less engineering) than two.  Thankfully ASUS provided a PCIe layout diagram to dispel any inaccurate hypotheses:

In this diagram the thick lines are where x16 lanes are directed, and the thin lines are x8.  So PCIe 3 has 8 lanes from the PLX and 8 lanes from the Quick Switch normally, but when PCIe 2 is populated, the Quick Switch will move those eight lanes over to PCIe 2.
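As a rough illustration of that behaviour, here is a small sketch based only on the two slot configurations ASUS list; it is not how the firmware actually negotiates lane widths, merely the bookkeeping the quick switch performs.

```python
# Illustrative model of the quick switch shared between PCIe 2 and PCIe 3:
# eight lanes belong to PCIe 3 by default and are handed to PCIe 2 only when
# a card is detected there.  Not the actual firmware behaviour.
def shared_lane_assignment(pcie2_populated: bool) -> dict:
    pcie3 = 8            # eight lanes wired straight from the PLX to PCIe 3
    pcie2 = 0
    if pcie2_populated:
        pcie2 = 8        # quick switch routes the shared x8 to PCIe 2
    else:
        pcie3 += 8       # otherwise PCIe 3 keeps them and runs at x16
    return {"PCIe 2": pcie2, "PCIe 3": pcie3}

print(shared_lane_assignment(False))  # {'PCIe 2': 0, 'PCIe 3': 16}
print(shared_lane_assignment(True))   # {'PCIe 2': 8, 'PCIe 3': 8}
```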

So in order to get the best layout for your devices, start with the blue slots for full x16/x16/x16/x16 bandwidth and rearrange as necessary.  Beyond this, black slots should be used only when needed, and selecting the right black slots will keep PCIe bandwidth as high as it can be.  In our gaming testing at least, the older NF200 PCIe switches used to cause a sizable difference in performance, but the PLX chips cost ~1-2% at most, on a bad day, relative to non-PLX performance.

One area where ASUS might run into issues is if a user decides to put graphics cards in every single slot.  Typically a graphics card will draw up to 75W through the PCIe slot, and when 3+ GPUs are to be used the motherboard manufacturer will add an extra power connector to help supply the juice.  With seven GPUs, this would be up to 525 watts, which the single 6-pin PCIe connector on board will not be able to cope with.  The system would then draw more power through the 24-pin ATX connector to compensate, which could cause issues (so it is a good thing that Dr. Power is there!).  For comparison, the EVGA SR-2 uses two extra 6-pin PCIe connectors to satisfy the similar PCIe layout it provides.
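To put rough numbers on that concern, here is a back-of-the-envelope sketch; the 75W per slot and 75W per 6-pin figures come from the PCIe specification, while the ~150W of 12V available through the 24-pin is my own assumption rather than an ASUS figure.

```python
# Back-of-the-envelope slot power budget (assumed figures, not measurements).
SLOT_DRAW_W = 75        # PCIe spec: up to 75W drawn through each x16 slot
SIX_PIN_W = 75          # PCIe spec: 75W per 6-pin auxiliary connector
ATX_24PIN_12V_W = 150   # rough assumption for 12V deliverable via the 24-pin

def slot_power_deficit(num_cards: int) -> int:
    """Watts of slot power demanded beyond what the 24-pin plus one 6-pin can supply."""
    demand = num_cards * SLOT_DRAW_W
    supply = ATX_24PIN_12V_W + SIX_PIN_W
    return max(0, demand - supply)

print(slot_power_deficit(3))  # 0   -> within budget
print(slot_power_deficit(7))  # 300 -> 525W demanded against ~225W available
```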

At some point we will see top end motherboards in the performance segment of ASUS’ lineup featuring Thunderbolt 2, but today is not that day.  Due to X79’s positioning and the CPUs therein, something needs to be updated to provide this long-term addition.  As it stands, the rear IO is standard enough for any WS product.  From left to right there is a PS/2 combination port, ten USB 2.0 ports (one in white for USB BIOS Flashback), a USB BIOS Flashback button, SPDIF output, two USB 3.0 ports in blue, two eSATA ports in red (another controller), two Intel I210 NICs and then the audio jacks.  It is safe to assume that WS builders are within reach of Ethernet, and thus while WiFi would be a welcome addition, nothing is inherently lost by not having it.  There are plenty of PCIe lanes to pick up a WiFi card if needed.

Board Features

ASUS P9X79-E WS
Price Link
Size SSI CEB (12" x 10")
CPU Interface LGA-2011
Chipset Intel X79
Memory Slots Eight DDR3 DIMM slots supporting up to 64 GB
ECC and non-ECC supported
Up to Quad Channel, 1333-2400 MHz
Video Outputs None
Onboard LAN 2 x Intel I210
Onboard Audio Realtek ALC1150
Expansion Slots 7 x PCIe 3.0 x16 via 2x PLX 8747
- x16/-/x16/x8/x16/-/x16 or
- x16/x8/x8/x8/x16/x8/x8
Onboard SATA/RAID 2 x SATA 6 Gbps (X79), RAID 0, 1, 5, 10
4 x SATA 3 Gbps (X79), RAID 0, 1, 5, 10
4 x SATA 6 Gbps (Marvell PCIe 9230)
2 x eSATA 6 Gbps (ASMedia)
USB 3.0 / IEEE 1394 4 x USB 3.0 (ASMedia ASM1042) [1 header, 2 back panel]
12 x USB 2.0 (PCH) [10 back panel, 1 header]
1 x Vertical USB 2.0
1 x IEEE 1394a Header (VIA 6315N)
Onboard 6 x SATA 6 Gbps
4 x SATA 3 Gbps
1 x USB 3.0 Header
1 x USB 2.0 Header
6 x Fan Headers
1 x Vertical USB 2.0
1 x TPM Header
TPU/EPU Switches
Clear_CMOS Jumper
MemOK! Button
Dr. Power Switch
Power/Reset Buttons
Two Digit Debug
Power Connectors 1 x 24-pin ATX Power Connector
1 x 8-pin CPU Power Connector
1 x 6-pin PCIe Power Connector
Fan Headers 2 x CPU (4-pin)
4 x CHA (4-pin)
IO Panel 10 x USB 2.0
2 x USB 3.0
2 x eSATA 6 Gbps
1 x PS/2 Combination Port
2 x Intel I210 NIC
1 x USB BIOS Flashback Button
Audio Jacks
Warranty Period 3 Years, APS in North America
Product Page Link

Big motherboard means big feature set – 12 SATA ports in total including the four extra SATA 6 Gbps and two eSATA provided by controllers, and PCIe devices are well fed and positioned due to the all-out slot configuration. 

The Realtek ALC1150 is a little odd given its status in many of the high end audio solutions in the mainstream consumer range, although its presence is not unwelcome.  The vertical USB 2.0 port on board might seem strange to some – this is typically a server feature whereby expensive software that requires a USB dongle license can have the dongle installed inside the machine, without having to worry about it being stolen or knocked loose, and kept safely within the case.

Dr. Power is a feature we have not come across before at AnandTech: a hardware and software implementation that monitors for abnormal power supply readings via the 24-pin ATX power connector and the other power inputs.  With the driver and software installed, the OS will report if the power supply is approaching abnormal values on any of its ranges.
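We do not know exactly what thresholds ASUS uses, but conceptually the check is simple; the sketch below is purely illustrative, using the ATX ±5% rail tolerances rather than ASUS’ own limits.

```python
# Purely illustrative range check, of the kind a PSU monitor performs;
# thresholds are the ATX +/-5% tolerances, not ASUS's own limits.
NOMINAL_RAILS_V = {"+12V": 12.0, "+5V": 5.0, "+3.3V": 3.3}
TOLERANCE = 0.05  # ATX spec allows roughly +/-5% on these rails

def out_of_range(readings_v: dict) -> dict:
    """Return the rails whose measured voltage falls outside tolerance."""
    flagged = {}
    for rail, nominal in NOMINAL_RAILS_V.items():
        measured = readings_v.get(rail)
        if measured is not None and abs(measured - nominal) > nominal * TOLERANCE:
            flagged[rail] = measured
    return flagged

print(out_of_range({"+12V": 11.2, "+5V": 5.02, "+3.3V": 3.31}))  # {'+12V': 11.2}
```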

Although not required by any stretch, absent from the motherboard is a management interface to allow users to configure the system over a network without it being fully powered on.  We typically see this on server level motherboards (such as the ones we have reviewed previously) in combination with an ASPEED 23xx 2D video chip.  One could argue that as this is a workstation product rather than a server product there is no need, but it would be interesting to see the crossover.

Comments

  • pewterrock - Friday, January 10, 2014 - link

    Intel Widi-capable network card (http://intel.ly/1iY9cjx) or if on Windows 8.1 use Miracast (http://bit.ly/1ktIfpq). Either will work with this receiver (http://amzn.to/1lJjrYS) at the TV or monitor.
  • dgingeri - Friday, January 10, 2014 - link

    WiDi would only work for one user at a time. It would have to be a Virtual Desktop type thing like extide mentions, but, as he said, that doesn't work too well for home user activities. Although, it could be with thin-clients: one of these for each user http://www.amazon.com/HP-Smart-Client-T5565z-1-00G...
  • eanazag - Wednesday, January 15, 2014 - link

    Yes and no. Virtual Desktops exist and can be done. Gaming is kind of a weak and expensive option. You can allocate graphics cards to VMs, but latency for screens is not going to be optimal for the money. Cheaper and better to go individual systems. If you're just watching YouTube and converting video it wouldn't be a bad option and can be done reasonably. Check out nVidia's game streaming servers. It exists. The Grid GPUs are pushing in the thousands of dollars, but you would only need one. Supermicro has some systems that, I believe, fall into that category. VMware and Xenserver/Xendesktop can share the video cards as the hypervisors. Windows server with RemoteFX may work better. I haven't tried that.
  • extide - Friday, January 10, 2014 - link

    Note: At the beginning of the article you mention 5 year warranty but at the end you mention 3 years. Which is it?
  • Ian Cutress - Friday, January 10, 2014 - link

    Thanks for pointing out the error. I initially thought I had read it as five but it is three.
  • Li_Thium - Friday, January 10, 2014 - link

    At last...triple SLI with space between from ASUS.
    Plus one and only SLI bridge: ASRock 3way 2S2S.
  • artemisgoldfish - Friday, January 10, 2014 - link

    I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.
  • mapesdhs - Friday, January 10, 2014 - link


    I have the older P9X79 WS board, very nice BIOS to work with, easy to setup a good oc,
    currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine
    has four. :D (though this is more for exploring CUDA issues with AE rather than gaming)
    See: http://valid.canardpc.com/zk69q8

    The main thing I'd like to know is if the Marvell controller is any good, because so far
    every Marvell controller I've tested has been pretty awful, including the one on the older
    WS board. And how does the ASMedia controller compare? Come to think of it, does
    Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so
    one can add a bunch of 6gbit ports that work properly?

    Should anyone contemplate using this newer WS, here are some build hints: fans on the
    chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler
    (I recommend the Corsair H110 if your case can take it, though I'm using an H80 since
    I only have a HAF 932 with the PSU at the top); take note of what case you choose if you
    want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there
    is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932);
    and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front
    fan to be an exhaust.

    Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler,
    note that larger units may press right up against the back of the top-slot GPU (a Phanteks
    will do this, the cooler I originally had before switching to an H80).

    I can mention a few other things if anyone's interested, plus some picture build links. All
    the same stuff would apply to the newer E version. Ah, an important point: if one upgrades
    the BIOS on this board, all oc profiles will be erased, so make sure you've either used the
    screenshot function to make a record of your oc settings, or written them down manually.

    Btw Ian, something you missed which I think is worth mentioning: compared to the older
    WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect
    they did this because, as I discovered, with four GPUs installed one cannot see the debug
    display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the
    right side edge to begin with.

    Hmm, one other question Ian, do you know if it's possible to use any of the lower slots
    as the primary display GPU slot with the E version? (presumably one of the blue slots)
    I tried this with the older board but it didn't work.

    Ian.

    PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM?
    I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they
    oc like crazy).
  • Ian Cutress - Saturday, January 11, 2014 - link

    Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.

    On the couple of points:
    Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as being expensive.
    Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.
    OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.
    2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).
    Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (never always easy with a backlog of boards waiting to be tested) I will attempt.
    GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.

    -Ian
  • mapesdhs - Saturday, January 11, 2014 - link


    Ian Cutress writes:
    > ... It is hard to source four GPUs of exactly the same type without
    > laying down some personal cash in the process. ...

    True, it took a while and some moolah to get the cards for my system,
    all off eBay of course (eg. item 161179653299).

    > ... I have three GTX 770 Lightnings which will feature in the testing.

    Sounds good!

    > Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.

    So far I've found it's more useful for providing RAID1 with mechanical drives.
    A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133,
    two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA
    2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and
    three GTX 580 3GB for CUDA.

    > (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) ...

    I tested an HP branded LSI card with 512MB cache, behaved much as expected:
    2GB/sec for accesses that can exploit the cache, less than that when the drives
    have to be read/written, scaling pretty much based on the no. of drives.

    > Close proximity to first PCIe: This happens with all motherboards that use the first
    > slot as a PCIe device, hence the change in mainstream boards to now make that top slot
    > a PCIe x1 or nothing at all.

    It certainly helps with HS spacing on an M4E.

    > OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC
    > Profiles included.

    Pity they can't find a way to preserve the profiles though, or at the very least
    include a warning when about to flash that the oc profiles are going to be wiped.

    > 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes
    > that users will use 4 cards has it moved there. ...

    Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.

    > Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason
    > why not, but I have not tested it. If I get a chance to put the motherboard back on
    > the test bed (never always easy with a backlog of boards waiting to be tested) I will
    > attempt.

    Ach I wouldn't worry about it too much. It was a more interesting idea with the older
    WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot
    would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.

    > GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an
    > issue for 2014 testing.

    I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200
    on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help
    wondering if Metro, etc., at 1440p would exceed 1.5GB.

    Ian.
