Yes and no. You can control the turbo frequencies on a Xeon through the motherboard and make it run at full turbo no matter the load, but you can't go beyond the multiplier range of the chip. So you could potentially make an E5-2687W v2 run 4GHz on all 8 cores, but not any faster than that. With an E5-2603, you'd still be stuck at 1.8GHz.
No! You can't! ASUS says NO. There's no proof on the internet, no screenshot showing it's possible. The only time a Xeon was overclocked was in Intel's demo with a test chip.
The problem is the CPUs, not the motherboard. In this server space, all the Xeons are locked - you can play around with BCLK at best, although do not expect much headroom. Even the CPU straps (1.00x, 1.25x, 1.66x) are locked down. It's an Intel issue - they do not want to sell unlocked Xeons any more. That being said, a picture was shared on twitter a few months ago by an Intel engineer trying to gauge interest in unlocked Xeons - whether that comes with or without warranty we will have to see, but I wouldn't get any hopes up just yet.
If I wanted to make an uber machine for the whole family and VM everything to their respective rooms, how would I do this in a "true headless" fashion? I.e. no computer required in their room, just a screen, and somehow beam video wirelessly to a monitor.
You are talking about a VDI infrastructure, and you do need some sort of very basic PC at each terminal to run the remote desktop connection. Probably not really a great idea for home use, as things like 3D and video do not work very well in that scenario.
WiDi would only work for one user at a time. It would have to be a Virtual Desktop type thing like extide mentions, but, as he said, that doesn't work too well for home user activities. Although, it could be done with thin clients: one of these for each user http://www.amazon.com/HP-Smart-Client-T5565z-1-00G...
Yes and no. Virtual desktops exist and can be done. Gaming is kind of a weak and expensive option. You can allocate graphics cards to VMs, but the latency to the screen is not going to be optimal for the money. It's cheaper and better to go with individual systems. If you're just watching YouTube and converting video, it wouldn't be a bad option and can be done reasonably. Check out NVIDIA's game streaming servers. It exists. The Grid GPUs are pushing into the thousands of dollars, but you would only need one. Supermicro has some systems that, I believe, fall into that category. VMware and XenServer/XenDesktop can share the video cards at the hypervisor level. Windows Server with RemoteFX may work better. I haven't tried that.
I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.
I have the older P9X79 WS board, very nice BIOS to work with, easy to set up a good oc; currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine has four. :D (though this is more for exploring CUDA issues with AE rather than gaming) See: http://valid.canardpc.com/zk69q8
The main thing I'd like to know is if the Marvell controller is any good, because so far every Marvell controller I've tested has been pretty awful, including the one on the older WS board. And how does the ASMedia controller compare? Come to think of it, does Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so one can add a bunch of 6gbit ports that work properly?
Should anyone contemplate using this newer WS, here are some build hints: fans on the chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler (I recommend the Corsair H110 if your case can take it, though I'm using an H80 since I only have a HAF 932 with the PSU at the top); take note of what case you choose if you want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932); and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front fan to be an exhaust.
Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler, note that larger units may press right up against the back of the top-slot GPU (a Phanteks will do this, the cooler I originally had before switching to an H80).
I can mention a few other things if anyone's interested, plus some picture build links. All the same stuff would apply to the newer E version. Ah, an important point: if one upgrades the BIOS on this board, all oc profiles will be erased, so make sure you've either used the screenshot function to make a record of your oc settings, or written them down manually.
Btw Ian, something you missed which I think is worth mentioning: compared to the older WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect they did this because, as I discovered, with four GPUs installed one cannot see the debug display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the right side edge to begin with.
Hmm, one other question Ian, do you know if it's possible to use any of the lower slots as the primary display GPU slot with the E version? (presumably one of the blue slots) I tried this with the older board but it didn't work.
Ian.
PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM? I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they oc like crazy).
Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.
On the couple of points:

Marvell Controller: ASUS use this to enable SSD Caching; other controllers do not do it. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 capable of breaking SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as expensive.

Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.

OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.

2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).

Lower slots for GPUs: I would assume so, but I am not 100% sure. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (never easy with a backlog of boards waiting to be tested) I will attempt it.

GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.
Ian Cutress writes: > ... It is hard to source four GPUs of exactly the same type without > laying down some personal cash in the process. ...
True, it took a while and some moolah to get the cards for my system, all off eBay of course (eg. item 161179653299).
> ... I have three GTX 770 Lightnings which will feature in the testing.
Sounds good!
> Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.
So far I've found it's more useful for providing RAID1 with mechanical drives. A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133, two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA 2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and three GTX 580 3GB for CUDA.
> (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) ...
I tested an HP-branded LSI card with 512MB cache; it behaved much as expected: 2GB/sec for accesses that can exploit the cache, less than that when the drives have to be read/written, scaling pretty much with the number of drives.
> Close proximity to first PCIe: This happens with all motherboards that use the first > slot as a PCIe device, hence the change in mainstream boards to now make that top slot > a PCIe x1 or nothing at all.
It certainly helps with HS spacing on an M4E.
> OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC > Profiles included.
Pity they can't find a way to preserve the profiles though, or at the very least include a warning when about to flash that the oc profiles are going to be wiped.
> 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes > that users will use 4 cards has it moved there. ...
Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.
> Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason > why not, but I have not tested it. If I get a chance to put the motherboard back on > the test bed (never always easy with a backlog of boards waiting to be tested) I will > attempt.
Ach I wouldn't worry about it too much. It was a more interesting idea with the older WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.
> GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an > issue for 2014 testing.
I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200 on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help wondering if Metro, etc., at 1440p would exceed 1.5GB.
What are the effects of the PLX chips when using two or more R9 290s in CrossFire? We know that when doing AFR, the R9 290 and R9 290X use the PCIe lanes to move frames from one GPU to another. Frame time testing with two GPUs in different slots would be very interesting:

PCIe 1 - PCIe 2 -> goes through the PLX chip and QS
PCIe 2 - PCIe 3 -> goes through QS only
PCIe 1 - PCIe 5 -> goes through both PLX chips
PCIe 2 - PCIe 6 -> goes through both PLX and QS chips

and possibly more combinations. I am not saying that all possible combinations need to be tested - just two combinations to give us an idea of the latency involved would be good enough, like 1) PCIe 1 - PCIe 3 (only PLX) 2) PCIe 2 - PCIe 6 (both PLX and QS).
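The pairings listed above can be tabulated as a quick sketch, taking the slot-to-chip routing straight from the comment's own listing (assumed, not verified against ASUS' block diagram):

```python
# Which switch chips a peer-to-peer transfer crosses, per the pairings
# listed above (topology assumed from the comment, not verified).
PATHS = {
    ("PCIe1", "PCIe2"): ["PLX", "QS"],
    ("PCIe2", "PCIe3"): ["QS"],
    ("PCIe1", "PCIe5"): ["PLX", "PLX"],
    ("PCIe2", "PCIe6"): ["PLX", "PLX", "QS"],
}

def hops(a, b):
    """Chips traversed between two slots, in either direction."""
    return PATHS.get((a, b)) or PATHS.get((b, a)) or []

# Fewer chips in the path should mean less added latency per frame copy.
for pair, chips in sorted(PATHS.items(), key=lambda kv: len(kv[1])):
    print(pair, "->", " + ".join(chips))
```

If frame-time testing showed a measurable gap between, say, the one-chip and three-chip pairings, that gap would bound the per-chip penalty.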
I did some PLX testing on various Z87 motherboards that use one of the chips, and the overall deficit versus ideal routing was a 1-2% loss per PLX chip in the worst-case scenario. This is better than the old NF200s, which had up to a 5-10% loss, iirc. Of course with X79 it's a little different, in that the CPU could run an x16/x8/x8/x8 layout natively, and the question is whether going x16/x16/x16/x16 through the PLX chips would make a difference. While I don't have 290 cards to hand, I do have 7970s and now GTX 770s to do a small comparison in the future.
I am not a gamer, but my science and storage workloads are well met by Xeon workstations. The build-your-own route can make financial sense sometimes, depends on the job.
The main benefit of a DIY oc build is gaining performance equivalent to an expensive high-core-count XEON on a lower budget. XEONs with lots of cores have much lower clocks, so a 6-core SB-E or IB-E at 4.7+ runs very well. There are tradeoffs of course, such as non-ECC RAM being used; this might rule out the idea for some tasks. Still, there's a lot of scope for building something fast without breaking the bank. If one needs a degree of reliability though, then I guess just dial the oc back a step or two, say 4.5GHz, and/or go for top-end cooling by default such as an H110 + suitable case.
Great article, Ian. Although I wish you'd focused more on the workstation aspects of the motherboard, not gaming and stuff :D
1. Do you know of any motherboards from other manufacturers with similar specs?
2. ASUS says it's a CEB motherboard. So the case has to be CEB as well? Or can it be E-ATX? Isn't that kinda small for it?
Thanks again for the review.
The only other board I could find that came close in overall concept to ASUS' X79 WS series is ASRock's X79 Extreme11. However, apart from being quite a bit more expensive, in the end I felt ASRock messed up a bit by not using a SAS controller with any onboard cache, which can spoil 4K performance. Given the board's cost, I can't imagine why they didn't choose an equivalent LSI chip with a 1GB cache or something; it would have been much better. Maybe the added cost was just too much.
Can't remember offhand about CEB vs. E-ATX; I think CEB means the board can be deeper as well as longer. Either way, it fits fine in a HAF 932, though the case I'd recommend atm is the Aerocool X-Predator. Caveat: if one has to move a system around a lot, e.g. transport to company sites, then choose a different case that has handles. Either way, for max expandability, use a 10-slot case.
I thought it only fits in a CEB case. That's why I was gonna get a Silverstone RV03, because that's the only CEB I could find. This is a great help for me. It means I have other options for the case. Thanks a lot!
An old thread I know, but a minor update for anyone who finds this for some reason as I recently built an editing setup with a P9X79-E WS I managed to get for only 200 quid (fitted with an i7 3970X, Quadro 6000, GTX 580 3GB, etc.): now I'm using a Corsair C70 Military Green case, definitely better. More rear slots than the HAF 932, though I'm only using two NDS fans with the H110 (decided after several builds that four is unnecessary). The C70 has fewer front 5.25" bays than the 932, but using more SSDs, etc. has meant that's not an issue.
Hoping to see if it's possible to boot from a 950 Pro soon...
"Being a Workstation board, the P9X79-E WS is designed to accept any socket 2011 Xeon, as well as ECC memory – up to 64GB is listed on the specification sheet, although 16GB ECC DRAM modules are now available through Newegg for $210 each."
The X79 chipset supports unbuffered ECC with a Xeon. 16GB DIMMs are not available as ECC unbuffered, only ECC registered. You need a C600 series chipset with a Xeon to use registered memory.
When I looked at the user reviews on Newegg and forums, I saw that there are a lot of issues with this motherboard. So I went with the P9X79 WS motherboard instead, which has fewer negative reviews.
Perhaps there are a few issues with these PLX chips that need to be addressed before the board becomes stable...
I saw some of those reviews, mainly being linked to upgrading to Ivy-E, or buying one when they first come out and then upgrading to the CAP BIOS system. My review sample (as the ones on sale should be) was already in CAP, so I just put in the latest BIOS and it worked fine. The PLX chips are tried and tested in many other boards, so no issues there on the chip itself.
A motherboard that refuses to post because of a too modern CPU makes things very hard if you don't happen to have an "old" LGA2011 CPU lying around, and most people don't.
But the PLX chips tend to give me the heebie jeebies when considering virtualized configurations that use PCI passthrough (IOMMU through Intel VT-d). It is a 'workstation grade' motherboard after all so such usage scenarios should be considered. It would be interesting to know how PLX switch chips affect the PCI passthrough capabilities.
Otherwise, a motherboard with 7 full-lane PCIe slots is really attractive but I guess a dual CPU motherboard is needed for that.
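One quick sanity check for the passthrough question above: on Linux, VFIO can only assign devices in whole IOMMU groups, and devices behind a switch without ACS tend to land in the same group. A small sketch that reads the standard sysfs layout (the PCI address below is a made-up example):

```python
# Report the IOMMU group of a PCI device via sysfs. Devices that share a
# group (e.g. GPUs behind a switch without ACS) must be passed through to
# the same VM together. The address used here is an assumed example.
import os

def iommu_group_of(pci_addr):
    """Return the IOMMU group number for a PCI device, or None."""
    link = "/sys/bus/pci/devices/%s/iommu_group" % pci_addr
    if os.path.exists(link):
        return os.path.basename(os.path.realpath(link))
    return None  # VT-d disabled, or no such device on this host

print(iommu_group_of("0000:03:00.0"))
```

A review could simply dump these groups with VT-d enabled; if each x16 slot gets its own group, the PLX chips aren't blocking passthrough.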
This is why these motherboards support USB BIOS Flashback: the ability to flash a BIOS onto the motherboard without a CPU, DRAM or a VGA installed. It requires renaming the BIOS file, putting it onto a suitable memory stick and following ASUS' instructions. I've used it a couple of times before, and as long as you follow the instructions it is ok: people get frustrated when it doesn't seem to work and there is no feedback (file misnamed, USB not suitable, BIOS not copied properly, BIOS still in old mode requires old BIOS not CAP BIOS).
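The renaming step is the part that usually trips people up; here is a toy sketch of just that step. The fixed target filename "P9X79EWS.CAP" is an assumed example, not verified for this board - the real name is given in ASUS' manual, and the stick must be suitably (FAT) formatted:

```python
# Illustration of the USB BIOS Flashback renaming step. "P9X79EWS.CAP" is
# an assumed example of the fixed, board-specific filename Flashback looks
# for; check ASUS' manual for the real one.
import pathlib, shutil, tempfile

def prepare_flashback_stick(bios_file, stick_root, target_name="P9X79EWS.CAP"):
    """Copy the downloaded BIOS onto the stick's root under the fixed name."""
    dest = pathlib.Path(stick_root) / target_name
    shutil.copyfile(bios_file, dest)  # Flashback ignores any other filename
    return dest

# Demo against a scratch directory standing in for the stick's root:
stick = pathlib.Path(tempfile.mkdtemp())
bios = stick / "P9X79-E-WS-ASUS-1704.CAP"  # assumed download name
bios.write_bytes(b"\x00" * 16)             # stand-in for the real image
print(prepare_flashback_stick(bios, stick).name)  # -> P9X79EWS.CAP
```

A misnamed file is exactly the silent failure mode described above, which is why the rename deserves the extra care.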
Hi, I find the benches useless in a mobo review - all the mobos perform the same, of course, +/-1-2%, so nobody cares. In this case the only useful bench is measuring the impact of the PLX chips on graphics performance in games. It looks like a minimal impact, and that's good, but you can see that x16@PCIe3 vs. x8@PCIe3 is of no use at the moment.
IMHO a mobo review should be about stability, quirks, and measuring the performance of the features. In this case: performance of the Marvell 930 and ASMedia SATA3 controllers vs. Intel; performance of ASMedia USB3 vs. Intel Z87; stability with 64GB RAM and 3-way SLI.
I've had this board for a few days with an E5-1650 v2. What I don't like: you can't run the CPU at stock Intel spec. If you enable Turbo, you get all cores always at turbo speed with the Vcore ramped up; this is no good for a WS board. Why? There's no option to disable the MultiCore option. There are fewer sensor voltages to monitor than a board at this price level should have. The IB-E support still isn't that great: the default voltages are not correct for CPU PLL (1.8 instead of 1.7) and VTTIO (1.05 vs. 1.00), and there's no way to respect the Intel VID of the CPU - there's only the manual fixed voltage or the ASUS adaptive one. On the plus side: 64GB rock solid at Intel specs for VSSA (0.95V). Stable so far.
If you want to run everything at their baseline defaults, I don't see the relevance of a board like this in the first place. The whole point of this WS board is that it pairs the oc'ing features of the ROG series with the kind of workstation features normally found on pro boards. It's an excellent middleground. You'd really want to run 64GB at minimal speed, etc.? I have 64GB @ 2133 just fine. Plus, in reality the various voltages you refer to vary from one chip to another wrt their ideal baseline values; there are no absolutes.
If you want to run stuff at 'stock intel spec', then buy a boring ordinary XEON board, not one like this which is intended to allow one to do sooo much more.
Well, I don't agree. I'd have preferred the option to run everything at spec, plus the option to 'switch gears' with an overclock. This is not a RIVE dressed as a WS... and it shouldn't be. If you want to overclock to hell, the ROG Extreme line is for you. If you want a stable, classic workstation mobo with a Xeon, with the option to tweak it if you wish - well, that's what I think the WS line should be, not a hybrid. It lacks additional power for the CPU, for example... only one ordinary 8-pin and nothing else. I find it strange: even Z87 boards have an additional power input, and Haswell tops out at an 84W TDP from the start... the E5-2687W v2 is a 150W part at 3.4GHz... turbo at 4GHz is over 200W.
If you buy an i7, why select this board? There's the Deluxe for you. 2/3-way SLI for gaming? The RIVE/MF is for you. This board is for Xeons and ECC memory first, so why force the CPU to run overclocked at stock settings?
The ROG boards are for gamers. I didn't buy one for gaming, so your logic is flawed from the outset. I built a system for AE and wanted RAID card compatibility, among other things. Plus, the only ROG board I felt was any good was a lot more expensive.
Whatever you might think the WS should be doesn't matter. It is what it is, a blend of workstation and top-end gamer board features, the best of both IMO. I don't understand your concerns; after all, you don't *have* to oc on _any_ board. Leave everything at its defaults and it'll be fine as-is. Me, I wanted 64GB RAM @ 2133 and a 6-core @ 4.7+, with the ability to run four GPUs for CUDA, and RAID card support. The WS is perfect for this. As for the CPU power issue, I don't see it as even being an issue. Where's your evidence the WS in any way suffers from not having an extra power connector? The WS will handle a 3930K @ 5.0 no problem.
Basically, your assumptions are wrong, and thus your conclusions are wrong. The Deluxe was definitely not for me. The WS supports XEONs just as it supports i7s; saying it's "for" one chip type or the other doesn't make sense.
For those who _are_ looking for a gaming board though then you do have a point, except that the PCIe structure is better on the WS-E IMO.
Ian.
PS. And btw, how many CPU-Z submissions have you seen which have a ROG board with a 3930K @ 4.7+ and max RAM at 2133+ with four GPUs? I've never seen one. What I wanted to build is in a different league to gaming setups. Games tax just parts of a system and often not much at that; AE hammers everything at times, gobbling 40GB RAM no problem, hence the SSD for cache, etc.
Question on the Dr. Power feature. Does this application show you the wattage being drawn by each separate PCIe slot? Also, did your GPU have a power feed direct from the PSU?
Does a purchase like this make sense in early 2014, when Haswell-E/X99 is coming out later this year? A $500 mobo, plus a $500 CPU, plus another few hundred for RAM, and you are spending a lot on a platform that will be replaced in under a year with something better. I just feel that at this time this platform is a bit long in the tooth - no native USB3, for instance.
I'm currently using SB/Z68 (:<) and I'm pretty comfortable waiting for Haswell-E/X99 at this point. It's only been in the last 6 months I've come to desire the X79 feature set.
Makes perfect sense if you need to build something now. :D I've been talking to a movie guy who's about to construct something based on this E revision. Similar to mine but better GPUs, beginning with one 780Ti, expanding to 4 later. Only slight hitch is I've been trying to convince him to use a Corsair H110 for the CPU instead of a big HS, the latter making transport more difficult. Either way, it'll be a good AE system until he switches to a dual-socket 24-core XEON setup next year.
The only thing really missing from X79 (apart from a proper 8-core consumer chip option) is more Intel SATA3 ports which don't suffer from the perils one can encounter with Marvell ports. Both performance and reliability are better with the Intel ports, in some cases by a huge margin. People harp on about USB3, but a lot of pro users I know rarely use it and if they do need a USB link they're usually happy with USB2. Depends on the task though of course, I'm sure some would find it important.
If you compare this funny mobo with a professional Supermicro, e.g. the X9SRL-F (7 PCIe slots for server use) or the X9SRA for workstations, it looks like a toy for kids. ASUS uses a lot of tricks, but it can't overcome the 40-lane limitation of a single CPU. The motherboard is too complicated. 64GB of RAM is the limit? Something is wrong with ASUS; Supermicro supports 512GB. If you go for a XEON, choose Supermicro or Tyan.
Actually, the SM boards look more like demo samples than real boards, with so few surface caps and MOSFETs compared to this board. :) The reality is that they're not necessary to run the system at stock, given the wide voltage margins on Xeons. BTW, the 64GB limit is about UDIMMs vs. RDIMMs: only the C600 series supports both RDIMMs and UDIMMs; X79 supports only UDIMMs.
P.S. SM rocks, you can't really go wrong with them for WS/Server rig.
The 64GB limitation is the same in Intel Xeon and i7 CPUs for unbuffered memory. 500-700GB is supported with buffered memory on E5 Xeons only (E7s have hybrid controllers with external components). Buffered memory is MUCH slower than unbuffered due to penalties introduced by the buffer and its latent logic. Likewise, inter-CPU RAM access introduces big penalties on multi-socket Xeon systems. That's why sometimes (generally in HPC simulations) a single-socket system with unbuffered RAM is preferred.
Hi djezik, although your post is fairly old, I had a look at the two boards you mentioned, and they don't have the same PCIe lane expansion capabilities as the P9X79-E WS, which has 72 PCIe lanes thanks to its 2 additional PLX chips and can run 4 slots at x16 at full speed, or 2 at x16 and 5 at x8. The two boards you mentioned do have greater memory capacity though, i.e. 512GB ECC vs. 64GB ECC or non-ECC.
So it really depends on what you need this board to do. If you want to put in 4 x16 graphics cards at once and don't need more than the 64GB RAM limit, then this is the board to get; but if you do need more than 64GB, then the ASUS should not be considered.
Liars! X79 DOES NOT support "full turbo mode (4.0 GHz) no matter the loading". "One new feature called ASUS Ratio Boost is in the BIOS, which implements MultiCore Turbo for Xeon CPUs" is a lie, according to ASUS itself. I don't know why they tricked people in this dirty way. Maybe Intel pays them for spreading weird rumors. But ASUS says that turbo bins cannot be reconfigured on their boards for XEON CPUs!
I recently purchased an ASUS motherboard and the problems started from day 1. The driver updates never work, the same for AI Suite III (there's a lot of updates for this model on the ASUS webpage). After 2 months I still can't install BitDefender because of a clock watchdog error. ASUS technical support is the worst; mails come and go with no solution. I will not recommend this brand to anyone. The brand has very good marketing, but the product and the service are very disappointing.
There is no driver support for Server 2012, so if you want to run it as a server using that OS then forget it. After all, ASUS has only had about 2-3 years to make the drivers for it. Using the Windows 8.x drivers doesn't work either; I tried to run them in admin mode and also compatibility mode without success.
Just a followup to my previous post, there are drivers out there but not on the Asus site, got some links for the chipsets on another forum site, just need to find one more.
I know this is old, but I have to say, having bought one recently, I did not make the connection on how few USB headers this mobo has. I wish I'd caught that it had an internal USB 2.0 connector instead of a header.
dstarr3 - Friday, January 10, 2014 - link
"If a user needs to run seven RAID cards should not be a problem here." Is that strictly true without any onboard video?

Ian Cutress - Friday, January 10, 2014 - link
Run it headless with remote desktop or TeamViewer over a network.

nightbringer57 - Friday, January 10, 2014 - link
Or a USB video adapter.

JlHADJOE - Friday, January 10, 2014 - link
Can you overclock a Xeon on it?

Hale_Kyou - Monday, March 3, 2014 - link
I.e. neither overclock, nor all-core full turbo. Full 4GHz is not possible.
pewterrock - Friday, January 10, 2014 - link
Intel WiDi-capable network card (http://intel.ly/1iY9cjx) or, if on Windows 8.1, use Miracast (http://bit.ly/1ktIfpq). Either will work with this receiver (http://amzn.to/1lJjrYS) at the TV or monitor.

extide - Friday, January 10, 2014 - link
Note: at the beginning of the article you mention a 5-year warranty, but at the end you mention 3 years. Which is it?

Ian Cutress - Friday, January 10, 2014 - link
Thanks for pointing out the error. I initially thought I had read it as five, but it is three.

Li_Thium - Friday, January 10, 2014 - link
At last... triple SLI with space between the cards, from ASUS. Plus the one and only SLI bridge: ASRock 3-way 2S2S.
artemisgoldfish - Friday, January 10, 2014 - link
I'd like to see how this board compares against an x16/x16/x8 board with 3 290Xs (if thermal issues didn't prevent this). Since they communicate from card to card through PCIe rather than a Crossfire bridge, a card in PCIe 5 communicating with a card in PCIe 1 would have to traverse the root complex and 2 switches. Wonder what the performance penalty would be like.mapesdhs - Friday, January 10, 2014 - link
I have the older P9X79 WS board, very nice BIOS to work with, easy to setup a good oc,
currently have a 3930K @ 4.7. I see your NV tests had two 580s; aww, only two? Mine
has four. :D (though this is more for exploring CUDA issues with AE rather than gaming)
See: http://valid.canardpc.com/zk69q8
The main thing I'd like to know is if the Marvell controller is any good, because so far
every Marvell controller I've tested has been pretty awful, including the one on the older
WS board. And how does the ASMedia controller compare? Come to think of it, does
Intel sell any kind of simple SATA RAID PCIe card which just has its own controller so
one can add a bunch of 6gbit ports that work properly?
Should anyone contemplate using this newer WS, here are some build hints: fans on the
chipset heatsinks are essential; it helps a lot with GPU swapping to have a water cooler
(I recommend the Corsair H110 if your case can take it, though I'm using an H80 since
I only have a HAF 932 with the PSU at the top); take note of what case you choose if you
want to have a 2/3-slot GPU in the lowest slot (if so, the PSU needs space such as there
is in an Aerocool X-Predator, or put the PSU at the top as I've done with my HAF 932);
and if multiple GPUs are pumping out heat then remove the drive cage & reverse the front
fan to be an exhaust.
Also, the CPU socket is very close to the top PCIe slot, so if you do use an air cooler,
note that larger units may press right up against the back of the top-slot GPU (a Phanteks
will do this, the cooler I originally had before switching to an H80).
I can mention a few other things if anyone's interested, plus some picture build links. All
the same stuff would apply to the newer E version. Ah, an important point: if one upgrades
the BIOS on this board, all oc profiles will be erased, so make sure you've either used the
screenshot function to make a record of your oc settings, or written them down manually.
Btw Ian, something you missed which I think is worth mentioning: compared to the older
WS, ASUS have moved the 2-digit debug LED to the right side edge of the PCB. I suspect
they did this because, as I discovered, with four GPUs installed one cannot see the debug
display at all, which is rather annoying. Glad they've moved it, but a pity it wasn't on the
right side edge to begin with.
Hmm, one other question Ian, do you know if it's possible to use any of the lower slots
as the primary display GPU slot with the E version? (presumably one of the blue slots)
I tried this with the older board but it didn't work.
Ian.
PS. Are you sure your 580 isn't being hampered in any of the tests by its meagre 1.5GB RAM?
I sourced only 3GB 580s for my build (four MSI Lightning Xtremes, 832MHz stock, though they
oc like crazy).
Ian Cutress - Saturday, January 11, 2014 - link
Dual GTX 580s is all I got! We don't all work in one big office at AnandTech, as we are dotted around the world. It is hard to source four GPUs of exactly the same type without laying down some personal cash in the process. That being said, for my new 2014 benchmark suite starting soon, I have three GTX 770 Lightnings which will feature in the testing.
On a couple of points:
Marvell Controller: ASUS use this to enable SSD Caching; other controllers do not do it. That is perhaps at the expense of speed, although I do not have appropriate hardware (i.e. two drives in RAID 0 capable of saturating SATA 6 Gbps) connected via SATA. Perhaps if I had something like an ACARD ANS-9010 that would be good, but sourcing one would be difficult, as well as expensive.
Close proximity to first PCIe: This happens with all motherboards that use the first slot as a PCIe device, hence the change in mainstream boards to now make that top slot a PCIe x1 or nothing at all.
OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC Profiles included.
2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes that users will use 4 cards has it moved there. You also need an E-ATX layout or it becomes an issue with routing (at least more difficult to trace on the PCB).
Lower slots for GPUs: I would assume so, but I am not 100% sure. I cannot see any reason why not, but I have not tested it. If I get a chance to put the motherboard back on the test bed (never easy with a backlog of boards waiting to be tested) I will attempt it.
GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an issue for 2014 testing.
-Ian
mapesdhs - Saturday, January 11, 2014 - link
Ian Cutress writes:
> ... It is hard to source four GPUs of exactly the same type without
> laying down some personal cash in the process. ...
True, it took a while and some moolah to get the cards for my system,
all off eBay of course (eg. item 161179653299).
> ... I have three GTX 770 Lightnings which will feature in the testing.
Sounds good!
> Marvell Controller: ASUS use this to enable SSD Caching, other controllers do not do it.
So far I've found it's more useful for providing RAID1 with mechanical drives.
A while ago I built an AE system using the older WS board; 3930K @ 4.7, 64GB @ 2133,
two Samsung 830s on the Intel 6gbit ports (C-drive and AE cache), two Enterprise SATA
2TB on the Marvell in RAID1 for long term data storage. GPUs were a Quadro 4000 and
three GTX 580 3GB for CUDA.
> (i.e. two drives in RAID 0 suitable of breaking SATA 6 Gbps) ...
I tested an HP branded LSI card with 512MB cache, behaved much as expected:
2GB/sec for accesses that can exploit the cache, less than that when the drives
have to be read/written, scaling pretty much based on the no. of drives.
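(Aside: a rough way to reproduce that kind of sequential check on Linux is a dd run with a sync at the end. Just a sketch; the target path is a placeholder for wherever the array is mounted, and it only approximates sustained write speed:)

```shell
# Rough sequential-write check on a RAID volume.
# TARGET is a placeholder path; point it at the array under test.
TARGET="${TARGET:-${TMPDIR:-/tmp}/ddtest.bin}"

# conv=fsync forces a flush before dd reports, so cache effects are
# mostly excluded from the quoted throughput figure.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

rm -f "$TARGET"
```

Reads need a separate pass (with dropped caches) to be meaningful, but for spotting whether a controller's cache is doing the work this is usually enough.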
> Close proximity to first PCIe: This happens with all motherboards that use the first
> slot as a PCIe device, hence the change in mainstream boards to now make that top slot
> a PCIe x1 or nothing at all.
It certainly helps with HS spacing on an M4E.
> OC Profiles being erased: Again, this is standard. You're flashing the whole BIOS, OC
> Profiles included.
Pity they can't find a way to preserve the profiles though, or at the very least
include a warning when about to flash that the oc profiles are going to be wiped.
> 2-Digit Debug: The RIVE had this as well - basically any board where ASUS believes
> that users will use 4 cards has it moved there. ...
Which is why it's a bit surprising that the older P9X79 WS doesn't have it on the edge.
> Lower slots for GPUs: I would assume so, but I am not 100%. I cannot see any reason
> why not, but I have not tested it. If I get a chance to put the motherboard back on
> the test bed (never always easy with a backlog of boards waiting to be tested) I will
> attempt.
Ach I wouldn't worry about it too much. It was a more interesting idea with the older
WS because the slot spacing meant being able to fit a 1-slot Quadro in a lower slot
would give a more efficient slot usage for 2-slot CUDA cards & RAID cards.
> GPU Memory: Perhaps. I work with what I have at the time ;) That should be less of an
> issue for 2014 testing.
I asked because of my experiences of playing Crysis2 at max settings just at 1920x1200
on two 1GB cards SLI (switching to 3GB cards made a nice difference). Couldn't help
wondering if Metro, etc., at 1440p would exceed 1.5GB.
Ian.
Hammerfist - Friday, January 10, 2014 - link
What are the effects of the PLX chips when using two or more R9 290s in Crossfire? We know that when doing AFR, the R9 290 and R9 290X use the PCIe lanes to move frames from one GPU to another.
A frame time test with two GPUs in different lanes would be very interesting.
PCIe 1 - PCIe 2 -> Goes through the PLX chip and QS
PCIe 2 - PCIe 3 -> Goes through QS only
PCIe 1 - PCIe 5 -> Goes through both PLX chips
PCIe 2 - PCIe 6 -> Goes through both PLX and QS chips
and possibly more combinations.
I am not saying that all possible combinations need to be tested, just two combinations to give us an idea of the latency involved would be good enough, like
1) PCIe1 - PCIe3 (only PLX)
2) PCIe2 - PCIe6 (both PLX and QS)
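To keep the pairs straight when picking test combinations, here's the routing above encoded as a quick lookup - just a sketch, with the slot wiring assumed from the review's block diagram rather than independently verified:

```shell
# hops SLOT_A SLOT_B -> which switch chips an inter-GPU transfer
# crosses, per the combinations listed above (slot wiring assumed
# from the review's block diagram, not independently verified).
hops() {
    case "$1-$2" in
        1-2) echo "PLX + QS" ;;
        2-3) echo "QS only" ;;
        1-5) echo "both PLX" ;;
        2-6) echo "both PLX + QS" ;;
        *)   echo "untabulated pair" ;;
    esac
}

hops 1 5   # prints "both PLX"
```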
Ian Cutress - Saturday, January 11, 2014 - link
I did some PLX testing on various Z87 motherboards that use one of the chips, and the overall deficit versus ideal routing was a 1-2% loss per PLX chip in the worst case scenario. This is better than the old NF200s, which had up to a 5-10% loss, iirc. Of course with X79 it's a little different, in that the CPU could go for an x16/x8/x8/x8 layout, and the question is whether going x16/x16/x16/x16 would make a difference. While I don't have 290 cards to hand, I do have 7970s and now GTX 770s to do a small comparison in the future.
watersb - Saturday, January 11, 2014 - link
Ian, thanks very much for this review.
I am not a gamer, but my science and storage workloads are well met by Xeon workstations. The build-your-own route can make financial sense sometimes; depends on the job.
Glad you are there checking it all out.
mapesdhs - Saturday, January 11, 2014 - link
The main benefit of a DIY oc build is gaining access to performance equivalent to an
expensive high-core XEON on a lower budget. XEONs with lots of cores have much lower
clocks, so a 6-core SB-E or IB-E at 4.7+ runs very well. There are tradeoffs of course,
such as non-ECC RAM being used; this might rule out the idea for some tasks. Still, there's
a lot of scope for building something fast without breaking the bank. If one needs a degree
of reliability though then I guess just step back a step or two on the oc, say 4.5GHz, and/or
go for top-end cooling by default such as an H110 + suitable case.
Ian.
Pooyan - Saturday, January 11, 2014 - link
Great article, Ian. Although I wish you had focused more on the workstation aspects of the motherboard, not gaming and stuff :D
1. Do you know any motherboards from other manufacturers with similar specs?
2. ASUS says it's a CEB motherboard. So the case has to be CEB as well? Or can it be E-ATX? Isn't that kinda small for it?
Thanks again for the review.
mapesdhs - Saturday, January 11, 2014 - link
The only other board I could find that came close in overall concept to ASUS' X79 WS
series is Asrock's X79 Extreme 11. However, apart from being quite a bit more expensive,
in the end I felt Asrock messed up a bit by not using a SAS controller with any onboard
cache, which can spoil 4K performance. Given the board cost, I can't imagine why they
didn't choose an equivalent LSI chip with a 1GB cache or something; it would have
been much better. Maybe the added cost was just too much.
Can't remember offhand about CEB vs. EATX; I think CEB means the board can be
deeper as well as longer. Either way, it fits fine in a HAF 932, though the case I'd
recommend atm is an Aerocool X-Predator. Caveat: if one has to move a system
around a lot, eg. transport to company sites, then choose a different case that has
handles. Either way, for max expandability, use a 10-slot case.
Ian.
Pooyan - Tuesday, January 14, 2014 - link
I thought it only fits in a CEB case. That's why I was gonna get a Silverstone RV03, because that's the only CEB case I could find. This is a great help for me; it means I have other options for the case. Thanks a lot!
mapesdhs - Tuesday, June 7, 2016 - link
An old thread I know, but a minor update for anyone who finds this for some reason, as I recently built an editing setup with a P9X79-E WS I managed to get for only 200 quid (fitted with an i7 3970X, Quadro 6000, GTX 580 3GB, etc.): now I'm using a Corsair C70 Military Green case, definitely better. More rear slots than the HAF 932, though I'm only using two NDS fans with the H110 (decided after several builds that four is unnecessary). The C70 has fewer front 5.25" bays than the 932, but using more SSDs, etc. has meant that's not an issue.
Hoping to see if it's possible to boot from a 950 Pro soon...
Ian.
Umbongo - Saturday, January 11, 2014 - link
"Being a Workstation board, the P9X79-E WS is designed to accept any socket 2011 Xeon, as well as ECC memory – up to 64GB is listed on the specification sheet, although 16GB ECC DRAM modules are now available through Newegg for $210 each."
The X79 chipset supports unbuffered ECC with a Xeon. 16GB DIMMs are not available as ECC unbuffered, only ECC registered. You need a C600 series chipset with a Xeon to use registered memory.
Ian Cutress - Saturday, January 11, 2014 - link
Ah, I thought I had seen 16GB unregistered memory. Seems like I was mistaken (!)
g00ey - Saturday, January 11, 2014 - link
When I looked at the user reviews on Newegg and forums, I saw that there are a lot of issues with this motherboard, so I went with the P9X79 WS motherboard instead, which has fewer negative reviews.
Perhaps there are a few issues with these PLX chips that need to be addressed before it becomes stable...
Ian Cutress - Saturday, January 11, 2014 - link
I saw some of those reviews, mainly linked to upgrading to Ivy-E, or buying one when they first came out and then upgrading to the CAP BIOS system. My review sample (as the ones on sale should be) was already in CAP, so I just put in the latest BIOS and it worked fine. The PLX chips are tried and tested in many other boards, so no issues there with the chip itself.
g00ey - Saturday, January 11, 2014 - link
A motherboard that refuses to POST because of a too-modern CPU makes things very hard if you don't happen to have an "old" LGA2011 CPU lying around, and most people don't.
But the PLX chips tend to give me the heebie-jeebies when considering virtualized configurations that use PCI passthrough (IOMMU through Intel VT-d). It is a 'workstation grade' motherboard after all, so such usage scenarios should be considered. It would be interesting to know how the PLX switch chips affect PCI passthrough capabilities.
Otherwise, a motherboard with 7 full-lane PCIe slots is really attractive but I guess a dual CPU motherboard is needed for that.
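For what it's worth, on a Linux host booted with VT-d enabled you can at least see how the switches carve up the IOMMU groups before committing to passthrough. A sketch - the sysfs layout is standard, but whether the PLX switches isolate each slot is exactly the open question:

```shell
# Print each IOMMU group and the PCI devices inside it. Devices that
# share a group (e.g. everything behind one PLX switch, if the switch
# lacks ACS isolation) must be passed through to a VM together.
list_iommu_groups() {
    local root="${1:-/sys/kernel/iommu_groups}"
    for grp in "$root"/*/; do
        [ -d "$grp" ] || continue
        echo "Group $(basename "$grp"):"
        for dev in "$grp"devices/*; do
            [ -e "$dev" ] || continue
            echo "  $(basename "$dev")"
        done
    done
}

# Only meaningful on a host booted with intel_iommu=on.
if [ -d /sys/kernel/iommu_groups ]; then list_iommu_groups; fi
```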
Ian Cutress - Saturday, January 11, 2014 - link
This is why these motherboards support USB BIOS Flashback: the ability to flash a BIOS onto the motherboard without a CPU, DRAM or a VGA installed. It requires renaming the BIOS file, putting it onto a suitable memory stick and following ASUS' instructions. I've used it a couple of times before, and as long as you follow the instructions it is OK: people get frustrated when it doesn't seem to work and there is no feedback (file misnamed, USB not suitable, BIOS not copied properly, BIOS still in old mode requiring the old BIOS rather than the CAP BIOS).
mazzy80 - Sunday, January 12, 2014 - link
Hi,
I find the benches useless in a mobo review; all the mobos perform the same, of course, +-1/2%, so nobody cares.
In this case the only useful bench is measuring the impact of the PLX on graphics performance in games. It looks like a minimal impact, and this is good, but you can see that x16@PCIe3 vs x8@PCIe3 is of no use at the moment.
IMHO a mobo review should be about stability, quirks, and measuring the performance of the features.
In this case:
performance of the Marvell 930 and ASMedia SATA3 controllers vs. Intel;
performance of the ASMedia USB3 vs. Intel Z87;
stability with 64GB RAM and 3-way SLI.
I've had this board for a few days with an E5-1650 v2.
I don't like:
You can't run the CPU at stock Intel spec. If you enable Turbo, you get all cores always at turbo speed with the Vcore ramped up; this is no good for a WS board. Why is there no option to disable the MultiCore option?
Fewer sensor voltages to monitor than a board at this price level should have.
The IB-E support still isn't great. The default voltages are not correct for CPU PLL (1.8 instead of 1.7) or VTTIO (1.05 vs 1.00);
there's no way to respect the Intel VID of the CPU; there is only manual fixed or ASUS adaptive.
Like:
64GB rock solid at Intel specs for VSSA (0.95V)
Stable so far.
mapesdhs - Sunday, January 12, 2014 - link
If you want to run everything at their baseline defaults, I don't see the relevance
of a board like this in the first place. The whole point of this WS board is that it
pairs the oc'ing features of the ROG series with the kind of workstation features
normally found on pro boards. It's an excellent middleground. You'd really want
to run 64GB at minimal speed, etc.? I have 64GB @ 2133 just fine. Plus, in
reality the various voltages you refer to vary from one chip to another wrt their
ideal baseline values; there are no absolutes.
If you want to run stuff at 'stock intel spec', then buy a boring ordinary XEON board,
not one like this which is intended to allow one to do sooo much more.
Ian.
mazzy80 - Monday, January 13, 2014 - link
Well, I don't agree. I'd have preferred the option to run everything at spec, plus the option to 'switch gears' with overclocking. This is not a RIVE dressed up as a WS... and it shouldn't be.
If you want to overclock to hell, the ROG Extreme line is for you.
If you want a stable, classic workstation mobo, with a Xeon, with the option to tweak it if you wish - well, that is what I think the WS line should be, not a hybrid.
It lacks additional power for the CPU, for example... only one ordinary 8-pin and nothing else. I find it strange.
Even Z87 boards have additional power inputs, and Haswell tops out at 89W TDP from the start... the E5-2687W v2 is a 150W part at 3.4GHz... turbo @ 4GHz is over 200W.
If you buy an i7, why select this board? The Deluxe is there for you. 2/3-way SLI for gaming? The RIVE/MF is for you.
This board is for Xeons and ECC memory first, so why force the CPU to run overclocked at stock settings?
mapesdhs - Wednesday, January 15, 2014 - link
The ROG boards are for gamers. I didn't buy one for gaming, so your logic is flawed
from the outset. I built a system for AE and wanted RAID card compatibility, among
other things. Plus, the only ROG board I felt was any good was a lot more expensive.
Whatever you might think the WS should be doesn't matter. It is what it is, a blend of
workstation and top-end gamer board features, the best of both IMO. I don't understand
your concerns; after all, you don't *have* to oc on _any_ board. Leave everything at their
defaults and it'll be fine as-is. Me, I wanted 64GB RAM @ 2133 and a 6-core @ 4.7+,
with the ability to run four GPUs for CUDA, and RAID card support. The WS is perfect
for this. As for the CPU power issue, I don't see it as even being an issue. Where's your
evidence the WS in any way suffers from not having an extra power connector? The WS
will handle a 3930K @ 5.0 no problem.
Basically, your assumptions are wrong, and thus your conclusions are wrong. The Deluxe
was definitely not for me. The WS supports XEONs just as it supports i7s; saying it's "for"
one chip type or the other doesn't make sense.
For those who _are_ looking for a gaming board though then you do have a point, except
that the PCIe structure is better on the WS-E IMO.
Ian.
PS. And btw, how many CPU-Z submissions have you seen which have a ROG board
with a 3930K @ 4.7+ and max RAM at 2133+ with four GPUs? I've never seen one.
What I wanted to build is in a different league to gaming setups. Games tax just parts
of a system and often not much at that; AE hammers everything at times, gobbling 40GB
RAM no problem, hence the SSD for cache, etc.
viper131 - Sunday, January 12, 2014 - link
Question on the Dr. Power feature. Does this application show you the wattage usage on each separate PCIe lane? Also, did your GPU have a power feed direct from the PSU?
Thanks,
luwalo - Tuesday, January 14, 2014 - link
Does a purchase like this make sense in early 2014 when Haswell-E/X99 is coming out later this year? A $500 mobo, plus a $500 CPU, plus another few hundred for RAM, and you are spending a lot on a platform that will be replaced in under a year with something better. I just feel at this time that this platform is a bit long in the tooth; no native USB3, for instance.
I'm currently using SB/Z68 (:<) and I'm pretty comfortable waiting for Haswell-E/X99 at this point. It's only been in the last 6 months that I've come to desire the X79 feature set.
mapesdhs - Wednesday, January 15, 2014 - link
Makes perfect sense if you need to build something now. :D I've been talking to a movie
guy who's about to construct something based on this E revision. Similar to mine but better
GPUs, beginning with one 780Ti, expanding to 4 later. Only slight hitch is I've been trying
to convince him to use a Corsair H110 for the CPU instead of a big HS, the latter making
transport more difficult. Either way, it'll be a good AE system until he switches to a dual-socket
24-core XEON setup next year.
The only thing really missing from X79 (apart from a proper 8-core consumer chip option)
is more Intel SATA3 ports which don't suffer from the perils one can encounter with Marvell
ports. Both performance and reliability are better with the Intel ports, in some cases by a
huge margin. People harp on about USB3, but a lot of pro users I know rarely use it and if
they do need a USB link they're usually happy with USB2. Depends on the task though of
course, I'm sure some would find it important.
Ian.
almajnall - Wednesday, January 15, 2014 - link
hidzezik - Monday, February 10, 2014 - link
If you compare this funny mobo with a professional Supermicro, e.g. the X9SRL-F (7 PCIe slots for server use) or the X9SRA for workstation use, it looks like a toy for kids. ASUS uses a lot of tricks but it can't overcome the 40-lane limitation of a single CPU. The motherboard is too complicated. 64GB of RAM is the limit? Something is wrong with ASUS; Supermicro supports 512GB. If you go for a Xeon, choose Supermicro or Tyan.
mazzy80 - Friday, February 21, 2014 - link
Actually, the SM boards look more like a demo sample than a real board, with so few surface caps and MOSFETs compared to this board. :)
The reality is that they're not necessary to run the system at stock, given the wide voltage margins on Xeons.
BTW, the 64GB limit is about UDIMM vs RDIMM: only the C600 series supports both RDIMM and UDIMM; on X79 it is UDIMM only.
P.S.
SM rocks, you can't really go wrong with them for WS/Server rig.
Hale_Kyou - Tuesday, March 4, 2014 - link
The 64 GB limitation is the same in Intel Xeon and i7 CPUs, for buffered memory. 500-700 GB are supported with buffered memory on E5 Xeons only (E7s have hybrid controllers with external components). Buffered memory is MUCH slower than unbuffered due to penalties introduced by the buffer and its latent logic. Likewise, inter-CPU RAM access introduces big penalties on multi-socket Xeon systems. That's why sometimes (generally in HPC simulations) a single-socket system with unbuffered RAM is preferred.
Hale_Kyou - Tuesday, March 4, 2014 - link
There's a typo: of course the 64 GB limitation is for unbuffered RAM, both in Xeon and i7. The limitation is removed on the latest server-oriented Atoms.
EdB1 - Thursday, July 31, 2014 - link
Hi hidzezik, although your post is fairly old, I had a look at the two boards you mentioned, and they don't have the same PCIe lane expansion capabilities as the P9X79-E WS board, which has 72 PCIe lanes, due to 2 additional PLX chips, and can run 4 slots at x16 at full speed, or 2 x x16 and 5 x x8. The two boards you mentioned do have greater memory capacity though, i.e. 512GB ECC vs 64GB ECC or non-ECC.
So it really depends what you need this board to do. If you want to put 4 x16 graphics cards in at once and don't need more than the 64GB RAM limit, then this is the board to get; but if you do need more than 64GB then the ASUS should not be considered.
Hale_Kyou - Monday, March 3, 2014 - link
Liars! X79 DOES NOT support "full turbo mode (4.0 GHz) no matter the loading".
"One new feature called ASUS Ratio Boost is in the BIOS, which implements MultiCore Turbo for Xeon CPUs"
is a lie, according to ASUS itself. I don't know why they tricked people in this dirty way. Maybe Intel pays them for spreading weird rumors. But ASUS says that turbo bins cannot be reconfigured on their boards for Xeon CPUs!
Hale_Kyou - Monday, March 3, 2014 - link
P.S. Of course it works on i7; that's why they lied about Xeon, but "proved" it with screenshots only of an i7X running "all core full turbo".
ReneGQ - Thursday, March 13, 2014 - link
I recently purchased an ASUS motherboard and the problems started from day 1. The driver updates never work, the same for AI Suite III (there's a lot of updates for this model on the ASUS webpage). After 2 months I still can't install BitDefender because of a clock watchdog error.
ASUS technical support is the worst; mails come and go with no solution.
I will not recommend this brand to anyone. The brand has very good marketing but the product and the service are very disappointing.
EdB1 - Tuesday, July 29, 2014 - link
There is no driver support for Server 2012, so if you want to run it as a server using that OS then forget it. After all, ASUS has only had about 2-3 years to make the drivers for it. Using the Windows 8.x drivers doesn't work either; I tried to run them in admin mode and also compatibility mode without success.
EdB1 - Wednesday, July 30, 2014 - link
Just a followup to my previous post: there are drivers out there, but not on the ASUS site. I got some links for the chipsets on another forum site; just need to find one more.
lymang - Sunday, December 14, 2014 - link
I know this is old, but I have to say, having bought one recently, I did not make the connection on how few USB headers this mobo has. I wish I'd caught that it has an internal USB 2.0 connector instead of a header.