ASRock Rack C2750D4I and U-NAS NSC-800: A DIY File Server
by Ganesh T S on August 10, 2015 8:45 AM EST - Posted in
- NAS
- storage server
- Avoton
- ASRock Rack
- U-NAS
Introduction and Testing Methodology
Small businesses and power users in home settings have begun to face challenges in managing large amounts of data, generated either as part of day-to-day business operations or by backing up multimedia files from phones, tablets, TV recordings, and the like. One option is to use a dedicated COTS (commercial off-the-shelf) NAS from a vendor such as Synology or QNAP. Sometimes, however, it is necessary to have a file server that is much more flexible with respect to the programs that can run on it. This is where storage servers based on Microsoft's offerings, or units based on Linux distributions such as Red Hat and Ubuntu, come into play. These servers can either be bought as an appliance or assembled in DIY fashion. Today, we will be looking at a system built with the latter approach.
A DIY approach involves selecting an appropriate motherboard and a chassis to house it. Depending on the requirements and motherboard capabilities, one can opt for ECC or ordinary RAM. The platform choice and the number of drives dictate the PSU capacity. The file server being discussed today uses the ASRock C2750D4I mini-ITX motherboard in a U-NAS NSC-800 chassis. 8 GB of ECC DRAM and a 400 W PSU round out the barebones components. The table below lists the components of the system.
ASRock C2750D4I + U-NAS NSC-800 | |
Form Factor | 8-bay mini-tower / mITX motherboard |
Platform | Intel Avoton C2750 |
CPU Configuration | 8C/8T Silvermont x86 Cores 4 MB L2, 20W TDP 2.4 GHz (Turbo: 2.6 GHz) |
SoC SATA Ports | 2x SATA III (for two hot-swap bays) 4x SATA II (one used for the OS drive) |
Additional SATA Ports | Marvell SE9172 (2x) (for two hot-swap bays) Marvell SE9230 (4x) (for four hot-swap bays) |
I/O Ports | 3x USB 2.0 1x D-Sub 2x RJ-45 GbE LAN 1x RJ-45 IPMI LAN 1x COM1 Serial Port |
Expansion Slots | 1x PCIe 2.0 x8 (Unused) |
Memory | 2x 4GB DDR3-1333 ECC UDIMM Samsung M391B5273DH0-YH9 |
Data Drives | 8x OCZ Vector 128 GB |
Chassis Dimensions | 316mm x 254mm x 180mm |
Power Supply | 400W Internal PSU |
Diskless Price (when built) | USD 845 |
Evaluation Methodology
A file server can be used for multiple purposes, unlike a dedicated NAS. Evaluating a file server with our standard NAS testing methodology wouldn't do justice to the eventual use-cases and would tell the reader only part of the story. Hence, we adopt a hybrid approach in which the evaluation is divided into two parts - one as a standalone computing system, and the other as a storage device on a network.
In order to get an idea of the performance of the file server as a standalone computing system, we boot the unit with a USB key containing an Ubuntu-on-the-go installation. The drives in the bays are configured as an mdadm RAID-5 array. Selected benchmarks from the Phoronix Test Suite (i.e., those relevant to the usage of a system as a file server) are processed after ensuring that any test utilizing local storage (disk benchmarks, in particular) points to the mdadm RAID-5 array. Usage of the Phoronix Test Suite allows readers to compare the file server against multiple systems (even those that haven't been benchmarked by us).
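For readers wanting to replicate this part of the setup, the array creation might look like the sketch below. The device names, mount point, and chosen benchmark are assumptions for illustration, not our exact configuration; adjust them to match how the drives enumerate on your system.

```shell
# Assemble the eight bay drives (assumed here to enumerate as /dev/sda
# through /dev/sdh) into a single mdadm RAID-5 array.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]

# Put a filesystem on the array and mount it where benchmarks can reach it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid5
sudo mount /dev/md0 /mnt/raid5

# Run a representative Phoronix Test Suite disk benchmark. The suite's
# test-installation directory (set in its user-config.xml) should first be
# pointed at /mnt/raid5 so that disk tests actually exercise the array.
phoronix-test-suite benchmark pts/iozone
```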
As a storage device on a network, there are multiple ways to determine performance. One option would be to repeat all our NAS benchmarks on the system, but that would take too much time for a unit that we are already testing as a standalone computer. On the other hand, it is also important to look beyond numbers from artificial benchmarks and see how a system performs in terms of business metrics. SPEC SFS 2014 comes to our aid here. The benchmark tool is best suited for the evaluation of SANs, but it also helps us gauge the effectiveness of the file server as a storage node in a network. SPEC SFS 2014 was developed by the IOzone folks, and it evaluates the filer in specific application scenarios: the number of virtual machines that can be run off the filer, the number of simultaneous databases, the number of video streams that can be recorded simultaneously, and the number of simultaneous software builds that can be processed.
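For reference, SPEC SFS 2014 runs are driven by an sfs_rc control file. A minimal sketch for a VDA (video data acquisition) run might look like the fragment below; the client hostname, mount point, paths, and load values are placeholders, not the parameters used in our testing.

```shell
# sfs_rc sketch for a SPEC SFS 2014 VDA (video streaming) run.
# BENCHMARK selects the workload: SWBUILD, VDA, VDI or DATABASE.
BENCHMARK=VDA
# Starting load (number of streams), the increment per data point,
# and the number of load points to collect.
LOAD=1
INCR_LOAD=1
NUM_RUNS=10
# client:mountpoint pairs the load generators will exercise
# (hypothetical client name and share path).
CLIENT_MOUNTPOINTS=client1:/mnt/nas_share
USER=root
# Path to the benchmark's load-generator binary on the clients.
EXEC_PATH=/usr/local/bin/netmist
```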
Our SPEC SFS 2014 setup consists of an SMB share on the file server under test, connected over an Ethernet network to our NAS evaluation testbed outlined below. Further details about the SPEC SFS 2014 workloads will be provided in the appropriate section.
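On the filer side, exporting the mdadm array over SMB with Samba might look like the following sketch. The share name and path are illustrative, not our exact configuration, and the service-restart command may vary by distribution.

```shell
# Append an illustrative share definition to the Samba configuration.
sudo tee -a /etc/samba/smb.conf <<'EOF'
[benchmark]
   path = /mnt/raid5/share
   read only = no
   browseable = yes
EOF

# Create the exported directory and restart the Samba daemon so the
# new share becomes visible to the clients.
sudo mkdir -p /mnt/raid5/share
sudo systemctl restart smbd
```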
AnandTech NAS Testbed Configuration | |
Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
CPU | 2 x Intel Xeon E5-2630L |
Coolers | 2 x Dynatron R17 |
Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
OS Drive | OCZ Technology Vertex 4 128GB |
Secondary Drive | OCZ Technology Vertex 4 128GB |
Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
Chassis | SilverStoneTek Raven RV03 |
PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
OS | Windows Server 2008 R2 |
Network Switch | Netgear ProSafe GSM7352S-200 |
The above testbed runs 10 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 10 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
48 Comments
xicaque - Monday, November 23, 2015 - link
Can you elaborate on redundant power supplies? Please? What is their purpose?
nxsfan - Tuesday, August 11, 2015 - link
I have the ASRock C2750D4I + Silverstone DS380, with 8x 3.5" HDDs and one SSD (& 16GB ECC). Your CPU and MB temps seem high, particularly when (if I understand correctly) you populated the U-NAS with SSDs. If lm-sensors is correct, my CPU cores idle around 25 C and under peak load get to 50 C. My MB sits around 41 C. My HDDs range from ~50 C (TOSHIBA MD04ACA500) to ~37 C (WDC WD40EFRX). "Peak" power consumption logged in the last month (obtained from the UPS, so it includes a 24-port switch) was 60 W. Idle is 41 W.
The hardware itself is great. I virtualize with KVM and the hardware handles multiple VMs plus multiple realtime 1080p H.264 transcodes with aplomb (VC-1 not so much). File transfers saturate my gigabit network, but I am not a power user (i.e. typically only 2-3 active clients).
bill.rookard - Tuesday, August 11, 2015 - link
I really like this unit. Compact. Flexible. Well thought out. Best of all, -affordable-. Putting together a budget media server just became much easier. Now to just find a good ITX-based mobo with enough SATA ports to handle the 8 bays...
KateH - Tuesday, August 11, 2015 - link
Another good turnkey solution from ASRock, but I still think they missed a golden opportunity by not making an "ASRack" brand for their NAS units ;)
e1jones - Wednesday, August 12, 2015 - link
Would be great for a Xeon D-15*0 board, but most of the ones I've seen so far only have 6 SATA ports. A little more horsepower to virtualize and run CPU-intensive programs.
akula2 - Monday, August 17, 2015 - link
>A file server can be used for multiple purposes, unlike a dedicated NAS.
Well, I paused reading right there! What does that mean? You should improve on that sentence; it could be quite confusing to novice members who aspire to buy/build storage systems.
Next, I don't use Windows on any Servers. I never recommend that OS to anyone either, especially when the data is sensitive be it from business or personal perspective.
I use couple of NAS Servers based on OpenIndiana (Solaris based) and BSD OSes. ZFS can be great if one understands its design goals and philosophy.
I don't use FreeNAS or NAS boxes such as those from Synology et al. I build the hardware from scratch for greater choice and cost savings. Currently, I'm in the Alpha stage of building a large NAS server (200+ TB) based on ZoL (ZFS on Linux). It will take at least two more months of effort to integrate it into my company networks; a few hundred associates based in three nations work closely together to augment efficiency and productivity.
Yeah, a few more things to share:
1) Whatever I plan, I look at the power consumption factor (green), especially for power-hungry gear such as servers, workstations, the hybrid cluster, NAS servers, etc. Hence, I allocate more funds to address the power demand by deploying solar solutions wherever viable, in order to save some good money in the long run.
2) I mostly go for Hitachi SAS drives, with SATA III making up about 20% (enterprise segment).
3) ECC memory is mandatory. No compromise on this one to save some dough.
4) Moved away from cloud service providers by building my private cloud (NAS based) to protect my employees' privacy. All employee data should remain in the respective nations. Period.
GuizmoPhil - Friday, August 21, 2015 - link
I built a new server using their 4-bay model (NSC-400) last year. Extremely satisfied. Here are the pictures: https://picasaweb.google.com/117887570503925809876...
Below are the specs:
CPU: Intel Core i3-4130T
CPU cooler: Thermolab ITX30 (not shown on the pictures, was upgraded after)
MOBO: ASUS H87i-PLUS
RAM: Crucial Ballistix Tactical Low Profile 1.35V XMP 8-8-8-24 (1x4GB)
SSD: Intel 320 series 80GB SATA 2.5"
HDD: 4x HGST 4TB CoolSpin 3.5"
FAN: Gelid 120mm sleeve silent fan (came with the unit)
PSU: Seasonic SS-350M1U
CASE: U-NAS NSC-400
OS: LinuxMint 17.1 x64 (basically ubuntu 14.04 lts, but hassle-free)
Iozone_guy - Wednesday, September 2, 2015 - link
I'm struggling to understand the test configuration. There seems to be a disconnect in the results. Almost all of the results have an average latency that looks like a physical spindle, and yet the storage is all SSDs. How can the latency be so high? Was there some problem with the setup, such that it wasn't measuring the SSD storage but something else? Could the tester post the sfs_rc file and the sfslog.* and sfsc*.log files, so we can try to sort out what happened?