Background and Internals
We do a quick round of testing on Intel’s first contender in the PCIe SSD arena.
A little over two weeks back, Intel briefed me on their new SSD 910 Series PCIe SSD. Since that day I’ve been patiently awaiting its arrival, which happened just a few short hours ago. I’ve burned the midnight oil to get some greater detail out there. Before we get into the goods, here’s a quick recap of the specs for the 800GB (or 400GB) model:
- PCIe 2.0 x8 LSI SAS2008 ('Falcon') HBA driving 4 (or 2) Hitachi Ultrastar SAS SSD controllers, each in turn driving 200GB of IMFT 25nm High Endurance Technology flash memory, all on a triple-stacked, half-height PCB.
- 400GB model yields (r/w) 1GB/s / 750MB/s sequential and 90,000 / 38,000 4k IOPS.
- 800GB model yields (r/w) 2GB/s / 1GB/s sequential and 180,000 / 75,000 4k IOPS.
- 800GB model in Performance Mode yields (r/w) 2GB/s / 1.5GB/s sequential and 180,000 / 75,000 4k IOPS.
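As a quick sanity check on the 4K IOPS figures above, here’s a back-of-envelope conversion (a Python sketch using only the spec numbers quoted above) from IOPS to equivalent small-block throughput. The sequential ratings use much larger transfer sizes, which is why they come out far higher:

```python
# Back-of-envelope: convert the quoted 4K random IOPS ratings into throughput.
# Spec numbers are taken from the list above; nothing here is measured.

BLOCK_SIZE = 4 * 1024  # 4 KiB random transfers


def iops_to_mbps(iops: int, block_size: int = BLOCK_SIZE) -> float:
    """Throughput in MB/s (decimal) implied by a given IOPS rate."""
    return iops * block_size / 1_000_000


for model, read_iops, write_iops in [("400GB", 90_000, 38_000),
                                     ("800GB", 180_000, 75_000)]:
    print(f"{model}: ~{iops_to_mbps(read_iops):.0f} MB/s 4K read, "
          f"~{iops_to_mbps(write_iops):.0f} MB/s 4K write")
```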
"Performance Mode" is a feature that can be enabled through the Intel Data Center Tool Software. This feature is only possible on the 800GB model, but not for the reason you might think. The 400GB model is *always* in Performance Mode, since it can go full speed without drawing greater than the standard PCIe 25W power specification. The 800GB model has twice the components to drive yet it stays below the 25W limit so long as it is in its Default Mode. Switching the 800GB model to Performance Mode increases that draw to 38W (the initial press briefing stated 28W, which appears to have been a typo). Note that this increased draw is only seen during writes.
Ok, now into the goodies:
Behold the 800GB SSD 910!
The four capacitors pictured are aluminum electrolytic, "V" Type "FK" Series. Each is rated at 330µF and appears to be routed to its respective power converter circuit, which in turn drives one of the four 200GB SAS SSD units.
A side profile of the 910 shows the stacked layout, which I only got to look at long enough to take this photo before the screwdriver came out:
The top two PCBs contain nothing but flash, while the bottom PCB holds four SAS SSD controllers and the LSI SAS HBA (hidden under the heatsink, which I opted not to remove, considering I hadn’t even fired up the 910 for the first time yet):
Each SAS controller gets a fair chunk of DDR RAM (bottom right), while the LSI HBA gets a little to itself as well (center left).
The connectors mating each flash memory PCB to the main board are fairly stout:
And finally, the backs of the three PCBs for your viewing pleasure. Power converters and additional RAM for the controllers line the bottom of the main board, while the large chip in the center holds the firmware for the LSI SAS HBA.
Continue on page two for more pictures and preliminary benchmarks.
It’s true that on a 1155 mainboard/CPU combo you should keep all available PCIe bandwidth for the 910, but over at The SSD Review they were testing it on an X79 with 40 PCIe lanes, not 16.
Re-tested using the same QD=64 and saw the same result. Updated the piece with that analysis. Thanks!
You are comparing your 4k random write speeds to their posted 4k random read speeds. Their read results are actually higher than your posted results, and they did not post their 4k write results.
You’re absolutely correct! I’ve fixed this just now.
When you are comparing the LUN performance, are you using these as individual volumes, or are they in a RAID configuration? So with 4 LUNs, is that a 4-drive RAID 0, or 4 separate volumes being accessed simultaneously?
The ATTO run used standard Windows RAID-0 across the 4 LUNs combined. The IOMeter run accessed the tested LUNs simultaneously in raw form. The latter was done to properly evaluate the IOPS scaling of the LSI HBA without adding variables caused by the Windows RAID layer.
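For anyone curious what "accessing the LUNs simultaneously in raw form" looks like in practice, here’s a rough sketch (not the actual IOMeter setup; the device paths, run time, and span are hypothetical placeholders). It just spins up one worker per raw LUN and counts aggregate 4K reads; unlike IOMeter it does nothing about OS caching or deeper queue depths, so treat it as an illustration of the structure rather than a benchmark:

```python
# Rough sketch of the "one worker per raw LUN, accessed simultaneously" idea.
# Device paths below are hypothetical placeholders (opening raw disks needs
# admin rights); Iometer also does unbuffered, queued I/O, which this does not.
import os
import random
import threading
import time

LUNS = [r"\\.\PhysicalDrive1", r"\\.\PhysicalDrive2"]  # placeholder raw LUNs
BLOCK = 4096                 # 4K transfers
SPAN = 8 * 1024 ** 3         # confine random offsets to the first 8 GiB
RUN_SECONDS = 10

counts = [0] * len(LUNS)     # completed I/Os per worker


def worker(idx: int, path: str) -> None:
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    deadline = time.time() + RUN_SECONDS
    try:
        while time.time() < deadline:
            offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4K-aligned
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, BLOCK)
            counts[idx] += 1
    finally:
        os.close(fd)


threads = [threading.Thread(target=worker, args=(i, p)) for i, p in enumerate(LUNS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"~{sum(counts) / RUN_SECONDS:.0f} aggregate 4K read IOPS "
      f"across {len(LUNS)} LUNs")
```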
Therein lies the issue with the max latency. The results that we posted on thessdreview.com were from a RAID 0 of the four volumes.
Under tests similar to yours, with the same parameters, the results come in at 29310.72 IOPS, 895.74 MB/s (binary), an average response time of 1.116263 ms, and a maximum response time of 2.170227 ms. CPU utilization is 11.39%.
The higher maximum latency reported from RAID 0 is indicative of typical RAID overhead with Windows. These were cursory benchmarks, run before the SSD went into an automated test regimen.
Of note: the maximum latency is the single I/O that requires the longest time to complete. If there is a correlation between a very high maximum latency and an overall higher average latency, that can be indicative of a device/host issue. Even with the RAID result kicking out an appreciably higher maximum latency, that result would have to come in conjunction with higher overall latency to indicate a serious problem.
The SLI GPUs are rarely used during bench sessions, unless we are doing 3D benchmarks. They are on an entirely separate loop, allowing them to be used or removed easily. During all testing thus far, we have used a 9800 GT as the video card.
No worries, the X79 Patsburg (C600) chipset is designed for servers and high-end workstations. Plenty of bandwidth there.
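On the maximum vs. average latency point above: a single slow I/O can inflate the maximum dramatically while barely moving the average, which is why the two numbers have to be read together. A tiny illustration with made-up latency samples (purely hypothetical values, not anyone’s test data):

```python
# One straggler I/O inflates the maximum latency while leaving the average
# essentially unchanged. Latency samples are fabricated for illustration only.
from statistics import mean

baseline = [0.9 + 0.01 * (i % 20) for i in range(10_000)]  # ~0.9-1.1 ms I/Os
with_outlier = baseline + [25.0]                           # one 25 ms straggler

for label, samples in [("baseline", baseline), ("with outlier", with_outlier)]:
    print(f"{label:13} avg={mean(samples):.3f} ms  max={max(samples):.2f} ms")
```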
Actually, if you were testing a single RAID volume with 4 workers and QD=64 each, you were really testing with QD=256, which might have upped the latency. From the Iometer User’s Guide:
—
It was with 16 QD for each worker; it is elementary that it adds up to 64. We stated the QD as 64 because 16 x 4 = 64. When listing results, they are typically listed as the overall QD, with the number of workers noted.
If there were an overall issue with the latency, it would show in the average latency measurement, which is actually slightly lower than your results. Did you receive the email with the Iometer results that I sent you?
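For reference, the arithmetic both sides are working from: Iometer’s "# of Outstanding I/Os" setting is per worker, so the effective overall queue depth is workers multiplied by per-worker QD. A one-liner sketch:

```python
# Effective overall queue depth = number of workers x outstanding I/Os per worker.
def overall_qd(workers: int, per_worker_qd: int) -> int:
    return workers * per_worker_qd

print(overall_qd(4, 16))  # 64  -- the configuration described above
print(overall_qd(4, 64))  # 256 -- what QD=64 *per worker* would have meant
```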
Our full review is up; you should give it a glance, Allyn: http://thessdreview.com/our-reviews/intel-910-pcie-ssd-review-amazing-performance-results-in-both-400gb-and-800gb-configurations/