Random Performance – Iometer (IOPS/latency), YAPT (random)
We are trying something different here. Folks tend to not like clicking through pages and pages of benchmarks, so I'm going to weed out those that show little to no delta across different units (PCMark). I'm also going to group results by the performance trait tested. Here are the random access results:
Iometer:
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry. Intel later discontinued work on Iometer and passed it on to the Open Source Development Lab (OSDL). In November 2001, the code was released on SourceForge.net. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Iometer – IOPS
When I first created these charts, I spent a few minutes making sure I had not entered the wrong data. This is just downright embarrassing for the rest of the field. Sure, the P3700 did the same thing to other SSDs back when we tested it, but that was nearly a year ago. I figured nine months of newer PCIe SSD releases would at least close the gap a little bit, but I was clearly mistaken. Even at QD=1, the extremely low latencies result in nearly double the IOPS of all competing units. To put it simply, this means that any single request issued to the SSD 750 will complete *twice as fast* as anything else out there, and that's with competing PCIe SSDs included in the results!
To make matters worse, I should point out that the Iometer configuration used for these tests (unmodified here to retain equivalent results) pegs its worker thread at just over 200,000 IOPS. This means the SSD 750 would go even higher in at least three of these charts – if it wasn't so busy pegging a CPU core! In fairness, any single app that was applying heavy IO to an SSD would likely saturate its storage handling thread at a similar maximum IOPS. Moral of the story: If you want to peg an SSD 750, you had better be running a whole lot of, well, everything you have installed on your system. All at once.
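To see why a single worker thread tops out, consider that at QD=1 each request must fully complete before the next is issued, so one thread's IOPS is bounded by per-request latency plus the CPU overhead of issuing each I/O. A minimal sketch of such a single-threaded QD=1 random-read loop (the function name and parameters are my own, not from Iometer):

```python
import os
import random
import time

def measure_qd1_iops(path, io_size=4096, ops=1000):
    """Issue 4K-aligned random reads one at a time (queue depth 1)
    from a single thread and report IOPS. Because each read must
    complete before the next is issued, IOPS here is 1 / mean latency,
    and one CPU core's issue rate becomes the ceiling."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(ops):
            offset = random.randrange(0, size - io_size)
            offset -= offset % io_size  # keep reads 4K-aligned
            os.pread(fd, io_size, offset)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return ops / elapsed
```

Note this sketch reads through the OS page cache; a real benchmark like Iometer bypasses caching, but the single-thread bottleneck it illustrates is the same.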
Iometer – Average Transaction Time
For SSD reviews, HDD results are removed here as they throw the scale too far to tell any meaningful difference in the results. Queue depth has been reduced to 8 to further clarify the results (especially as typical consumer workloads rarely exceed QD=8). Some notes for interpreting results:
- Times measured at QD=1 can double as a value of seek time (in HDD terms, that is).
- A 'flatter' line means that drive will scale better and ramp up its IOPS when hit with multiple requests simultaneously, especially if that line falls lower than competing units.
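The two notes above follow from Little's Law: sustained IOPS equals outstanding requests divided by average transaction time. A quick sketch (function name is mine, for illustration only):

```python
def iops_from_latency(avg_latency_ms, queue_depth=1):
    """Little's Law applied to storage: sustained IOPS equals the
    number of outstanding requests divided by the average time each
    request takes to complete."""
    return queue_depth / (avg_latency_ms / 1000.0)

# A drive averaging 0.1 ms per transaction at QD=1 sustains ~10,000 IOPS.
# If its latency line stays flat out to QD=8 (the 'flatter line' case),
# IOPS scale linearly with queue depth, to ~80,000.
```

This is why a flatter, lower latency line translates directly into better IOPS scaling under multiple simultaneous requests.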
I normally don't comment here, but just look at how much lower the IO latencies are for the SSD 750. NVMe absolutely has its perks, and Intel's 18-channel controller is certainly taking full advantage of it.
YAPT (random)
YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers and was incredibly difficult to locate as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means though, as the benchmark is quite useful. It creates a test file of about 100 MB in size and runs both random and sequential read and write tests with it while changing the data I/O size in the process. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
This test has no regard for 4k alignment, and it brings many SSDs to their knees rather quickly. As we mentioned earlier, the SSD 750 is heavily optimized for 4k aligned writes, which explains the inconsistent results in this test.
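The penalty comes from how a misaligned write straddles flash pages: a write that does not start on a 4K boundary touches one extra page, and each partially covered page forces a read-modify-write. A small sketch of the page math (names are mine, and 4K is an assumed page size for illustration):

```python
PAGE = 4096  # assumed 4K page/sector size

def pages_touched(offset, length, page=PAGE):
    """Return how many pages a write at (offset, length) spans.
    Any page that is only partially covered must be read, modified,
    and rewritten, which is where misaligned I/O loses performance."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

# An aligned 4K write touches exactly one page: pages_touched(0, 4096) == 1
# The same write shifted by 512 bytes straddles two: pages_touched(512, 4096) == 2
```

So a stream of 4K writes at misaligned offsets roughly doubles the number of pages the drive must handle, and each partial page adds a read before the write.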
If anyone is interested, here is a review with photos of a Supermicro server with room for multiple 2.5″ NVMe SSDs:
http://www.tomsitpro.com/articles/supermicro-nvme-storage,2-878.html
Start reading at:
“NVMe Hot Swap Capabilities”
e.g.:
“NVMe has made a massive impact in the server space, specifically for applications where low latency and high queue depths are the norm. Applications such as databases and real-time analytics are seeing massive speed-ups from the technology.”
“… the PCIe x4 2.5″ form factor drives are made to fit into similar spaces as their SAS/SATA counterparts.”
“One can see that these fit into standard Supermicro 2.5″ to 3.5″ converters so a major aspect of these drives is fitting into familiar infrastructure. These drives can be inserted and removed similar to traditional disks. Modern OSes are able to handle these drives and use them in hot swap applications such as RAID arrays.”
And so, as many prosumers have already done with 2 x 6G SSDs, we can reach your preferred capacity of 800GB with 2 x 400GB 2.5″ Intel 750 SSDs in RAID 0.
Now, where do we find a host controller with at least 2 x SFF-8639 ports?
Am I dreaming (again)?
MRFS
FOUND ONE:
http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM2KK5178&cm_re=NVMe-_-9SIA5EM2KK5178-_-Product
Supermicro AOC-SLG3-2E4R NVMe AOC card, Standard LP, 2 internal NVMe ports, x4 per port, Gen-3
Only $150 at Newegg.
ftp://ftp.seagate.com/sff/SFF-8639.PDF
NOTE the roadmap implied by “24 Gb/s”
There are multiple generations based on performance:
12 Gb/s SFF-8637
24 Gb/s SFF-8638
MSI Preparing SFF-8639 Adapter Card for Motherboards
http://www.kitguru.net/components/motherboard/luke-hill/msi-preparing-sff-8639-adapter-card-for-motherboards/
“There is no (measurable) performance difference between a four-lane PCIe Gen 3 link routed via a PCIe expansion slot or an SFF-8639 connector. The biggest difference is compatibility; many small form factor and multi-VGA systems simply cannot surrender a PCIe slot to anything other than a graphics card, so housing an ultra-fast SSD elsewhere may be the only viable option.”
I want one!
Great SSD
JJ at ASUS says that heat is a factor with the 2.5″ Intel 750:
https://www.youtube.com/watch?v=YLqL2g13V-U
I wonder if Icy Dock is preparing a 5.25″ enclosure for 4 x Intel 750s?
The Icy Dock model fits 7, 9.5, 12.5, and 15 mm height drives:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994095&cm_re=Icy_Dock_5.25-_-17-994-095-_-Product
For comparison purposes, we got these numbers from an inexpensive Highpoint RocketRAID x8 model 2720SGL PCIe RAID controller:
ATTO on 4 x Samsung 128GB model 840 Pro SSDs:
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.2.bmp
ATTO on 1 x Samsung 256GB 850 Pro SSD:
http://supremelaw.org/systems/io.tests/1xSamsung.850.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.1.bmp
We are happy with these numbers, because the bulk of our I/O here is batch database updates, e.g. drive images written to all data partitions, XCOPY updates to a large HTML database, etc.
XCOPY also works fine over a LAN e.g.:
xcopy folder X:\folder /s/e/v/d
We’ve also experimented with OS hosting on the same RAID controller, using 4 x Samsung SSDs and also 4 x Intel SSDs: the 4 x Samsung 840 Pro on a PCIe 2.0 motherboard (ASUS P5Q Deluxe) are VERY SNAPPY, particularly with an overclocked quad-core Intel CPU.
MRFS
p.s. JJ reports “up to 1,200 MB/s [sequential] WRITE performance” (at 2:00 on the counter).
MRFS
Nice! I want one for my new build.
I WANT IT(^_^)
lol
That would make mine the fastest PC in OZ!!!! LOL
Looks interesting. I hope the price will drop soon though.