Sequential Performance – HDTach, HDTune, File Copy, YAPT (sequential)
We have shifted over to combining our results into two groupings for consumer reviews. First up is sequential performance:
HDTach:
HD Tach will test the sequential read, random access and interface burst speeds of your attached storage device (hard drive, flash drive, removable drive, etc). All drive technologies such as SCSI, IDE/ATA, 1394, USB, SATA and RAID are supported. HDTach tests sequential performance by issuing reads in a manner that was optimized more for HDD access, but this unique method has proven useful in evaluating the sequential response time of SSDs. The accesses are relatively small in size (2k), and are issued with a single working thread (QD=1). The end result is that devices with relatively high IO latency will not reach their ultimate rated speed.
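To illustrate why that access pattern penalizes high-latency devices, here is a minimal Python sketch of the same idea: single-threaded 2 KB reads issued one after another, with throughput derived from the elapsed time. The target path and transfer counts are placeholders, not part of HDTach (which reads the raw device and bypasses the OS cache), so treat the output as illustrative only.

import time

READ_SIZE = 2 * 1024                 # 2 KB per request, similar in spirit to HDTach
TOTAL_BYTES = 256 * 1024 ** 2        # sample 256 MB of the target
TARGET = r"D:\large_test_file.bin"   # placeholder: any large file on the drive under test

with open(TARGET, "rb", buffering=0) as f:
    start = time.perf_counter()
    done = 0
    while done < TOTAL_BYTES:
        chunk = f.read(READ_SIZE)    # one outstanding request at a time (QD=1)
        if not chunk:
            break
        done += len(chunk)
    elapsed = time.perf_counter() - start

print(f"{done / elapsed / 1e6:.1f} MB/s at QD=1 with {READ_SIZE}-byte reads")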
Looks like Intel just punched everyone else in the gut to start off this fight. HDTach is not an ideal sequential read test for an SSD, as it performs very small read commands that are issued sequentially (rather than queued). Despite this, the SSD 750 turns in the highest figures we've ever seen in this test.
HDTune:
HDTune covers a similar set of features to HDTach, but with a different access pattern, which provides us with an additional set of benchmark numbers to compare between storage configurations. CPU utilization has proven negligible with modern processing horsepower, and is no longer included. Additionally, we do not include write performance, as HDTune's write access pattern does not play nicely with most SSDs we have tested it on.
We're not sure why HDTune has become as inconsistent as it has with these faster SSDs, but it appears not to mesh well with NVMe devices. Speeds are still good, but we know the SSD 750 is much more capable than what this test reports. We include it here mostly as a data point, but we will be discontinuing use of this test in the near future.
PCPer File Copy Test:
Our custom PCPer-FC test does some fairly simple file creation and copy routines in order to test the storage system for speed. The script creates a set of files of varying sizes, times the creation process, then copies the same files to another partition on the same hard drive and times the copy process. There are four file sizes that we used to try and find any strong or weak points in the hardware: 10 files @ 1000 MB each, 100 files @ 100 MB each, 500 files @ 10 MB each and 1000 files @ 1 MB each.
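The logic is simple enough to sketch in a few lines of Python. The stand-in below is not the actual PCPer script; the folder paths and the 100 x 100 MB run shown are placeholder choices. It times a creation pass and then a copy pass to a second partition:

import os, shutil, time

SRC = r"D:\pcper_src"      # placeholder source folder on the drive under test
DST = r"E:\pcper_dst"      # placeholder destination partition
COUNT, SIZE_MB = 100, 100  # e.g. the 100 x 100 MB run

os.makedirs(SRC, exist_ok=True)
os.makedirs(DST, exist_ok=True)
payload = os.urandom(1024 * 1024)  # 1 MB of incompressible data, reused per write

# Time the file-creation pass
start = time.perf_counter()
for i in range(COUNT):
    with open(os.path.join(SRC, f"file_{i}.bin"), "wb") as f:
        for _ in range(SIZE_MB):
            f.write(payload)
create_secs = time.perf_counter() - start

# Time the copy pass to the other partition
start = time.perf_counter()
for name in os.listdir(SRC):
    shutil.copy2(os.path.join(SRC, name), DST)
copy_secs = time.perf_counter() - start

print(f"create: {create_secs:.1f} s, copy: {copy_secs:.1f} s")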
Intel cleans house on file creations right up until the 1000 x 1 MB run, where it slows a bit. This is because its enterprise pedigree heavily optimizes for 4k and larger random access, and writing very small files with our tool means a lot of <4k file table updates, which drags things down slightly.
Overall, this is the fastest we've ever seen the copy test complete. Watching the batch file run was like listing a large directory at the command line; it simply flew by. Since writes during a copy operation can be cached by Windows, the small-file creation slowdown we saw above no longer had an impact on the SSD 750.
YAPT:
YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers and was incredibly difficult to locate as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means though, as the benchmark is quite useful. It creates a test file of about 100 MB in size and runs both random and sequential read and write tests with it while changing the data I/O size in the process. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
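As a rough stand-in (not YAPT itself; the file path, the deliberately odd starting offset and the size list are assumptions for illustration), the Python sketch below sweeps the write size across a ~100 MB working file from a misaligned offset, which is what forces read-modify-write behavior on 4K-sector media:

import os, time

TEST_FILE = r"D:\yapt_like_test.bin"  # placeholder path on the drive under test
FILE_SIZE = 100 * 1024 * 1024         # ~100 MB working file, as YAPT uses
OFFSET = 529                          # deliberately misaligned starting offset (assumption)

# Create the working file once
with open(TEST_FILE, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

for io_kb in (4, 16, 64, 256):                  # transfer size swept per pass
    block = os.urandom(io_kb * 1024)
    with open(TEST_FILE, "r+b", buffering=0) as f:
        f.seek(OFFSET)
        start = time.perf_counter()
        written = 0
        while written + len(block) <= FILE_SIZE - OFFSET:
            f.write(block)                      # misaligned writes straddle 4K sectors,
            written += len(block)               # triggering read-modify-write cycles
        os.fsync(f.fileno())                    # flush the OS write cache before timing stops
        elapsed = time.perf_counter() - start
    print(f"{io_kb:>4} KB writes: {written / elapsed / 1e6:.0f} MB/s")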
YAPT has always done a great job of maxing out SSDs, and the same applies here. The SSD 750 takes the crown with nearly 2.6 GB/sec reads…
…but falls short of the Phoenix Blade on sequential writes. This is because Intel was more conservative with write performance in the interest of lower per-IO latency, which means the overall IOPS of the SSD 750 is higher in mixed workloads, as it is not saturating itself with writes.
If anyone is interested, here is a review with photos of a Supermicro server with room for multiple 2.5″ NVMe SSDs:
http://www.tomsitpro.com/articles/supermicro-nvme-storage,2-878.html
Start reading at:
“NVMe Hot Swap Capabilities”
e.g.:
“NVMe has made a massive impact in the server space, specifically for applications where low latency and high queue depths are the norm. Applications such as databases and real-time analytics are seeing massive speed-ups from the technology.”
“… the PCIe x4 2.5″ form factor drives are made to fit into similar spaces as their SAS/SATA counterparts.”
“One can see that these fit into standard Supermicro 2.5″ to 3.5″ converters so a major aspect of these drives is fitting into familiar infrastructure. These drives can be inserted and removed similar to traditional disks. Modern OSes are able to handle these drives and use them in hot swap applications such as RAID arrays.”
And so, as many prosumers have already done with 2 x 6G SSDs, we can reach your preferred capacity of 800GB with 2 x 400GB 2.5″ Intel 750 SSDs in RAID 0.
Now, where do we find a host controller with at least 2 x SFF-8639 ports?
Am I dreaming (again)?
MRFS
FOUND ONE:
http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM2KK5178&cm_re=NVMe-_-9SIA5EM2KK5178-_-Product
Supermicro AOC-SLG3-2E4R NVMe AOC card, Standard LP, 2 internal NVMe ports, x4 per port, Gen-3
Only $150 at Newegg.
ftp://ftp.seagate.com/sff/SFF-8639.PDF
NOTE the roadmap implied by “24 Gb/s”
There are multiple generations in use, based on performance:
12 Gb/s SFF-8637
24 Gb/s SFF-8638
MSI Preparing SFF-8639 Adapter Card for Motherboards
http://www.kitguru.net/components/motherboard/luke-hill/msi-preparing-sff-8639-adapter-card-for-motherboards/
“There is no (measurable) performance difference between a four-lane PCIe Gen 3 link routed via a PCIe expansion slot or an SFF-8639 connector. The biggest difference is compatibility; many small form factor and multi-VGA systems simply cannot surrender a PCIe slot to anything other than a graphics card, so housing an ultra-fast SSD elsewhere may be the only viable option.”
I want one!
Great SSD
JJ at ASUS says that heat is a factor with the 2.5″ Intel 750:
https://www.youtube.com/watch?v=YLqL2g13V-U
I wonder if Icy Dock is preparing a 5.25″ enclosure for 4 x Intel 750s?
The Icy Dock model fits 7, 9.5, 12.5 and 15 mm height drives:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994095&cm_re=Icy_Dock_5.25-_-17-994-095-_-Product
For comparison purposes, we got these numbers from an inexpensive HighPoint RocketRAID 2720SGL, a PCIe x8 RAID controller:
ATTO on 4 x Samsung 128GB model 840 Pro SSDs:
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.2.bmp
ATTO on 1 x Samsung 256GB 850 Pro SSD:
http://supremelaw.org/systems/io.tests/1xSamsung.850.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.1.bmp
We are happy with these numbers, because the bulk of our I/O here is batch database updates, e.g. drive images written to all data partitions, XCOPY updates to a large HTML database, etc.
XCOPY also works fine over a LAN, e.g.:
xcopy folder X:\folder /s /e /v /d
We’ve also experimented with OS hosting on the same RAID controller, using 4 x Samsung SSDs and also 4 x Intel SSDs: the 4 x Samsung 840 Pro on a PCIe 2.0 motherboard (ASUS P5Q Deluxe) are VERY SNAPPY, particularly with an overclocked quad-core Intel CPU.
MRFS
p.s. JJ reports “up to 1,200 MB/s [sequential] WRITE performance” (at 2:00 on the counter).
MRFS
Nice! I want one for my new build.
I WANT IT(^_^)
lol
That would make mine the fastest PC in OZ!!!! LOL
Looks interesting. I hope the price will drop soon though.