Client Performance (including RAID testing!)

We pushed all cached and uncached solutions through our exclusive custom client suite. I also snuck in a few surprises:

Random performance results are quite interesting. I've highlighted and labeled read performance here, as it is more relevant to the discussion. Typical NAND-based SSDs have higher low-QD write performance because they can cache incoming writes, acknowledge them to the host, and let the NAND handle them afterward, but their reads are still limited by the response time of the NAND flash memory. This is why we see the HDD + Optane random reads (blue bars) scoring more than twice as high as the Samsung 850 and even the 960 EVO SSDs added for comparison.
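To put rough numbers behind that low-queue-depth behavior, here is a quick sketch of how per-read latency caps random read IOPS at QD1. The latency figures below are illustrative assumptions, not measurements from our charts.

    # At low queue depth, random read IOPS are capped by per-request latency,
    # since each read must complete before the next dependent one is issued:
    #   IOPS ~= queue_depth / average_latency
    def qd_limited_iops(avg_latency_us, queue_depth=1):
        """Upper bound on random read IOPS for a given latency and queue depth."""
        return queue_depth / (avg_latency_us * 1e-6)

    # Illustrative latencies only (not measured values from this review):
    media = {
        "3D XPoint (~10 us assumed)": 10,
        "TLC NAND (~80 us assumed)": 80,
        "7200 RPM HDD (~8 ms assumed)": 8000,
    }
    for name, lat_us in media.items():
        print(f"{name:28s} QD1 ceiling ~ {qd_limited_iops(lat_us):>9,.0f} IOPS")

Writes dodge this ceiling because the controller can acknowledge them out of its cache, which is exactly the asymmetry described above.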

I also went a bit crazy and tested the Optane 32GB SSD solo. You might not want to try to cram an OS partition onto a 32GB SSD these days, but it might fit onto a 64GB SSD, which is why I also included a RAID-0 stripe of a pair of them on a Z170 motherboard! While I was at it, I also ran our client suite on the P4800X. Note that the single 32GB Optane part is actually quicker than the enterprise variant at these lower queue depths.

Taking a closer look at read latencies, we can see why Optane achieves such speed gains here. I had to crop the chart scale significantly because of the HDD (note the labeled value).

Sequential performance seems right about where it should be: reads are cached, while writes occur mostly at the uncached speed. The crazy Optane Memory RAID scales appropriately, doubling performance over its throughput-limited single-drive variant.
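As a sanity check on that scaling claim, here is a minimal model of RAID-0 sequential throughput; the single-module and link-ceiling numbers are hypothetical placeholders, not figures from our testing.

    # Ideal RAID-0 sequential scaling: a stripe of N identical drives approaches
    # N x single-drive throughput, capped by whatever shared link sits upstream
    # (e.g., a chipset uplink shared by both modules).
    def raid0_seq_mb_s(single_mb_s, drives, link_cap_mb_s):
        return min(single_mb_s * drives, link_cap_mb_s)

    single_mb_s = 1200.0   # hypothetical single-module sequential read
    link_cap = 3500.0      # hypothetical shared-link ceiling
    for n in (1, 2):
        print(f"{n} module(s): ~{raid0_seq_mb_s(single_mb_s, n, link_cap):.0f} MB/s")

Once the stripe saturates that shared link, adding more modules stops helping, which is worth keeping in mind before going any crazier than a pair.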

Note the read performance boost to the SATA SSD, and how close both Optane Memory cached results come to the 960 EVO 250GB's performance. Also note the HDD score of 2 (yes, two).

Read service time is basically how long you'd be sitting there waiting for things to load, and the cached results make the differences and net gains painfully clear here.
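To make that "sitting there waiting" framing concrete, here is a back-of-the-envelope sketch; the read count and service times are made-up values for illustration, not data from our suite.

    # If an application issues its reads largely one after another, total wait
    # is roughly (number of reads) x (average service time per read), which is
    # why lower service time translates almost directly into shorter load times.
    def total_wait_s(num_reads, avg_service_ms):
        return num_reads * avg_service_ms / 1000.0

    reads = 10_000  # hypothetical small reads during one load
    for label, svc_ms in [("cached (0.05 ms assumed)", 0.05),
                          ("HDD (10 ms assumed)", 10.0)]:
        print(f"{label:26s} ~{total_wait_s(reads, svc_ms):6.1f} s waiting")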
