Performance Comparisons – Mixed Burst
These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review. Some tweaks have been made: Queue Depth has been reduced to a more realistic value of 2, and read bursts have been increased to 400MB each. 'Download' speed remains unchanged.
In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with a 4KB random workload, resulting in the condition called for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3s, video, etc), while some portions of the SSD have been written to in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc). The 8GB figure is reasonably practical since 4KB random writes across the whole drive are not a workload that client SSDs are optimized for (that is reserved for enterprise). We may try larger spans in the future, but for now, we’re sticking with the 8GB random write area.
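As a rough model of that preconditioned state (the figures and names below are illustrative assumptions, not the review's actual scripts):

```python
# Illustrative model of the conditioning described above: a drive filled
# sequentially, with only the first 8GB span then written randomly in 4KB
# blocks. One full pass over the span is shown; real preconditioning
# typically runs multiple passes to reach a steady state.
GiB = 1024 ** 3
RANDOM_SPAN_BYTES = 8 * GiB      # first 8GB span gets the random workload
BLOCK_BYTES = 4 * 1024           # 4KB random writes

writes_per_pass = RANDOM_SPAN_BYTES // BLOCK_BYTES
print(writes_per_pass)           # 4KB writes needed to cover the span once
```

Even a single pass over just this 8GB span is over two million 4KB writes, which is why the random area is kept small relative to the drive.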
Using that condition as a base, we now needed a workload! I wanted to start with some background activity, so I captured a BitTorrent download:
This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were more ‘bursty’ in nature. We’re trying to adapt this workload to one that will allow SLC+TLC (caching) SSDs some time to unload their cache between write bursts, so I came to a simple pattern of 40 MB written every 2 seconds. These accesses are more random than sequential, so we will apply it to the designated 8GB span of our pre-conditioned SSD.
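That burst pattern can be sketched as a simple simulation (the 128KB transfer size and the helper name are assumptions for illustration; no real device I/O is performed):

```python
import random

# Sketch of the 'download' write pattern described above: 40 MB issued at
# the start of each 2-second interval (a 20 MB/s average), targeted at
# random offsets within the first 8GB of the drive.
BURST_MB = 40
PERIOD_S = 2
SPAN_BYTES = 8 * 1024 ** 3       # the preconditioned 8GB span
IO_BYTES = 128 * 1024            # assumed transfer size per write

def burst_offsets(rng=random.Random(0)):
    """Random target offsets for one 40 MB burst within the 8GB span."""
    ios = BURST_MB * 1024 * 1024 // IO_BYTES
    return [rng.randrange(0, SPAN_BYTES - IO_BYTES) for _ in range(ios)]

offsets = burst_offsets()
print(len(offsets), "writes per burst,", BURST_MB / PERIOD_S, "MB/s average")
```

The bursts give caching (SLC+TLC) SSDs idle gaps in which to flush their cache, which is the whole point of shaping the workload this way.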
Now for the more important part. Since the above ‘download workload’ is a background task that would likely go unnoticed by the user, what we also need is a workload that the user *would* be sensitive to. The times when someone really notices their SSD speed is when they are waiting for it to complete a task, and the most common tasks are application and game/level loads. I observed a round of different tasks and came to a 200MB figure for the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we will repeat the 200MB request 10 times during the recorded portion of the run. We will assume 64KB sequential access for this portion of the workload.
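Putting numbers to that read workload (a back-of-the-envelope sketch; the variable names are mine, not from the review's tooling):

```python
# Parameters of the foreground 'app launch' read workload described above:
# 200MB bursts issued as sequential 64KB accesses, repeated 10 times.
READ_BURST_MB = 200   # data pulled per simulated application launch
IO_KB = 64            # assumed sequential access size
BURSTS = 10           # launches during the recorded portion of the run

ios_per_burst = READ_BURST_MB * 1024 // IO_KB   # 64KB reads per burst
total_read_mb = READ_BURST_MB * BURSTS          # data read across the run
print(ios_per_burst, "IOs per burst,", total_read_mb, "MB total")
```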
Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:
The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, if some SSDs are busy with a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.
When we apply the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are writes or reads. The 200MB read bursts complete so quickly that they all occur within the same second, and none of them spill over due to delays caused by the simultaneous writes taking place.
Now that we have a reasonably practical workload that gives NAND SSDs the best fighting chance possible, let’s see what happens when we compare them against Intel Optane:
From our Latency Percentile data, we are able to derive the total service time for both reads and writes, and independently show the throughputs seen for both. Remember that these workloads are being applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages – they represent only the speed *during* each burst. Idle time is not counted.
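The derivation of that 'during-burst' throughput can be sketched as follows (the latency list is made-up example data, not measured results):

```python
# Deriving burst-only throughput from per-IO service times, as described
# above: idle time between bursts is excluded, so the bytes moved are
# divided only by the summed service time of the IOs themselves.
def burst_throughput_mbs(io_bytes, latencies_s):
    busy_time_s = sum(latencies_s)           # total service time, idle excluded
    total_bytes = io_bytes * len(latencies_s)
    return total_bytes / busy_time_s / 1e6   # MB/s during the bursts only

example_lat = [0.0001] * 3200                # 3200 IOs at 100 microseconds each
print(round(burst_throughput_mbs(64 * 1024, example_lat)), "MB/s")
```

A simple average over wall-clock time would understate a drive that finishes its bursts quickly and then sits idle, which is exactly what this method avoids.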
The important metric here is reads, since writes would be in the background in this scenario. The 800P's match each other in performance and come within striking distance of the 960 EVO. The 900P naturally dominates here.
Now we are going to focus only on reads, and present some different data. I’ve added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be sitting there waiting for 4GB of data to be read, but remember this is happening while a download (or another similar background task) is simultaneously writing to the SSD. This metric should closely equate to the 'feel' of using each SSD in a moderate to heavy load.
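To put that metric in perspective, it can be inverted: given an effective read speed *during* the bursts, how long would you sit waiting for the full 4GB? (The speeds below are hypothetical examples, not results from the charts.)

```python
# The summed-service-time metric described above, expressed as wait time
# for the full 10 x 400MB (4GB) of burst reads.
TOTAL_READ_MB = 10 * 400   # 10 bursts of 400MB = 4GB

def total_wait_s(effective_mb_per_s):
    return TOTAL_READ_MB / effective_mb_per_s

for speed in (1000, 2000):                  # hypothetical MB/s during bursts
    print(speed, "MB/s ->", total_wait_s(speed), "s of waiting")
```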
While most folks would never use the smaller Optane Memory parts as system disks, we have had plenty of reports of that happening (and have even tried it ourselves), which is why we have included them here for perspective. 800P's do well, but the 960 EVO still beats them by nearly a full second, and the 900P nearly cuts that figure in half.
OK so with all that data… is it faster than a 960 EVO for everyday Windows use? And does it use more or less power in a laptop?
Likely less power in laptop usage as it spends less time servicing IOs on average, but the 'feel' will likely be similar unless you are doing some heavy mixed/random read workloads.
Any idea why Intel chose the PCI-E 3.0 x2 interface? All M.2 slots that support this drive are x4 capable. Seems like they are leaving performance on the table…
On a related note, correct me if I’m wrong, but most Z370 motherboards have 20 PCI-E lanes available without going through the chipset’s 4 additional lanes. Usually they are assigning 16 lanes to the graphics card and 4 lanes to an M.2 slot. Additional M.2 slots are running through the chipset’s lanes. Are existing motherboards capable of assigning 2 CPU lanes to 2 different M.2 slots?
They likely went with x2 since they already had hardware/controller close to that ready to go (via Optane Memory parts).
Current X299 boards / VMDs can only bifurcate in x4 chunks, but someone can probably make a PCIe switch that can split further.
Re: ASUS Sabertooth X99
Allyn,
Would it make any sense to evaluate these Optane parts on an AMD Threadripper system?
My best guess is that your X99 test system was capable of driving both the M.2 and x16 slots at max speed, so an AMD motherboard should not make much of a difference.
Again, thanks for your consistently excellent reviews!
p.s. Did you ever solve the problems you encountered with your Threadripper system?
In short, it wouldn't, not if I wanted the lowest and most consistent latencies.
I've been sticking with the X99 platform because 1. If it ain't broke… and 2. I'd rather not re-test 100+ SSDs, not until I've added some workloads to the suite at least.
Re: problems on TR – the problems are still an issue, and are still preventing me from getting results consistent enough to feel comfortable publishing.
Copy that and … THANKS AGAIN!
Re:
https://www.pcper.com/image/view/89768?return=node%2F69337
To clarify, I am still curious if the latter AIC will be bootable when installed in an AMD Threadripper motherboard.
For prosumers and workstation users who now prefer a TR system, this AIC with 4 x Optanes could work.
Nevertheless, you are correct: 4 x Samsung 960 Pro make more sense in such an AIC, for reasons you have already explained in your other reviews.
Yes, it's bootable since the UEFI initializes the array each boot. Consistent performance? That's another story entirely.
Apples-to-apples, perhaps we should wait for M.2 Optanes to be enhanced with support for x4 PCIe lanes each.
Also, there is the 16 GT/s transfer rate approved for PCIe 4.0:
Intel may be “vectoring” x2 PCIe lanes to a future date
when x2 @ 16G = x4 @ 8G .
https://www.anandtech.com/show/11967/pcisig-finalizes-and-releasees-pcie-40-spec
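The arithmetic behind that x2 @ 16G = x4 @ 8G equivalence checks out, assuming both generations use 128b/130b encoding (which the spec calls for):

```python
# Per-lane throughput after 128b/130b encoding overhead, in GB/s.
# rate_gt_s is the raw transfer rate: 8 GT/s for PCIe 3.0, 16 for 4.0.
def lane_gb_s(rate_gt_s):
    return rate_gt_s * 128 / 130 / 8   # 8 bits per byte

x4_gen3 = 4 * lane_gb_s(8)    # four PCIe 3.0 lanes
x2_gen4 = 2 * lane_gb_s(16)   # two PCIe 4.0 lanes
print(round(x4_gen3, 2), round(x2_gen4, 2))  # both ~3.94 GB/s
```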
PCIe 4.0 is going to be power hungry, at least initially. Don't be surprised to see SSDs hang out at 3.0 for a good while after 4.0 is available and shipping on motherboards.
So I just want to make sure I can use this… I have an Asus Z370-E, an 8700K, and one 1080 Ti, with 4 hard drives (3 standard: 2 @ 7200RPM, one @ 5400RPM) and one SSD boot drive from Crucial. Also, an external backup HDD from Western Digital. Will I have any issues if I decide to purchase this?
From what I've been told, the Intel Optane SSD 800P has to be a boot drive.
It doesn't *have* to be a boot drive. Can be used as a fast random access temp drive, etc. All depends on where you want the traits of this particular SSD.
*edit* you may be thinking of Optane Memory (caching), which will only cache the boot drive. 800P can be used this way, by installing the Optane Memory driver, but it's overkill with >32GB of cache.
The Z370E has a pair of M.2 slots, so even if your Crucial SSD is M.2, you should have room. You could put the 800P in the primary M.2 slot to use for boot and shift any other (larger) M.2 SSD to the second M.2 / other SATA port (depending on what it is) for use as a game / other SSD.
Interesting graph: history of PCIe actual bandwidth compared to "every 3 years, I/O bandwidth doubles":
https://images.anandtech.com/doci/11967/pci-sig_history_graphic_wide_rgb_0533_575px.jpg
Allyn, on behalf of all pcper.com users, please accept our sincere appreciation for your very prompt and professional answers to all our questions.
I just got a great deal on an i7 7800X and a Gigabyte Aorus Gaming Ultimate. Can I use, say, a 960 Pro 1TB as boot and then the two 118GB Optanes as a RAID cache? I won't be using a high-end GPU; this will be a server build with a lot of HDDs.
Any news on Micron’s QuantX ?
From their earnings call I don’t think we’ll see it until 2019
Pretty sure that was just vaporware to force Intel to release 'hypetane' ASAP, as there have been zero products from Micron with their brand of XPoint.
I was excited to see the DIMMs they were working on; it all looked so good on paper, but sadly that will most likely be a pipe dream that never bears fruit.
You've got about as much chance of Valve releasing HL3 as of Micron actually making a product with XPoint at this point.
Hi Allyn, I'm just curious about "Intel's recommended client SSD conditioning pass". Could you tell me where I can find those recommendations?
I looked, but found nothing there.
Allyn’s not here man … he’s writing those things now!
I think this is the paper he was referring to https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ssd-server-storage-applications-paper.pdf
Thank you Jeremy!
I think I misunderstood his thoughts, and now I'm clear that leaving an 8GB portion for random access is a better way to simulate real-world conditions.