Performance Comparisons – Mixed Burst
These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review. Some tweaks have been made: QD has been reduced to a more realistic value of 2, and read bursts have been increased to 400MB each. 'Download' speed remains unchanged.
In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs, and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with a 4KB random workload, resulting in the condition called for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3s, video, etc.), while some portions of the SSD have been written to in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc.). The 8GB figure is reasonably practical, since 4KB random writes across the whole drive is not a workload that client SSDs are optimized for (that is reserved for enterprise). We may try larger spans in the future, but for now, we’re sticking with the 8GB random write area.
With that condition as a base, we now need a workload. I wanted to start with some background activity, so I captured a BitTorrent download:
This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were more ‘bursty’ in nature. We want to adapt this workload into one that gives SLC+TLC (caching) SSDs some time to unload their cache between write bursts, so I settled on a simple pattern of 40 MB written every 2 seconds. These accesses are more random than sequential, so we will apply them to the designated 8GB span of our preconditioned SSD.
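As a rough illustration, the write half of this workload can be sketched in a few lines of Python. The constants and the helper name here are mine, not part of any actual test harness, and the 4KB IO size is an assumption based on the preconditioning described above:

```python
import random

SPAN_BYTES = 8 * 1024**3      # 8 GB random-write span (per the methodology)
BURST_BYTES = 40 * 1024**2    # 40 MB written per burst
BURST_PERIOD_S = 2.0          # one burst every 2 seconds -> 20 MB/s average
IO_SIZE = 4 * 1024            # assumed 4 KB random writes (matches preconditioning)

def download_bursts(duration_s, seed=0):
    """Yield (issue_time, offset, length) tuples for the 'download' workload."""
    rng = random.Random(seed)
    t = 0.0
    while t < duration_s:
        # each burst is a batch of 4 KB IOs at random offsets within the span
        for _ in range(BURST_BYTES // IO_SIZE):
            offset = rng.randrange(0, SPAN_BYTES // IO_SIZE) * IO_SIZE
            yield (t, offset, IO_SIZE)
        t += BURST_PERIOD_S

ios = list(download_bursts(60))
written = sum(length for _, _, length in ios)
print(written / 60 / 1024**2)  # → 20.0 (MB/s averaged over the 60 s window)
```

The point of the pattern is the 2-second gap: an SLC-cached drive gets idle time to flush its cache between bursts, which a constant 20 MB/s stream would not give it.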
Now for the more important part. Since the above ‘download’ workload is a background task that would likely go unnoticed by the user, what we also need is a workload that the user *would* be sensitive to. The time someone really notices their SSD’s speed is when they are waiting for it to complete a task, and the most common such tasks are application and game/level loads. I observed a round of different tasks and came to a 200MB figure for the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we will repeat the 200MB request 10 times during the recorded portion of the run. We will assume 64KB sequential accesses for this portion of the workload.
Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:
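For a back-of-the-envelope feel for how queue depth relates to burst completion time, here is a simplified Python sketch. The 100 µs per-IO latency is a made-up illustrative figure, not a measured value from any drive in this review, and the idealized model ignores real-world effects like cache state and controller scheduling:

```python
BURST_BYTES = 200 * 1024**2   # 200 MB read per app-launch burst
IO_SIZE = 64 * 1024           # 64 KB sequential reads
QD = 4                        # max queue depth assumed for desktop apps

def burst_time_s(io_latency_s, qd=QD):
    """Idealized completion time for one burst with qd IOs always in flight."""
    n_ios = BURST_BYTES // IO_SIZE     # 3200 IOs per burst
    return n_ios * io_latency_s / qd   # qd-way overlap hides per-IO latency

# a hypothetical 100 microseconds per IO at QD4 -> 0.08 s per 200 MB burst
print(round(burst_time_s(100e-6), 4))
```

This is why a drive that stumbles on reads while absorbing writes shows up so clearly here: any extra per-IO latency multiplies across 3200 IOs per burst.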
The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, when some SSDs are busy with even a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.
Applying the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are writes or reads. The 200MB read bursts are so fast that they all occur within the same second, and none of them spill over due to delays caused by the simultaneous writes taking place.
Now that we have a reasonably practical workload, let’s see what happens when we run it on a small batch of SSDs:
From our Latency Percentile data, we are able to derive the total service time for both reads and writes, and independently show the throughputs seen for each. Remember that these workloads are applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages – they represent only the speed *during* each burst. Idle time is not counted.
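To make ‘idle time is not counted’ concrete, here is a simplified Python sketch of a burst-only throughput calculation. The function name and the idle-gap threshold are my own illustration; the actual Latency Percentile tooling is more involved:

```python
MB = 1024**2

def burst_throughput_mbps(io_records, idle_gap_s=0.5):
    """Bytes moved divided by busy time only. Gaps longer than idle_gap_s
    between IO completions are treated as idle and excluded."""
    io_records = sorted(io_records)          # (completion_time_s, bytes)
    total_bytes = sum(b for _, b in io_records)
    busy = 0.0
    prev_t = io_records[0][0]
    for t, _ in io_records[1:]:
        gap = t - prev_t
        if gap <= idle_gap_s:                # only count time within a burst
            busy += gap
        prev_t = t
    return total_bytes / busy / MB if busy else 0.0

# Two bursts of three 10 MB completions each, separated by ~10 s of idle time.
records = [(0.0, 10*MB), (0.1, 10*MB), (0.2, 10*MB),
           (10.0, 10*MB), (10.1, 10*MB), (10.2, 10*MB)]
print(round(burst_throughput_mbps(records), 1))  # → 150.0
```

A simple average over the whole window would report ~6 MB/s for the same trace; excluding the idle gap is what makes the burst figures comparable across drives.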
The focus here is on read speeds, since write speeds only matter insofar as they keep up with the demand (they all do). While the 760p does hold up well under the write load, that added load is enough to cause its reads to fall behind the 960 EVO – but just barely. This is again an extremely competitive showing from Intel!
Now we are going to focus only on reads, and present some different data. I’ve added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be sitting there waiting for 4GB of data to be read, but remember this is happening while a download (or another similar background task) is simultaneously writing to the SSD. This metric should closely equate to the 'feel' of using each SSD in a moderate to heavy load.
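The metric itself is just a sum of per-burst service times. A minimal sketch, with entirely hypothetical per-burst timings:

```python
def total_service_time_s(bursts):
    """bursts: list of (start_s, end_s) for each read burst.
    Returns the summed wall-clock time spent waiting on the reads."""
    return sum(end - start for start, end in bursts)

# Hypothetical timings: ten 400 MB bursts at ~0.55 s each (figures invented
# for illustration, not measured from any drive in this review).
fast = [(i * 6.0, i * 6.0 + 0.55) for i in range(10)]
print(round(total_service_time_s(fast), 2))  # → 5.5 (seconds waited for 4 GB)
```

Because the bursts are what the user actually waits on, summing their service times maps more directly to perceived responsiveness than an average throughput figure would.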
The 512GB Intel SSD 760p, reading 4GB of data while dealing with background write activity, came within 0.3 seconds of the 960 EVO. The 960 EVOs are still winning here, but the 760p is so close that it is nearly an even race. Also, check out the SSD 750, which needed nearly 20 seconds (>3x) to complete this same task.
Finally, it’s crazy how long it’s taken to get a reasonable competitor to the Samsung NVMe juggernaut! At least it’s competitive price- and performance-wise with the 960 EVO.
This is a very interesting NVMe M.2 drive but the 960 evo is barely any more expensive at this point. 10% cheaper isn’t going to make up for the large performance delta.
The 960 EVO offers only a 3-year warranty, which is quite a difference. Yet I will not buy a single Intel product anymore unless the performance delta favours them immensely. Goodbye, asshole corp.
That's why it didn't get Editor's Choice. It would need to have outperformed the 960 in more ways than it did for me to go that far in the recommendation. If the price delta is $10-20, I'd personally still buy the EVO today. Still a good showing from Intel though – the 960s needed some healthy competition.
“I’m awarding gold to the 256GB and 512GB models of the 760p. These products nearly match the current M.2 NVMe class leader, and win in some of our more critical metrics, all while coming in at a lower cost.”
Totally corrupt /s
Dude go get your tinfoil hat and play in the corner.
A white paper doesn’t lie about a product; it puts the strengths on display and shows when it would make sense to choose one product over another. Allyn is one of the best storage editors out there, of course they would go to him to write a third-party paper. You wouldn’t go to LTT for this kind of in-depth reporting, they aren’t geared for that type of work. Also, why duplicate work, or not use work you gained in the research of a product in your own site’s review?
You seemingly don’t understand how conflict of interest pertains to journalism. A conflict of interest exists regardless of whether this conflict ends up influencing Allyn’s review at PCPer. Ultimately, it is the responsibility of any proper journalist to keep a professional distance (read: financial independence) from the subject of their coverage.
This has nothing to do with whether Allyn should have been chosen over some other youtube reviewer (hint: no reviewer should conduct paid work for a vendor whose products they review). If you are a journalist/reviewer, you have the responsibility to ensure that you are not in any position where you stand to personally benefit from your professional conduct. It is absolutely unacceptable to be paid by a company (for real work), and fail to disclose this financial relationship to your readers.
This is such a blatant example of COI that I’m shocked they thought it would go unnoticed. To answer your question: if you were paid by a company (Intel) to perform work for them, you stand to benefit from them continuing to pay you, or provide you with other benefits (like privileged access to products, or early access). Adored’s video discussed how PCPer’s access to optane did not reflect the relative size and reach of their outfit (read: they were given privileged access to hardware that was not available to the rest of the press). This (indirectly) has monetary value, since it allowed PCper to produce content that other outlets could not feasibly produce. Unique content results in views, and therefore money. Readers have the right to know that this relationship existed, and PCPer knowingly chose not to disclose any such relationship. It’s extremely disappointing, and this is coming from a frequent consumer of PCPer content.
To be clear, we duplicate the work regardless. It would be extremely unlikely for any possible white paper work / other research work to use an identical test configuration as the test suite used for reviews, and even if it were, I'd do separate work for both sides anyway.
Shrout Research’s commercial conflict of interest makes this site questionable at best. Sorry Allyn and Ryan, your credibility is in the gutter for now. 🙁
PCPer is now dead to me. In nearly 35 years of IT work I have never seen such a serious conflict of interest as this one. Everything that now comes out of PCPer’s so-called journalists’ mouths will be nothing but meaningless blablabla to me. PCPer needs to be served with a class action lawsuit, at the very least.
The only surprise is that the AMD fanboy community still watches AdoredTV after all his BS from the previous two years. You guys are seriously in love with siege mentality.
Error with results 256gb:
1. Saturated vs. Burst Performance (for 128GB, two graphics).
It doesn’t appear there is any spare area on these drives. Would it be worthwhile to overprovision them to, say, 250GB, 500GB, etc.?