Performance Focus – SSD 760p 128GB
A quick note on these results: I've been analyzing the effect that how full an SSD is has on its performance. I've found that most SSDs perform better when empty (fresh out of box, or FOB) than they do when half or nearly filled to capacity, and most people actually put stuff on their SSD. To properly capture performance at various levels of fill, the entire suite is run multiple times at varying levels of drive fill, in a way that emulates actual use of the SSD over time. Random and sequential performance is rechecked in the same areas as data is added, and those checks are made on the same files and areas throughout the test. Once all of this data is obtained, we apply the weighting method mentioned in the intro to balance the results towards the more realistic levels of fill. The results below all use this method.
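For readers who want to see how such a weighting might work in practice, here is a minimal sketch; the fill levels, IOPS figures, and weights below are hypothetical placeholders rather than the actual values used by our suite.

```python
# Minimal sketch of fill-weighted scoring. All numbers are hypothetical
# placeholders, not the actual weights or results used in this review.

# Measured IOPS (illustrative) at each tested drive-fill percentage.
results_by_fill = {0: 210_000, 25: 195_000, 50: 188_000, 75: 176_000}

# Weights biased towards more realistic fill levels (assumed, sum to 1.0).
weights = {0: 0.10, 25: 0.30, 50: 0.35, 75: 0.25}

weighted_iops = sum(results_by_fill[f] * weights[f] for f in results_by_fill)
print(f"Fill-weighted IOPS: {weighted_iops:,.0f}")
```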
Sequential performance looks reasonable, but read speeds do not ramp up fully until QD=8.
Now for random access. The blue and red lines are read and write, and I've thrown in a 70% R/W mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot is to the read plot, the better.
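For those wondering what that 70% mix actually looks like from the host side, here is a minimal sketch of a 70/30 random 4K read/write pattern; the file name testfile.bin is an assumption, and this models only the access pattern, not the benchmark tool used for these results.

```python
# Minimal sketch of a 70% read / 30% write random 4K workload against an
# existing scratch file. Illustrative only; not the suite used in this review.
import os
import random

BLOCK = 4096          # 4K transfers
READ_RATIO = 0.70     # 70/30 read/write mix
OPS = 10_000

fd = os.open("testfile.bin", os.O_RDWR)   # assumed pre-existing scratch file
blocks = os.fstat(fd).st_size // BLOCK

for _ in range(OPS):
    offset = random.randrange(blocks) * BLOCK
    if random.random() < READ_RATIO:
        os.pread(fd, BLOCK, offset)                # random 4K read
    else:
        os.pwrite(fd, os.urandom(BLOCK), offset)   # random 4K write

os.close(fd)
```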
Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.
- Writes: Host sends data to SSD. SSD receives data and acknowledges the IO. SSD then passes that data onto the flash for writing. All necessary metadata / FTL table updates take place.
- Reads: Host requests data from SSD. SSD controller looks up data location in FTL, addresses and reads data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.
The fundamental difference there is when the IO is considered complete. While 'max' values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower QD writes can generally be serviced faster, since the IO completes as soon as the controller has accepted the data, resulting in higher IOPS. Random writes can also 'ramp up' faster, since writes don't need a high queue depth to achieve the parallelism that reads require to reach their high-QD, high-IOPS figures.
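To put rough numbers on that, here is a small sketch applying Little's law (IOPS ≈ queue depth / latency); the 20 µs and 80 µs figures are assumed, illustrative service times rather than measurements from this drive.

```python
# Why lower-latency writes post higher IOPS at low queue depth.
# Latency values are assumed for illustration, not measured results.

def iops(queue_depth: int, latency_us: float) -> float:
    """Approximate IOPS via Little's law: outstanding IOs / service time."""
    return queue_depth / (latency_us / 1_000_000)

# A QD1 write completes once the controller has accepted the data (fast ack);
# a QD1 read must consult the FTL and fetch from flash before completing.
print(f"QD1 write IOPS: {iops(1, 20):,.0f}")   # ~20 µs ack  -> ~50,000 IOPS
print(f"QD1 read  IOPS: {iops(1, 80):,.0f}")   # ~80 µs read -> ~12,500 IOPS
```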
Our new results are derived from a very large dataset. I'm including the raw (% fill weighted) dataset below for those who have specific needs and want to find their own use case on the plot.
For the power users out there, here's the full read/write burst sweep at all queue depths:
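If you want to approximate a sweep like this yourself, here is a minimal sketch that uses a thread pool to keep a target number of random 4K reads outstanding; the file name, operation count, and depth list are assumptions, and a proper suite would use native async and direct IO rather than Python threads.

```python
# Minimal queue-depth sweep for random 4K reads. Illustrative only; a thread
# pool only approximates true outstanding-IO queue depth.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096
OPS_PER_DEPTH = 20_000

def read_block(fd: int, blocks: int) -> None:
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)

fd = os.open("testfile.bin", os.O_RDONLY)   # assumed pre-existing scratch file
blocks = os.fstat(fd).st_size // BLOCK

for depth in (1, 2, 4, 8, 16, 32):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=depth) as pool:
        for _ in range(OPS_PER_DEPTH):
            pool.submit(read_block, fd, blocks)
    elapsed = time.perf_counter() - start       # pool exit waits for all IOs
    print(f"QD{depth:>2}: {OPS_PER_DEPTH / elapsed:,.0f} IOPS")

os.close(fd)
```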
Saturated vs. Burst Performance
These tests are intended to show the 'max' sustained performance figures of the SSD being tested. In the case of caching SSDs, sustained (or 'saturated') results will typically be lower for writes, as any cache would be full and writes would go directly to the bulk media (direct-to-die writes). Other SSDs may suffer more severely, sending all writes to the cache and entering a swap state once that cache is overfilled, where data must be transferred from the host to the cache and simultaneously from the cache to the bulk media. While the average speed may look OK in those cases, instantaneous performance will be very stuttery and may fluctuate wildly. Do take saturated results with a large grain of salt, as this sort of write workload is rarely encountered in real-world use.
The reduced die count of the smallest capacity 128GB model results in a rather sharp hit to burst and especially saturated writes, which are low enough that typical use will be impacted compared with the higher capacity products of the line.
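For anyone who wants to see where a write cache runs out on their own drive, here is a minimal sketch that writes sequentially and logs throughput once per second; the file name, block size, and 32 GiB total are assumptions, and the point where the reported rate drops marks the transition to direct-to-die writes.

```python
# Minimal saturated-write sketch: throughput falls once the write cache fills.
# File name and sizes are assumptions; this is not the review's test tool.
import os
import time

BLOCK = 1 << 20        # 1 MiB writes
TOTAL = 32 << 30       # 32 GiB, enough to overrun a small cache
buf = os.urandom(BLOCK)

fd = os.open("cachetest.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
written = interval_bytes = 0
tick = time.perf_counter()

while written < TOTAL:
    os.write(fd, buf)
    os.fsync(fd)                        # force each block out to the device
    written += BLOCK
    interval_bytes += BLOCK
    now = time.perf_counter()
    if now - tick >= 1.0:               # report once per second
        print(f"{interval_bytes / (now - tick) / 1e6:,.0f} MB/s")
        tick, interval_bytes = now, 0

os.close(fd)
```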
Write Cache Testing
Uh oh, this looks like the write toggle issue that we saw with the 600p. Granted this is the smallest capacity model and we might have just caught it trying to do some garbage collection. Fortunately, this is just one of our cache test runs:
Above is an additional run, started three seconds after the completion of the previous run. Things looked significantly better here, as well as in the rest of the runs of this test sequence:
I'll chalk this up to simply stating that we shouldn't expect the 128GB model to be as stellar or consistent as its larger brothers. Note that competitors shy away from offering such low capacities in higher performance packaging to avoid this type of thing.
Finally, it's crazy how long it's taken to get a reasonable competitor to the Samsung NVMe juggernaut! At least it's competitive price- and performance-wise with the 960 EVO.
This is a very interesting NVMe M.2 drive, but the 960 EVO is barely any more expensive at this point. 10% cheaper isn't going to make up for the large performance delta.
The 960 EVO offers only a 3-year warranty, which is quite a difference. Yet I will not buy a single Intel product anymore unless the performance delta favours them immensely. Goodbye, asshole corp.
That's why it didn't get Editor's Choice. It would need to have outperformed the 960 in more ways than it did for me to go that far in the recommendation. If the price delta is $10-20, I'd personally still buy the EVO today. Still a good showing from Intel though – the 960's needed some healthy competition.
“I’m awarding gold to the 256GB and 512GB models of the 760p. These products nearly match the current M.2 NVMe class leader, and win in some of our more critical metrics, all while coming in at a lower cost.”
Totally corrupt /s
Dude go get your tinfoil hat and play in the corner.
A white paper doesn't lie about a product; it puts the strengths on display and shows when it would make sense to choose one product over another. Allyn is one of the best storage editors out there, so of course they would go to him to write a third-party paper. You wouldn't go to LTT for this kind of in-depth reporting; they aren't geared for that type of work. Also, why duplicate work, or not use work you gained while researching a product in your own site's review?
You seemingly don’t understand how conflict of interest pertains to journalism. A conflict of interest exists regardless of whether this conflict ends up influencing Allyn’s review at PCPer. Ultimately, it is the responsibility of any proper journalist to keep a professional distance (read: financial independence) from the subject of their coverage.
This has nothing to do with whether Allyn should have been chosen over some other YouTube reviewer (hint: no reviewer should conduct paid work for a vendor whose products they review). If you are a journalist/reviewer, you have the responsibility to ensure that you are not in any position where you stand to personally benefit from your professional conduct. It is absolutely unacceptable to be paid by a company (for real work) and fail to disclose this financial relationship to your readers.
This is such a blatant example of COI that I'm shocked they thought it would go unnoticed. To answer your question: if you were paid by a company (Intel) to perform work for them, you stand to benefit from them continuing to pay you, or provide you with other benefits (like privileged access to products, or early access). Adored's video discussed how PCPer's access to Optane did not reflect the relative size and reach of their outfit (read: they were given privileged access to hardware that was not available to the rest of the press). This (indirectly) has monetary value, since it allowed PCPer to produce content that other outlets could not feasibly produce. Unique content results in views, and therefore money. Readers have the right to know that this relationship existed, and PCPer knowingly chose not to disclose any such relationship. It's extremely disappointing, and this is coming from a frequent consumer of PCPer content.
To be clear, we duplicate the work regardless. It would be extremely unlikely for any possible white paper work / other research work to use an identical test configuration as the test suite used for reviews, and even if it were, I'd do separate work for both sides anyway.
Shrout Research's commercial conflict of interest makes this site questionable at best. Sorry Allyn and Ryan, your credibility is in the gutter for now. 🙁
PCPer is now dead to me. In nearly 35 years of IT work I have never seen such a serious conflict of interest as this one. Everything that now comes out of PCPer's so-called journalists' mouths will be nothing but meaningless blablabla to me. PCPer needs to be served with a Class Action Lawsuit, at the very least.
The only surprise is that the AMD fanboy community still watches AdoredTV after all his BS from the previous two years. You guys are seriously in love with siege mentality.
Error with the 256GB results:
1. Saturated vs. Burst Performance shows the two graphs for the 128GB model.
It doesn't appear there is any spare area on these drives. Would it be worthwhile to overprovision them to, say, 250GB, 500GB, etc.?