Performance Comparisons – TRIM Speed
Thanks to the plethora of data we have at our disposal from the new suite, I can derive some additional interesting data that nobody seems to have been paying attention to yet. Have you ever deleted a large file and then noticed your system seeming to hang for some time afterward? Maybe file moves from your SSD seemed to take longer than expected?
That's your problem right there. In the above capture, a 16GB file was deleted while a minimal level of background IO was taking place. Note how that IO completely stalls for a few seconds shortly after the file was deleted? That's a bad thing. We don't want that, but to fix it, someone needs to measure it and point it out. Enter another aspect of our testing:
Latency Percentile data was obtained while running a 'light' (1000 IOPS) workload in the background while files of varying sizes were deleted. The amount of latency added during the deletions was measured, compared with a baseline, and correlated with the sizes of the deleted files. The result is how much latency is added to the active workload per GB of file size that was deleted. In short, this is how long you may notice a stutter last after deleting a 1GB file.
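The calculation described above reduces to simple arithmetic. Here is a minimal sketch of it in Python; the function name and the sample figures are illustrative assumptions, not values from the actual test suite:

```python
# Hypothetical sketch of the per-GB added-latency metric described above.
# Baseline and during-delete latencies are assumed example figures.

def added_latency_per_gb(baseline_ms, during_delete_ms, file_size_gb):
    """Extra latency (ms) added to the background 1000 IOPS workload,
    normalized per GB of deleted file size."""
    return (during_delete_ms - baseline_ms) / file_size_gb

# Example: 0.5 ms baseline latency rising to 8.0 ms while a 16 GB file
# is deleted works out to ~0.47 ms of added latency per GB deleted.
print(added_latency_per_gb(0.5, 8.0, 16))  # 0.46875
```

Multiplying the result back out by a deleted file's size gives a rough estimate of how long the stutter lasts for that deletion.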
To avoid confusion, I've maintained the performance-based sort from the mixed test for these charts. You can see that some drives that performed well on that test stick out a bit here when it comes to how they handle TRIM. Ideally, these results should all be as close to 0.000 as possible. Higher figures translate to longer performance dips after files have been moved or deleted.
The SSD 660p turns in a 'perfect 0' here. No measurable increase in latency as a result of TRIM loading on the controller.
This is another result from a different set of data. While our suite runs, it issues a full drive TRIM several times. Some of those times it is done on an empty SSD, others it is done on a full SSD. Any difference in time taken is measured and calculated, normalizing to a response time per GB TRIMmed. In short, this is how long an otherwise idle SSD would hang upon receiving a TRIM command for a 1GB file. These times are shorter than the last chart because the SSD controller does not have to juggle this TRIM with background activity and can throw all of its resources at the request.
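The idle-TRIM normalization above can be sketched the same way. This is an illustrative reconstruction of the math, not the suite's actual code; the example timings are assumptions:

```python
# Hedged sketch: difference in full-drive TRIM duration between a full
# and an empty SSD, normalized to time per GB TRIMmed. Example timings
# are made up for illustration.

def trim_time_per_gb(time_full_s, time_empty_s, capacity_gb):
    """Seconds of TRIM handling time per GB of valid data discarded."""
    return (time_full_s - time_empty_s) / capacity_gb

# Example: a full-drive TRIM taking 2.0 s on a full 1000 GB SSD vs
# 0.5 s on the same SSD when empty -> 1.5 ms per GB TRIMmed.
print(trim_time_per_gb(2.0, 0.5, 1000))  # 0.0015
```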
The SSD 660p did very well here, with no issues noted and little to no impact seen from TRIM operations.
holy shitballs this is pretty impressive.
I can't wait to see this with Samsung controllers and NAND. Though, at 20c per gig, getting one of these on a sale will be an insane steal.
Totally with you on price. Intel has undercut the market a few times in the past and I’m happy to see them doing it again. I’d also like to see Samsung come down to this same price point.
Thought that too, when it came out. But now it’s down to around 10c per gig, at least in Germany, which it should have launched at. Now I’m definitely considering getting one, but I might wait for a sale since I’m stingy.
You forgot an edit – Toshiba:
PC Perspective Compensation: Neither PC Perspective nor any of its staff was paid or compensated in any way by Toshiba for this review.
Well, I’d be surprised if Toshiba paid for this review.
Fixed. Thanks for the catch guys!
PC per has a long history of shilling for Intel and Nvidia at the cost of AMD. As far as I can tell they have no reason to change. Their motto is fake tech reviews and to hell what anyone thinks.
Yeah, such a long history of that (PCPer.com is previously AMDmb.com / Athlonmb.com). Also funny how our results line up with other reviews. Must be some grand conspiracy theory against AMD. /sarcasm
This is why I wish Ryan would turn verified comments back on so asshats like the previous one don’t post. I don’t understand why it was turned off in the first place, it made the comment sections much more bearable and pleasant to read, now, not so much.
Allyn, Going way back to a conversation we had many months ago (years?), given the low price per GB, is there any performance to be gained by joining these QLC devices in a RAID-0 array? The main reason why I ask is the “additive” effect of multiple SLC-mode caches that obtains with a RAID-0 array. I’m using this concept presently with 4 x Samsung 750 EVO SSDs in RAID-0 (each cache=256MB), and the “feel” is very snappy when C: is the primary NTFS partition on that RAID-0 array. How about a VROC test and/or trying these in the ASRock Ultra Quad M.2 AIC? Thanks, and keep up the good work!
Yeah RAID will help as it does with most SSDs. For SSDs with dynamic caches, that means more available cache for a given amount of data stored, and a better chance that the cache will be empty since the given incoming write load is spread across more devices.
Many thanks for the confirmation. I don’t have any better “measurement” tools to use, other than the subjective “feel” of doing routine interaction with Windows. But, here’s something that fully supports your observation: the “feel” I am experiencing is snappier on a RAID-0 hosted by a RocketRAID 2720SGL in an aging PCIe 1.0 motherboard, as compared to the “feel” I am sensing on a RAID-0 hosted by the same controller in a newer PCIe 2.0 motherboard. The only significant difference is the presence of DRAM cache in all SSDs in the RAID-0 on the PCIe 1.0 motherboard, and the SSDs on the newer PCIe 2.0 motherboard have no DRAM caches. I would have expected a different result, because each PCIe lane in the newer chipset has twice the raw bandwidth of each PCIe lane in the older chipset. With 4 x SSDs in both RAID-0 arrays, the slower chipset tops out just under 1,000 MB/second, whereas the faster chipset tops out just under 2,000 MB/second.
p.s. Samsung 860 Pro SSDs are reported to have 512MB LPDDR4 cache in both the 256GB and 512GB versions:
https://s3.ap-northeast-2.amazonaws.com/global.semi.static/Samsung_SSD_860_PRO_Data_Sheet_Rev1.pdf
As such, a RAID-0 array with 4 such members has a cumulative DRAM cache of 512 x 4 = 2,048MB (~2GB LPDDR4).
DRAM caches on SSDs very rarely cache any user data – it’s for the FTL.
Thanks, Allyn. FTL = Flash Translation Layer
https://www.youtube.com/watch?v=bu4saRek7QM
So the tests are done with practically a full drive, right? Written sequentially except for the last 8GB, which are written to randomly. On a normal drive, even when My Computer says the drive is full there is still a little bit of space left over, so you left 18GB of space free. So is this test simulating what it's like to have a full or close-to-full drive from the user's perspective?
Anandtech’s tests made a big deal about performance changing from empty versus full. Anandtech didn’t figure out when that performance drops (if it’s a cliff or a gradual decline), but it almost makes the reader feel like you might want to buy double the capacity you normally need just to be safe. It’s probably not that bad, but it feels like that emotionally.
Performance gains due to the drive being empty are typically leveled out once you hit 10-20% or so (lower if you've done a bunch of random activity like a Windows install, etc.). My suite does a full pass of all measurements at three capacity points and then applies a weighted average to reach the final result. The average weighs half-full and mostly-full performance more heavily than mostly-empty performance. The results you see in my reviews are in line with what you could expect with actual use of the drive.
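The weighting described above is a straightforward weighted average across the three capacity points. A minimal sketch, with the caveat that the actual weights used by the suite are not disclosed here, so these are placeholder values:

```python
# Hedged sketch of a capacity-weighted result. The weights below are
# illustrative assumptions, not the suite's actual values; they only
# reflect the stated idea that half-full and mostly-full performance
# count for more than mostly-empty performance.

def weighted_result(mostly_empty, half_full, mostly_full,
                    weights=(0.2, 0.4, 0.4)):
    """Combine three capacity-point measurements into one figure."""
    w_e, w_h, w_f = weights
    return mostly_empty * w_e + half_full * w_h + mostly_full * w_f

# Example: 500 / 450 / 400 MB/s at the three fill levels -> 440.0 MB/s
print(weighted_result(500, 450, 400))  # 440.0
```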
“Heavy sustained workloads may saturate the cache and result in low QLC write speeds.”
Looks like up to a third of good HDD level, right? Scary.
A third sequentially. Random on HDD is still utter crap. Also, it’s extremely hard to hit this state in actual use. I was trying.
hey Allyn, is there a way to include these few tests? One examining QLC sequential write performance once the SLC buffer fills up, and another similar to Anand's sequential-fragmentation testing of sequential performance for both read and write.
The sustained write performance appears in two tests – saturated vs. burst (where I show it at variable QD – something nobody else does), and the cache test, where you can see occasional dips to SLC->QLC folding speed. Aside from a few hiccups it did very well and was able to maintain SLC speed during the majority of a bunch of saturated writes in a row. If you need more than that out of your SSD and the possibility of a slowdown is unacceptable, then QLC is not for you and you'll need to step up to a faster part.
oh and FFS PLEASE PLEASE remove Google reCAPTCHA, it's a waste of time – it took me TEN minutes to solve just to make 1 post
And you wasted it on that?
Google Recaptcha and street signs! All those damn street signs and no proper explanation of just what Google considers a street sign. If you get too good at solving the ReCrapAtYa the AI thinks you are an automated bot!
Google’s ReCrapAtYa AI has gone over to the HAL9000 side and is evil to the power of 1 followed by 100 zeros! Just like Google’s search AI that forces you to accept its twisted judgment of what it thinks you are looking for, which is not actually what you were looking for. Google’s search engine has become the greatest time thief in the history of research.
Google’s Recaptcha AI is the damn Bot and Google search now returns mostly useless spam results. Google is a threat to civilization!
Sorry. Without that we spend more time culling spam posts than we do writing articles.
Nice review, Allyn. The DRAM on the 660p is 256MB and not 1GB. http://www.nanya.com/en/Product/3969/NT5CC128M16IP-DI#
You can also confirm it with the other reviews of the 660p.
Why do you think Intel chose that size instead of the classic 1MB of DRAM per 1GB of NAND?
Do you think it hampered performance?
Dumb question time:
is it possible to make the entire drive work in SLC mode? With the size of the drives these days I could sacrifice the space for the speed and reliability.
So long as you only partition/use 1/4 of the available capacity, the majority of the media should remain in SLC-only mode.
I wonder if there is a way to force it at the firmware level. Might be a good selling feature. I am sure i am not the only overcautious nerd who would value a modern ‘SLC’ drive.
I didn’t see any mention of which NVMe drivers were used during this review. Not sure if the Windows drivers are much different than Intel’s own drivers.
@Allyn, you mentioned in the podcast that you weren’t able to saturate the writes with a copy. Rather than doing a copy have you considered creating data in RAM and then writing that? For example, create a huge numpy float and write it as binary to disk. Or a simple C program that just writes random noise to disk in a while 1 loop. Maybe even just pipe /dev/urandom to a file in several different terminals at once.
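The commenter's suggestion above can be sketched in a few lines. This is an illustrative Python version of the idea (pre-generate incompressible data in RAM, then stream it to disk to avoid a source-drive bottleneck); the path and sizes are placeholders, not anything used in the review:

```python
import os

# Sketch of saturating writes from RAM, per the commenter's suggestion.
# A single 64 MiB buffer of os.urandom data is incompressible, so
# controller compression can't inflate the apparent write speed.
CHUNK = os.urandom(64 * 1024 * 1024)

def flood_writes(path, total_bytes):
    """Repeatedly write the in-RAM chunk until total_bytes is reached.

    buffering=0 keeps Python from batching writes in its own buffer,
    though the OS page cache still sits between us and the drive.
    """
    written = 0
    with open(path, "wb", buffering=0) as f:
        while written < total_bytes:
            f.write(CHUNK)
            written += len(CHUNK)
    return written

# Placeholder target path and size -- point it at the drive under test:
# flood_writes("/mnt/testdrive/fill.bin", 32 * 2**30)  # ~32 GiB
```

Timing successive calls (or chunks) while this runs would expose the moment the SLC cache fills and writes fall back to QLC speed.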
Hello, Allyn!
Did you use IPEAK to create custom trace-based test suite?
IPEAK and similar developer tools were used to capture traces, but our suite's playback workloads are based on analysis of those results, not directly playing back the streams. We do this so that we can properly condition and evaluate SSDs of varying capacities, etc.
May I ask when these 660p NVMe SSDs will be readily available in the marketplace? I see the 512GB model at Newegg.com but neither that SKU nor any other SKU at Newegg.ca, Amazon.ca, or anywhere… 🙁 I would like to buy the 1TB model personally.
Don’t buy from the evil non-tax-paying Intel corporation. Crucial has a new 1TB QLC NVMe SSD, write endurance 200TB, 1GB DRAM cache, at newegg.ca (CA$192, US$145):
https://www.newegg.ca/Product/Product.aspx?Item=N82E16820156199&Description=crucial%20p1%20ssd&cm_re=crucial_p1_ssd-_-20-156-199-_-Product
First of all, thanks for all of your ridiculously in-depth storage reviews. PC Perspective is my first, and usually only, stop when looking to purchase new storage.
Second, I believe there is a typo on the “Conclusion” page. You listed the 2TB endurance as “200TBW” instead of the “400TBW” Intel specs it as on ARK.
Happy Veterans Day from a fellow vet. Thank you for your service!
All three capacities have 256MB of DRAM, not 1GB. This was already pointed out by a previous reader.
Also, the 660p uses a static SLC cache that is 6GB, 12GB, or 24GB, along with a dynamic SLC pool.
It’s possible this drive is using Host Memory Buffer or compressing the LBA map.