Write Cache Testing

We've already seen some of the effects of how the write cache interacts with our other testing. While the IOMeter results were interesting, that is a very heavy mixed workload running at high queue depths and is more representative of what an SSD would see in the enterprise. In a consumer environment, heavy write workloads typically only occur during application and OS installs, and even then it is doubtful that gigabytes of data will be written to the EVO faster than the cache can write back out to the TLC flash area. That leaves only sequential bulk writes to the SSD, and hitting sustained 500 MB/sec writes would only happen if the user were performing a large file transfer to the EVO from either another fast SSD or from a very capable HDD RAID setup.

After dabbling with all sorts of snazzy IOMeter profiles, charts, and graphs, I found that the simplest and most repeatable demonstration came from HD Tune's write test.

With the way I had HD Tune configured, the test passes just beyond the 100GB mark on the chart by the time it has actually written 12GB of data to the SSD. As a matter of fact, I timed it: the run took exactly 24 seconds to reach the dip, and 24 seconds * 500 MB/sec = 12GB. Now to figure out how fast that cache gets completely flushed back to the TLC:
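For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the back-of-the-envelope estimate above. The only inputs are the two figures from my timing run; the assumption (which the later results will complicate) is that nothing gets flushed to TLC while the cache is filling:

```python
# Back-of-the-envelope estimate of the SLC cache size from the HD Tune run.
# Assumption (not from Samsung specs): writes land in the cache at the full
# 500 MB/sec rate until the dip, with no concurrent flushing to TLC.

CACHED_WRITE_RATE_MB_S = 500   # observed pre-dip write speed
SECONDS_TO_DIP = 24            # timed duration before the speed drop

cache_size_gb = CACHED_WRITE_RATE_MB_S * SECONDS_TO_DIP / 1000
print(f"Estimated cache size: {cache_size_gb:.0f} GB")  # -> 12 GB
```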

For the above result, I aborted and re-ran the test with 10 seconds in-between. You can see the cache was not completely flushed, as the EVO's write speed drops to an in-between value only a couple of seconds into the write. It keeps writes as high as it can until the cache is once again completely full. Let's see what happens with a 15-second delay between runs:

Then 20 seconds:

…and finally 25 seconds:

25 seconds was a repeatable delay long enough for the EVO to flush enough cache to disk to support another 12GB being subsequently written to it. On a few occasions I saw a full flush with only 20 seconds, and on even rarer occasions, 15 seconds. You'd figure this does not add up, as the drive would have to write to the TLC at greater than its (slower) sustained transfer rate. My theory is that the SLC is so fast that it can accept data at 500 MB/sec and flush some of that data out at the TLC transfer rate *simultaneously*. That's some speedy TLC. That said, the EVO only appears to be this aggressive if the cache has been running on the full side for quite some time. Given enough breathing room, it seems to favor filling the cache first, switching over to TLC writes once full, and emptying the cache during the next idle period.
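A quick sanity check of that theory: if the drive only flushed while idle, the 15-second case would imply an implausible 800 MB/sec to TLC. But if the controller is also flushing during the 24-second write burst, the required TLC rate drops to something far more believable. The sketch below uses only the figures observed above; the TLC rates it prints are derived, not measured:

```python
# Rough check of the "simultaneous flush" theory. The cache size and burst
# duration come from the observations above; the TLC rates are derived.

CACHE_GB = 12
WRITE_BURST_S = 24  # time spent filling the cache at 500 MB/sec

for idle_delay_s in (15, 20, 25):
    # If flushing only happens while idle, the implied TLC rate is huge:
    idle_only = CACHE_GB * 1000 / idle_delay_s
    # If the controller also flushes *during* the write burst, the
    # required TLC rate is far more reasonable:
    concurrent = CACHE_GB * 1000 / (WRITE_BURST_S + idle_delay_s)
    print(f"{idle_delay_s:2d}s idle: idle-only {idle_only:4.0f} MB/s, "
          f"concurrent {concurrent:3.0f} MB/s")
```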

*Note*: These tests were performed on the 1TB model. Lower capacity models are going to have differing cache sizes and TLC write speeds, and will therefore see a different sustained write duration before the cache fills and speeds drop to the TLC (sustained) write rate.
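To illustrate how that scaling would play out, here is a minimal sketch of time-to-dip versus cache size. Only the 12GB figure for the 1TB model comes from my testing above; the smaller cache sizes are hypothetical placeholders, as is the assumption of a flat 500 MB/sec incoming rate with no concurrent flushing:

```python
# Time to fill the SLC cache at a flat 500 MB/sec incoming rate.
# Only the 12 GB (1TB model) figure comes from the testing above;
# the smaller cache sizes are hypothetical placeholders.

INCOMING_MB_S = 500

hypothetical_caches_gb = {"250GB model": 3, "500GB model": 6, "1TB model": 12}

for model, cache_gb in hypothetical_caches_gb.items():
    seconds_to_dip = cache_gb * 1000 / INCOMING_MB_S
    print(f"{model}: ~{seconds_to_dip:.0f} seconds before dropping to TLC speed")
```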
