It looks like Samsung has finally figured this one out, and they have done so in a way that actually puts the 840 EVO a step ahead of other SSDs. Allow me to explain. Typical SSDs see random writes to some existing files (the pagefile, etc.). Those writes fragment the flash pages backing the files, and the SSD must track that fragmentation in a metadata table stored on the drive. If you were to rewrite those files sequentially, performance would increase: there is less metadata to handle, and each file would be stored more linearly within the flash. Pretty much any SSD would see a slight performance boost if you could rewrite all of its data sequentially (e.g. cloning the OS and restoring that image back to the same SSD). In trying to fix their stale data issue, Samsung now has a built-in tool that can trigger a background refresh procedure accomplishing this same task, so in solving one problem they have actually added a useful feature to this product line.
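To make the fragmentation point concrete, here is a hypothetical sketch (not Samsung's actual firmware logic) of a simplified flash translation layer mapping table. It shows why random in-place updates grow the metadata the drive must track, and why a sequential rewrite shrinks it back down:

```python
# Hypothetical model of an SSD's logical-to-physical page mapping.
# Flash can't overwrite in place, so each random update remaps a
# logical page to a fresh physical page somewhere else on the drive.

def extent_count(mapping):
    """Count contiguous runs (extents) in a logical-to-physical mapping.
    Fewer extents means less metadata and more linear flash access."""
    extents = 1
    for prev, cur in zip(mapping, mapping[1:]):
        if cur != prev + 1:  # physical pages no longer contiguous
            extents += 1
    return extents

# A freshly written 8-page file maps to contiguous physical pages.
mapping = list(range(8))          # one extent of metadata

# Random in-place updates (pagefile-style writes) remap pages elsewhere.
next_free = 100
for logical_page in (1, 4, 6):
    mapping[logical_page] = next_free
    next_free += 1

fragmented = extent_count(mapping)  # metadata has ballooned

# A sequential rewrite (background refresh, or image-and-restore)
# relocates the whole file to fresh contiguous pages in one pass.
mapping = list(range(200, 208))
refreshed = extent_count(mapping)

print(fragmented, refreshed)  # 7 1
```

The numbers are toy-scale, of course, but the effect is the same one at work on a real drive: three scattered updates turned one metadata extent into seven, and a single sequential pass collapsed it back to one.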

I'm glad Samsung has stuck with it. Not many manufacturers would put this much effort into a two-year-old product, and the 840 EVO has proven to need a lot of work to get a difficult problem under control. Now to try to get them to enable the advanced optimization feature for all Samsung SSDs. I will continue to push Samsung to recognize that users of other 19nm planar TLC flash SSDs (i.e. the 840) see this issue as well. We will also continue to keep these samples stored with cold / stale data and retest them periodically.

One final note – this issue affected *only* the older TLC-based Samsung SSDs. Your 850 EVO is not affected (it uses a completely different flash architecture), and neither is your 840 Pro or 850 Pro (those use MLC flash, not TLC).
