Many drives have died over the last year and a bit. The Tech Report has been torturing SSDs with writes until they drop, and three of the six drives kicked the bucket before a full petabyte of data had been written. The test is now at 1,500TB of total writes, and one of the three survivors, the 240GB Corsair Neutron GTX, has dropped out. This was a bit surprising, as the drive was reporting fairly high health when it entered "the petabyte club," aside from a dip in read speeds.
The two remaining drives are the Samsung 840 Pro (256GB) and Kingston HyperX 3K (240GB).
Two stand, one fell (Image Credit: Tech Report)
Between those two, the Samsung 840 Pro is given the nod as the Kingston drive lived through uncorrectable errors; meanwhile, the Samsung has yet to report any true errors (only reallocations). Since the test considers a failure to be a whole drive failure, though, the lashings will persist until the final drive gives out (or until Scott Wasson gives up in a glorious sledgehammer apocalypse — could you imagine if one of them lasted a decade? :3).
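For the curious, that distinction shows up in a drive's SMART data: reallocated sectors are normal wear, while a climbing uncorrectable error count means the drive has admitted to handing back bad data. Here is a minimal Python sketch of polling both, assuming a Linux box with smartmontools installed; attribute names vary by vendor, so the two checked below (Reallocated_Sector_Ct and Reported_Uncorrect) are common examples rather than a guaranteed layout.

# Sketch: poll the SMART attributes relevant to this endurance test.
# Assumes Linux with smartmontools installed; attribute names and IDs
# vary by vendor, so the two printed below are illustrative examples.
import subprocess

def smart_attributes(device):
    """Return {attribute_name: raw_value} parsed from 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():  # attribute rows start with a numeric ID
            attrs[fields[1]] = fields[-1]   # last column is the raw value
    return attrs

attrs = smart_attributes("/dev/sda")
print("Reallocated sectors:   ", attrs.get("Reallocated_Sector_Ct", "n/a"))
print("Reported uncorrectable:", attrs.get("Reported_Uncorrect", "n/a"))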
Of course, with just one unit from each model, it is difficult to faithfully compare brands with this marathon. While each drive lasted a ridiculously long time (the worst of the bunch put up with a whole 2,800 full-drive writes), it would not be fair to derive an average lifespan for a model from a single data point. It is fair to say that your SSD probably did not die from a defrag run, but defragging an SSD is still a complete waste of your time and you should never do it.
“Don't defrag”? Unless you own a Samsung with older data, perhaps?
Specifically, a TLC-based Samsung drive that has degraded files. Also, it's much better to overwrite the files (for instance, by moving them to another drive and back again).
Defrag is a bit of a hacky way to fix that issue. It could miss some affected data, because data can be degraded without being fragmented, while performing many irrelevant writes elsewhere. Sure, as I said, irrelevant writes don't seem to matter as much as people thought, but a better solution exists, even for this problem.
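For anyone who wants the overwrite approach without shuffling files between drives by hand, here is a minimal in-place sketch in Python. It assumes each file fits in memory, nothing else has the file open, and you have a backup; the archive path is hypothetical.

# Sketch: refresh stale files by rewriting their contents in place,
# forcing the SSD to program the data into freshly erased cells.
# Assumes each file fits in RAM and is not open elsewhere; back up first.
import os

def refresh_file(path):
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the write actually reaches the drive

for root, _dirs, files in os.walk("/data/old_archive"):  # hypothetical path
    for name in files:
        refresh_file(os.path.join(root, name))

Unlike a defrag pass, this touches every byte of every file exactly once, fragmented or not, and issues no writes beyond the data you asked it to refresh.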
Now there will have to be a benchmark where the drives are filled with data and stored in various environments, with the systems inactive for months, to see how often the error correction algorithms have to run to restore old, un-accessed data, and how much that slows down reads. Maybe this should be done to see which process node/SSD generation is most affected by the data retention problem. Are there any benchmarks that can measure the amount of internal error correction the SSD's controller has to run to keep correct data flowing, relative to reads on blocks that require little or no correction?
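I'm not aware of a consumer tool that reads the controller's internal ECC retry counters directly, but the symptom can be approximated from outside: time sequential reads of long-untouched files against a freshly written file of similar size, since retries and heavier error correction show up as lower throughput. A rough sketch with hypothetical paths; on Linux, drop the page cache first (echo 3 > /proc/sys/vm/drop_caches) so you measure the drive rather than RAM.

# Sketch: compare read throughput on stale vs. freshly written data.
# The gap between the two roughly reflects time the controller spends
# on error correction and re-read retries for the old blocks.
import time

def read_mb_per_s(path, block=1024 * 1024):
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

stale = read_mb_per_s("/data/file_written_last_year")  # hypothetical path
fresh = read_mb_per_s("/data/file_written_today")      # hypothetical path
print(f"stale: {stale:.0f} MB/s   fresh: {fresh:.0f} MB/s")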
The dystopian future will be run by South Koreans. LOL
What about Crucial SSDs? I have a bunch and want to know how they stack up.
That 840 looks like it has been consistently operating the entire time, still maintaining good performance. Most impressive!
Too bad the 850 wasn’t around for this. I wonder if we would have lived long enough ….
I bought a Samsung 840 Pro 256 on a recommendation from Allyn during a PCPer podcast over a year ago. Performance has been great and now it looks like longgg term reliability will be great too. Just thought I’d throw out a very belated ‘thanks’ to PC Perspective for keeping me up to date on current PC tech.
Spinrite 6.0 works great on SSDs. It has brought two SSDs back to life for me. It’s amazing how well old tech still works.