Allyn came back from CES with a pre-release Vertex 2 Pro from OCZ. This new drive comes equipped with a previously unknown Sandforce controller, which gave us some shocking results, especially in the areas of IOPS scaling and fragmentation over time. Check out his detailed review for the full scoop.

Introduction
It’s been a long, hard road for SSDs. Their high cost per GB makes adoption a tough sell, and it makes reviewers that much more demanding of their performance. Speeds were good for the first few generations, but random IOs suffered, and the technology’s true potential was not seen until Intel released its X25 series of drives, which, despite a few quirks, quickly became the standard. The Intel drives scale well under heavy parallel load, leaving much of the competition behind. OCZ came along and made some good strides, innovating as far as the Indilinx controller would let them push their Vertex line, but it was not enough to dethrone the champ.
Starting late last year, OCZ began working with Sandforce. They kept the partnership under wraps until a few weeks back, when Anand broke the news with his excellent write-up. I have to admit my mouth watered a bit when I saw the results. Luckily, we were all headed to CES within a week. During our meeting with OCZ, I ganked us a sample of their Vertex 2 Pro so I could bring some further testing results to our readers.
I like block diagrams, so we’ll kick this off with one for the new Sandforce controller:
This was the best diagram I could find. I realize it looks like a collection of blocked-off features, but since these controllers are effectively systems-on-a-chip, the layout is mostly logical anyway. This new controller promises significant increases in performance, along with an improved ability to detect and correct errors in the data stored in flash.
That’s not just a doubling of error-correcting performance; that’s a 2^14 improvement in correctable-error probability compared with other standard drives. Some may say this is overkill, but when the integrity of user data is on the line, you can never be too safe. This is part of the reason for the ‘overprovisioning’ of these units, meaning there is less space available for user data. Our sample came equipped with 128GB of flash, yet only 100GB was available for partitioning.
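To put those figures in perspective, here’s a quick back-of-the-envelope sketch of the arithmetic. This assumes the 128GB and 100GB figures are in the same unit (vendors sometimes mix GB and GiB, so treat the percentages as approximate):

```python
# Quick arithmetic on the figures quoted above.
# Assumption: raw flash and user-visible capacity use the same unit.

raw_gb = 128    # total flash on the drive
user_gb = 100   # space exposed for partitioning

spare_gb = raw_gb - user_gb
op_ratio = spare_gb / user_gb      # over-provisioning, spare relative to user space
ecc_factor = 2 ** 14               # claimed improvement in correctable-error probability

print(f"Spare area: {spare_gb} GB ({op_ratio:.0%} over-provisioning)")
print(f"ECC improvement factor: {ecc_factor:,}x")
# -> Spare area: 28 GB (28% over-provisioning)
# -> ECC improvement factor: 16,384x
```

In other words, more than a quarter of the flash on this sample is held back from the user, and the claimed error-correction gain is a factor of roughly sixteen thousand, not a mere doubling.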