IOMeter – Average Transaction Time (Rev 1)
Back with the Kingston SSDNow V Series 40GB review, I revised the layout of these graphs to better show SSD latency and access time. First, I removed the HDD results, as they throw off the scale so far that you can’t see any meaningful difference between the SSDs you are actually trying to compare. Second, I reduced the queue depth scale down to 4. In practical terms for a running OS, queue depth is how many commands are ‘stacked up’ on the SSD at a given moment. An SSD is so fast at servicing requests that typical use will rarely see the queue climb past 4, and in the cases where it does, there is so much going on that you are more concerned with IOPS and throughput than with transaction time. The charts below are meant to show how nimble a given SSD is. Think of it as how well a car handles as opposed to how fast it can go.
Some notes for interpreting results:
- Times measured at QD=1 can serve as a more ‘real’ value of seek time.
- A ‘flatter’ line means that drive will scale better and ramp up its IOPS when hit with multiple requests simultaneously.
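To make the relationship between queue depth, transaction time, and IOPS concrete, here is a minimal sketch using Little’s Law (IOPS ≈ queue depth / average latency). The latency figures are placeholders chosen only to illustrate the shape of the curves, not measurements from any drive in this review.

```python
# Rough illustration of Little's Law for storage: IOPS ~= queue_depth / avg_latency.
# The latencies below are made-up placeholder values, not measured results.

def iops_from_latency(queue_depth, avg_latency_ms):
    """Approximate IOPS sustained at a given queue depth and average transaction time."""
    return queue_depth / (avg_latency_ms / 1000.0)

# Hypothetical drives: one whose latency barely rises with queue depth ('flat' line),
# and one whose latency climbs as requests stack up.
flat_drive = {1: 0.10, 2: 0.11, 4: 0.12}    # avg transaction time in ms at each QD
steep_drive = {1: 0.10, 2: 0.18, 4: 0.35}

for qd in (1, 2, 4):
    print(f"QD={qd}: flat ~{iops_from_latency(qd, flat_drive[qd]):,.0f} IOPS, "
          f"steep ~{iops_from_latency(qd, steep_drive[qd]):,.0f} IOPS")
```

The ‘flatter’ drive ramps its IOPS almost linearly with queue depth, which is exactly what the second note above describes.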
Great review Allyn. You mentioned that the RevoDrive would be good for entry-level servers or high-performance workstations. Would the RAID 0 nature of the RevoDrive make you a little wary of placing this into a server environment? Maybe you’re implying there would be failover/redundancy protections in place, but at least with hard drives, I would never consider a RAID 0 setup for our small-business server.
Would you mind elaborating a bit?
Thanks, Chris
For the type of application where super-high IOPS is desired, you reach a point where you simply can’t go any faster with redundancy built into the package. For those situations you’d have to add the redundancy yourself, be it near-line backups or perhaps an identical controller installed in the same server and mirrored by the OS. The Revo is not unique in this bleeding-edge performance niche – the FusionIO products are not redundant either.
As a side note, many VCA controllers have a mode equivalent to RAID-5. This silicon is fast enough to include parity for one (or two) drive failures with a minimal performance hit. OCZ could probably enable this, perhaps for a more purely business-oriented model, but I don’t see that pressing a need for it. It would be a niche within what is already a niche to begin with!
Consider what adding parity would actually protect against: the failure of one of the SandForce channels. That means some form of chip failure – be it the SandForce controller or its bank of flash. There are plenty of other chips on a RevoDrive that are common to the unit as a whole, and a failure of one of those will still take the whole card down (a single point of failure). If redundancy were that important for a very high performance application, I would rather skip the parity calculation overhead and capacity reduction and go straight for the ‘two of everything’ approach. Added bonus – a pair of these mirrored at the OS level would have the write performance of a single unit, but read performance would be doubled, so you would *further* increase performance and add redundancy at the same time. That’s a win-win not possible with overhead-inducing RAID / parity-based solutions.
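To put rough numbers on the mirrored-pair idea above, here is a minimal sketch of the math. The per-card throughput figures are placeholders used only to show the shape of the estimate, not measured results for any RevoDrive model.

```python
# Back-of-the-envelope estimate for an OS-level mirror (RAID 1) of two identical cards.
# Writes go to both cards, so write throughput stays at roughly a single card's rate;
# reads can be split across both cards, so read throughput can approach double.

def mirror_estimate(card_read_mbs, card_write_mbs, cards=2):
    """Idealized throughput for a software mirror of identical cards (placeholder model)."""
    return {
        "read_mbs": card_read_mbs * cards,   # reads distributed across members
        "write_mbs": card_write_mbs,         # every write hits every member
    }

# Placeholder per-card numbers, not measurements.
print(mirror_estimate(card_read_mbs=1500, card_write_mbs=1200))
# -> {'read_mbs': 3000, 'write_mbs': 1200}
```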
Also note: You could buy 2 of the 960GB models for *less* than a single ioDrive 160. Ouch.
Al
It’s nice, but for $1600 you can get a 3Ware 9750 controller and four Vertex 3 120GB drives: the same capacity, 512MB of buffer, and a wider interface, for about $100 less. I think I’d rather go that way.
The interface is irrelevant if the controller itself is the bottleneck. If the 3Ware can manage to peg the PCIe interface in sequential reads, that’s great, but it would be only marginally better than what the OCZ unit can do (as it’s already nearly saturating all 4 SSDs).
The LSISAS2108 chip (on the 3Ware 9750) is only rated at 1.1GB/sec sequential writes, while the Revo3 hits close to 1.6GB/sec – the Revo3 is nearly 50% faster there.
The LSISAS2108 will very likely come nowhere near the Revo3’s 200k IOPS rating in random access. Every RAID solution I’ve tested tops out at roughly the peak IOPS of *1* good SSD (~50k IOPS), due to the latency added by managing the cache. 512MB RAM caches are for HDDs, not SSDs. While I haven’t personally tested the LSISAS2108 / 3Ware 9750, I’d be shocked if it could break 100k.
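For anyone wanting to sanity-check the comparison, here is a minimal sketch of the arithmetic. The figures are the rated/observed numbers quoted above; the per-controller IOPS ceiling is the rule of thumb from my RAID testing, not a measurement of the 9750 itself.

```python
# Quick comparison using the figures quoted above (no new benchmark data).
revo3_seq_write_gbs = 1.6       # observed, close to
lsisas2108_seq_write_gbs = 1.1  # rated

advantage = (revo3_seq_write_gbs / lsisas2108_seq_write_gbs - 1) * 100
print(f"Revo3 sequential-write advantage: ~{advantage:.0f}%")  # ~45%

# Rule-of-thumb random-access ceiling for cached RAID controllers vs. the Revo3 rating.
raid_controller_iops_ceiling = 50_000   # roughly one good SSD's worth
revo3_rated_iops = 200_000
print(f"Rated IOPS gap: ~{revo3_rated_iops / raid_controller_iops_ceiling:.0f}x")  # ~4x
```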
That does raise a question: this and the previous RevoDrives are internal RAID 0 devices, so why not test them against RAID setups? Of course it’s going to blow a lone SSD out of the water. So would a quartet of those same SSDs attached to a RAID controller, so that doesn’t, by itself, constitute a reason to get one of these.
I heard the same comment about the original Revo too: why not just get a RAID controller and a couple of SSDs instead? Some data comparing those options would be much more useful than including an HDD for us to laugh at.
I have had plenty of experience with the Revo and Revo2, and I wish to offer a word of caution about using the Revo3 as a boot drive. Despite many weeks of testing on several motherboards and adjusting various BIOS settings, the Revos could never become stable as a boot drive; somehow, files would always become corrupt. As a secondary drive, for the pagefile, temp files, and general work files, the Revos are mind-blowingly fast. No doubt the numbers for the Revo3 will impress anyone.
I would only get this storage as a boot drive, so I wonder what Allyn has to say about that. I would note in the review that it is a bootable drive, though I am not sure how you would test it for instability as a boot drive. I haven’t heard any news about issues with corruption and such, but I will keep this in mind.
I like this drive, though; I hate to hear bad things about it.
I’ve tested all three Revos in a boot configuration and never noted any corruption issues. The Revo / x2 were tested under XP and XP64. I’ve only tested the Revo3 under 32-bit Windows 7, as we don’t yet have signed 64-bit drivers, which precludes install without additional hackery.
That is one sexy piece of silicon lol
Well, I have been using a RevoDrive as a boot drive for over a year and it has never failed a boot or damaged files…
Are you aware of this, Allyn?
“However I was wondering about TRIM support, as the last time I checked it needs to be supported by Microsoft. A question that I had to verify with OCZ. Though the Revo3 card supports TRIM, because the architecture is based on SCSI, the Microsoft Windows StorPort architecture currently does not support either TRIM or SCSI UNMAP. As such, these commands are not generated by the OS, which of course prevents VCA from executing them. OCZ is working with Microsoft to have this functionality enabled as soon as possible. So that will take a RAID driver update alright, but that should not effect your data already on the drive.”
Source: http://www.guru3d.com/article/ocz-revodrive-3-x2-review/16
Anandtech and tomshardware say something similar.
Apparently TRIM only works on Linux as of now.
Yes. TRIM won’t pass through StorPort until Microsoft adds support for this functionality to Windows. This will likely come in the form of a hotfix, but the ETA on that is totally up to Microsoft.
As with other SF-controlled devices, keep in mind they are very resilient to the performance hit seen when used without TRIM. The same applies to the Revo3.
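Since TRIM currently only passes through under Linux, here is a minimal sketch of how one might check whether the kernel exposes discard support for a block device. The device name is a placeholder; this only reads sysfs and does not confirm that the VCA stack actually honors the commands.

```python
# Minimal check of Linux discard (TRIM/UNMAP) support via sysfs.
# 'sdX' is a placeholder device name; adjust for the device the Revo3 presents.
from pathlib import Path

def discard_supported(device="sdX"):
    """Return True if the kernel reports a nonzero max discard size for the device."""
    path = Path(f"/sys/block/{device}/queue/discard_max_bytes")
    try:
        return int(path.read_text().strip()) > 0
    except (FileNotFoundError, ValueError):
        return False

if __name__ == "__main__":
    print("Discard supported:", discard_supported("sdX"))
```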
What an insightful and well-written review. It’s good to see PC Perspective attracting such great talent! I look forward to more of your articles.