1.6TB of PCIe 2.0 x8 goodness!
Back in June of last year, OCZ released the RevoDrive, followed up rather quickly by the RevoDrive x2. A further jump was made with the introduction of VCA 2.0 architecture with the RevoDrive 3 and 3 x2. Each iteration pushed the envelope further as better implementations of VCA were introduced, using faster and greater numbers of PCIe channels, linked to faster and greater numbers of SandForce controllers.
While the RevoDrive line was tailored more toward power users and light server use, OCZ has taken its VCA 2.0 solution to the next level entirely, setting its sights squarely on full-blown enterprise deployment. With that, we introduce the OCZ Z-Drive R4:
Continue to the full review for all the details!
We covered VCA 2.0 in greater depth in our RevoDrive 3 x2 Review, but here’s a quick recap:
RevoDrives 1 and 2 used a simple SiliconImage RAID-0 solution, which scaled rather nicely at greater queue depths, something not accomplished by prior integrated SSD RAID solutions. The RevoDrive 3 improved greatly upon this with OCZ’s VCA 2.0 (Virtualized Controller Architecture) SuperScale controller:
In essence, the new VCA controller acts like a supercharged traffic cop. It’s able to intelligently handle multiple IO requests from the host (PC) side and arrange them, on-the-fly, in the best possible order before passing them on to the SSD controllers. This minimizes the chance that lag introduced by any single controller will negatively impact the IO performance of the unit as a whole.
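OCZ's actual VCA scheduling algorithm is proprietary, but the "traffic cop" idea above can be sketched in a few lines. The toy model below (all names are my own, purely for illustration) dispatches each incoming request to whichever controller currently has the least queued work, so one laggy controller can't back up the whole device:

```python
import heapq

def dispatch(requests, num_controllers=8):
    """Toy load-aware dispatcher: assign each request to the controller
    with the least queued work.  `requests` is a list of
    (request_id, cost) pairs, where cost stands in for how long the
    request ties up a controller.  Returns {request_id: controller}."""
    # Min-heap of (total_queued_cost, controller_id)
    heap = [(0, c) for c in range(num_controllers)]
    heapq.heapify(heap)
    assignment = {}
    for req_id, cost in requests:
        queued, ctrl = heapq.heappop(heap)   # least-loaded controller
        assignment[req_id] = ctrl
        heapq.heappush(heap, (queued + cost, ctrl))
    return assignment

# Eight equal-cost requests fan out across all eight controllers
print(dispatch([(i, 1) for i in range(8)]))
```

With equal-cost requests this degenerates to plain round-robin striping; the payoff comes when costs vary, since new work naturally routes away from a controller that's bogged down.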
The Z-Drive uses a newer version of the VCA 2.0 solution. Compared with the RevoDrive 3 x2, this new chip doubles PCIe bandwidth (x8 vs. x4), and also doubles the number of SF-2200 controllers it can handle (again, 8 vs. 4). Pulling off a full doubling of the ins and outs is not an easy task, and achieving effective scaling from that doubling is even harder. Let’s see if this pudding holds its proof.
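For context on what that x4-to-x8 jump buys in raw ceiling, here's a back-of-the-envelope calculation using the published PCIe 2.0 figures (5 GT/s per lane with 8b/10b line encoding), before any protocol overhead:

```python
GT_PER_S = 5.0                 # PCIe 2.0 raw signaling rate per lane (GT/s)
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line encoding: 8 data bits per 10 bits on the wire

def pcie2_bandwidth_gbs(lanes):
    """Theoretical per-direction PCIe 2.0 bandwidth in GB/s
    (ignores packet/protocol overhead)."""
    return lanes * GT_PER_S * ENCODING_EFFICIENCY / 8  # bits -> bytes

print(pcie2_bandwidth_gbs(4))  # RevoDrive 3 x2 (x4): 2.0 GB/s
print(pcie2_bandwidth_gbs(8))  # Z-Drive R4 (x8):     4.0 GB/s
```

So the x8 link roughly doubles the theoretical ceiling to about 4 GB/s each way; whether the eight SandForce controllers can actually fill that pipe is what the benchmarks will show.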
Finally, here’s an OCZ-produced video explaining VCA 2.0 even further:
This drive is so fast that it can actually change the way programmers approach problems. With disk speed a sizable fraction of RAM speed, I/O stops being a bottleneck. The big users of this will probably be high-performance clusters, where 4-CPU servers (each with 8+ cores) exist. This incredibly fast storage will do wonders for those systems.
I’m kinda guessing that CPUs will be the bottleneck for the server crowd, and this’ll push CPU development that much further. (Hey, I can hope!)
I really hope developers won’t stop optimizing their code. Data access can’t ever be fast enough. Wherever you’re running huge databases with tons of users, you’ll be happy for every single timesaving tick.
I think the biggest bottleneck for the foreseeable future is still network transfer speed, which also puts a serious onus on programmers: optimizing disk reads/writes to fill out TCP packets as much as possible, and not sending extraneous information over the network, is still going to be the key to successful communication with servers. At least until new standards for network communication actually come into play.
Thank you for your effort on making this review, but I seriously don’t see the point. Did you really think we (the readers) can afford such a thing? The item you reviewed is listed at $11,200; with that kind of money I’ll have all the high-end PCs I need for the next 15 years minimum, minus this SSD.