IOMeter – Average Transaction Time (Rev 1)
Back with the Kingston SSDNow V Series 40GB review, I revised the layout of these graphs to better show SSD latency and access time. First, I removed the HDD results, as they throw the scale too far off to tell any meaningful difference between the SSDs you are trying to focus on. Second, I reduced the queue depth scale down to 4. In practical terms, on a running OS, queue depth is how many commands are 'stacked up' on the SSD at a given time. An SSD is so fast at servicing requests that typical use will rarely see the queue climb past 4, and in the cases where it does, there is so much going on that you are more concerned with IOPS and throughput than with transaction time. The charts below are meant to show how nimble a given SSD is. Think of it as how well a car handles as opposed to how fast it can go.
Some notes for interpreting results:
- Times measured at QD=1 can serve as a more ‘real’ value of seek time.
- A ‘flatter’ line means that drive will scale better and ramp up its IOPS when hit with multiple requests simultaneously.
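The link between the two notes above is essentially Little's law: at a given queue depth, throughput (IOPS) is roughly queue depth divided by average transaction time, so a drive whose latency line stays flat scales its IOPS linearly with outstanding requests. A minimal Python sketch (the latency figures here are made up for illustration, not measured from any drive in this review):

```python
def estimated_iops(queue_depth: int, avg_transaction_time_s: float) -> float:
    """Rough IOPS estimate from Little's law: IOPS ~= QD / latency."""
    return queue_depth / avg_transaction_time_s

# A drive that holds a flat 0.1 ms line from QD=1 to QD=4 quadruples
# its IOPS; one whose latency doubles by QD=4 only doubles them.
flat_drive = estimated_iops(4, 0.0001)        # latency unchanged at QD=4
scaling_poorly = estimated_iops(4, 0.0002)    # latency doubled at QD=4
print(flat_drive, scaling_poorly)             # 40000.0 20000.0
```

This is why a 'flatter' line in the charts below translates directly into better IOPS scaling under load.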
Here we see that increased bandwidth translates to reduced latency when compared to the other SF-based solutions. This can only go so far, as there is still inherent latency for a given SF controller. Since that latency is higher than that of any ioDrive, we won't see the ioDrives beaten in this respect until OCZ equips their SuperScale VCA with a Flux Capacitor.
This drive is so fast that it can actually change the way programmers approach problems. With disk speeds at a sizable fraction of RAM speed, I/O stops being a bottleneck. The big users of this will probably be High Performance Computing clusters built from four-CPU servers (each with 8+ cores). This incredibly fast storage will do wonders for those systems.
I’m kinda guessing that CPUs will become the bottleneck for the server crowd, and this’ll push CPU development that much further. (Hey, I can hope!)
I really hope developers won’t stop optimizing their code. Data access can’t ever be fast enough. Whenever you’re running huge databases with tons of users, you’ll be thankful for every single time-saving tick.
I think the biggest bottleneck for the foreseeable future is still network transfer speed, which also puts a serious onus on programmers to optimize their disk reads/writes. Filling out TCP packets as much as possible, and not sending extraneous information over the network, is still going to be the key to successful communication with servers, at least until new standards for network communication actually come into play.
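For what it's worth, the packet-filling idea above is easy to sketch: instead of sending each small record in its own tiny packet, coalesce records into chunks near the segment size before handing them to the socket. A minimal Python illustration (the `mss` value of 1448 bytes and the record sizes are assumptions for the example, not anything from the review):

```python
def batch_records(records: list[bytes], mss: int = 1448) -> list[bytes]:
    """Coalesce small records into chunks no larger than mss bytes,
    so each send fills a TCP segment instead of shipping one tiny
    packet per record."""
    chunks: list[bytes] = []
    buf = bytearray()
    for rec in records:
        # Flush the buffer when the next record would overflow the segment.
        if buf and len(buf) + len(rec) > mss:
            chunks.append(bytes(buf))
            buf = bytearray()
        buf += rec
    if buf:
        chunks.append(bytes(buf))
    return chunks

# Twenty 100-byte records fit 14 per ~1448-byte segment: two sends
# instead of twenty.
chunks = batch_records([b"x" * 100] * 20)
print([len(c) for c in chunks])  # [1400, 600]
```

Each chunk would then go out via a single `sendall()`, cutting per-packet overhead on the wire.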
Thank you for your effort on making this review, but I seriously don’t see the point. Did you really think we (the readers) can afford such a thing? The item you reviewed is listed at $11,200. With that kind of money I’ll have all the high-end PCs I need for the next 15 years, minimum. Minus this SSD.