IOMeter v2006.07.27 – IOps
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developers Forum (IDF) on February 17, 1998; it has since become widespread within the industry.
Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Light desktop usage sees queue depth (QD) figures between 1 and 4. Heavy / power user loads run at 8 and higher. Most SSDs are not capable of effectively handling anything higher than QD=32, which explains the plateaus.
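To make the queue depth idea concrete, here is a minimal Python sketch (not how Iometer itself issues I/O; it uses the OS's asynchronous I/O facilities) that keeps a fixed number of 4KB random reads in flight against a scratch file. The file size, `queue_depth`, and I/O count are illustrative values only:

```python
import os
import random
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # 4KB, the transfer size used in these tests


def random_read_test(path, queue_depth, num_ios):
    """Issue num_ios random 4KB reads, keeping roughly
    queue_depth requests in flight via a thread pool."""
    size = os.path.getsize(path)
    offsets = [random.randrange(0, size - BLOCK) for _ in range(num_ios)]

    def one_read(offset):
        # Each worker opens its own handle to avoid sharing a file position.
        with open(path, "rb") as f:
            f.seek(offset)
            return len(f.read(BLOCK))

    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        return sum(pool.map(one_read, offsets))


# Build a 1 MiB scratch file and read it back at QD=4 (a light desktop load).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1 << 20))
total = random_read_test(tmp.name, queue_depth=4, num_ios=64)
os.unlink(tmp.name)
```

Raising `queue_depth` is what moves a drive along the X axis of charts like these; past the drive's saturation point, adding more outstanding I/Os only adds latency.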
Regarding why we use this test as opposed to single-tasker tests like 4KB random reads or 4KB random writes, well, computers are just not single taskers. Writes take place at the same time as reads. We call this mixed-mode testing, and while a given SSD comes with side-of-box specs that boast what it can do while being a uni-tasker, the tests above tend to paint a very different picture.
The new and previous gen Vector models perform right at the top in these tests. It was good to see the lower capacity model perform right up there with the larger ones. Normally the reduced number of available channels shows negatively in raw IOPS performance. Not the case here.
In case you were curious as to why the two 840 models appear to be dogfighting each other, the charts are now presented in the order in which they are run. As they are in a back-to-back sequence, with no breathing room given to the SSD between the tests, the 500GB 840 EVO fills its SLC cache in the middle of the database test, and the reduced write speeds impact performance through the final workstation sequence as well. A smaller capacity EVO would fill its cache sooner, while the 1TB EVO made it through the entire sequence at full speed.
That is one enthusiastic enthusiast. Is anyone else having Aperture Laboratories training film flashbacks?
Flashback? Like this >
http://www.youtube.com/watch?v=b7rZO2ACP3A&feature=share&list=PL2617C5703AFF836B
Actually no, but I do want a shirt like that so I can do the same in my office! 😀
Seriously Allyn, still using PCMark05 for what IMHO is the most important real-world test, the trace test?
Record your own traces, or get a newer PCMark version, because 05 hasn't cut it for years now.
Or do you have a special reason to still use it?
I actually run 3 different PCMark versions on each SSD, but the later ones give a whole lot of data that seems to dance around the original point of just how fast (generally) the SSD will be. In that respect, PCMark05 does just as well at showing overall differences as Vantage or 7 does.
More importantly, the trace tests used by PCMark (all versions) are only run for a few passes. You'd have to re-run the test dozens or hundreds of times for a given SSD to reach any sort of true steady state value, and that value would still not be real world: each time you re-run the test, it re-writes the test file (sequentially), which partially defragments that portion of the SSD. For example, a *real* Windows start-up would be reading files that were randomly written to the SSD, not chunks of a large sequentially-written test file.
I'm working on a 'standardized' trace to play back and benchmark, but the catch with those is that they are 100% compressible data, so drives with inline compression give artificially inflated results, which is bad news.
Sometimes the dated benches just work better than the newer versions. This applies to IOMeter as well, where different versions were more / less compressible as far as the data written, which translated to artificially high results on some models.
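To see why the data pattern matters so much for drives with inline compression, here is a quick Python illustration of the gap between a zero-filled buffer (highly compressible, like the data some benchmark versions write) and random bytes, which are effectively incompressible:

```python
import os
import zlib

# 1 MiB of zeros vs. 1 MiB of random bytes.
compressible = bytes(1 << 20)
incompressible = os.urandom(1 << 20)


def ratio(data):
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data)) / len(data)


print(f"zero-fill compresses to {ratio(compressible):.4f} of original size")
print(f"random data compresses to {ratio(incompressible):.4f} of original size")
```

A compressing controller (e.g. SandForce-based drives) barely has to touch the flash for the first pattern, which is how artificially inflated benchmark numbers happen.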
That Enthusiast user looks so happy. If only we could all be like that.
Good review as always Allyn, happy to see OCZ focusing on longevity. Though it is apparent that the SATA 3 pipeline is now a barrier.
How long is the endurance? 50GB a day does not mean much if it is only rated for 50GB a day for 1 month.
These companies need to list the total write endurance and not pull the borderline false-advertising trick you'll find with some products (e.g. the VueZone advertising a 6-month battery life, but only if it is placed in a location where there will be at most 5 minutes of movement a day).
edit: it seems they list it as 50GB per day for 5 years, though the warranty does not cover writing it to death.
They clearly state 50GB per day for 5 years, so they should honor the warranty at that level of writes. I do agree that's pretty steep for a consumer drive. It's doubtful a typical user will hit 10% of that.
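For reference, a back-of-envelope conversion of that rating into total host writes (decimal TB, ignoring leap days):

```python
# OCZ's stated endurance: 50 GB of host writes per day over the 5-year warranty.
GB_PER_DAY = 50
YEARS = 5

total_tb = GB_PER_DAY * 365 * YEARS / 1000  # decimal TB
print(f"Rated endurance: {total_tb:.2f} TB of total host writes")
# → roughly 91 TB over the warranty period
```

That total is well beyond what a typical consumer workload will ever write, which is the point being made above.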
I still cannot see how SSDs to be used for storage, 500GB and up, priced around $1.00/GB, are not too expensive for most of us. It seems that is where the price has been for quite some time. I get that you made the initial MSRP a negative, but you did so with a caveat. I am not suggesting that SSD pricing should be in line with hard drives, but the premium for SSDs to be used for storage seems way too much after all these years. From all the podcasts I have listened to over the years, I get that you have enough money to buy everything ten times over, but you must remember that most of us do not. I only wish demand would support me, but obviously the SSD manufacturers see no reason to lower the price. Wish you would stop condoning the pricing, especially when they keep using NAND with shorter lifespans to justify a less expensive drive and then not make the drive less expensive.
You have to wonder what will become of OCZ, with its stock falling 40% in a short time.
http://translate.google.com/translate?sl=auto&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.iopanel.net%2Fp10790%2F&act=url
Saw the game level load times on Tech Report… I'll keep my WDCB 1TB.
If I'm waiting 12-13 seconds for an SSD, I can wait another 8 seconds… big deal.
If the SSD loaded INSTANTLY… that would be different.
Doesn't say if you are using W8.1 or 7.
According to Paul Thurrott, 8 is massively faster at file copy compared to 7. Night/day difference.
Thurrott is right – for copies done within the Explorer GUI. Our test is done via the command line, and is not subject to that speed difference.
I just want an answer to this question, please. Say I get an SSD and load it with my OS and most-used / favorite programs. At that point the drive is 80-90% full (because I'm cheap and bought a small SSD, 64-128GB). If I use the SSD over the next five years, will the remaining 10-20% of the drive be hammered with reads/writes because it's the only spare space, or do drives actively shift data around (even the parts of the flash storing my Win 7 DLL files, for example) so the same spare blocks don't get used over and over? Quoting a warranty as if a drive were completely empty and filled with x GB every day is one thing; in the real world my SSD would be nearly full, with only a small amount of space spare, every day. Thanks to anyone who has/knows the answer.
I am in the same boat. I am sorry I do not have an answer, but I would certainly hope the manufacturers wrote algorithms in the firmware to guard against this. It also occurs to me that I don't write gigs of data to my small OS/program SSD. Actually, when I think about it, I doubt there are that many writes to the OS/program drive, but rather a lot of reads. One of my Intel X25-M 80GB SSDs that I have had for years and use for this purpose is still close to 100% life and top health according to the Intel SSD Toolbox. Having said all that, I think we are probably fine.
SSDs implement wear leveling. Data that is 'stagnant' is not really so. The SSD firmware will juggle this data around as other data is written to your 10-20%. In fact, if you filled / emptied that 20% 5 times in a row, you would have written approximately one pass over the *entire* area of the SSD.
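A quick back-of-envelope on that last point, using the hypothetical small drive from the question above:

```python
# With wear leveling, writes to the free 20% of a drive are spread across
# ALL of its flash, so filling and emptying that 20% five times amounts to
# about one full pass over the whole drive.
capacity_gb = 128      # hypothetical small SSD from the question
free_fraction = 0.20   # the 20% left free
cycles = 5             # fill/empty that space five times

written_gb = capacity_gb * free_fraction * cycles
drive_passes = written_gb / capacity_gb
print(f"{written_gb:.0f} GB written ≈ {drive_passes:.1f} full drive passes")
```

At typical NAND endurance ratings of thousands of program/erase cycles per cell, spreading writes this way is what makes a nearly-full drive survive years of use.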
Thanks Allyn for the answer.
On paper it looks pretty good. After all the reliability issues this company has had in the past with other products, I might have held off for a while to look at actual RMA numbers.
Personally, I hope they have cleaned it up and we have another good competitor in the mix. However, I've been burned enough; I'm waiting before I jump on this one.
Allyn- you do good work.