Performance Over Time and TRIM
We verified that the new Vector 150 responded correctly to TRIM requests, and did so within a reasonable turnaround, so nothing to report there. To test long-term performance scenarios where the OS is not sending TRIM commands to cover large blocks of data being randomly accessed in-place (such as VM HDD image files), I hit the Vector 150 with random accesses and with no TRIM issued. I then ran a couple of HDTach passes. The way this works is that the first pass will show a relative slowdown as the drive pushes through the fragmented areas of flash. The second pass should then (in theory) show speeds returning to rated specs across the flash area. Here goes:
The Vector 150 performed as expected here. No surprises. We are looking more for a nice flat line on the second pass, more so than the ultimate speed attained. Some SSDs don't agree with the write method HDTach uses and, as a result, actually slow down a bit on the second pass. This does not reflect real-world performance of the SSD.
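For the curious, the workload described above can be roughly sketched in Python. This is a hypothetical approximation against a scratch file, not the raw-device access and per-region throughput measurement a real test uses: random 4 KB writes with no TRIM ever issued, followed by two sequential read passes standing in for the HDTach runs.

```python
import os
import random

# Hypothetical sketch of the no-TRIM fragmentation workload: hammer a
# region with 4 KB writes at random offsets, then read it back
# sequentially twice (the "two HDTach passes").
PATH = "scratch.bin"        # stand-in for the SSD under test
REGION = 64 * 1024 * 1024   # 64 MB scratch region (a real test spans the drive)
BLOCK = 4096

with open(PATH, "wb") as f:  # pre-allocate the region
    f.truncate(REGION)

with open(PATH, "r+b") as f:  # random in-place writes, no TRIM issued
    for _ in range(10_000):
        f.seek(random.randrange(REGION // BLOCK) * BLOCK)
        f.write(os.urandom(BLOCK))

for _ in range(2):  # pass 1 shows the slowdown, pass 2 the recovery
    with open(PATH, "rb") as f:
        while f.read(1 << 20):
            pass

os.remove(PATH)
```

Timing each read pass (rather than just issuing them, as this sketch does) is what turns the loop into an actual measurement.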
That is one enthusiastic enthusiast. Is anyone else having Aperture Laboratories training film flashbacks?
Flashback? Like this >
Actually no, but I do want a shirt like that so I can do the same in my office! 😀
Seriously Allyn, still using PCMark05 for what is imho the most important real-world test, the trace test?
Record your own traces, or get a newer PCMark version, because 05 has not cut it for years now.
Or do you have a special reason to still use it?
I actually run 3 different PCMark versions on each SSD, but the later ones give a whole lot of data that seems to dance around the original point of just how fast (generally) the SSD will be. In that respect, PCMark05 does just as well at showing overall differences as Vantage or 7 does.
More importantly, the trace tests used by PCMark (all versions) are only run for a few passes. You'd have to re-run the test dozens or hundreds of times for a given SSD to reach any sort of true steady state value, and that value would still not be real world, as each time you re-run the test and it re-writes the test file (sequentially), it partially defragments that portion of the SSD. For example, a *real* windows start-up would be reading files that were randomly written to the SSD, not chunks of a large sequentially-written test file.
I'm working on a 'standardized' trace to play back and benchmark, but the catch with those is that they are 100% compressible data, so drives with inline compression give artificially inflated results, which is bad news.
Sometimes the dated benches just work better than the newer versions. This applies to IOMeter as well, where different versions were more / less compressible as far as the data written, which translated to artificially high results on some models.
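The compressibility problem described above is easy to demonstrate. A minimal sketch using Python's `zlib` as a stand-in for a controller's inline compression (the actual hardware algorithm is different, but the effect is the same): repetitive benchmark-style data shrinks dramatically, while random user-style data does not.

```python
import os
import zlib

# Why 100%-compressible trace data inflates results on drives with
# inline compression: the controller can shrink repetitive data
# dramatically, but random data barely at all.
MB = 1 << 20
repetitive = b"benchmark" * (MB // 9)  # stand-in for a trace-playback file
random_data = os.urandom(MB)           # stand-in for real user data

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

print(f"repetitive: {ratio(repetitive):.3f}")   # tiny fraction of original
print(f"random:     {ratio(random_data):.3f}")  # ~1.0, incompressible
```

A drive that compresses internally writes far less flash for the repetitive case, so its benchmark numbers look better than they would under real workloads.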
That Enthusiast user looks so happy. If only we could all be like that.
Good review as always Allyn, happy to see OCZ focusing on longevity. Though it is apparent that the SATA 3 pipeline is now a barrier.
How long is the endurance? 50GB a day does not mean much if it is only rated for 50GB a day for 1 month.
These companies need to list the total write endurance and not pull the borderline-false-advertising trick that you will find with some products (e.g. the VueZone advertising a 6-month battery life, but you only get the rated life if it is placed in a location where there will be at most 5 minutes of movement a day).
edit: it seems they list it as 50GB per day for 5 years, though the warranty does not cover writing it to death.
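Worked out, the rated figure quoted above is substantial. A quick calculation (the 240GB capacity is an assumed example model, not taken from the article):

```python
# Endurance math for the spec quoted above: 50 GB/day over the
# 5-year warranty term.
GB_PER_DAY = 50
YEARS = 5
CAPACITY_GB = 240  # hypothetical example capacity, not from the article

total_gb = GB_PER_DAY * 365 * YEARS  # 91,250 GB of rated host writes
total_tb = total_gb / 1000           # ~91 TB total
dwpd = GB_PER_DAY / CAPACITY_GB      # ~0.21 drive writes per day

print(f"total endurance: {total_tb:.2f} TB, {dwpd:.2f} DWPD")
```

That works out to roughly 91 TB of total writes, which is the single number the commenter is asking vendors to publish.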
They clearly state 50GB per day for 5 years, so they should honor the warranty at that level of writes. I do agree that's pretty steep for a consumer drive. It's doubtful a typical user will hit 10% of that.
I still cannot see how SSDs to be used for storage, 500GB and up, priced around $1.00/GB, are not too expensive for most of us. It seems that is where the price has been for quite some time. I get that you made the initial MSRP a negative, but you did so with a caveat. I am not suggesting that SSD pricing should be in line with hard drives, but the premium for SSDs used for storage seems way too high after all these years. From all the podcasts I have listened to over the years, I get that you have enough money to buy everything ten times over, but you must remember that most of us do not. I only wish demand would support me, but obviously the SSD manufacturers see no reason to lower the price. I wish you would stop condoning the pricing, especially when they keep using NAND with shorter lifespans to justify a less expensive drive and then not make the drive less expensive.
You have to wonder what will become of OCZ, with its stock falling 40% in a short time.
Saw the game level load times on Tech Report… I'll keep my WDCB 1TB.
If I'm waiting 12-13 seconds for an SSD, I can wait another 8 seconds… big deal.
If the SSD loaded INSTANTLY… that would be different.
Doesn't say if you are using W8.1 or 7.
According to Paul Thurrott , 8 is massively faster on file copy as compared to 7. Night/day difference.
Thurrott is right – for copies done within the Explorer GUI. Our test is done via the command line, and is not subject to that speed difference.
I just want an answer to this question, please. If I get an SSD and load it with my OS and most used / fav. programs, at that point the drive is 80-90% full (because I'm cheap and bought a small 64-128GB SSD). If I use the SSD over the next five years, will the remaining 10-20% of the drive be hammered with reads/writes because it's the only spare space, or do drives actively shift data around (even the parts of the flash storing my Win 7 DLL files, for example) so the same spare blocks don't get used over and over? Quoting a warranty as if the drive is completely empty and filled with xGB every day is one thing; in the real world I would have my SSD filled, with only a small amount of space spare, every day. Thanks to anyone who has/knows the answer.
I am in the same boat. I am sorry I do not have an answer, but I would certainly hope the manufacturers wrote algorithms into the firmware to guard against this. It also occurs to me that I don't write gigs of data to my small OS/program SSD. Actually, when I think about it, I doubt there are that many writes to the OS/program drive, but rather a lot of reads. One of my Intel X25-M 80GB SSDs that I have had for years and use for this purpose is still close to 100% life and top health according to the Intel SSD Toolbox. Having said all that, I think we are probably fine.
SSDs implement wear leveling. Data that is 'stagnant' is not really so. The SSD firmware will juggle this data around as other data is written to your 10-20%. In fact, if you filled and emptied that 20% 5 times in a row, you would have written approximately one pass to the *entire* area of the SSD.
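The arithmetic behind that answer, spelled out (the 128GB capacity is the hypothetical small drive from the question above):

```python
# With wear leveling, writes to the 20% free space are spread across
# the whole drive, so filling and emptying that 20% five times costs
# roughly one full-drive erase pass, not 25 passes over the same 20%.
capacity_gb = 128    # hypothetical small SSD from the question
free_fraction = 0.20
fill_cycles = 5

host_writes_gb = capacity_gb * free_fraction * fill_cycles  # 128 GB written
full_passes = host_writes_gb / capacity_gb                  # 1.0 pass

print(f"{host_writes_gb} GB written = {full_passes:.1f} full-drive pass")
```

This is why endurance is rated in total drive writes rather than per-region wear: the firmware ensures no single group of blocks absorbs all the traffic.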
Thanks Allyn for the answer.
On paper it looks pretty good. After all the reliability issues this company has had in the past with other products, I might have held off for a while to look at actual RMA numbers.
Personally, I hope they have cleaned it up and we have another good competitor in the mix. However, I've been burned enough. I'm waiting before I jump on this one.
Allyn- you do good work.