SATA 6G shows up early
SATA 6G technology is being paraded around by motherboard manufacturers as one of the main reasons to upgrade your motherboard in the coming months, but do the advantages really make the upgrade worthwhile? And how do these current SATA 6G implementations actually work? We look at the ASUS P7P55D Premium and an early sample of a Seagate SATA 6.0 Gb/s hard drive to see how the hype stands up.
The SATA 6G Technology
In August we took an early look at the ASUS P7P55D Premium motherboard, a P55 chipset offering that supports the new Intel Lynnfield processors under the Core i7/i5 brands. You might remember that ALL of the early P55 motherboards were going to support SATA 6G, but the feature was suddenly pulled on almost every board due to mysterious performance issues. The P7P55D Premium was one of the few boards that retained the feature, thanks to some additional hardware used by ASUS engineers.
I should note that some other SATA 6G implementations coming on other P55 motherboards are far less appealing for enthusiasts and gamers. One such option will apparently borrow some of the PCIe 2.0 lanes coming from the Lynnfield CPU; that would force the primary graphics card to run at x8 PCIe 2.0 speed and would rule out multi-GPU configurations on that motherboard. We are still getting details on what both Gigabyte and MSI are doing for SATA 6G in the future, so stay tuned!
The Seagate Barracuda XT Hard Drive
To get some early impressions of SATA 6G technology I was able to get hold of a sample of the new Barracuda XT hard drive from Seagate. This is the first HDD out that will support SATA 6G connectivity. I should note of course that this is a very early sample that will no doubt undergo some firmware changes between now and retail availability. Still, we couldn’t pass up the opportunity to test it out now!
There are some notes you need to read before looking at our performance results. We tested three different storage configurations:
- P55 Chipset (SATA-II, 3.0 Gb/s)
- Marvell 9123 Driver 1008 (SATA 6G, 6.0 Gb/s)
- Marvell 9123 Driver 1027 (SATA 6G, 6.0 Gb/s)
The two drivers produced very different results, as you will see below, not only on our Barracuda XT 2.0 TB hard drive but also on the Barracuda 7200.11 1.5 TB hard drive and the Intel X25-M 80GB SSD. The reason for this difference is that the 1027 driver implements a new caching system that uses system memory to attempt to improve performance. Here is a better explanation from resident PCPer storage expert, Allyn Malventano:
To kick this off, let's quickly go over the different types of caching:
- Read Cache: There are different types of caching, but the simplest to explain is where data read from the device is passed through memory on a ‘first in, first out’ basis. If a location is re-read and has not been altered since the last read attempt, and if that data is still in the cache, it is called a ‘hit’ and the cache provides the data to the system (much faster than the hard disk could have). If the requested data is not in the cache it is called a ‘miss’ and data retrieval takes place the old (slow) way.
- Write-Through Cache: Writes pass through the cache but are immediately and synchronously written to the disk. Some of the data remains in the cache, and in the case where new writes are identical to what is already on the disk, a write does not need to take place. This is to some extent similar to how the read cache works.
- Write-Back Cache: A more aggressive method of write caching where writes are not required to immediately sync up with the contents of the disk. If the data flows from the system faster than it can be written to the disk, the cache will fill with a backlog of data to be written out to the disk. The hard disk will basically ‘fall behind’. When the system is done writing, the cache will then be able to catch up and finish writing the changes to the disk.
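To make the distinctions above concrete, here is a minimal Python sketch of the three policies. The class and method names (Disk, ReadCache, and so on) are our own inventions for illustration; a real driver cache works on raw blocks at far higher speeds, but the logic is the same.

```python
from collections import OrderedDict

class Disk:
    """Stand-in for the (slow) physical disk."""
    def __init__(self):
        self.blocks = {}
    def read(self, lba):
        return self.blocks.get(lba)
    def write(self, lba, data):
        self.blocks[lba] = data

class ReadCache:
    """FIFO read cache: re-reads of unmodified data are served from memory."""
    def __init__(self, disk, capacity=4):
        self.disk, self.capacity = disk, capacity
        self.cache = OrderedDict()   # lba -> data, oldest entry first
        self.hits = self.misses = 0
    def read(self, lba):
        if lba in self.cache:
            self.hits += 1           # 'hit': served from memory
            return self.cache[lba]
        self.misses += 1             # 'miss': retrieval the old (slow) way
        data = self.disk.read(lba)
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict oldest (first in, first out)
        return data

class WriteThroughCache(ReadCache):
    """Writes go to the disk immediately; the cache just keeps a copy."""
    def write(self, lba, data):
        if lba in self.cache and self.cache[lba] == data:
            return                   # identical data already on disk: skip the write
        self.disk.write(lba, data)   # synchronous write
        self.cache[lba] = data

class WriteBackCache(ReadCache):
    """Writes land in the cache first; the disk 'falls behind' until a flush."""
    def __init__(self, disk, capacity=4):
        super().__init__(disk, capacity)
        self.dirty = {}              # backlog of data not yet on the disk
    def write(self, lba, data):
        self.cache[lba] = data
        self.dirty[lba] = data       # acknowledged before it ever hits the disk
    def flush(self):
        for lba, data in self.dirty.items():
            self.disk.write(lba, data)   # the cache catches up
        self.dirty.clear()
```

Note that with the write-back policy, anything still sitting in `dirty` when power fails is simply gone, which is exactly the risk discussed below.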
The new Marvell driver allocates a portion of system RAM to perform caching duties. A cache is simply a buffer. Hard drives have internal caches that are considerably faster than the physical seek time of the disk itself, but are still limited by the speed of the interface (i.e. SATA). An additional cache on the ‘other side’ of the interface (in this case within the driver running on the host system) may provide additional performance in some situations, but can also introduce potential complications and drawbacks. It is also redundant in that Windows already implements the same types of caching at the kernel level (above the driver):
From the above you will see that the greater the level of enabled caching, the more risk of potential data loss you are willing to accept in the case where power fails (or the system crashes) during a write operation. Even with the above options cranked to the max, there will still be operations that take priority, such as file table and journal entries, as the data within a file is not as important as the means to find the file in the first place. The above settings are only for write caching. Windows will by default donate unused RAM to cache up to several GB of reads.
Now to analyze what type of caching this new Marvell driver is implementing. Burst speeds are measured by writing a small bit of data to the drive and immediately requesting it back, sometimes several times in succession. This usually results in a cache hit at the hard drive’s own internal read cache. Since most hard drive caches operate faster than the interface, the resulting burst speed typically comes out right around the speed of the interface (i.e. SATA). If you add another cache on the other side of the interface, you will see cache hits taking place at the new higher speed. In the case of the new Marvell driver, that speed is based on system RAM minus driver overhead. Aside from the additional use of system resources, this type of read caching is generally a performance booster with no significant drawbacks.
That said, we did find a catch with Marvell’s cache implementation. Something was awry with our HDTune results:
That spike at the start of the write pass suggests write-back caching is taking place within the Marvell driver. If this is the case, it is significant, as the user is never presented with any warnings of potential data loss. Even worse, there is no user-selectable option to disable or modify the write caching policy. If you have the new driver, you are stuck with a type of write caching that would normally require a battery backup to be considered safe.

A final note: since the Marvell driver is not as aware as the kernel of what type of data is being written, file table entries given priority by the kernel would have to wait in line like the rest of the data passing through the Marvell driver cache. This means not only is there a risk of file data loss the user may not be willing to accept, there is also the potential for file table corruption and more major types of data loss that happen when NTFS journal entries are cached without the kernel's knowledge.
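A toy example of that last point (this is an illustration of the ordering problem, not how NTFS actually schedules its writes): the kernel can push journal and file-table records to the front of its own flush queue, but a driver-level cache below the kernel sees only anonymous blocks and flushes strictly first in, first out.

```python
# Hypothetical queue of pending writes, oldest first. The "journal" entry
# stands in for the NTFS log/file-table record describing the file data.
pending = [
    ("data",    "chunk of a large file copy"),
    ("data",    "another chunk"),
    ("journal", "record needed to find the file again"),
]

# Kernel-level write-back: journal records jump the queue, so the file
# system stays consistent even if some file data is lost in a crash.
kernel_flush_order = sorted(pending, key=lambda w: w[0] != "journal")

# A driver cache below the kernel cannot tell the entries apart, so the
# journal record waits in line behind the bulk data.
driver_flush_order = list(pending)
```

If power fails mid-backlog, the kernel's ordering loses at worst some file contents; the driver's ordering can lose the journal record itself, which is the file-table-corruption scenario described above.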