Power, Temperature, and Overclocking
Power, Temperature, and Noise
Unlike previous generations of high-end NVIDIA cards, the GTX 670 is downright miserly when it comes to power consumption and heat generation. The AMD 7000 series cards all draw a step more power than what NVIDIA delivers here, with the top end cards often pulling upwards of 50 watts more from the wall at full load.
At idle all of these cards perform very much the same: GPU and memory clocks are lowered and each card drops into a low power state. Nothing much to see here unless something is horrifically wrong.
When we get to load, things look a bit different. I am not sure why, but the R7950 that I received must have a really impressive GPU when it comes to fabrication; it sips power and it can overclock like the dickens. At load it draws less than either of the other two cards. What is quite interesting, though, is that the R7870 sits right next to the GTX 670 in terms of load power, yet is significantly slower than the NVIDIA card in games. Then again, the R7870 destroys the GTX 670 when it comes to GPGPU. Jack of all trades? I guess so.
Temperatures are going to be very similar between the cards, primarily because they all run an iteration of MSI’s Twin Frozr cooling solution. All of these cards are manufactured on the latest 28 nm HKMG process from TSMC.
At idle, the GTX 670 is downright frigid compared to the rest; the R7950 runs a bit warmer, and the R7870 sits in between the two. At load, things do change up, with the R7950 being the coolest and quietest solution. The R7870's fans start to really scream at around 65C, and it tops out at 69C. The GTX 670 stays at a very pleasant 64C, but most importantly it is not loud at all. The R7950 and GTX 670 are both very, very quiet at load, while the R7870 gets surprisingly loud. Still, all of the cooling solutions keep these products under 70C, which is very impressive.
The GTX 600 series of cards introduced a new way of overclocking NVIDIA's graphics cards. Basically, the user pushes the power limit on the card to its maximum (in MSI’s Afterburner it goes to 114%). The user can then leave everything else alone and the card will most likely stay at its maximum boost frequency in any application. The user also has the option to add MHz to the core clock; for example, adding 50 MHz to a card that typically boosts to 1056 MHz will take it to 1106 MHz. How far this goes is very dependent on the silicon a user gets when they purchase a card, but NVIDIA and its partners essentially guarantee that the card will run at the max boost speed at least part of the time.
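The offset math here is simple addition on top of whatever boost clock a given piece of silicon reaches. A minimal sketch of that arithmetic, using the example numbers from the paragraph above (the function name is mine, not anything from Afterburner):

```python
# Sketch of how a user's core clock offset combines with a card's typical
# boost clock under GPU Boost. Values are the article's example numbers;
# actual boost behavior varies with the individual GPU's silicon quality.
def boosted_clock_mhz(typical_boost_mhz, user_offset_mhz):
    """Return the new boost target after a user adds an offset in MHz."""
    return typical_boost_mhz + user_offset_mhz

# A card that typically boosts to 1056 MHz, with a +50 MHz user offset:
print(boosted_clock_mhz(1056, 50))  # 1106
```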
There was one very interesting thing about this particular sample. Leaving all of the settings at default, the card was showing a max boost speed of 1179 MHz. This is well above the 1079 MHz that the documentation details. I am digging further into this, but I am assuming that it is correct. If so, MSI has given the user a nice little gift when it comes to stock performance. At no time was this card unstable during benchmarking and torture testing. And no, torture testing did not involve hours of Farmville [Josh has a smartphone for that now heh].
Just by pushing the power limit to 114% I was able to take the boost clock up to 1250 MHz during benchmarking. It stayed at that speed no matter what was thrown at it, only dropping to lower speeds when the application I was running closed down. This was all before adjusting voltages. The memory was left at a static 1600 MHz (6.4 GHz effective).
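The "6.4 GHz effective" figure follows from GDDR5 transferring four bits per pin per memory clock, so the effective data rate quoted by monitoring tools is simply the base clock times four. A quick sketch of that conversion (helper name is my own):

```python
# GDDR5 is quad-pumped: four data transfers per base memory clock cycle,
# so the "effective" speed is the base clock multiplied by 4.
def effective_gddr5_mhz(base_clock_mhz):
    """Convert a GDDR5 base memory clock to its quoted effective rate."""
    return base_clock_mhz * 4

# The card's 1600 MHz memory clock, as in the text:
print(effective_gddr5_mhz(1600))  # 6400 MHz, i.e. 6.4 GHz effective
```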
Increasing the voltage allowed me to squeeze another 25 MHz out of this card pretty comfortably; 1275 MHz was the max with the extra 0.2v applied. The voltage unlocking added in the Afterburner 2.3 software is a nice touch, but it will not push a GPU much beyond what stock voltage can already achieve.