PC Perspective Advanced Power Testing
For our power consumption tests on new GPUs, we previewed a new measurement method last year that we are finally taking advantage of. This isn’t an “at the wall” measurement with a Watts Up or similar device; instead we are measuring power directly from the graphics card itself.
How do we do it? It's simple in theory but surprisingly difficult in practice: we intercept the power being sent through the PCI Express bus as well as the ATX power connectors before it reaches the graphics card, and directly measure power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power, and built some Corsair power cables that measure the 12V coming through those as well.
The result is data that looks like this.
What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title. From 18-26 seconds the game sits at the menus, we load the game from 26-39 seconds, and then we play through our benchmark run after that.
There are four lines drawn in the graph. The 12V and 3.3V results are from the PCI Express bus interface directly, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there, but because the GTX 1080 only uses a single 8-pin connector, only one is shown here. Finally, the blue line is labeled Total and is simply that: the sum of the other measurements, giving the combined power draw and usage of the graphics card in question.
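The math behind that Total line is just a per-timestamp sum of the rails. A minimal sketch of the idea, using hypothetical wattage values for one sample point (the rail names here are illustrative, not the DAQ's actual channel names):

```python
# Hypothetical per-rail power readings (in watts) at a single sample
# point. Rails mirror the graph: the slot's 12V and 3.3V feeds, plus
# the 8-pin PCIe connector from the power supply.
rails = {
    "slot_12v": 45.2,
    "slot_3v3": 3.1,
    "pcie_8pin": 118.4,
}

# The "Total" line is simply the sum of every measured rail.
total = sum(rails.values())
print(round(total, 1))  # 166.7
```

In the real capture this sum is computed for every sample across all channels, producing the blue trace in the graph.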
From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 1000/s, so this kind of behavior is more or less expected.
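At that sampling rate, the reported average comes from a straightforward mean over the capture window, with the brief spikes standing out against it. A sketch with synthetic data (a steady ~167 W load and a couple of injected transients, none of it measured):

```python
# Synthetic one-second capture: 1000 samples of a ~167 W steady load
# with two brief injected spikes, mimicking the behavior in the graph.
samples = [167.0] * 1000
samples[250] = 210.0  # hypothetical transient
samples[700] = 225.0  # hypothetical transient

# Average power over the window, as quoted in the text.
avg = sum(samples) / len(samples)

# Flag samples well above the average as spikes (20% threshold is
# an arbitrary choice for illustration).
spikes = [w for w in samples if w > avg * 1.2]
print(round(avg, 1), len(spikes))  # 167.1 2
```

Because the spikes are so short, they barely move the average; that is why a high sampling rate is needed to see them at all.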
As you would expect, different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems. As you’ll see below, the AMD Fury X draws just ~180 watts in one game but ~240 watts in another.
Rise of the Tomb Raider, running in DX12 mode at 2560×1440, tells an interesting story for our detailed power measurement. The GTX 1080 and the Fury X are essentially using the same amount of power, despite the fact that the TDP of the Fury X is 70 watts higher than that of the GTX 1080! The GTX 980 is running about 20 watts lower than the GTX 1080, which matches specifications pretty well, and the GTX 980 Ti uses 50 watts more than the GTX 1080, again pretty close to expectations.
Our testing of The Witcher 3, also at 2560×1440, is quite different. First, there is a shift at about the 25 second mark where we move between a town scene and an in-game cut scene. You can ignore the drops in power at that precise moment, as the game fades to black before coming back into the game engine. But notice that the Fury X behaves drastically differently before and after that switch: it draws ~200 watts before it and ~235 watts after. The GTX 1080 and GTX 980 maintain their power levels through the transition, but the GTX 980 Ti actually falls a bit.
If we look at the second half of the data you can see how the cards really compare. The Fury X is pulling 235 watts, the GTX 980 Ti 200 watts, the new GTX 1080 165 watts, and the GTX 980 right around 155 watts.
Now that we have direct power measurement numbers to use, rather than at-the-wall results that could be tainted by increased CPU usage at any of these points, we can do some interesting performance per watt measurements for all parties involved.
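The performance-per-watt comparison boils down to dividing each card's average frame rate by its average power draw over the same run, then comparing the ratios. A sketch of that arithmetic, where the power figures come from the graphs above but the frame rates are placeholders, not our measured results:

```python
def perf_per_watt(avg_fps, avg_watts):
    """Average frames per second delivered per watt of GPU power."""
    return avg_fps / avg_watts

# Power draws are from the capture; the FPS values are hypothetical,
# included only to show how the efficiency percentages are derived.
gtx_1080 = perf_per_watt(78.0, 165.0)
gtx_980 = perf_per_watt(52.0, 155.0)

# Relative efficiency advantage of one card over another, as a percentage.
advantage = (gtx_1080 / gtx_980 - 1) * 100
print(f"{advantage:.0f}%")  # 41%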
While both games show slightly different versions of the story, the obvious fact is that the GTX 1080 is the most efficient enthusiast-class GPU we have ever seen. A combination of the 16nm FinFET process technology and NVIDIA’s design work to push clock speeds as high as they are is clearly giving the GP104 GPU a huge leap forward. In Rise of the Tomb Raider, the average frame rate you get per unit of power is 70% higher than that of AMD’s Radeon R9 Fury X and the GeForce GTX 980 Ti. Even compared to the GTX 980 based on GM204, the GP104 is 42% more efficient.
Our results in The Witcher 3 show a smaller efficiency advantage over the Maxwell architecture for the GTX 1080: 48% over the GTX 980 Ti and 21% over the GTX 980. But against the Fury X, the GTX 1080 produces more than 2.3x the frame rate for the same power.