Temperature and Overclocking Testing
Cooler Testing Methods
To best gauge the quality of the GPU cooler under review, GPU temperatures were taken with the graphics cards idle and under load. To replicate GPU idle conditions, the system was rebooted and allowed to sit idle for 30 minutes. To replicate a stressful graphics load, EVGA OC Scanner X was run over a 30-minute period using the Furry E (GPU memory burner::3072MB) 3D Test, a 1280×1024 resolution, and an 8x MSAA anti-aliasing setting. After each run, the system was shut down and allowed to rest for 30 minutes to cool down. This procedure was repeated a total of 12 times: six times for the stock speed runs and six times for the overclocked speed runs.
Temperature measurements were taken directly from the GPU thermistors using TechPowerUp GPU-Z v0.8.3. For both the idle and load measurements, the highest value recorded in the application was used for the run. Note that temperatures are reported as deltas rather than absolute values, with the reported delta calculated as GPU temperature minus ambient temperature.
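For readers who want to reproduce the delta calculation from their own runs, the minimal sketch below pulls the highest recorded GPU temperature out of a GPU-Z sensor log and reports it as a delta above ambient. The log path and column name are assumptions; GPU-Z's "Log to file" output varies by version, so adjust both to match your own file.

```python
# Minimal sketch: highest logged GPU temperature reported as a delta
# above ambient, matching the reporting convention used in this review.
# The log path and column name are assumptions; adjust to your GPU-Z log.
import csv

AMBIENT_C = 25.0  # measured room ambient for the run (kept at 24-27C here)

def max_delta(log_path: str, column: str = "GPU Temperature [°C]") -> float:
    """Return the highest logged GPU temperature minus ambient."""
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        reader = csv.DictReader(f)
        temps = [float(row[column].strip())
                 for row in reader
                 if row.get(column, "").strip()]
    return max(temps) - AMBIENT_C

if __name__ == "__main__":
    print(f"Load delta: {max_delta('gpuz_log.csv'):.1f}C above ambient")
```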
To adequately measure the EVGA GTX 970 SC's cooler performance in SLI, performance testing was done for all scenarios with the cards liquid cooled in a modified configuration using the HeatKiller GPU-X3 GPU water block. During the tests with the modified card, it was found that a minimal amount of airflow across the VRM heat sinks was needed to prevent card instability. Results are reported for both single card and SLI configurations for comparison purposes.
For all tests, room ambient temperature was maintained between 24-27C.
Stock Temperature Testing
The graphics card temperature testing was conducted at stock speeds with air-based and liquid-based cooling.
With the two EVGA GTX 970 SC cards cooled in parallel, an interesting temperature trend emerges. While the two cards remain very close temperature-wise, the primary card runs slightly cooler than the secondary card. This trend was seen with both cards running at stock speeds and with the processor running at stock and overclocked speeds. The temperature delta between the two cards running in tandem was a mere 1C, with temperatures scaling as expected compared to the system running with a single card. Even with the CPU overclocked (dumping more heat into the cooling loop), the secondary card measured only 14C above ambient under load.
Overclocked Temperature Testing
Using the EVGA Precision X 16 v6.3.5 tweaking software, the graphics card was overclocked to its highest stable settings using air-based and liquid-based cooling. For details on the overclocked settings used for testing and benchmarks, please see the Manual Overclocking section below.
With the GPU and video card memory overclocked, temperatures increased only slightly over those measured at the default stock settings. The same temperature difference between the two cards was seen, with the secondary card running slightly warmer than the primary.
Stock settings
With dual cards running under SLI, we began to see oddities with the GPU voltage being supplied to the primary card. The primary video card showed a load voltage of only 1.1500V, while the secondary card showed the expected load voltage of 1.2000V. According to information from several support forums and NVIDIA itself, there is a known issue when operating multiple cards in SLI that affects the voltage applied to one of the cards in the SLI set. This voltage difference did not have any negative effects on card performance or operating stability at stock settings, but was found to affect card stability during the overclocking runs.
Manual Overclocking
For the overclocking tests, EVGA's Precision X 16 tweaking software (v6.3.5) was used to apply the settings, with TechPowerUp GPU-Z (v0.8.3) used to validate that the settings properly took effect. Graphics card stability was tested by performing a full run through the Futuremark 3DMark Fire Strike benchmark at a 1920×1080 resolution without crashes or artifacting. Once the 3DMark Fire Strike run passed cleanly, card stability was checked using the Unigine Heaven, GRID 2, and Metro: Last Light in-game benchmark tests. The settings were further refined until no artifacting or crashes occurred in any of those applications. To further ensure card operational reliability at the configured settings, the card was torture tested over an extended period with EVGA OC Scanner X using the Furry E (GPU memory burner::3072MB) 3D Test, a 1280×1024 resolution, and an 8x MSAA anti-aliasing setting.
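For an unattended torture run, something like the following sketch can keep an eye on both cards. It assumes the NVIDIA driver's nvidia-smi utility is on the PATH; the one-hour window and 5-second polling interval are arbitrary choices, not values from this review's testing.

```python
# Minimal sketch: watch temperatures and clocks on every installed card
# during a torture run via nvidia-smi (ships with the NVIDIA driver).
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,clocks.gr",
         "--format=csv,noheader,nounits"]

end = time.time() + 60 * 60  # watch for one hour
while time.time() < end:
    result = subprocess.run(QUERY, capture_output=True, text=True)
    if result.returncode != 0:
        # A failed query mid-run usually means a driver reset or a crashed card.
        print("nvidia-smi failed; check card stability:", result.stderr.strip())
        break
    for line in result.stdout.strip().splitlines():
        idx, temp, clock = (field.strip() for field in line.split(","))
        print(f"GPU {idx}: {temp}C @ {clock}MHz")
    time.sleep(5)
```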
Running dual GTX 970 SC cards in SLI with the modded configuration and the HeatKiller GPU-X3 GPU water block placed in-line with the Raystorm CPU block, we were able to increase the GPU boost clock and memory speeds to 1468 MHz (+100) and 1977 MHz (+475). As mentioned above, we saw an oddity in the GPU voltage measurements between the two cards, with the primary card being undervolted by 0.05V and causing stability issues during our overclocking attempts. According to several forum posts concerning this issue, raising the GPU clock speed on the unaffected card (in our case, the secondary card) by 10-25MHz causes the GPU voltage on the primary card to approach the expected values. Increasing the GPU offset on the secondary card to +115MHz increased the voltage on the primary card to 1.1870V from the 1.1500V it was sitting at previously. This voltage bump was enough to stabilize the dual card configuration. Though not as aggressive as what we achieved with the single card setup, the overclock remains respectable. The clocks that actually took effect on each card can be read back per GPU, as shown in the sketch after the settings lists below.
EVGA Precision X 16 profile settings
- GPU Clock Offset, card 1 – +100MHz
- GPU Clock Offset, card 2 – +115MHz
- Memory Clock Offset – +500MHz
- GPU Voltage Overvoltage – +37mV (Max)
- Power Target – 110% (Max)
- GPU Temperature Target – 91C (Max)
- Fan Preset – N/A
Performance numbers
- GPU Boost Clock Speed – 1468MHz
- Memory Speed – 1977MHz
- GPU voltage, card 1 – 1.1870V
- GPU voltage, card 2 – 1.2000V
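As a sanity check that the Precision X offsets took effect on each card in the SLI set, the running clocks can be read back per GPU. The sketch below uses the NVML Python bindings (pynvml), an assumption outside the tools used above; note that NVML exposes clocks but not core voltage on these cards, so the voltage figures above still come from GPU-Z.

```python
# Minimal sketch: read back the running clocks on each card in the SLI set
# to confirm the applied offsets took effect. Assumes the NVML Python
# bindings are installed (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        core = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        mem = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
        print(f"Card {i} ({name}): core {core}MHz, memory {mem}MHz")
finally:
    pynvml.nvmlShutdown()
```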
Is this native advertising or why did you test an SLI bridge?? It’s like testing a USB cable.. You only notice if it’s completely broken.
The SLI bridge was not the only thing tested. If you read through the entire article, you would see that a good portion of the article is testing the performance of the GTX 970 cards in SLI.
Thanks…
Little touchy there Morry
What are you talking about? If anybody is touchy, it's you.
Does anyone know when the SLI bridges will actually be available to purchase at normal retail prices? All I’ve been able to find is random units popping up from shady vendors on Ebay or Amazon that mark them up +$100.
I like the sleekness and simplicity over EVGA's ugly ass SLI bridges
I recently got the EVGA 2.0 3-way SLI bridge, and though it looks great, it didn’t fit on my triple XS-PC watercooled cards without a bit of modification. The metal plate over the top was hitting the card inlet/outlet ports. It was rather unfortunate. I should probably return it for an ASUS one that would fit a lot easier.
Matter of taste and personal preferences. I would never, of my own free will, put anything red in my rig. That's why if I need an SLI bridge I will get the EVGA, not the ASUS.
And I find EVGA bridges more interesting to look at than the ASUS ones. But we are back at the beginning: a matter of taste and personal preferences.
EVGA is obnoxious when it comes to branding, putting it on every piece of EVGA hardware from every direction, even more so when lit up.
http://static.evga.com/articles/00919/images/features_slider/SLI_bridge_slides_action_shot.jpg
ASUS is just a symbol, and you can mod the light to any color
With a 780 SLI setup on Windows 10, any driver past 350.12 gives a DirectX "out of memory" error playing Battlefield 4. No error with SLI disabled. 200 fps with 350.12, compared to 120 fps with any driver past 350.12. But Windows 10 keeps auto-updating me to the newest driver that doesn't work. 353.30 works great with a single card, but SLI has to be disabled in order for me to play Battlefield 4.
Terrible drivers with DirectX 12 in them.
Morry should try 350.12 drivers and see if the difference is large.
Morry, thanks for the article. I have a water-cooled 5960X with 2 Sapphire Tri-X R9 290s with EK blocks. My CPU is at 4.4GHz (44×10) using the Asus Suite III OCing software that sets the BIOS parameters. With both GPUs stock (1000/1300), my Fire Strike scores are Overall: 17042; Graphics: 21683; Physics: 20391; Combined: 5986.
Overclocking the GPUs to 1125 core/1425 mem yields the following: Overall: 19099; Graphics: 24317; Physics: 20938; Combined: 6968.
Is it me or is multi-GPU scaling getting worse? I could have sworn that dual-card SLI scaling with Kepler was much higher across the board.
SLI bridges are so retro. When will NVIDIA follow AMD's lead and eliminate them? Oh wait, just one more thing to nickel-and-dime gamers.
What would you suggest?
Is this a big joke? Who in their right mind would pay $70 for a little $3 connector? LOL
Buy a better GPU instead, idiots.
Who? They are called 'enthusiasts' :)