Display Overclocking and Double Precision Performance
With the GeForce GTX 680, NVIDIA introduced a couple of unique features that weren't directly related to GPU performance. One of them was Frame Rate Target, which allowed users to set a maximum frame rate for an older game that doesn't need the full power of the GPU in your system. Another was Adaptive VSync, which keeps VSync enabled when the frame rate is above 60 FPS (or 120 FPS on an appropriate panel) but turns it off when dipping below that, avoiding the hard drop to 30 FPS that traditional VSync causes.
Great stuff that improved the gameplay experience without being directly related to performance.
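For anyone unfamiliar with the feature, the decision Adaptive VSync makes each frame is simple enough to sketch in a few lines. This is only a rough illustration of the behavior described above, not NVIDIA's actual driver logic, and the 60 Hz threshold is just the panel's refresh rate:

```python
def adaptive_vsync_enabled(current_fps, refresh_rate_hz=60):
    """Rough sketch of Adaptive VSync's per-frame decision.

    VSync stays on while the GPU can keep pace with the display,
    eliminating tearing; once the frame rate dips below the refresh
    rate, VSync is dropped so performance doesn't collapse to an even
    divisor of the refresh rate (60 -> 30 FPS) the way traditional
    VSync would.
    """
    return current_fps >= refresh_rate_hz
```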
With the GTX TITAN, NVIDIA is doing it again – this time with display overclocking.
With GPU Boost 2.0, we’ve added a new feature that makes this possible: display overclocking. Using tools provided by our add-in card partners, you may be able to overclock the pixel clock of your display, allowing you to hit higher refresh rates.
The implications here are pretty impressive, and I am eager to try overclocking the various displays in our office and report back with results. NVIDIA claims the process is reasonably safe and that a simple test will determine whether your monitor can handle higher refresh rates. The feature wasn't enabled in the version of Precision X we were using, but hopefully we'll get it working by the end of the week.
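As a point of reference for what a pixel clock bump actually buys you, the refresh rate is just the pixel clock divided by the total pixels per frame (active plus blanking). The timing totals below are the standard 1080p60 values, used purely for illustration; a real monitor reports its own via EDID:

```python
# Refresh rate follows directly from the pixel clock and the total
# (active + blanking) frame size. Standard 1080p60 timings shown here.
h_total = 2200            # pixels per scanline, including blanking
v_total = 1125            # scanlines per frame, including blanking
pixel_clock_hz = 148.5e6  # 148.5 MHz, the standard 1080p60 pixel clock

refresh_hz = pixel_clock_hz / (h_total * v_total)
print(f"Stock refresh rate: {refresh_hz:.1f} Hz")   # -> 60.0 Hz

# Display overclocking simply raises the pixel clock until the panel
# gives up; the clock needed for a given target refresh rate is:
target_refresh_hz = 75
required_clock_mhz = target_refresh_hz * h_total * v_total / 1e6
print(f"Pixel clock needed for {target_refresh_hz} Hz: {required_clock_mhz:.1f} MHz")  # ~185.6 MHz
```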
Also, because this is a GK110-based part, I can hear the GPGPU users out there salivating over the idea of getting this much compute power for "only" $1,000. Good news! All of the chip's double precision capability is accessible on the GeForce-branded card, though you do have to go into the control panel and enable the full-speed double precision units.
That manual switch is necessary because the GK110 GPU downclocks considerably with full-speed DP enabled, so gamers would see much lower gaming performance if it were left on while running Crysis 3, for example.
By default, the GeForce GTX TITAN runs the DP cores at 1/8th of their full clock speed, and since each SMX has 1/3rd as many DP cores as SP cores, DP operations end up running at 1/24th the SP rate, similar to the GeForce GTX 680.
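Putting rough numbers to that ratio, here is a back-of-the-envelope check of the 1/24 figure; the unit counts are GK110's published 192 SP and 64 DP units per SMX, and the math follows directly from the paragraph above:

```python
# Back-of-the-envelope check of the FP64 throughput ratios quoted above.
sp_cores_per_smx = 192          # single precision CUDA cores per SMX (GK110)
dp_cores_per_smx = 64           # double precision units per SMX (1/3 of SP)
default_dp_clock_ratio = 1 / 8  # DP units run at 1/8 core clock by default

# Peak rate scales with unit count times clock, so in the default mode:
dp_vs_sp_default = (dp_cores_per_smx / sp_cores_per_smx) * default_dp_clock_ratio
print(f"Default DP rate: 1/{round(1 / dp_vs_sp_default)} of SP")     # 1/24, GTX 680-like

# With the control panel switch flipped, the DP units run at full clock
# (and the GPU trades away some overall clock speed in exchange):
dp_vs_sp_full = dp_cores_per_smx / sp_cores_per_smx
print(f"Full-speed DP rate: 1/{round(1 / dp_vs_sp_full)} of SP")     # 1/3
```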
By the way, love the PCPer podcast, you guys are hilarious 🙂
(Weird, why did it double post?)
+1 on the GPGPU benchmarks as well; I'd like to see Octane Render and Blender test results. Also, if you guys can get your hands on a 4K display, see how far you can push 690 SLI and Titan SLI running Crysis or 3DMark at the higher resolution.
Now I get why NVIDIA locked the maximum pixel clock frequency starting around the ~304 series of drivers. It was all in preparation for this card.
For anyone who wishes to “overclock” their displays, see here: http://www.monitortests.com/forum/Thread-NVIDIA-Pixel-Clock-Patcher
While many people desire it, few people will buy it. At its projected price it will be redundant.
What are the names of some of the ray tracing demos that are out there? Like the mirrored orbs that you move around and things like that; one of the graphics card vendors shows it all the time, but I wasn't able to write the names down.
I would like to see some CUDA performance tests as well, in particular V-Ray RT.
Christ... why wouldn't they enable this feature on the 600 series?!? At least raise the throttle temperature to 80 from its current 70 and call it GPU Boost 2.0. The stock card may get fried, but people who have a reference card could easily handle 80 °C.
I want to ask a question.
I am now going to buy a computer for 3ds Max rendering and design,
and I am very confused between the ASUS GeForce GTX 780 6GB Titanium and the GTX 690 4GB.
Please help me.