GK110 Makes Its Way to Gamers
NVIDIA is launching the GeForce GTX TITAN based on GK110 this week and we have all the details for you!
Our NVIDIA GeForce GTX TITAN Coverage Schedule:
- Tuesday, February 19 @ 9am ET: GeForce GTX TITAN Features Preview
- Thursday, February 21 @ 9am ET: GeForce GTX TITAN Benchmarks and Review
- Thursday, February 21 @ 2pm ET: PC Perspective Live! GTX TITAN Stream
Back in May of 2012, NVIDIA released information on GK110, a new GPU the company was targeting at the HPC (high performance computing) and GPGPU markets, which are eager for more processing power. Almost immediately the questions began about when we might see GK110 make its way to consumers and gamers in addition to finding a home in supercomputers like Cray's Titan system, capable of 17.59 petaflops.
Nine months later we finally have an answer – the GeForce GTX TITAN is a consumer graphics card built around the GK110 GPU. With 2,688 CUDA cores, 7.1 billion transistors, and a 551 mm^2 die, the GTX TITAN is a big step forward in both performance and physical size.
From a pure specifications standpoint, the GeForce GTX TITAN based on GK110 is a powerhouse. While the full GPU sports a total of 15 SMX units, TITAN has 14 of them enabled for a total of 2,688 shaders and 224 texture units. Clock speeds on TITAN are a bit lower than on GK104, with a base clock of 836 MHz and a Boost Clock of 876 MHz. As we will show you later in this article, though, the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.
The wider memory bus is also key: feeding that many CUDA cores definitely required a jump from 256-bit to 384-bit, a 50% increase. Even better, the memory is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.
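For those who like to check the math, here is a quick back-of-the-envelope sketch of where those numbers come from. The 192 CUDA cores and 16 texture units per SMX are standard Kepler architecture figures rather than anything specific to TITAN, and a flat 6.0 Gbps data rate gives an even 288 GB/s; the 288.4 GB/s figure quoted above reflects the slightly higher actual memory clock.

```python
# Rough sanity check of the TITAN specs quoted above.
# Per-SMX figures (192 CUDA cores, 16 texture units) are standard Kepler numbers.

ENABLED_SMX = 14          # 14 of GK110's 15 SMX units are active on TITAN
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

cuda_cores = ENABLED_SMX * CORES_PER_SMX     # 2688 shaders
texture_units = ENABLED_SMX * TMUS_PER_SMX   # 224 texture units

# Memory bandwidth: bus width in bytes * effective GDDR5 data rate per pin
BUS_WIDTH_BITS = 384
DATA_RATE_GBPS = 6.0

bandwidth_gb_s = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS   # 288 GB/s with a flat 6.0 Gbps

print(cuda_cores, texture_units, bandwidth_gb_s)
```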
Speaking of memory – this card will ship with 6GB on-board. Yes, 6 GeeBees!! That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card. This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.
These are the very same single precision CUDA cores we know from the GK104 part, so you can make an educated guess at performance based on clocks and core counts – and you'll have to do exactly that for a couple more days. (NVIDIA is asking us to hold our benchmark results until Thursday, so be sure to check back then!)
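If you want to ballpark that guess yourself, here is a quick sketch using the standard peak-throughput convention of two FLOPs per CUDA core per clock (one fused multiply-add). The GTX 680 figures used for comparison are the reference-card clocks; none of this substitutes for the real benchmarks coming Thursday.

```python
# Theoretical single-precision peak: cores * 2 FLOPs per clock (FMA) * clock speed.
# GTX 680 reference clocks are 1006 MHz base / 1058 MHz boost.

def peak_sp_tflops(cuda_cores, clock_mhz):
    """Theoretical peak single-precision throughput in TFLOPS."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

print("TITAN   @ boost:", round(peak_sp_tflops(2688, 876), 2), "TFLOPS")   # ~4.71
print("GTX 680 @ boost:", round(peak_sp_tflops(1536, 1058), 2), "TFLOPS")  # ~3.25
```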
While the GeForce GTX 680 (and the rest of the GK104/106/107 family) was built for single precision computing, GK110 was truly designed with both single and double precision in mind. That is why its die size and transistor count are so much higher than GK104's: the double precision units that give TITAN its capability in GPGPU workloads are absent from the GK104 part. And while most games today don't take advantage of double precision workloads, we cannot discount the potential for future GPGPU applications and what GK110 offers that GK104 does not.
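To put a rough number on that difference, here is an illustrative sketch assuming the commonly cited Kepler FP64 ratios – one double precision unit for every three single precision cores on GK110 versus one for every 24 on GK104 – and ignoring any clock reduction TITAN may apply when its full-rate double precision mode is engaged, so treat the results as upper bounds rather than official specs.

```python
# Illustrative double-precision comparison under assumed Kepler FP64 ratios.
# GK110: 1/3 of SP rate; GK104: 1/24 of SP rate. Clock reductions in DP mode
# are ignored here, so these are upper-bound estimates only.

def peak_dp_tflops(cuda_cores, clock_mhz, fp64_ratio):
    """Theoretical peak double-precision throughput in TFLOPS."""
    return cuda_cores * fp64_ratio * 2 * clock_mhz * 1e6 / 1e12

print("TITAN (GK110):  ", round(peak_dp_tflops(2688, 876, 1/3), 2), "TFLOPS")   # ~1.57
print("GTX 680 (GK104):", round(peak_dp_tflops(1536, 1058, 1/24), 2), "TFLOPS") # ~0.14
```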
I asked our very own Josh Walrath for some feedback on the GK110 GPU release we are seeing today, in particular how it relates to process technology. Here was his response:
The GK110 is based on TSMC's 28 nm HKMG process, the same process used for the other Kepler based products that are out today. The 28 nm process first saw the light of day in graphics with AMD's HD 7970 back in December 2011; NVIDIA followed some months later with the GK104 based GTX 680. 28 nm has been a boon to the graphics market, but it seems we are at a bit of a standstill at the moment. There have been few fundamental improvements to the process since its introduction in terms of power and switching speed, though yields have obviously improved dramatically over that time. This has left AMD and NVIDIA in a bit of a bind: with no easy gains from process improvements, both companies are stuck with thermal and power limits that have remained essentially unchanged for well over a year.
The GK110 is a very large chip at 7.1 billion transistors. It is likely approaching the maximum reticle limit for lithography, and it would make little sense to try to build a larger chip, so GK110 should be the flagship single GPU for quite some time. Happily for NVIDIA, they have made some interesting design decisions around the double precision units, keeping them from affecting TDPs when running single precision applications and games (which are wholly single precision). There is a slight chance these products could move to a 28 nm HKMG/FD-SOI process, which would improve both leakage and transistor switching properties. Apparently the switch would be relatively painless, but FD-SOI has not gone into full production at either TSMC or GLOBALFOUNDRIES.
We still have quite a bit more to share with you on the following pages, including details on GPU Boost 2.0, new overclocking and overVOLTING options for enthusiasts, and even a new feature that enables display overclocking!
By the way, love the pcper podcast, u guys are hilarious 🙂
(Weird, why did it double post)
+1 on the GPGPU benchmarks as well; I’d like to see Octane Render and Blender test results. Also, if you guys can get your hands on a 4K display, see how far you can push 690 SLI and Titan SLI running Crysis or 3DMark at the higher resolution.
Now I get why Nvidia locked the maximum Pixel Clock frequency starting around the ~304 series of drivers. It was all in preparation for this card.
For anyone who wishes to “overclock” their displays, see here: http://www.monitortests.com/forum/Thread-NVIDIA-Pixel-Clock-Patcher
While many people desire it, few people will buy it. At its projected price it will be redundant.
What’s the name(s) of some of the ray tracing demos that are out there? Like the mirror orbs that you move around and things like that – one of the graphics card vendors has it all the time, but I wasn’t able to write the names down.
I would like to see some CUDA performance tests as well, in particular V-Ray RT.
Christ.. why wouldn’t they enable this feature on the 600 series?!? At least raise the throttle temperature to 80 from its current 70 and call it GPU Boost 2.0.. The stock may get fried, but.. people who have a reference card could easily handle 80 deg.
I want to ask a question.
I am now going to buy a computer for 3ds Max rendering and design,
and I am very confused between the ASUS GeForce GTX 780 6GB Titan and the GTX 690 4GB.
Please help me.