GK110 in all its glory
NVIDIA has upped the ante on the ultra high end once again: the new GTX 780 Ti ships with a fully enabled GK110 GPU.
I bet you didn't realize that October and November were going to become the onslaught of graphics card releases they have been. I know I did not, and I tend to have a better background on these things than most of our readers. Starting with the release of the AMD Radeon R9 280X, R9 270X, and R7 260X in the first week of October, it has been a nearly non-stop battle between NVIDIA and AMD for the hearts, minds, and wallets of PC gamers.
Shortly after the Tahiti refresh came NVIDIA's move into display technology with G-Sync, a variable refresh rate feature that will work with upcoming monitors from ASUS and others as long as you have a GeForce Kepler GPU. The technology was damned impressive, but I am still waiting for NVIDIA to send over some panels for extended testing.
Later in October we were hit with the R9 290X, the Hawaii GPU that brought AMD back into the world of ultra-class single-GPU performance. It produced stellar benchmarks and undercut the prices (at the time, at least) of the GTX 780 and GTX TITAN. We tested it in both single and multi-GPU configurations and found that AMD had made impressive progress in fixing its frame pacing issues, even with Eyefinity and 4K tiled displays.
NVIDIA dropped a driver release with ShadowPlay, which lets gamers record gameplay locally without a performance hit. I posted a roundup of R9 280X cards that showed alternative coolers and the range of performance on offer. We investigated the R9 290X Hawaii GPU and the claims that its performance is variable and configurable based on fan speeds. Finally, the R9 290 (non-X model) was released this week to more fanfare than the 290X, thanks to its nearly identical performance and $399 price tag.
And today, yet another release. NVIDIA's GeForce GTX 780 Ti takes the performance of the GK110 and fully unlocks it. The GTX TITAN uses one fewer SMX and the GTX 780 has three fewer SMX units so you can expect the GTX 780 Ti to, at the very least, become the fastest NVIDIA GPU available. But can it hold its lead over the R9 290X and validate its $699 price tag?
GeForce GTX 780 Ti – GK110 Full Implementation
GK110 isn't new to us, but this implementation of it is. When NVIDIA launched the GTX TITAN it was with an incomplete GPU – there was a single SMX disabled of the available 15 on the die, bringing its CUDA core count to 2,688.
But now, with the new GeForce GTX 780 Ti, NVIDIA is enabling all 15 SMX units for a grand total of 2,880 CUDA cores, all running at a base clock of 875 MHz and a typical Boost clock of 928 MHz. Both clock speeds are increases over the GTX TITAN as well (39 MHz on base, 52 MHz on boost), which tells us that the 780 Ti's overall compute performance for gaming will outstrip the TITAN's. Texture units also increase from 224 to 240, giving the card a bit more texture fill rate.
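Those core counts fall straight out of the enabled SMX totals, since each Kepler SMX carries 192 CUDA cores. A quick sanity check (card names and SMX counts are the figures quoted in this review):

```python
CORES_PER_SMX = 192  # Kepler architecture: 192 CUDA cores per SMX

def cuda_cores(enabled_smx):
    """CUDA core count for a GK110 part with the given SMX units enabled."""
    return enabled_smx * CORES_PER_SMX

for name, smx in [("GTX 780 Ti", 15), ("GTX TITAN", 14), ("GTX 780", 12)]:
    print(f"{name}: {cuda_cores(smx)} cores")
# GTX 780 Ti: 2880 cores
# GTX TITAN: 2688 cores
# GTX 780: 2304 cores
```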
The 384-bit memory interface remains the same, though the GTX 780 Ti has 3GB of GDDR5 memory running at 7.0 Gbps, a full 1.0 Gbps increase over the TITAN specifications. Memory bandwidth improves to 336 GB/s which just passes the 320 GB/s of the R9 290X and its 512-bit memory interface. The 6GB frame buffer on TITAN is still obviously king for users that need it but in my testing there isn't a big gaming advantage for users compared to the 3GB buffer on the GTX 780 or GTX 780 Ti.
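The bandwidth figures above follow from bus width times per-pin data rate. A minimal sketch, using the bus widths and data rates quoted in this review (the TITAN's 6.0 Gbps is implied by the stated 1.0 Gbps increase):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical memory bandwidth in GB/s."""
    # bytes per second = (bus width in bits / 8) * per-pin data rate
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(384, 7.0))  # GTX 780 Ti: 336.0 GB/s
print(peak_bandwidth_gbs(384, 6.0))  # GTX TITAN:  288.0 GB/s
print(peak_bandwidth_gbs(512, 5.0))  # R9 290X:    320.0 GB/s
```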
If we compare the GTX 780 to the GTX 780 Ti, obviously the newcomer has the advantage in all areas.
| | GTX 780 Ti | GTX TITAN | GTX 780 | R9 290X | R9 290 | R9 280X |
|---|---|---|---|---|---|---|
| Transistors | 7.1 billion | 7.1 billion | 7.1 billion | 6.2 billion | 6.2 billion | 4.3 billion |
| Clock Speed | 875 MHz | 836 MHz | 863 MHz | up to 1000 MHz | up to 947 MHz | 1000 MHz |
| Memory Clock | 1750 MHz | 1500 MHz | 1500 MHz | 1250 MHz | 1250 MHz | 1500 MHz |
| Compute Perf | 5.04 TFLOPS | 4.49 TFLOPS | 3.97 TFLOPS | 5.6 TFLOPS | 4.9 TFLOPS | 4.1 TFLOPS |
*Please note that these TFLOP ratings are using NVIDIA's base clock while AMD's results are using their "up to" peak clock. While interesting to see, you should really be comparing only NV to NV on the table above.
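The compute figures in the table come from a simple formula: cores x clock x 2 FLOPs per clock (one fused multiply-add per core per cycle). A sketch reproducing the NVIDIA entries, plus the R9 290X at its "up to" clock (2,816 stream processors) for comparison:

```python
def peak_fp32_tflops(cores, clock_mhz):
    """Peak single-precision throughput: 2 FLOPs (FMA) per core per clock."""
    return cores * clock_mhz * 2 / 1e6  # MHz * cores * 2 -> TFLOPS

print(round(peak_fp32_tflops(2880, 875), 2))   # GTX 780 Ti: 5.04
print(round(peak_fp32_tflops(2688, 836), 2))   # GTX TITAN:  4.49
print(round(peak_fp32_tflops(2304, 863), 2))   # GTX 780:    3.98 (table truncates to 3.97)
print(round(peak_fp32_tflops(2816, 1000), 2))  # R9 290X:    5.63 at peak clock
```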
With 25% more CUDA cores running at slightly higher clock speeds, the GTX 780 Ti should be a sizeable step ahead of the original GTX 780 in all areas. Add in the higher memory bandwidth and that lead should only grow at higher resolutions.
The GTX 780 Ti card itself deviates little from the standard NVIDIA has set for its high-end offerings over the past year and change. The 780 Ti uses the same style of cooler we first saw on the GTX 690, and it continues to be a stylish, high-performance solution for cooling massive, hot GPUs. What will be interesting to evaluate this time is how much louder and hotter NVIDIA was willing to go to compete with AMD's new standards on the R9 290X and R9 290.
The reference card still looks sexy, and the darker lettering in the "GTX 780 Ti" branding adds some aggressiveness to the color scheme. Card length remains the same as the GTX 780 and GTX TITAN at 10.5 inches.
NVIDIA has continued to include a pair of dual-link DVI outputs, a full-size HDMI and a full-size DisplayPort for monitor connectivity.
Somewhat surprisingly, the GTX 780 Ti still uses an 8+6 pin power configuration despite the higher core count and clock speeds, and it maintains the same 250 watt TDP as the GTX 780 and TITAN.
I'd still rather buy the TITAN. I use three monitors, and with the TITAN you only need one card. It seems that with the 780 Ti you would need two cards, so the TITAN is cheaper. Am I wrong?
Why do you guys insist on using that Skyrim sequence to benchmark GPUs? You could hardly have chosen a less suitable location. All that effort is wasted when you run a GPU benchmark in a thoroughly CPU-limited area, when there are actually plenty of GPU-limited areas to choose from in the game.
What game/test is used to determine total power usage of the entire system? How is this number identified?
The 780 Ti is capable of way more, but the cooler doesn't like it. All those engineers need to be sent to Cooling a Computer 101 if they hope to gain more performance in the future, because clearly both NVIDIA and AMD have reached, or are very close to reaching, a plateau.
Why are the exact settings for Crysis 3 not shown as they are for BF3 or Metro: LL? This is an inconsistency you should eliminate.
Also, did you run the R9 290X with the uber fan speed? Otherwise the throttling is quite severe for R9 290X CrossFire; there is a 10-20% performance loss at the lower fan speed.
People who spend USD 550 are definitely going to try to get the best performance possible. Since AMD supports uber mode with a BIOS switch, there is every reason to test it: it is guaranteed by AMD to work and backed by their warranty.
I am getting pretty tired of benchmarks like these that don't compare a new flagship card with the previous title holder at the resolutions where it matters. Sure, you give me the most common ones, such as 1920x1080 and 2560x1440, but nothing for multi-monitor setups, and even your 4K results exclude the previous title holder, the GTX TITAN.