TITAN is back for more!
We are back with our full performance review of the NVIDIA GeForce GTX TITAN graphics card based on GK110!
Our NVIDIA GeForce GTX TITAN Coverage Schedule:
- Tuesday, February 19 @ 9am ET: GeForce GTX TITAN Features Preview
- Thursday, February 21 @ 9am ET: GeForce GTX TITAN Benchmarks and Review
- Thursday, February 21 @ 2pm ET: PC Perspective Live! GTX TITAN Stream
If you are reading this today, chances are you were here on Tuesday when we first launched our NVIDIA GeForce GTX TITAN features and preview story (accessible from the link above) and were hoping to find benchmarks then. You didn't, but you will now. I am here to show you that the TITAN is indeed the single fastest GPU on the market and MAY be the best graphics card (single or dual GPU) on the market depending on your usage model. Some will argue, some will disagree, but we have an interesting argument to make about this $999 gaming beast.
A brief history of time…er, TITAN
In our previous article we talked all about TITAN's GK110-based GPU, the form factor, card design, GPU Boost 2.0 features and much more, and I would strongly encourage you all to read it before going forward. If you just want the Cliff's Notes, I am going to copy and paste some of the most important details below.
From a pure specifications standpoint the GeForce GTX TITAN based on GK110 is a powerhouse. While the full GPU sports a total of 15 SMX units, TITAN will have 14 of them enabled for a total of 2688 shaders and 224 texture units. Clock speeds on TITAN are a bit lower than on GK104 with a base clock rate of 836 MHz and a Boost Clock of 876 MHz. As we will show you later in this article though the GPU Boost technology has been updated and changed quite a bit from what we first saw with the GTX 680.
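To put those figures in rough perspective, here is a quick back-of-the-envelope estimate of theoretical peak shader throughput, assuming the usual 2 FLOPs per CUDA core per clock (one fused multiply-add); these are napkin numbers for comparison, not official ratings:

```python
# Rough theoretical peak single-precision throughput: cores * clock * 2 FLOPs (FMA).
# Napkin math only -- real-world throughput will always land below these figures.

def peak_gflops(cuda_cores, clock_mhz, flops_per_core_per_clock=2):
    return cuda_cores * (clock_mhz / 1000.0) * flops_per_core_per_clock

print(f"TITAN @ base:   {peak_gflops(2688, 836):.0f} GFLOPS")   # ~4494
print(f"TITAN @ boost:  {peak_gflops(2688, 876):.0f} GFLOPS")   # ~4709
print(f"GTX 680 @ base: {peak_gflops(1536, 1006):.0f} GFLOPS")  # ~3090, for comparison
```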
The bump in the memory bus width is also key; being able to feed that many CUDA cores definitely required a boost from 256-bit to 384-bit, a 50% increase. Even better, the memory bus is still running at 6.0 GHz, resulting in total memory bandwidth of 288.4 GB/s.
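That bandwidth number falls straight out of the bus width and effective memory data rate; a quick sketch of the arithmetic (the quoted 288.4 GB/s implies an effective rate a hair above a flat 6.0 Gbps):

```python
# Memory bandwidth = bus width (in bytes) * effective data rate per pin.

def bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    return (bus_width_bits / 8) * effective_rate_gbps

print(bandwidth_gbs(384, 6.0))    # 288.0 GB/s at a flat 6.0 Gbps on TITAN's 384-bit bus
print(bandwidth_gbs(384, 6.008))  # ~288.4 GB/s, matching the quoted spec
print(bandwidth_gbs(256, 6.0))    # 192.0 GB/s on the GTX 680's 256-bit bus, for contrast
```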
Speaking of memory – this card will ship with 6GB on-board. Yes, 6 GeeBees!! That is twice as much as AMD's Radeon HD 7970 and three times as much as NVIDIA's own GeForce GTX 680 card. This is without a doubt a nod to the super-computing capabilities of the GPU and the GPGPU functionality that NVIDIA is enabling with the double precision aspects of GK110.
The look and styling of the GeForce GTX TITAN design is very similar to that of the GeForce GTX 690 that launched in May of last year. NVIDIA was less interested in talking about the makeup of the materials this time around, but it is obvious when looking at and holding the GTX TITAN that it is built to impress buyers. Measuring only 10.5-in long, the TITAN will be able to find its way into many more chassis and system designs than the GTX 690 could.
Output configurations are identical to that of the GTX 680 cards including a pair of dual-link DVI connections, a full-size HDMI port and a DisplayPort. You can utilize all four of the outputs at once as well for 3+1 monitor configurations.
With TITAN, NVIDIA will be releasing an updated version of GPU Boost they claim will allow for even higher clocks on the GPU than the original technology would have allowed. This time the key measurement isn't power but temperature.
This updated version of GPU Boost can increase the maximum clock rate because the voltage level is controlled by a known, easily measured data point: temperature. By preventing a combination of high voltages and high temperatures that might break a chip, NVIDIA can increase the voltage on a chip-to-chip basis to increase the overall performance of the card in most cases.
This new version of GPU Boost definitely seems more in line with the original goals of the technology, but there are some interesting caveats. First, you'll quickly find that the clock speeds of TITAN will start out higher on a "cold" GPU and then ramp down as the temperature of the die increases. This means that doing quick performance checks of the GPU using 3DMark or even quick game launches will result in performance measurements that are higher than they would be after 5-10 minutes of gaming. As a result, our testing of TITAN required us to "warm up" the GPU for a few minutes before every benchmark run.
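To make the warm-up requirement concrete, here is a minimal sketch of the kind of test loop it implies. This is not our actual harness; run_workload() and read_gpu_temp() are hypothetical placeholders for whatever workload launcher and telemetry tool you happen to use:

```python
import time

def run_workload(seconds):
    """Hypothetical placeholder: run the game/benchmark scene for a fixed duration."""
    time.sleep(seconds)

def read_gpu_temp():
    """Hypothetical placeholder: return the current GPU core temperature in C."""
    return 80.0

WARM_UP_SECONDS = 5 * 60   # GPU Boost 2.0 clocks settle as the die heats up
TEMP_SETTLE_DELTA = 1.0    # treat the card as "warm" once temps stop climbing

def benchmark_with_warmup():
    # Phase 1: warm the GPU so clocks drop from their "cold" peak to sustained levels.
    last_temp = read_gpu_temp()
    deadline = time.time() + WARM_UP_SECONDS
    while time.time() < deadline:
        run_workload(30)
        temp = read_gpu_temp()
        if abs(temp - last_temp) <= TEMP_SETTLE_DELTA:
            break  # temperature has plateaued; sustained clocks reached
        last_temp = temp

    # Phase 2: only the numbers recorded after warm-up are representative.
    run_workload(60)

benchmark_with_warmup()
```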
Zoomed FRAPS frame time graph is not labeled properly.
Great review btw, specifically the page before the last.
Seriously? You’re still doing this? Can you please give me one reason, just ONE reason, as to why I would want to play a game without Vsync and Triple Buffering? Because that is the only, and I repeat, the ONLY situation in which your “novel” and “state-of-the-art” method of benchmarking VGAs would be relevant to an actual real-life scenario. I’d nominate you for a Nobel prize if you do.
A little aggressive considering the topic, maybe?
Competitive multi-player games. Triple buffered vsync introduces significant amounts of input latency. Enough to detect in a blind test, and certainly far more than the latency added by any display or input device a gamer would be likely to use.
This might not matter to you, but it does to plenty of other PC gamers. I found this article to be extremely valuable.
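For a rough sense of the numbers behind that latency claim: under vsync, a frame can only be flipped on a refresh boundary, so each extra queued buffer can add up to one full refresh interval on top of everything else. A back-of-the-envelope sketch (worst-case queuing only, ignoring engine and display latency):

```python
# Worst-case extra latency from frames waiting in the flip queue under vsync.

def refresh_interval_ms(refresh_hz):
    return 1000.0 / refresh_hz

def worst_case_queue_latency_ms(refresh_hz, queued_frames):
    return queued_frames * refresh_interval_ms(refresh_hz)

print(worst_case_queue_latency_ms(60, 1))  # ~16.7 ms: double buffering, one frame queued
print(worst_case_queue_latency_ms(60, 2))  # ~33.3 ms: triple buffering, two frames queued
```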
I kind of agree on Vsync, but on triple buffering, probably not:
"1. If it is not properly supported by the game in question, it can cause visual glitches. Just as tearing is a visual glitch caused by information being transferred too fast in the buffers for the monitor to keep up, so too in theory, can triple buffering cause visual anomalies, due to game timing issues for example.
2. It uses additional Video RAM, and hence can result in problems for those with less VRAM onboard their graphics card. This is particularly true for people who also want to use very high resolutions with high quality textures and additional effects like Antialiasing and Anisotropic Filtering, since this takes up even more VRAM for each frame. Enabling Triple Buffering on a card without sufficient VRAM results in things like additional hitching (slight pauses) when new textures are being swapped into and out of VRAM as you move into new areas of a game. You may even get an overall performance drop due to the extra processing on the graphics card for the extra Tertiary buffer.
3. It can introduce control lag. This manifests itself as a noticeable lag between when you issue a command to your PC and the effects of it being shown on screen. This may be primarily due to the nature of VSync itself and/or some systems being low on Video RAM due to the extra memory overhead of Triple Buffering."
http://www.tweakguides.com/Graphics_10.html
All valid points, but the question remains.
Does turning Vsync ON fix CF's frame presentation issues?
If it does, is the performance of Vsynced CF in line with the Vsync OFF results?
Also, does the system exhibit frame skipping like it does with RadeonPro “smoothing”?
And I’m not asking specifically about CF alone, but about SLI and single GPUs also.
Testing with VSync would “optimize” the video output on either vendor. So maybe you could do an article with it on and frame it as a “quality” type of test.
Both cards would be outputting as best they could with the monitor output being as optimized as it can be in terms of timing the frames. Compare the high end cards on 10 or so games, and it gives you an idea of which vendor has the best “quality” of game.
As far as GPU testing, I wouldn’t want to see it. I would want the game to run and output as many frames as possible with everything turned on, check the times, and see how bad the timing gets.
Why is nobody testing game performance with Vsync ON?
Because vsync sucks, limits framerate and increases latency. Triple buffering even more so.
Vsync is the easy way to avoid getting set up correctly.
“For 1920×1080 gaming there is no reason to own a $999 GPU and definitely not this one.”
That line is debatable, and all depends on how you game. I spend a lot of time with heavily modified Skyrim and that game gives my 7970 GHz a run for its money at 1080p. One particular setup I use (mainly for screenshots) kills that card, frequently averaging at 30fps.
This is also reflected in the Crysis 3 graph. Even the Titan struggles to max that game out at 1080p.
There certainly are reasons to own a Titan for 1080p gaming, provided you have the coin.
I am really liking the new charts and all, except it is a little hard to read the names of the cards on the frametime charts. It would be nice to have a slightly larger font on those.
Well done
That’s what I always say: there is a problem with graphics cards in that nothing reports the true FPS.
Larger screens 46 ”
If you test a card such as this without using 3 HD screens, you (in my opinion) missed the point.
What about GPGPU benchmarks? Is that still coming, or did I miss the article with them? But, thanks for the review!
With this test you have become the number 1 website!
I really don’t understand the point of this card. I get it, it has lots of cores, but that isn’t really reflected in any normal use. It would make sense for CAD-type applications, rendering video with GPU acceleration, folding, etc. As far as usability, it just seems like $1k of graphics card made for having a LOT of VRAM for high resolutions, but the price sort of makes it not “worth” that.
If I was reading the specs, comparing them to the 680/7970 and looking at those prices, I would find it really hard to justify the difference if you weren’t looking at VRAM. Architecturally it has more “stuff”, but isn’t really a major shift.
680: 1536 cores @ 1000 MHz
Titan: 2688 cores @ 836-876 MHz
That gives you the same, less, or 10-20 more average FPS based on the charts (Sleeping Dogs being the example of the last case; DiRT 3 isn’t worth discussing because the framerate is so high on these cards).
I don’t really know where I’m going with this, but hopefully some amount of a point has come across. What on earth is this card for?
Ryan: Side note, is it possible to add a Blu-ray or HD video render test with GPU acceleration to your benchmarks? It seems like something worthwhile to demonstrate when you have something like this, where the product may be meant for uses other than gaming.
I just didn’t see this the same at all. The 680 vs the Titan isn’t even close. Other than being “single GPU”, they aren’t even in the same class.
This card brings a LOT of different stuff to the table, such as the new GPU boost, focus on acoustics (if that is your thing), and temp control. I thought the 690 was a big breakthrough, but the Titan probably impressed me more than the 690 did on release.
Game: AVG FPS (680 / Titan / 7970)
———————————-
FC3: 40.6/39.5/32.7
Crysis3: 30.1/41.6/30.6
Sleeping Dogs: 42/63.7/53.2
It depends on the game, but like I said earlier, it isn’t about gaming performance at all. From what Ryan said on Tekzilla, the main point of Titan is the actual physical issues behind its fabrication and how THAT will be a big thing for the future of hardware, or gives some insight, or something.
I think it would be really interesting to see some BD decode testing.
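Putting those averages in relative terms, just as a quick calculation from the figures listed above:

```python
# Relative average-FPS change of TITAN versus the GTX 680, from the numbers quoted above.
results = {
    "Far Cry 3":     (40.6, 39.5),
    "Crysis 3":      (30.1, 41.6),
    "Sleeping Dogs": (42.0, 63.7),
}

for game, (gtx680_fps, titan_fps) in results.items():
    change = (titan_fps / gtx680_fps - 1) * 100
    print(f"{game}: {change:+.0f}%")   # roughly -3%, +38%, +52%
```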
Definitely the best Titan review out there, PCPER! Excellent job guys! Frame time analysis is invaluable!!!
Question:
What did you have your 3960X clocked at for benchmarks?
Wondering about Titan SLI scaling…..
Ryan can you please review frame-times in BF3 @ 5760×1080 with Titan, SLI, & 3-way SLI.
My current 670s stutter with MSAA turned on even though VRAM is not close to max.
Your frame-time review will be the determining factor in whether I get the Titans or not.
Wow – seems this site got big money from NVIDIA!
Did the same test they did and had no differences in framerate whatsoever for AMD CF.
If you want to call me biased – go right ahead – I have PCs with NVIDIA cards and PCs with AMD cards – I don’t care about the manufacturer, I only care about bang for the buck, and Titan is a huge bust – two 7970s ($680) outperform a Titan ($1000), so for me there is no question about what to get.
Which DVI capture cards were you using?
… and where is my cheque NVIDIA!
Deltacast Delta-dvi
Great article. Nice to see original thought and work in a tech blog (instead of more useless FPS numbers).
The Titan has impressive performance, too bad the price is outta my reach :'(
BTW, anyone else notice that the 7970 GE is starting to kick 680 butt? The AMD driver team is on a roll! However, CF looks bad and they need to correct it, seeing as they have no single-chip competitor to Titan.
@Ryan: To make things interesting, why not benchmark some games which are not as “popular”, i.e. driver-optimized?
Also, in my (humble?) opinion your articles would be more professional (better) if you avoided superlatives and words like “beast” (so clichéd). They make accusations of bias harder to refute.
Once again, great work.
@Ryan could you please block the ip of the rabid fan-atic. Really spoils the whole comments section…
@Anonymous : obvious troll. Not gonna bother replying. Sod off!
This business of runt frames significantly padding Crossfire’s fps numbers is a huge story in the GPU world. There needs to be a major effort to expose the truth of this, whatever it is.
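For anyone curious what filtering those runt frames out might look like, here is a minimal sketch of the idea, assuming you already have per-frame display times captured off the DVI output; the 20%-of-average cutoff is an arbitrary assumption for illustration, not necessarily the criterion PC Perspective uses:

```python
# "Runt" frames occupy so little screen time that they add nothing visually,
# yet they still inflate a raw FPS count. Input: per-frame display times in ms.

RUNT_FRACTION = 0.20  # assumed cutoff: a frame under 20% of the average frame time

def raw_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def observed_fps(frame_times_ms, runt_fraction=RUNT_FRACTION):
    avg = sum(frame_times_ms) / len(frame_times_ms)
    kept = [t for t in frame_times_ms if t >= avg * runt_fraction]
    total_time_ms = sum(frame_times_ms)  # runts still consume wall-clock time
    return 1000.0 * len(kept) / total_time_ms

# Example: alternating full frames and near-zero runts, the pattern at issue here.
captured = [30.0, 1.0] * 30
print(f"Raw FPS:      {raw_fps(captured):.1f}")       # ~64.5 -- every frame counted
print(f"Observed FPS: {observed_fps(captured):.1f}")  # ~32.3 -- runts discarded
```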
I agree wholeheartedly. If this is true in its entirety, it would destroy AMD’s credibility in the multi-GPU realm. Before this, most people were in agreement that for high-res and multi-display configurations, AMD is the way to go. This would change everything.
I doubt that the new 3dMark is reliable. Maybe it’s just created to boost Titan.
I did the benchies with GTX 680 SLI / i7-3930K / X79. In 3DMark 11 I get nearly P16000 at stock and over P18000 overclocked.
But in Firestrike:
GTX680 single: 6300
GTX680SLI: 4200 (!)
And it’s not only my system, you can find these biased or faulty results easily.
So the 3DMark benchmark is a joke in its current state and should not be used in a professional environment.
I’m not sure why, but this site is loading very slowly for me.
Is anyone else having this issue, or is it an issue on my end?
I’ll check back later and see if the problem still exists.
A little bummed with your test???? This is a 2D surround card. I am running three EVGA 680 SCs at 6000×1200, so please let’s see the real meat and potatoes that people buying this card (me) want to see… say, three 680s at 6000×1200 versus two Titans in SLI at the same res. That, I think, is all that really matters here, right? The 680, with only 2GB of memory, must fail hard against two Titans with their 6GB. The Titan is very specifically a hi-res surround gaming card. I know you need to test everything, but I think you should have started the other way around, IMO.