The GTX 750 and Frame Rating Info
Though we aren't covering it in today's review, NVIDIA is also announcing another GPU: the GeForce GTX 750. The non-Ti model is built on the same GM107 GPU but removes a single SMM, bringing the CUDA core count down to 512 and the texture unit count down to 32, and lowering the TDP from 60 watts to 55 watts. Other than those changes, the GTX 750 Ti and GTX 750 are identical.
Testing Configuration
The specifications for our testing system haven't changed much.
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Sound Card | On-board |
| Graphics Card | NVIDIA GeForce GTX 750 Ti 2GB, NVIDIA GeForce GTX 650 Ti 2GB, NVIDIA GeForce GTX 660 2GB, AMD Radeon R7 260X 2GB, AMD Radeon R7 265 2GB |
| Graphics Drivers | AMD: 14.1; NVIDIA: 334.69 |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
- NVIDIA GeForce GTX 660 2GB – $199
- AMD Radeon R7 265 2GB – $149 (MSRP)
- NVIDIA GeForce GTX 750 Ti – $149
- AMD Radeon R7 260X 2GB – $139
- NVIDIA GeForce GTX 650 Ti – $125
What you should be watching for
- GTX 750 Ti vs GTX 650 Ti – Can the smaller, but newer, GTX 750 Ti GPU face down the GTX 650 Ti?
- GTX 750 Ti vs R7 260X – The most direct, and currently available, competition to the GM107 GPU.
- GTX 750 Ti vs R7 265 – Can NVIDIA's newest GPU keep up with the likes of the R7 265 and its much larger GPU?
If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
The PCPER FRAPS File
While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file above, the average frame rate over time as defined by FRAPS, though we combine all of the GPUs being compared into a single graph. This basically emulates the data we have been showing you for the past several years.
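For the curious, here is a minimal sketch of the idea in Python. This is an illustration only, not the actual FCAT/PCPer tooling, and the frame time list is hypothetical:

```python
# Minimal sketch: bucket frame times (in ms) into one-second intervals and
# count the frames in each bucket to get an FPS-over-time series.
def fps_over_time(frame_times_ms):
    fps, elapsed, count = [], 0.0, 0
    for ft in frame_times_ms:
        elapsed += ft
        count += 1
        if elapsed >= 1000.0:   # one full second has passed
            fps.append(count)
            elapsed -= 1000.0
            count = 0
    return fps

# Hypothetical run: three seconds of steady 16.7 ms frames, roughly 60 FPS
print(fps_over_time([16.7] * 180))  # -> [60, 60, 60]
```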
The PCPER Observed FPS File
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above. This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are being shown to the gamer to improve the animation sequences.
As you'll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
The PLOT File
The primary file generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer. A “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card on a single run. It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.
For tests that show no runts or drops, the data is pretty clean. This is the frames-per-second-over-time graph that has long been the standard for performance evaluation on graphics cards.
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show, since the FRAPS measurement counts the drops and runts in the equation. Any area in red is a dropped frame: the wider the band of red, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area represents runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts appear.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to call this metric the “observed frame rate,” as it measures the actual speed of the animation the gamer experiences.
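To make the classification concrete, here is a simplified sketch of the logic. This is our own illustration, not NVIDIA's extractor code, and the 21-scanline runt cutoff is an assumed value chosen for the example:

```python
# Sketch: classify each captured frame by how many scanlines its overlay
# color occupied in the recorded video, then compute the observed FPS.
RUNT_THRESHOLD = 21   # scanlines; assumed cutoff for a "runt" in this example

def observed_fps(scanlines_per_frame, capture_seconds):
    full, runts, drops = 0, 0, 0
    for lines in scanlines_per_frame:
        if lines == 0:
            drops += 1        # red: the frame never reached the display
        elif lines < RUNT_THRESHOLD:
            runts += 1        # yellow: a sliver too small to aid animation
        else:
            full += 1         # counted toward the observed frame rate (blue)
    return full / capture_seconds, runts, drops

# Hypothetical capture: four full frames, two runts, one drop.
frames = [1080, 540, 0, 12, 700, 15, 900]
print(observed_fps(frames, capture_seconds=1))  # -> (4.0, 2, 1)
```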
The PERcentile File
Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to perfectly flat the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
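A minimal sketch of how such a percentile curve can be computed, assuming we already have per-frame display times; the data below is hypothetical and this is not FCAT's actual implementation:

```python
# Sketch: percentile curve of instantaneous FPS. Each frame time (ms) is
# converted to an instantaneous FPS and sorted fastest-first, so the value
# at the Xth percentile is the frame rate that X percent of frames meet or beat.
def fps_at_percentile(frame_times_ms, pct):
    inst = sorted([1000.0 / ft for ft in frame_times_ms], reverse=True)
    idx = min(int(len(inst) * pct / 100.0), len(inst) - 1)
    return inst[idx]

# Hypothetical data: mostly 16.7 ms frames with a few 40 ms hitches.
times = [16.7] * 95 + [40.0] * 5
print(round(fps_at_percentile(times, 50), 1))  # ~59.9 FPS at the median
print(round(fps_at_percentile(times, 99), 1))  # 25.0 FPS near the tail
```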
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time: how consistent the two frames presented to the gamer are. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lower one.
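Expressed as code, the idea looks roughly like the following sketch. This paraphrases the method described above rather than reproducing our production scripts; the 20-frame window matches the text:

```python
# Sketch: variance of each frame time vs. the running average of the
# previous 20 frames, sorted so it reads as a percentile curve in ms
# (the basis of the metric introduced below).
def variance_percentile_curve(frame_times_ms, window=20):
    variances = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        variances.append(abs(frame_times_ms[i] - running_avg))
    return sorted(variances)   # index / len(variances) gives the percentile
```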
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate the acronym, which stands for International Stutter Units.
To compare these results, you want to see a line that is as close to the 0 ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90th-plus percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.
So, obvious first question:
Any improvements in scrypt performance w/ Maxwell? 😉
There is some, but not a whole lot. We are looking at doing some testing today on the currency applications, but the lack of optimization could be a hold-up.
Thanks Ryan, can’t wait for your results!
Oh, the simple days when one could choose a GPU based on its game performance … I don’t miss them, not one bit.
http://cryptomining-blog.com/922-the-new-nvidia-geforce-gtx-750-ti-scrypt-mining-performance/
Not much in the way of graphs or pages of analysis but 265 KH/s and about 300 KH/s overclocked. Of course they were probably limited in the same way Ryan was when overclocking.
Thanks for the link, 265 kH/s at (or below) 75W doesn't seem half bad!
Any chance this card supports HDMI 2.0? Is there anything coming out soon that will?
Just confirmed that the GeForce GTX 750 Ti does NOT have HDMI 2.0. They won't talk about future products though…
Are you comparing it to a plain 650 Ti or a 650 Ti Boost in the article/benchmarks?
This is the NON-Boost GPU. The GeForce GTX 650 Ti Boost is EOL, so I didn't think it should get in over the still-available GTX 650 Ti.
Yes, but I was interested since I purchased the 650 Ti Boost, and comparing the non-Boost versus the Boost 650 Ti, it looks like the 750 Ti isn't much of a step over the 650 Ti Boost. (From an upgrade and price/performance perspective.)
Power consumption is nice to see, but it's not too much of a concern in a 650 Ti or 750 Ti class card; it will be more interesting when the bigger cards come out.
If they had not EOL'd the 650 Ti Boost, I think the benchmark charts comparing it to the 750 Ti would look a little weird.
Maxwell's second-tier high end should be very interesting. I'm hoping for Titan Black performance at $500 and 200 watts.
I have a couple of questions.
Is it absolutely certain that if this card had a DP output it would support G-Sync? Or is that an assumption at this point?
When overclocking, does the mem or gpu clock increase affect the performance more?
Finally, do you have the power numbers on the overclock?
Thanks.
G-Sync support is confirmed yes, as long as a DP connection is present.
GPU clock definitely affects the perf more.
Ah, I didn't make a graph of power under the overclock! But power jumped from 184 watts to 202 watts (full system).
Hmm, I find it interesting that it scaled down so well (in power consumption) when the Kepler architecture did not. I am really wondering how it scales into enthusiast territory, considering these tweaks improved the mainstream parts so much.
“One feature that the GTX 650 Ti card does NOT have is support for SLI which is quite disappointing.”
I believe you meant the “750”
Thanks, fixed!
Ryan, long time viewer here. Great intro to Maxwell. I just wish you guys still included bar graphs, because the line graphs can be hard to compare one card to another if there is only, say, a ten or twenty percent difference.
I have had this feedback a few times. We are going to integrate that again soon.
http://cdn.pcper.com/files/imagecache/article_max_width/review/2014-02-17/gpuz.png
Ryan, please update this with GPU-Z 0.7.7. It reports the GM107 specs correctly.
http://www.techpowerup.com/downloads/2340/techpowerup-gpu-z-v0-7-7/
Done!
http://www.geforce.com/whats-new/articles/nvidia-geforce-334-89-whql-drivers-released
Would this card be an improvement over my two 560 Tis, or should I wait for the next Maxwell cards?
No, if you are running 560 Ti's in SLI, I would wait.
Would this card be a good improvement over a single 560 Ti card?
Would I be better off trying to get a second 560 Ti for SLI?
Would this card be an upgrade from my GTX 260?
forgot to post my PC specs:
Operating System: Windows 6.1.7601 (Service Pack 1)
CPU Type: Intel® Core™2 Quad CPU @ 2.66GHz
CPU Speed: 2.69 GHz
System Memory: 8.59 GB
Video Card Model: NVIDIA GeForce GTX 260
Video Card Memory: 4.27 GB
Video Card Driver: nvd3dum.dll
Desktop Resolution: 1680×1050
Hard Disk Size: 492.68 GB
Hard Disk Free Space: 235.47 GB (48%)
Download Speed: 1.49 MB/s (11.9 Mbps)
Hi, I'm looking at spending $500–$600, maybe a little more, and was wondering if I should get this card or wait for the AMD R7 265?
Ryan, how do you make those “FPS by Percentile” charts in Excel? I'd like to do the same on my own, using FRAPS.
Thanks
Any CUDA testing, like Blender rendering? It would be nice to see the performance improvements on the compute side.
I don't agree with your remarks, Mr. Shrout. In the Skyrim, Metro: Last Light and BioShock Infinite frame time variance graphs, the 750 Ti's frame latency rises higher than the 260X's, while they are in a clear tie in Crysis 3 and Battlefield 4. You also did not comment on the frame spikes in BioShock.
So what would you say: 570 or 750?
Looking to make a Dell OptiPlex 780 SFF into a Steambox. The system accepts up to 16GB of DDR3 (4x4GB sticks) and is currently running a Q6700 Core 2 Quad (95W). I have been looking at using this card; however, the PSU is 234 watts. The 12V rail is 17A, so I'm assuming about 204 watts for the motherboard, RAM, video card and CPU fan. The only other components hooked in are a Blu-ray laptop drive and a 320GB 5400RPM 2.5" HDD from a Chromebook underneath it. I did a live chat with Dell and confirmed that they sell an HD 7750 1GB DDR3 video card you can order and add. So my question is: would a Q6600 BSEL-modded to 3GHz, 2x4GB sticks of DDR3, and a GTX 750 Ti with that Blu-ray drive and HDD work? If not, I have an E8400 CPU I can use instead, which would reduce the wattage by 30 watts since it's rated at 65W instead of 95W.