Battlefield 4 Results

I tested Battlefield 4 at 3840×2160 (4K) and used the game's resolution scaling option to push GPU memory allocation higher. In the game settings you can change that scaling value by percentage: we went from 110% to 150% in 10% increments, increasing the load on the GPU with each step.
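
As a quick sanity check on what those steps mean in raw pixels, here is a minimal sketch, assuming the percentage applies to each render axis and a single 4-byte RGBA target (both assumptions of mine, not figures from the game, which allocates far more than one buffer):

```python
# Back-of-the-envelope math for BF4 resolution scaling from a 4K base.
# Assumes the scale factor applies per axis and a single RGBA8 target;
# real allocation involves many more buffers, so treat this as a floor.
BASE_W, BASE_H = 3840, 2160
BYTES_PER_PIXEL = 4  # RGBA8, assumed

for scale in (1.10, 1.20, 1.30, 1.40, 1.50):
    w, h = int(BASE_W * scale), int(BASE_H * scale)
    mb = w * h * BYTES_PER_PIXEL / 1024**2
    print(f"{scale:.2f}x -> {w}x{h} ({w * h / 1e6:.1f} MP, ~{mb:.0f} MB per target)")
```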

The first thing to note: the GTX 970 and GTX 980 allocated memory in essentially the same pattern. (We used MSI Afterburner to measure GPU memory allocation.)

At 120% resolution scaling, both the GTX 980 and GTX 970 use 3.39–3.40 GB of memory. At 130% that number climbs to 3.58 GB (already reaching into both of the GTX 970's memory pools), 140% nears the 4GB limit, and only at 150% do we see a gap between the two cards: the GTX 980 actually goes slightly over the 4GB mark while the GTX 970 falls 70MB or so short. I would consider both of these results to be within reason, though.

Before even looking at performance, then, it's clear that BF4 has no issue utilizing more than the 3.5GB of the GTX 970's first memory pool. It crosses into the 500MB segment at the same pace that the GTX 980 allocates its memory.
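
We used MSI Afterburner for the allocation numbers above; for anyone who wants to log the same figure programmatically, a minimal NVML polling loop along these lines (via the pynvml bindings, which I'm suggesting as a stand-in, not the tool we used) captures total GPU memory in use:

```python
# Polls total GPU memory allocation once per second via NVML,
# similar in spirit to logging the value with MSI Afterburner.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used: {info.used / 1024**3:.2f} GB of {info.total / 1024**3:.2f} GB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```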

Our performance data is broken into two sets: the GTX 980 running at all five of our scaling settings and, separately, the GTX 970 running at the same five settings. Plotting 10 sets of data on a single graph proved a bit too crowded, so we'll show the graphs one after the other to help you compare them more easily.

The real-time frame rates of the GTX 980 and GTX 970 both track fairly well as we increase the resolution in Battlefield 4. Obviously the frame rates are pretty low, though – we are starting at a 4K resolution and going up to 1.5x that!

Average frame rates are where we expect them to be: the GTX 980 is faster than the GTX 970 by fairly regular margins:

                GTX 980     GTX 970     % Difference
1.10x Scaling   25.1 FPS    22.5 FPS    -11%
1.20x Scaling   22.8 FPS    19.2 FPS    -18%
1.30x Scaling   19.1 FPS    16.8 FPS    -13%
1.40x Scaling   17.5 FPS    14.6 FPS    -19%
1.50x Scaling   15.0 FPS    12.8 FPS    -17%
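
As far as I can tell, the % Difference column is the GTX 970's deficit expressed relative to its own frame rate and truncated to a whole percent; that reading is my inference, not a stated methodology, but a quick check reproduces all five rows:

```python
# Reproduces the % Difference column under one plausible reading:
# (GTX 970 - GTX 980) / GTX 970, truncated toward zero. This matches
# all five rows, but the formula is my inference, not the article's.
import math

rows = [  # (scaling, GTX 980 FPS, GTX 970 FPS)
    (1.10, 25.1, 22.5),
    (1.20, 22.8, 19.2),
    (1.30, 19.1, 16.8),
    (1.40, 17.5, 14.6),
    (1.50, 15.0, 12.8),
]

for scale, fps980, fps970 in rows:
    delta = (fps970 - fps980) / fps970 * 100
    print(f"{scale:.2f}x scaling: {math.trunc(delta)}%")
```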

If the memory architecture difference in the GTX 970 were going to hurt average frame rates, the effect should first appear at the 1.30x scaling step, where we first cross the 3.5GB barrier. But at that level we actually see a smaller performance delta than we do at 1.20x scaling. The deltas do creep higher at 1.40x and 1.50x scaling, but they stay roughly in line with what we saw at 1.20x.

Things get a bit more interesting when we look at the frame times for each of these runs. Let's start with the top graph, the reference GTX 980 results: as we increase the scaling percentage, the frame times get longer (as expected), but they also become a bit more variable. The variance doesn't get really wide until we hit the 1.50x scaling rate.

The GTX 970 is a bit different: we start to see variance differences as early as 1.30x scaling (the blue line) and things get progressively worse from there.

Looking at frame time variance, a measure of potential stutter, there is no denying that the data shows the GTX 970 exhibiting more of it. From 1.30x scaling up through 1.50x, the GTX 970 shows as much as 5ms of frame variance across the worst 10% of rendered frames, while the GTX 980 stays below 5ms for that last 10% of frames even at 1.40x.
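
For readers curious how a stutter metric like this can be computed from a per-frame log, here is a rough sketch of the idea: measure each frame's deviation from a short running average, then sort the deviations and read off the worst 10%. This illustrates the concept, not our exact capture pipeline:

```python
# Illustrative frame time variance metric: absolute deviation of each
# frame from a short running average, sorted so the tail of the list
# holds the worst frames. A sketch of the concept, not our pipeline.
from statistics import fmean

def frame_variance(frame_times_ms, window=20):
    deviations = []
    for i, ft in enumerate(frame_times_ms):
        local_avg = fmean(frame_times_ms[max(0, i - window):i + 1])
        deviations.append(abs(ft - local_avg))
    return sorted(deviations)

# Hypothetical log: ~66 ms frames (about 15 FPS) with occasional spikes.
log = [66.0, 67.2, 65.8, 80.5, 66.3, 66.9, 90.1, 66.1, 66.4, 66.7] * 10
deviations = frame_variance(log)
cutoff = int(len(deviations) * 0.90)
print(f"90th-percentile variance: {deviations[cutoff]:.1f} ms")
```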

The frame variance graphs might be the most telling visual: notice that on the GTX 970 graph the 1.10x and 1.20x results are bunched together, hugging the lower end of the variance axis, but starting with 1.30x the results separate from the pack. On the GTX 980 graph, that separation doesn't occur to the same degree until the 1.50x result.

Clearly there is a frame time variance difference between the GTX 970 and GTX 980. How much of that is attributable to the memory pool difference and how much to the SMM / CUDA core difference is debatable, and these results leave that debate open.
