Battlefield 4 Results
I tested Battlefield 4 at 3840×2160 (4K) and used the game's ability to scale resolution linearly to increase GPU memory allocation. In the game settings you can change that scaling option by a percentage: we went from 110% to 150% in 10% increments, increasing the load on the GPU with each step.
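As a rough sanity check on the load at each step, here is a small sketch of the per-frame pixel counts, under the assumption that the scale slider multiplies each axis (so pixel load grows with the square of the setting):

```python
# Per-frame pixel counts for each BF4 resolution-scale step.
# Assumption: the scale slider multiplies each axis, so pixel load
# grows with the square of the setting.
BASE_W, BASE_H = 3840, 2160  # 4K base resolution

def scaled_pixels(scale: float) -> int:
    """Pixels rendered per frame at a given resolution-scale factor."""
    return int(BASE_W * scale) * int(BASE_H * scale)

for scale in (1.10, 1.20, 1.30, 1.40, 1.50):
    mp = scaled_pixels(scale) / 1e6
    print(f"{scale:.2f}x -> {mp:.1f} megapixels per frame")
```

At 1.50x that works out to a 5760×3240 render target, more than double the pixels of native 4K, which is why frame rates fall so quickly.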
The first thing to note: the GTX 970 and GTX 980 allocated memory in essentially the same pattern. (We used MSI Afterburner to measure GPU memory allocation.)
At 120% resolution scaling, both the GTX 980 and GTX 970 allocate 3.39 – 3.40 GB of memory. At 130% that number climbs to 3.58GB (already reaching into both of the GTX 970's memory pools), at 140% it nears the 4GB limit, and only at 150% do we see a gap between the two cards: the GTX 980 goes slightly over the 4GB limit while the GTX 970 falls 70MB or so behind. I would consider both of these results to be within reason though.
Before even looking at performance, then, it's clear that BF4 has no issue utilizing more than the 3.5GB in the GTX 970's first memory pool. It crosses into the 500MB section at the same pace that the GTX 980 allocates its memory.
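For reference, here is a minimal sketch of how an allocation total maps onto the GTX 970's two segments, assuming the driver fills the fast 3.5GB pool before touching the 512MB segment (which matches NVIDIA's description of the design):

```python
# Hedged sketch: given a total allocation (as reported by a tool like
# MSI Afterburner), how much would land in each of the GTX 970's two
# segments, assuming the driver fills the 3.5GB fast pool first.
FAST_POOL_GB = 3.5
SLOW_POOL_GB = 0.5

def pool_usage(total_gb):
    """Split a total allocation into (fast, slow) segment usage."""
    fast = min(total_gb, FAST_POOL_GB)
    slow = min(max(total_gb - FAST_POOL_GB, 0.0), SLOW_POOL_GB)
    return fast, slow

# 3.40 and 3.58 come from our testing; 3.93 is roughly 70MB shy of 4GB.
for total in (3.40, 3.58, 3.93):
    fast, slow = pool_usage(total)
    print(f"{total:.2f} GB total -> {fast:.2f} GB fast + {slow:.2f} GB slow")
```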
Our performance data is broken into two sets: the GTX 980 running at all five of our scaling settings and, separately, the GTX 970 running at the same five scaling settings. Plotting 10 sets of data on a single graph proved a bit too crowded, so we'll show the graphs successively to help you compare them more easily.
The real-time frame rates of the GTX 980 and GTX 970 both track pretty well as we increase the resolution in Battlefield 4. Obviously the frame rates are pretty low though – we are starting at 4K and going up to 1.5x that!
Average frame rates are where we expect them to be: the GTX 980 is faster than the GTX 970 by fairly regular margins:
| | GTX 980 | GTX 970 | % Difference |
|---|---|---|---|
| 1.10x Scaling | 25.1 FPS | 22.5 FPS | -11% |
| 1.20x Scaling | 22.8 FPS | 19.2 FPS | -18% |
| 1.30x Scaling | 19.1 FPS | 16.8 FPS | -13% |
| 1.40x Scaling | 17.5 FPS | 14.6 FPS | -19% |
| 1.50x Scaling | 15.0 FPS | 12.8 FPS | -17% |
If the memory architecture difference in the GTX 970 were going to widen the average frame rate gap, it would start at the 1.30x scaling step, where we first cross the 3.5GB barrier. But at that level we actually see a lower performance delta than at 1.20x scaling. The differences do grow again at 1.40x and 1.50x scaling, but they stay in line with the delta we saw at 1.20x.
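For clarity, the percentage column is the relative drop from the GTX 980's average to the GTX 970's at each step; a quick sketch of that computation (the table's exact rounding may differ slightly):

```python
# The % Difference column expressed as a computation: the GTX 970's
# average frame rate relative to the GTX 980's at each scaling step.
averages = {  # scale step: (GTX 980 FPS, GTX 970 FPS)
    "1.10x": (25.1, 22.5),
    "1.20x": (22.8, 19.2),
    "1.30x": (19.1, 16.8),
    "1.40x": (17.5, 14.6),
    "1.50x": (15.0, 12.8),
}

for step, (fps_980, fps_970) in averages.items():
    delta = (fps_970 - fps_980) / fps_980 * 100
    print(f"{step} scaling: {delta:+.1f}%")
```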
Things are a bit more interesting when we look at the frame times for each of these runs. Let's look at the top graph of the reference GTX 980 results. You can see that as we increase the scaling percentage, the frame times get longer (as expected), but they also become a bit more variable. Variance doesn't get really wide until we hit the 150% scaling rate.
The GTX 970 is a bit different: we start to see variance differences as early as 1.30x scaling (the blue line) and things get progressively worse from there.
Looking at frame time variance, a measure of potential stutter, there is no denying that the GTX 970 exhibits more of it. From 1.30x scaling up to 1.50x, the GTX 970 shows as much as 5ms of frame variance for the last 10% of rendered frames, while the GTX 980 stays under 5ms for that same slice of frames even at 1.40x scaling.
The frame variance graphs might be the most telling visual: notice that in the GTX 970 graph the 1.10x and 1.20x results are bunched together, hugging the low end of the variance axis, but starting with 1.30x the results separate out. In the GTX 980 graph, that doesn't happen to the same degree until the 1.50x result.
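To make the metric concrete, here is an illustrative way to compute frame-time variance: each frame's deviation from the average of the preceding frames, sorted so the tail exposes the worst 10%. This is a hedged stand-in to show the idea, not necessarily the exact pipeline behind the graphs above.

```python
# Illustrative frame-time variance: each frame's deviation from the
# running average of the previous `window` frames, sorted so the tail
# shows the worst frames. An assumption, not the site's exact pipeline.
from statistics import mean

def frame_variance(frame_times_ms, window=20):
    deviations = []
    for i, t in enumerate(frame_times_ms):
        prior = frame_times_ms[max(0, i - window):i]
        if prior:  # skip the very first frame (no history yet)
            deviations.append(abs(t - mean(prior)))
    return sorted(deviations)

def worst_decile(frame_times_ms):
    """Deviations for the worst 10% of frames."""
    devs = frame_variance(frame_times_ms)
    return devs[int(len(devs) * 0.9):]

# A mostly smooth ~50ms run with one long hitch at frame 5.
times = [50.0, 51.0, 50.5, 49.8, 62.0, 50.2, 50.7, 55.0, 50.1, 50.3]
print(worst_decile(times))
```

A single long frame dominates the tail here, which is exactly the kind of spike that reads as a stutter even when the average frame rate looks fine.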
Clearly there is a frame time variance difference between the GTX 970 and GTX 980. How much of that is attributable to the memory pool difference, and how much to the SMM / CUDA core difference, remains open to debate.
I repeat again: the video card works like any other, but only 3.5 GB is actually available, and the last 0.5 GB is non-local video memory (which is system RAM). There is no "slow video memory" like NVIDIA said. It's a lie and it can be easily proved (I proved it by writing my own tests). Just allocate blocks in VRAM and dump RAM, then search the dump for the contents of those "VRAM" blocks, and you will see that the last 0.5 GB is stored in RAM. Is that so hard? I feel like a genius, seeing that no one notices obvious things.
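For what it's worth, the scan step the commenter describes is straightforward to sketch. The snippet below only shows the pattern search over a simulated dump; actually allocating VRAM and capturing a system RAM dump is platform-specific and omitted, and the marker bytes are arbitrary:

```python
# Sketch of the search step only: scan a (simulated) system RAM dump
# for a distinctive pattern previously written into a "VRAM" block.
# The GPU allocation and the RAM-dump capture are platform-specific
# and omitted here; the marker bytes are arbitrary.
SIGNATURE = bytes.fromhex("DEADBEEFCAFEF00D") * 4  # 32-byte marker

def find_signature(dump, sig=SIGNATURE):
    """Return every offset where the marker appears in the dump."""
    hits, start = [], 0
    while (pos := dump.find(sig, start)) != -1:
        hits.append(pos)
        start = pos + 1
    return hits

# Simulated dump with the marker embedded at offset 4096: any hit
# would mean the "VRAM" block was actually backed by system RAM.
dump = bytes(4096) + SIGNATURE + bytes(1024)
print(find_signature(dump))
```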
Is this Boris, the one, the only?
One more:
Boris has been offering a different analysis:
https://www.facebook.com/enbfx
Has anyone else seen this?
Holy crap, so the last 512MB isn’t being utilized at all? WTF NVIDIA????
From the link:
ENBSeries
23 hrs ·
Update regarding the "GTX 970 memory bug". I wrote another test to check how that slow 0.5 GB of memory works, and again it's the same thing the driver has done for a long time: that memory is stored in RAM instead of VRAM, which is why it's slow. Basically, this is standard behavior for most video cards on the market (VRAM is physical VRAM plus a bit of RAM). What does it mean in practice compared to other video cards? The GTX 970 has 3.5 GB of VRAM. What I see in articles with explanations from NVIDIA is a half-lie, and of course casual people are incompetent and better not listened to. I don't think it's something horrible to lose 0.5 GB, but it's bad that NVIDIA hid this information (my own video card with 2 GB of VRAM has access to 2.5 GB, and nobody announced it as 2.0 fast and 0.5 slow).
Nice article. Thanks.
While I do believe this article holds true for single-GPU scenarios, the question of how badly the 0.5GB memory pool will affect SLI performance still needs to be answered.
With the current crop of games, a single 970 pushing 3.5GB+ VRAM will most likely yield unplayable FPS anyway. However for SLI users 3.5GB+ could be a daily routine.
A comparison between 980 SLI and 970 SLI can easily help us to find out the impact of the 0.5GB memory pool. You can easily remove stutters caused by SLI glitches from the picture by looking at the 980 SLI results.
I have GTX 970 SLI… have had zero problems.
I play on Asus ROG Swift GSync @ 2560×1440.
You want me to try to do some testing?
Same thing happens with the GTX 660/Ti. VRAM usage will run up to 1536MB but either stutter and go over – after which it’s mostly fine, with a very slight framerate hit and possibly more stutters – OR it will just bounce back down to about 1530MB and stay there.
Seems like the exact same thing is happening with the GTX 970 – usage up to 3584MB and then a stutter – where it either goes over or stays right at the 3.5GB limit.
Since nVidia ain't doing the right thing, AMD is offering a discount if you want to return your 970 for an AMD card:
https://twitter.com/amd_roy/status/560462075193880576
More lies from nVidia, wonder how PCper will defend them on this:
—
https://www.facebook.com/enbfx
Update regarding the "GTX 970 memory bug". I wrote another test to check how that slow 0.5 GB of memory works, and again it's the same thing the driver has done for a long time: that memory is stored in RAM instead of VRAM, which is why it's slow. Basically, this is standard behavior for most video cards on the market (VRAM is physical VRAM plus a bit of RAM). What does it mean in practice compared to other video cards? The GTX 970 has 3.5 GB of VRAM. What I see in articles with explanations from NVIDIA is a half-lie, and of course casual people are incompetent and better not listened to. I don't think it's something horrible to lose 0.5 GB, but it's bad that NVIDIA hid this information (my own video card with 2 GB of VRAM has access to 2.5 GB, and nobody announced it as 2.0 fast and 0.5 slow). So sad that all my posts on the forums were trolled; fools are always the most active and aggressive. Hopefully it's their own butthurt, as they won't listen to professionals.
You could also think about it this way. On the 980, VRAM is also taken up by things that do not need fast memory: Windows, drivers, etc. If Nvidia's new driver can use the 500MB for things that do not need fast VRAM, and use the 3.5GB for things that do, then the gap will narrow.
Windows OS is in charge of that.
Nvidia would have to “hack” its own driver for each game to do that.
People who bought the 970 and don’t play commercially well known games are left out to dry because Nvidia would have to apply that driver hack or optimization to each game that’s ever released.
We aren’t dead yet which so software isn’t self aware.
It would be great if you could compare frame times to a 290 and 290x.
Frametimes aren’t looking too good there.
I'm pretty sure this isn't about the last 0.5GB; it's more about the principle. It feels like Nvidia was just trying to keep it a secret. I'm 99% sure there wouldn't be a problem if the card had 3.5GB, or if they had told us about the slower last 0.5GB and said its performance decrease was negligible.
Bye bye magic driver 😛
http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-we-will-not-boost-geforce-gtx-970-performance-with-drivers/
They may still do that, but in silence. Peter from Nvidia clearly stated they are working on a fix; why would he lie? But then he could get pressure from management to deny it, because admitting this publicly would mean confessing their fault, and that could be a problem for them. So now they may pretend everything is fine, no problems whatsoever, while in the background they optimize their drivers for the 970… And once released, we will have no issues… A miracle 🙂 Maybe BS, but everything is possible 🙂
Playing Dying Light last night, I found that it was a game that stopped at 3.5GB of VRAM and refused to use more. Outside of the CPU patch the devs are working on, though, I didn't have any hiccups using the GTX 970.
Geeks3d.com seems to have found an interesting point: OpenGL apps can use all 4GB of RAM. They have a DirectX VRAM test on their site, and no matter what I tried I couldn't get it to use over 3.5GB.
Also an interesting note: I opened up GPU-Z. Sitting at my desktop with dual 1080p screens I'm using on average 300MB of VRAM, and with Chrome open about 430MB.
It seems possible that UI elements are just sitting in this slower part? (huge speculation)
Win 7 x64 SP1, using the latest Nvidia driver. Wonder if Windows 8 can use all the memory?
Thanks for the continued updates on the 970 memory issue, guys. Much appreciated. This will surely help people get a fair picture and decide on a purchase. Cheers.
Try using Star Citizen for the tests, it’s a super VRAM hog.
If you are upset with NVidia over this, sign the petition and help people get their money back:
https://www.change.org/p/nvidia-refund-for-gtx-970
I believe that people with multi-monitor and 4K monitor setups will run into these problems today, and it will only get worse in the future. Sure, the average gamer plays at 1920×1200, but that doesn't mean we aren't upgrading to 4K or ultrawide.
What about row hammer? I assume the 500 MB part is a DDR3 variant? Hopefully you guys will write an article about row hammer; so far maybe only Intel has fixed it (they're the ones with a 2014 patent mentioning row hammer).