GPU Testbed – Sandy Bridge-E, X79, Games
We decided that it was high time we replaced the somewhat-dated Nehalem-based infrastructure (even though honestly, it was fast enough) with something a bit more current. Obviously, that meant going with the new Intel Sandy Bridge-E processor and X79 motherboard. With support for 40 PCI Express lanes and room for three to four full-size GPUs, the platform makes for the perfect GPU test base.
Our reviews will be based around the following system:
- Intel Core i7-3960X CPU
- ASUS P9X79 Pro motherboard
- Corsair DDR3-1600 4 x 4GB Vengeance memory
- 600GB Western Digital VelociRaptor HDD
- 1200 watt Corsair Professional Series power supply
- Windows 7 SP1 x64
The ASUS P9X79 Pro
The Intel Core i7-3960X gives us the fastest consumer-level CPU on the market to help eliminate processor-based bottlenecks in our testing wherever possible. There are still going to be some games that could use more CPU speed (Skyrim comes to mind), but for our purposes this is as good as it gets without venturing into overclocked settings. The ASUS P9X79 Pro motherboard has enough space for three dual-slot graphics cards when the time comes for testing 3-Way SLI (and CrossFire), and eight DIMM slots should we want to expand beyond our current 16GB of Corsair Vengeance memory.
I chose to stick with the 600GB VelociRaptor hard drive rather than an SSD, as our total installation size with Windows 7 SP1 x64 and 6+ games was already hitting the 115GB range. Finally, the 1200 watt power supply from Corsair offers more than enough juice for three power-hungry graphics cards while running quietly enough not to throw off our noise testing.
Speaking of noise, we are re-introducing our sound level testing thanks to the Extech 407738 Sound Level Meter, capable of monitoring decibel levels as low as 20 dB. This allows me to accurately report the noise levels generated by the graphics cards that make their way in-house at PC Perspective.
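As a quick aside on how readings like these are combined: the decibel scale is logarithmic, so repeated meter readings can't simply be averaged arithmetically. Here is a minimal Python sketch of the idea (an illustration only, with made-up readings, not a description of our exact test procedure):

```python
# Minimal sketch (illustration only): averaging sound meter readings.
# The decibel scale is logarithmic, so readings are converted to linear
# power, averaged, and converted back rather than averaged arithmetically.
import math

def average_spl(readings_db):
    """Average a list of dB readings on a linear power scale."""
    powers = [10 ** (db / 10.0) for db in readings_db]
    return 10.0 * math.log10(sum(powers) / len(powers))

# Hypothetical readings taken during a load test
print(round(average_spl([34.2, 35.0, 36.1]), 1))  # 35.2 dB vs. the arithmetic mean of 35.1
```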
Along with the new hardware configuration comes a host of new games. For this review we will be using the following benchmarks and games for performance evaluation:
- Battlefield 3
- Elder Scrolls V: Skyrim
- DiRT 3
- Batman: Arkham City
- Metro 2033
- Deus Ex: Human Revolution
- 3DMark11
- Unigine Heaven v2.5
This collection of games is current and covers several different genres – first-person role playing, third-person action, racing, first-person shooting and more. 3DMark11 and Unigine Heaven give us a way to see how the cards stack up in a more synthetic environment, while the real-world gameplay testing provided by the six games completes the performance picture.
GeForce GTX 660 2GB Reference Specs
For our review we have quite a few interesting comparisons to make. First, our reference GTX 660 card will go up against the GTX 660 Ti to see how much performance you get for that $70 increase in price. On the AMD side, the new GK106 GPU will be pitted against the Radeon HD 7870 GHz Edition and the Radeon HD 7850.
Our driver revision for the AMD cards was Catalyst 12.8; for NVIDIA we used a new GTX 660-ready 306.23 beta.
- NVIDIA GeForce GTX 660 2GB – $229
- NVIDIA GeForce GTX 660 Ti 2GB – $299
- AMD Radeon HD 7870 GHz Edition 2GB – $259
- AMD Radeon HD 7850 2GB – $199
We will of course take a look at the two retail cards from EVGA and MSI to see how their overclocked settings affect gaming performance.
And finally, we'll wrap up with a comparison of the GeForce GTX 660 2GB to the GeForce 9800 GT 1GB, the GeForce GTX 460 1GB and the GeForce GTX 560 Ti 1GB. It's a section of the review that users looking for upgrade information won't want to miss!
This is the true sweet spot
This is the true sweet spot gamers have been looking for; even NVIDIA in its own documents compares it to other greats (9800 GT, etc.).
I also really like that the review includes legacy cards; it gives a really nice comparison rather than just comparing against what is currently available on the market.
Yup, I noticed that
Yup, I noticed that comparison as well. I don't remember exactly how the 9800 GT compares to the 8800 GT (were they exactly the same, just a rebrand?) but the latter was a card that sold like hotcakes :).
Yeah, the 9800 GT was a
Yeah, the 9800 GT was a rebranded 8800 GT; most of the time all that was done was a slight clock increase, and later versions included a die shrink (65nm -> 55nm).
The 8800 GT and its bigger brother, the 8800 GTS 512MB (which was rebranded to be the 9800 GTX), really should never have been. I think NVIDIA launched them just in time for the holiday season that year; they were only on the market for about 3 months before they were rebranded. >.< But nevertheless, the 8800 GTs/9800 GTs were insanely popular gaming cards, and rightfully so I think.
I loved my 8800GTS 512!
I loved my 8800GTS 512! Nvidia did some really confusing stuff back then! Remember this?
8800 GTS 640MB, 8800 GTS 320MB, 8800 GTS 512MB (G92).
The 512MB was way faster than the other two.
Then the 8800 GTS 512MB was rebranded as the 9800 GTX (as mentioned above).
Then rebranded as the 9800 GTX+ (was it 45nm then? Can't remember).
Then rebranded as the GTS 250.
What a trip!
Haha, yes I do remember the
Haha, yes I do remember the fun with GeForce 8 branding/marketing back then.
Yup, there is a lot I could go into about those cards (I've been a computer hardware geek since the GeForce 3 days, and video cards are my fav parts, lol).
Yes, the 9800 GTX was rebranded to be the 9800 GTX+, which was the die shrink (65nm -> 55nm).
45nm was skipped on video cards/GPUs mainly due to area density/manufacturing timing issues.
So that same G92 core went from 8800 GTS 512MB -> 9800 GTX -> 9800 GTX+ -> GTS 250! How time flies ^.^
Thanks for the trip down
Thanks for the trip down memory lane with the 9800GT. Identical to the earlier 8800GT except for a BIOS flash for that fancy 9 Series number.
Best GTX 660 review period,
Best GTX 660 review period, thank you for your hard effort.
Thanks for your feedback
Thanks for your feedback everyone. Let me know if you have any questions or suggestions!
At the risk of asking you
At the risk of asking you guys to do more work ^.^, in these video card reviews can you include a few more video cards for comparison's sake?
Rather than only the respective card's immediate competition or siblings?
Say in this case, adding a GTX 670, GTX 680, HD 7970, HD 7950, GTX 580, GTX 570, etc. Seeing how cards compete across and inside generations is really nice/helpful. And, unless the testbed has changed, you could almost import the benchmarks from previous reviews (though a "quick" re-run of the cards with updated drivers would be great).
I'd be happy to help ;), lol.
No way in hell would skyrim
No way in hell would Skyrim bring a GPU to 27 fps min at 1680.
Fix your fucking test rigs, dammit.
It happens during a scene
It happens during a scene transition – going from a town to the overworld.
then your “Minimum” result is
then your “Minimum” result is fundamentally WRONG.
They aren’t wrong, they
They aren't wrong, they represent the system as a whole anyway.
are you guys using a older
Are you guys using an older version of Skyrim for the tests?
Skyrim until patch 1.4 (I think) was much worse in terms of CPU performance; since all cards are hitting 27 fps minimum, this can only be caused by something else.
Latest Steam updates…
The transition scenes appear
The transition scenes appear to be capped at 30 fps and aren't part of gameplay. Including them in a benchmark for a video card doesn't seem very illuminating.
It will be refreshing when
It will be refreshing when people learn how FRAPS works, and why there are dips in the charting as a result of cut scenes. It will do it on any system. And while…no, it isn't particularly illuminating on benchmark data…it is a necessity that all cards are shown on the same playthrough, with the same cut scenes, and the same corresponding dips. Please get off your high horse.
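To illustrate the point, here is a minimal sketch (assuming a FRAPS-style two-column frametimes log of frame index and cumulative milliseconds, with a hypothetical file name) of how the average, minimum, and a percentile-based "low" all come out of the same data, and why a single 30 fps-capped transition frame is enough to set the minimum:

```python
# Minimal sketch: turning a frametimes log into average/minimum FPS figures.
# Assumes a two-column log: frame index, cumulative timestamp in ms (FRAPS-style).
import csv

def load_frame_durations(path):
    """Return per-frame durations in ms from a cumulative-timestamp log."""
    with open(path, newline="") as f:
        rows = [r for r in csv.reader(f) if len(r) >= 2]
    times = [float(r[1]) for r in rows[1:]]  # skip the header row
    return [later - earlier for earlier, later in zip(times, times[1:])]

def summarize(durations_ms):
    fps = sorted(1000.0 / ms for ms in durations_ms if ms > 0)
    return {
        "avg_fps": sum(fps) / len(fps),
        "min_fps": fps[0],  # one capped transition frame is enough to set this
        "p1_fps": fps[int(0.01 * len(fps))],  # 1st-percentile low, less sensitive to one-off dips
    }

# Hypothetical file name for illustration
print(summarize(load_frame_durations("skyrim_1680x1050_frametimes.csv")))
```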
One thing that I will take issue with, however, is the anti-aliasing settings. I am discouraged that NVIDIA reviews consistently limit the AA to a paltry 4x, and usually it's MSAA, not FSAA. It is pretty common knowledge that high levels of AA on NVIDIA cards, particularly at higher resolutions, are a weak point of their design, and they typically show a steeper performance drop-off as the AA and resolution go higher than ATI cards would under similar conditions.
For the sake of fair and balanced testing, the game's true maximum settings should be run, as a typical user would select from the start, not cherry-picked AA levels that favor one vendor's hardware and greatly skew expected real-world performance.
If you are going to bother with the new WQHD resolutions, the least you could do is run 8x FSAA.