GRID 2 (DirectX 11)
Be fast, be first and be famous as the race returns in GRID 2, the sequel to the BAFTA award-winning, multi-million-selling Race Driver: GRID.
Our Settings for GRID 2
At 1920×1080 on GRID 2, the performance delta between Sandy Bridge and Skylake is around 7% when measuring average frame rate. That's not a lot, but it is measurable, as is the small increase in frame time inconsistency.
At 2560×1440 though, that small gap disappears completely, revealing nearly identical performance on both platforms.
For our pair of GTX 980s in SLI, though, the performance delta spikes up to 18% in favor of the Core i7-6700K! Frame times are consistent in both cases here, with neither platform exceeding 2ms in our testing until well past the 99th percentile.
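If you want to run this kind of comparison on your own captures, here is a minimal sketch of how the two headline numbers, average frame rate and a 99th-percentile frame time, can be pulled from a plain list of frame times in milliseconds. This is purely illustrative and is not the capture tooling behind the numbers above; the sample values are made up.

```python
# Minimal sketch: summarizing a frame-time capture (values in milliseconds).
# Illustrative only; not the tooling used for the article's results.

def summarize(frame_times_ms):
    """Return (average FPS, 99th-percentile frame time in ms)."""
    ordered = sorted(frame_times_ms)
    avg_fps = 1000.0 * len(ordered) / sum(ordered)
    # Nearest-rank 99th percentile.
    idx = max(0, int(round(0.99 * len(ordered))) - 1)
    return avg_fps, ordered[idx]

def percent_delta(new, old):
    """Percentage advantage of 'new' over 'old'."""
    return 100.0 * (new - old) / old

# Hypothetical captures for the two platforms:
skylake_fps, skylake_p99 = summarize([16.2, 16.5, 15.9, 16.8, 17.1])
sandy_fps, sandy_p99 = summarize([17.4, 17.8, 17.1, 18.0, 18.3])

print(f"Skylake advantage: {percent_delta(skylake_fps, sandy_fps):.1f}% "
      f"(99th-percentile frame times: {skylake_p99:.1f} ms vs {sandy_p99:.1f} ms)")
```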
I’d love to see this test include a Q6600 as well, for more perspective.
It’s going to be 10 years old soon. Comparing it to the newer generation would be neat.
I appreciate the testing very much but at the same time I strongly disagree with your conclusion.
Would you recommend people spend $600 on a new graphics card that delivers somewhere between a 7% and 25% improvement, depending on the scenario?
Because that’s what you’re recommending for the motherboard, CPU and RAM combination.
7-25% isn’t huge? What do you want??? Geez. 50%? 100%? Ridiculous expectations out of people. This chip is great.
7-25% is pathetic for that price.
Do you buy a new $600 graphics card for that kind of pitiful improvement?
I have a feeling that the difference wouldn’t be nearly as large if both were clocked the same, and that things would be far more GPU-bound if both were running north of 4.5GHz.
This article is basically useless for the intended audience.
I would guess that most PCPers still using their SB chips are doing so because they are great overclockers, with most people getting around ~4.5GHz.
To not factor that into the analysis of an article whose subtitle is “Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?” is a pretty glaring oversight, and it turns what could have been an extremely helpful and useful article into a curiosity.
My 3930K @ 4.6 still spoils me with insane FPS. Upgrading from this chip is a really bad idea right now. I just went to 980 Ti SLI @ 1440p/144Hz and it’s so fast it just spoils me, really. If Skylake-E manages to impress, maybe I’ll consider upgrading to that just for the lulz.
Looks like I’ll have to scratch my upgrade itch next gen.
I’m running a 2600k @ 4.2 with a GTX 980 at 4K and it runs fine for me. But if I win the contest for that Gigabyte rig….GOODBYE SANDY BRIDGE!!!!
First off, I’m running an i5 2500K at 4.5GHz with an AMD R9 280. If you are playing the games you want to play without any problems, with smooth gameplay, etc., why would you even consider upgrading any part of your computer? What is the real-world difference between a stable 60fps and 120fps, visually? No difference! Stop being ridiculous and recommending upgrades that don’t benefit the vast majority of gamers playing at 1080p, because they won’t see a difference worth a $600 price tag. As someone else mentioned, it looks like you are being paid by Intel to make these suggestions and comparisons when they have very little impact on 1080p gamers.

I won’t be upgrading my processor anytime soon. The video card will be my next upgrade, when my gameplay gets closer to 30fps in new games, which I estimate will be at least two years from now based on current trends. DX12 improvements might push that upgrade even further into the future. So let’s stick to real-world comparisons in the future!
You are high; there is a huge difference between 60 and 120 fps 😛
I’ll keep my 4GHz 2500k with 16GB of DDR3, thanks.
Running both chips at the same clock speed and with the same amount of RAM would have shown the true generational improvement. Pretty disappointing that this was not done.
I definitely don’t think the 5% improvement is worth the cost of upgrading to the Skylake platform and DDR4, though.
The test wasn’t fair. Memory matters in CPU-limited benchmarks, and the CPUs were not OC’d. Who the hell gets a 6700K or a 2700K and does not OC? That is sort of the point of the K series.
Yes, I have to agree with everyone here who has pointed out that the benchmarks were not run at overclocked speeds or with the same amount of memory. Just to show the difference, going from the stock, apples-to-apples clocks in your Skylake review to 4750MHz on my 2600K, a 35.71% higher clock speed, gives a dramatic improvement in my SiSoft Dhrystone and Whetstone scores. Dhrystone went from 116.86 @ 3500MHz to 186.59 @ 4750MHz, an amazing 59.6% increase, while Whetstone went from 73.44 @ 3500MHz to 97.41 @ 4750MHz, a 32.64% increase, which is much closer to the clock-speed increase. The Dhrystone result amazed me; I cannot account for it scaling so far beyond the 35.71% clock increase, and I would be grateful to anyone who can explain it.

To repeat what everyone else has said: nobody buys a 2500K or 2600K to run it at stock speeds. Heck, if you give me that Skylake platform, I will give you my 5.1GHz-capable 2600K, which I keep cool with a nice, simple, but great-performing Cooler Master Nepton 140XL, push-pull fans at 40% through its 38mm-thick radiator, silent for the most part. It out-cools most 240mm (not 280mm) radiators with its powerful pump and a very large copper cold plate that has more microfins than any DIY water-cooling CPU block, according to FrostyTech, I believe.

I have to give you a huge thumbs up for adding SLI to the test, even though you did not overclock the five-year-old king of mainstream CPUs, good old Sandy Bridge. The 2600K is the best CPU I have ever purchased, and I do not think I will ever again get the payback that chip has given me, and is STILL giving me, in performance on a 24/7 daily basis.

Another fantastic thing about Skylake is that they removed the idiotic on-die VRMs; they are back on the motherboard, nice and big. I feel Haswell’s on-die VRMs caused more CPU failures and chip degradation than I ever really heard about, since those tiny regulators are too small to push many volts through and they add heat to the CPU die itself. Luckily, Skylake has them back on the motherboard where they can be cooled correctly, and it even gives you a way to choose between motherboards: if two boards have everything you need and you need a tiebreaker, pick the one with the most VRM phases and thus the best power delivery for the CPU, leading to less Vdroop.
I am not going to do any more SiSoft tests, because it is making me want to clock my 2600K to 5100MHz, push my SLI’d EVGA GTX 770 Classified cards to 1400MHz on the cores and 8000MHz on the memory, and do some benchmarking that I do not really need. I am using an LG 34UM95 34″ 21:9 3440×1440 IPS monitor with 8-bit-to-10-bit color via dithering and a 60Hz cap (I tried overclocking the panel with no luck), so I do not use Vsync, but I do set a frame rate target of 67fps with EVGA’s Precision software. It saves a lot of unneeded power and keeps the cards cool, since they are not pushing out every frame they possibly could at 120+fps all the time, and I get no tearing or stuttering; every game is buttery smooth. Yes, I did do a 30-minute test with the G-Sync-enabled 34″ 3440×1440 Predator monitor, but nothing I had time to play ran under 60fps, and my current rig runs everything above 60fps anyway. My main game right now is War Thunder Ground Forces, which has fantastic graphics and gameplay and blows World of Tanks out of the water; plus, if you are into it, you can fly planes as well.
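For what it’s worth, the scaling percentages quoted in the comment above do check out arithmetically; here is a quick sketch using only the scores and clocks the commenter gives:

```python
# Quick arithmetic check of the scaling figures quoted in the comment above.
def pct_increase(new, old):
    return 100.0 * (new - old) / old

print(f"Clock speed: {pct_increase(4750, 3500):.2f}%")      # ~35.71%
print(f"Dhrystone:   {pct_increase(186.59, 116.86):.2f}%")  # ~59.67%
print(f"Whetstone:   {pct_increase(97.41, 73.44):.2f}%")    # ~32.64%
```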
I listen to your podcast pretty frequently and like you guys, BUT, yep, it’s pretty obvious you guys got paid for this one. Shame on you!!!
You got us; we each received a portion of this island chain in Dubai.
I think this test isn’t exactly even. First off, you didn’t test the performance of these CPUs in their overclocked state. As these are K processors, chances are the people who want these comparisons will be overclocking them, so a base-vs-base comparison is already biased since they have vastly different base clocks.
I can say from experience that most 2600Ks can reach 4.5GHz, and of those, most can reach 4.7GHz with decent cooling. I mention this because the higher the CPU speed, the less likely the CPU is to bottleneck the GPUs.
The next thing I noticed in your test is the difference in RAM: 8GB vs 16GB. While some might not think this matters, I have seen some of these games easily eat up over 8GB when run at 1440p or higher resolutions.
So my point here is that your test beds were not as similar as possible. You might not be able to use the same RAM or the same RAM speed, but you should have at least matched the amount of RAM on both machines.
I think if you were to do the two things I suggest, you would see different results. There might still be an advantage to Skylake, but I think the gap would be much smaller; 1-4% would be my guess.