Metro: Last Light and Conclusions
Metro: Last Light (DirectX 11)
Beneath the ruins of post-apocalyptic Moscow, in the tunnels of the Metro, the remnants of mankind are besieged by deadly threats from outside – and within.
Mutants stalk the catacombs beneath the desolate surface, and hunt amidst the poisoned skies above. But rather than stand united, the station-cities of the Metro are locked in a struggle for the ultimate power, a doomsday device from the military vaults of D6. A civil war is stirring that could wipe humanity from the face of the earth forever.
As Artyom, burdened by guilt but driven by hope, you hold the key to our survival – the last light in our darkest hour…
Our Settings for Metro: Last Light
At 1080p, Metro: Last Light is the only game in our testing that didn't show a difference between the Sandy Bridge and Skylake processor platforms. There is a fractional difference in frame time consistency, but it's imperceptible.
At 2560×1440 we see a small 3% advantage for the Skylake system, along with a similarly minor improvement in frame times.
Even though both resolutions showed little difference between the Core i7-6700K and Core i7-2600K systems with a single GPU, once we installed dual GTX 980s in SLI we saw a shift in experience similar to the previous page: the Skylake-based system is 12% faster in terms of average frame rate and delivers much more consistent frame times in several areas.
Closing Thoughts
It's not a completely comprehensive test, but the results are consistent enough that I think we can draw some very specific conclusions. First, there is absolutely a gaming performance difference between the Sandy Bridge Core i7-2600K and the Skylake Core i7-6700K. With just a single GPU, and a high end one at that, we measured average frame rate differences as well as frame time consistency differences at 1920×1080 in three of our four test games. In the newest title of the bunch, Grand Theft Auto V, that gap was 25%! The other games ranged from 7-8%, which isn't enough to warrant a full platform upgrade on its own, but if you have been weighing your options for a while, it might be enough to tip the scales.
As we increased the resolution to 2560×1440, those platform differences were minimized yet again, with the Sandy Bridge and Skylake platforms showing very similar average frame rates. There was the occasional frame time advantage for the 6700K (GTA V), but otherwise these two experiences would be hard to tell apart.
For SLI though, that was far from the truth. The pair of GeForce GTX 980 cards running on the Core i7-6700K and Z170 motherboard produced a much better overall gaming experience, even at 2560×1440, than the older Sandy Bridge platform. Both average frame rates and frame times proved this to be the case: if you are a gamer considering or currently running on SLI, then you should really save some cash to make that next upgrade to Haswell or Skylake!
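For readers curious how we distinguish average frame rate from frame time consistency, the sketch below shows one way to derive both from a log of per-frame render times. This is a minimal illustration, not our actual capture pipeline; the file name, log format, and the choice of the 99th percentile as a consistency metric are all assumptions for the sake of the example.

```python
# A minimal sketch (not our actual capture pipeline) of how average FPS
# and a frame time consistency metric can be derived from per-frame
# render times. The log format (one frame time in ms per line) and the
# file name "frametimes.csv" are assumptions for illustration.
import csv

def load_frame_times(path):
    """Read one frame time in milliseconds per row from a CSV log."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f)]

def summarize(frame_times_ms):
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    avg_fps = 1000.0 / avg_ms
    # 99th percentile frame time: the threshold the slowest 1% of
    # frames exceed -- a common proxy for stutter and consistency.
    p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    return avg_fps, p99_ms

if __name__ == "__main__":
    avg_fps, p99_ms = summarize(load_frame_times("frametimes.csv"))
    print(f"Average: {avg_fps:.1f} FPS, 99th percentile: {p99_ms:.2f} ms")
```

Two systems can post nearly identical averages while differing noticeably in that percentile figure, which is exactly the pattern that separated the SLI configurations in our results.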
In the end, I found more divergence between the platforms than I expected going into the testing process. Initially this was going to be just a single page in our standard Skylake review, but the differences warranted a separate article on the topic.
And hey, if you do decide to go down the path of upgrading your gaming PC, why not help out PC Perspective and use our Amazon affiliate code for your purchase. Thank you for your support!!
Let me know in the comments below what else you think we should test, and whether what we have demonstrated here has convinced you it's time to move up in the world!
I’d love to see this test include a Q6600 as well, for more perspective.
It’s going to be 10 years old soon. Comparing it to the newer generation would be neat.
I appreciate the testing very much but at the same time I strongly disagree with your conclusion.
Do you recommend people spend $600 for a new graphics card that delivers somewhere between 7% and 25% improvement depending on scenario?
Because that’s what you’re recommending for the motherboard, CPU and RAM combination.
7-25% isn’t huge? What do you want??? Geez. 50%? 100%? Ridiculous expectations out of people. This chip is great.
7-25% is pathetic for that price.
Do you buy a new $600 graphics card for that kind of pitiful improvement?
I have a feeling that the difference wouldn’t be nearly as large if both were clocked the same, and that things would be far more GPU-bound if they were both clocked north of 4.5GHz.
This article is basically useless for the intended audience.
I would guess that most PCPers still using their SB chips are doing so because they are great OCers, with most people getting around 4.5GHz.
To not factor that into the analysis of an article whose subtitle is “Is it time for gamers using Sandy Bridge system to finally bite the bullet and upgrade?” is a pretty glaring oversight, and turns what could have been an extremely helpful and useful article into a curiosity.
My 3930K @ 4.6 still spoils me with insane FPS. Upgrading from this chip is a really bad idea right now. Just went to 980 Ti SLI @ 1440p/144Hz and it’s so fast it just spoils me, really. If Skylake-E manages to impress, maybe I’ll consider upgrading to that just for the lulz.
Looks like I’ll have to scratch my upgrade itch next gen.
I’m running a 2600k@4.2 with a gtx980 at 4k and it runs fine for me. But if I win the contest for that Gigabyte rig….GOODBYE SANDY BRIDGE!!!!
First off, I’m running an i5 2500k at 4.5GHz and an AMD R9 280. If you are playing the games you want to play without any problems, with smooth gameplay, etc., why would you even consider upgrading any part of your computer? What is the real world difference between a stable 60fps and 120fps visually? No difference! Stop being ridiculous in recommending upgrades that don’t benefit the vast majority of gamers operating at 1080p, because they won’t see a difference worth a $600 price tag. As someone else mentioned, it looks like you are being paid by Intel to make these suggestions and comparisons when it has very little impact on 1080p gamers. I won’t be upgrading my processor anytime soon. The video card will be my next upgrade, when my frame rates get closer to 30fps with new games, which I estimate will be at least 2 years from now according to the trends. DX12 improvements might push that upgrade even further into the future. So let’s stick to real world comparisons in the future!
You are high, there is a huge difference between 60 and 120 fps 😛
I’ll keep my 4GHz 2500k with 16GB of DDR3 thanks.
Running both chips at the same clock speed and with the same amount of RAM would have shown the true generational improvement. Pretty disappointing that this was not done.
I definitely don’t think the 5% improvement is worth the cost of upgrading to the Skylake platform and DDR4 though.
The test wasn’t fair. Memory matters in CPU limited benchmarks, and the CPUs were not OC’d. Who the hell gets a 6700k or a 2700k and does not OC? That is sort of the point of the K series.
Yes, I have to agree with everyone here who has stated that the benchmarks/tests were not performed at overclocked speeds or with the same amount of memory.

Just to show you the difference, going from your stock, apples-to-apples Skylake review settings to 4750 MHz on my 2600k, a 35.71% clock speed increase, shows a dramatic improvement in my SiSoft scores. Dhrystone went from 116.86 @ 3500 MHz to 186.59 @ 4750 MHz, an amazing 59.6% increase, while Whetstone went from 73.44 @ 3500 MHz to 97.41 @ 4750 MHz, a 32.64% increase, which is much closer to the clock speed increase. I cannot account for the Dhrystone result, since I was expecting it to track the 35.71% clock speed change, and I would be grateful to anyone who can explain that 59.6% jump.

As everyone else has said, no one buys a 2500k or 2600k to run it at stock speeds. Heck, if you give me that Skylake platform, I will give you my 5.1+ GHz capable 2600k. I keep it cool with a simple but great performing Cooler Master Nepton 140XL, with push/pull fans at 40% moving air through its 38mm thick radiator, silent for the most part. It out-cools most 240mm (not 280mm) radiators thanks to its powerful pump and a very large copper cold plate with more microfins than any DIY water cooling CPU block, according to FrostyTech, I believe.

I have to give you a huge thumbs up for adding SLI to the test, even though you did not overclock the five-year-old king of mainstream CPUs, good ole Sandy Bridge. The 2600k is the best CPU I have ever purchased, and it is still paying me back in performance on a 24/7 daily basis.

Another fantastic thing about Skylake is that they removed the idiotic on-die VRMs, which are back on the motherboard, nice and big. I feel Haswell’s on-die VRMs have caused more CPU failures and chip degradation problems than we ever really hear about: they are too small to push many volts through, and they add heat to the CPU die itself. Luckily Skylake has them back on the motherboard where they can be cooled correctly, and it can even help you choose a motherboard: if two boards have everything you need, pick the one with the most VRMs and thus the best power delivery for the CPU, leading to less Vdroop.
I am not going to run any more SiSoft tests, because they make me want to clock my 2600k to 5100 MHz, push my SLI’d EVGA GTX 770 Classified cards to 1400 MHz core and 8000 MHz memory, and do some benchmarking I do not really need. I am using an LG 34UM95 34″ 21:9 3440×1440 IPS monitor with 8-bit to 10-bit color by dithering and a 60Hz cap (tried overclocking the panel with no luck), so I do not use Vsync, but I do set a frame rate target of 67fps with EVGA’s Precision software. It saves me a lot of unneeded power and keeps the cards cool, since they are not pushing out every frame they possibly can (120+fps) all the time, and I get no tearing or stuttering; every game is buttery smooth. Yes, I did do a 30-minute test with the G-Sync enabled 34″ 3440×1440 Predator monitor, but nothing I had time to play ran under 60fps, and my current rig runs everything above 60fps. My main game right now is War Thunder Ground Forces, which has fantastic graphics and gameplay and blows World of Tanks outta the water; plus, if you’re into it, you can fly planes also.
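As a quick aside for anyone checking the math in the comment above: the quoted figures are all straightforward relative increases, and they do work out. A minimal Python sketch of the arithmetic, using only the numbers the commenter provided:

```python
# Relative (percentage) increase between two measurements.
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

# Figures quoted in the comment above: clock speed and SiSoft scores.
print(f"Clock speed: {pct_increase(3500, 4750):.2f}%")      # -> 35.71%
print(f"Dhrystone:   {pct_increase(116.86, 186.59):.2f}%")  # -> 59.67%
print(f"Whetstone:   {pct_increase(73.44, 97.41):.2f}%")    # -> 32.64%
```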
I listen to your podcast pretty frequently and like you guys, BUT, yep, pretty obvious you guys got paid for this one. Shame on you!!!
You got us, we each received a portion of this island chain in Dubai.
I think this test isn’t exactly even. First off, you didn’t test the performance of these CPUs in their overclocked state. As these are K processors, chances are the people who would want these comparisons will be overclocking them, so a base-clock vs base-clock comparison is already biased since they have vastly different base clocks.
I can speak from experience that most 2600ks can reach 4.5GHz, and of those most can reach 4.7GHz with decent cooling. I mention this because the higher the CPU speed, the less likely the CPU is to bottleneck the GPUs.
The next thing I noticed in your test is the difference in RAM: 8GB vs 16GB. While some might not think this matters, I have seen some of these games eat up over 8GB easily when run at 1440p or higher resolutions.
So my point here is your test beds were not as similar as possible. You might not be able to use the same RAM or speed of RAM, but you should have at least matched the amount of RAM on both machines.
I think if you were to do the two things I suggest, you would see different results. There might still be an advantage to Skylake, but I think the gap would be much more minimal; 1-4% would be my guess.