Bioshock Infinite Results
Is it finally time for you to upgrade your Sandy Bridge gaming rig?
Our Intel Skylake launch coverage is intense! Make sure you check out all the stories and videos that interest you!
- The Intel Core i7-6700K Review – Skylake First for Enthusiasts (Video)
- Skylake vs. Sandy Bridge: Discrete GPU Showdown (Video)
- ASUS Z170-A Motherboard Preview
- Intel Skylake / Z170 Rapid Storage Technology Tested – PCIe and SATA RAID
Today marks the release of Intel's newest CPU architecture, codenamed Skylake. I already posted my full review of the Core i7-6700K processor, so if you are looking for CPU performance and specification details on that part, you should start there. What we are looking at in this story is the answer to a very simple, but also very important, question:
Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?
I think you'll find the answer depends on a few things, including your gaming resolution and your appetite for multi-GPU configurations, but even I was surprised by the differences I saw in testing.
Our testing scenario was quite simple: compare the gaming performance of an Intel Core i7-6700K processor and Z170 motherboard, running both a single GTX 980 and a pair of GTX 980s in SLI, against an Intel Core i7-2600K and Z68 motherboard using the same GPUs. I installed the latest NVIDIA GeForce drivers and the latest Intel system drivers on each platform.
|   | Skylake System | Sandy Bridge System |
|---|---|---|
| Processor | Intel Core i7-6700K | Intel Core i7-2600K |
| Motherboard | ASUS Z170-Deluxe | Gigabyte Z68-UD3H B3 |
| Memory | 16GB DDR4-2133 | 8GB DDR3-1600 |
| Graphics Card | 1x GeForce GTX 980 / 2x GeForce GTX 980 (SLI) | 1x GeForce GTX 980 / 2x GeForce GTX 980 (SLI) |
| OS | Windows 8.1 | Windows 8.1 |
Our testing methodology follows our Frame Rating system, which uses a capture-based system to measure frame times at the screen (rather than trusting the software's interpretation).
If you aren't familiar with it, you should do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion of our Frame Rating methods before moving forward!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
If you need some more background on how we evaluate gaming performance on PCs, just check out my most recent GPU review for a full breakdown.
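As a rough illustration of the post-processing idea described above — this is my own toy sketch, not PCPer's actual FCAT tooling, and the function name and timestamp input are assumptions — here is how frame times, average FPS, and frame time variance fall out of per-frame on-screen timestamps:

```python
# Illustrative only: a toy version of the Frame Rating idea, NOT the
# actual FCAT pipeline. Input is the time (in ms) at which each frame
# appeared on screen, as recovered from the captured video.

def frame_metrics(timestamps_ms):
    # Frame time = gap between consecutive on-screen frames.
    frame_times = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    avg_ft = sum(frame_times) / len(frame_times)
    avg_fps = 1000.0 / avg_ft
    # High frame time variance means visible stutter, even when the
    # average FPS figure looks perfectly healthy.
    variance = sum((ft - avg_ft) ** 2 for ft in frame_times) / len(frame_times)
    return frame_times, avg_fps, variance

# A perfectly steady 60 FPS capture (one frame every ~16.7 ms):
steady = [i * (1000 / 60) for i in range(10)]
_, fps, var = frame_metrics(steady)
```

The point of measuring at the screen rather than in software is that these timestamps reflect what the player actually sees, including frames dropped or delayed after the game engine reported them.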
I only had time to test four different PC titles:
- Bioshock Infinite
- Grand Theft Auto V
- GRID 2
- Metro: Last Light
Bioshock Infinite (DirectX 11)
BioShock Infinite is a first-person shooter like you’ve never seen. Just ask the judges from E3 2011, where the Irrational Games title won over 85 editorial awards, including the Game Critics Awards’ Best of Show. Set in 1912, players assume the role of former Pinkerton agent Booker DeWitt, sent to the flying city of Columbia on a rescue mission. His target? Elizabeth, imprisoned since childhood. During their daring escape, Booker and Elizabeth form a powerful bond -- one that lets Booker augment his own abilities with her world-altering control over the environment. Together, they fight from high-speed Sky-Lines, in the streets and houses of Columbia, on giant zeppelins, and in the clouds, all while learning to harness an expanding arsenal of weapons and abilities, and immersing players in a story that is not only steeped in profound thrills and surprises, but also invests its characters with what Game Informer called “An amazing experience from beginning to end."
Our Settings for Bioshock Infinite
Though the GTX 980 on the Core i7-6700K is 8% faster than on the 2600K, you can also see differences in frame time consistency. Look at the FPS by Percentile graph, where the orange line, representing Sandy Bridge, tails off sooner than the black line representing Skylake.
At 2560x1440, the deviation between Skylake and Sandy Bridge is essentially gone, thanks to the added weight placed on the GPU by the additional pixels. This isn't entirely surprising, but it does hint that gamers with older systems driving higher resolution screens may not need to upgrade just yet.
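For readers curious how an FPS-by-percentile curve is derived, here is a minimal sketch — my own illustration, not the site's actual analysis code: sort the observed frame times and, at each percentile, report the instantaneous FPS of the frame at that rank. A curve that tails off early means the slowest frames are disproportionately slow, i.e. worse frame time consistency.

```python
# Hypothetical sketch of building an FPS-by-percentile curve from a
# list of observed frame times (ms). The 99th percentile reports the
# FPS of a frame slower than 99% of all frames, so an early tail-off
# on the graph indicates stutter the average FPS number hides.

def fps_by_percentile(frame_times_ms, percentiles=(50, 90, 95, 99)):
    ordered = sorted(frame_times_ms)  # fastest (smallest) first
    n = len(ordered)
    curve = {}
    for p in percentiles:
        idx = min(n - 1, int(n * p / 100))
        curve[p] = 1000.0 / ordered[idx]  # convert frame time to FPS
    return curve

# Mostly smooth 16.7 ms frames with a few 33.3 ms hitches:
times = [16.7] * 95 + [33.3] * 5
curve = fps_by_percentile(times)
```

In a run like this, the median FPS still looks close to 60, while the 99th-percentile figure collapses to roughly half that — exactly the kind of divergence the percentile graphs are designed to expose.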
GTX 980 SLI Results - 2560x1440
Well, this is interesting: as we dive into the world of SLI and the hiccups of multi-GPU, even at 2560x1440 things start to look better for Skylake. In this case the Core i7-6700K has about a 10% higher average frame rate than Sandy Bridge, along with much tighter frame time consistency.
Let's see how the other games in our testing fare.
Ryan,
In your opinion, would this make any difference in 4K? I’m running Titan X in SLI at 4K, with a 4K G-Sync monitor smoothing the whole experience out. I am running on a Sandy Bridge-E 3930K. Would upgrading to Skylake make any difference in my usage scenario?
Maybe. At 4K I would lean towards the CPU meaning even less but since you are running SLI, our 2560×1440 results show that it does in fact matter!
My advice would NOT be to change your CPU. Instead, for games which drop below 60FPS drop the resolution to 2560×1440 which not only may look IDENTICAL or very close but also will boost the frame rate significantly.
I’ve done a lot of testing on 4K and have found it extremely difficult to find ANY which look better to me over 1440p including CIV5 which has really small text.
(and going forward we’ll see a slow shift to DX12 which will likely eliminate much of the need to upgrade your current CPU)
That’s a friend’s PC. For myself, I won’t be considering 4K since I’ll want a high refresh rate such as the Acer Predator 1440p GSYNC monitor.
Other:
You may want to investigate how to force a frame rate cap for games so you always stay in asynchronous mode. I’m not sure how that works myself though since I’ve had no access to a GSYNC monitor. I’ve heard people say they could force (globally?) to something like 135Hz on a 144Hz panel as apparently it had to be slightly below the max refresh.
Ryan? I know this is an out of the box comparison, but at this price point you can compare a 3930k with the 6700k. Here you can run PCIE 3.0 (with a well known tiny tool) and clock the CPU at 4 / 4.2 GHz. Now your frame times will be a lot better than with the old-fashioned 2600k (clocking at 3.4 / 3.8 GHz). There is no need for a sidegrade from SB-E / IVY-E to Skylake. Would you please make a benchmark like this???
Thanks for this Ryan – I have a 970 SLI / 2600k setup and was curious what 6700k would do. PCI-express 2.0 vs 3.0 is one difference that might matter here.. Also with the Oculus Rift in mind, it looks like Skylake gives a bit of a boost to minimum FPS — which is going to be key for the right experience there. I’d like to see more investigation on minimum FPS as VR takes off next year..
As for making a case vs. 2600k — other than the I/O portion of skylake, and larger memory capability for high end use cases with DDR4, I don’t see a compelling case for 6700k vs 2600k. Some of the difference is clock speed (4.0/4.2 vs 3.5/3.9) – and Sandy Bridge can definitely match Skylake OC or not on air easily.
In any case, I was really happy to see this 980 SLI comparison of the two chips – thanks PCPer!
Am I missing the overclocking? Why not compare the overclocked performance, Sandy Bridge clocks way higher than Skylake from what I’ve heard.
Also I’m guessing not a lot of people are running the 2600k stock, hell I even run my non k 2600 overclocked and I never run into a situation where I feel I am lacking performance.
Was this a deliberate choice?
Word. All these new reviews are comparing unlocked processors at stock clocks. We’re not idiots. I bet there is little to no performance increase if they compared the CPUs at a minimum of 4.5GHz, which is what my 2500k is running at.
The better test would be to do it at the same speeds to make it a more apples-to-apples comparison – https://pcper.com/reviews/Processors/Intel-Core-i7-6700K-Review-Skylake-First-Enthusiasts/Clock-Clock-Skylake-Broadwel
But it does not show what it does for gaming.
The point of the article was to show the out-of-the-box experience to show readers what they get from their potential upgrade.
I am sure Ryan and company will do a clock-for-clock comparison soon. I’m quite sure they had very limited time under their NDAs to get these stories out on time.
But no one runs their 2600K at stock clocks, which makes the comparison pointless as far as customers being able to tell what kind of upgrade they’d be getting.
Why would a “clock for clock” comparison matter? The video/article is clearly trying to say that it’s time to upgrade from Sandy Bridge to Skylake, but if a Sandy Bridge processor can OC to 4.5-4.8 GHz and perform at a much higher level, it makes the whole upgrading debate, clearly displayed here, irrelevant.
That’s what I need to know: whether it’s worth upgrading from my 4.5GHz 2500k, and whether there is much performance advantage in gaming between a well OC’d 2500k or 2600k and the new Skylake offerings.
Still don’t see a definite reason to upgrade yet if you have a Sandy. Much better to wait for Zen (+few proper DX12 games) and then decide. If it proves worthy, go for Zen, if it doesn’t, then consider Skylake at better prices and possible discounts by then (same for DDR4 kits).
Did you see over at Tom’s Hardware? They ran some of their benchmarks on Windows 10 and it gave Skylake a 40% boost over Windows 8.1. Not sure what happened there.
Is PCPer going to do a review of W10 soon?
There really should be NEGLIGIBLE performance differences between Windows 7, 8 and 10 except for some niche cases which have software optimization issues.
I remember Windows 8 working better than Windows 7 for Battlefield 4 with Intel processors due to a core parking issue which appeared to be TRUE.
Anyway, if there is any truth to the performance difference again it should NOT be representative of most gaming or other application scenarios, or else there is a serious bug or other issue not sorted out due to the new CPU.
I ran all 4 (8.0) and tested each with a myriad of games. 7 was great, but lots of overhead. As time went on, that overhead shrank, and now 10 is doing marvelously, until a recent update. Intel no longer supports win10 under the SandyBridge cores. http://www.intel.com/support/graphics/sb/CS-034343.htm
Run BF4 and see if you get a memory leak with the 980’s with the new drivers for Win 10. I’m still getting a memory leak with my 780’s in SLI. Disabling SLI works perfectly.
Was playing bf4 before on win 10 with sli 980’s using the drivers win 10 installed during install (353.62) and don’t seem to have any memory leak issues. game settings were ultra @ 1440p.
Played for about 2.5 hours and haven’t rebooted since, and everything still looks fine.
Be interesting to see how close the figures are with the same amount of RAM.
I have the 2600K, and will go for a 980ti. My motherboard is almost the same, I think mine is the UD5H. The difference between the 980 and 980ti is far less than the upgrade cost of RAM+CPU+MB..
With the 16gb of 1600MHz RAM and 850pro ssd, it’s already pretty quick in Win7/64 and Linux Mint. Currently only has a 560ti card, but I’m waiting for 1440p gsync screens to become affordable…
I’m not sold on Win8/10 whatsoever.. Ugly is a fair call IMHO..
Ryan,
I’m guessing you were running the 2600k at its stock speed of 3.4GHz? Since they can OC to the same speed, it’s really not a fair comparison.
GJ on your Skylake coverage. IMO PCPER is by far the best tech site out there.
Who runs 2500K/2600K stock? It would have been interesting to see 2600K at 4.5-5GHz to see how it compares to a stock (and OC’d 6700K).
Most people on Sandy Bridge compare their overclocked chip with the stock speed of whatever the new i7 is. It’s hard to justify buying a new CPU/mobo/RAM when you have to overclock it just to see a decent improvement.
I’m running a 6 core Westmere chip (4GHz) on an X58 mobo and the only thing compelling me to perhaps upgrade is the feature set on the motherboards. Old versions of USB, SATA, PCIe and no UEFI make me a sad panda.
I suppose it’s a good thing my setup is still decent, ever since I moved in with my girlfriend I never seem to have any money 🙁
Ryan, were the processors tested at stock speeds?
Yup.
Both will overclock to about the same frequency so I don't expect much to change in our comparison.
You don’t expect an extra 30% CPU speed to change the CPU/GPU bottlenecks in your graphs at all? What?
You don’t expect comparing a 4.8GHz SB to a 4.8Ghz SL will show different results than comparing a 3.4GHz SB to a 4.0GHz SL ?
Rly ?
Srly ?
So… you think that a 40% overclock on a 2600k and a 15% overclock on the 6700k (which is about what you have when you clock both at 4.8 GHz) will yield the same results? Really?
WTB Math….
Skylake has higher IPC so 4.8ghz on both means the Skylake will still be faster. That’s what Ryan’s point was. Where’s your math?
That’s not what he said though.
“Both will overclock to about the same frequency so I don’t expect much to change in our comparison.”
It’s going to scale linearly and if not both will reach a plateau. Impossible to know without him saying for sure. But it seems clear that’s what he meant to say (the former).
You are interpreting my comment correctly. You can look at any set of benchmarks on different CPUs and find that CPUs fairly quickly hit a “good enough” point with high end graphics cards where there is really no difference between them. I think a stock 2600k is below that point (and has been for some time), primarily due to its low clock frequency.
I’d be *extremely* surprised to see significant differences between an overclocked 2600k and an overclocked 6700k, however. I would expect to see no more than a 1-5% difference between the two at 1080p+.
I really like pcper in general, but you guys lost a lot of credibility with this article due to not comparing overclocked speeds. It comes across as if you were paid off by Intel to write this article to make Skylake look better than it is.
That would only make sense if they had similar stock speeds in the first place.
Lol but they are clocked differently at stock speeds…
They will overclock to the same frequency, which means they would then be on an even playing field minus the IPC improvements.
A stock 3.4GHz 2600k vs. a 4GHz 6700k is a bit different than running both the 2600k and the 6700k at 4.5-4.7 GHz.
Which is what we are trying to point out. The gap would lessen in this situation and the test conclusions would be more accurate. Not only that, but you need to use the SAME amount of RAM in both test benches. 8 vs 16 will also cause issues at 2K res, especially in games like GTA which is a poor port.
I suggest you try the suggested things and then give us an actual comparison that doesn’t appear to just be pushing for sales.
I too would like to see overclocking results from Sandy Bridge compared. To do this test without those numbers is absolutely criminal. The difference between my 4.7GHz i5 2500K and this new processor has to be fairly minimal, and besides that, I found GTA V to be extremely playable @ 2560×1440 so I don’t believe it was far exceeding 4ms anymore. Come on guys! Re-test with max overclocks, maybe include these numbers for comparison too.
I want support for ECC memory!
Then you will have to go with the Xeon options at a higher cost. You will get no support for ECC in consumer SKUs from Intel, as that competes with the workstation part of their business. Expect to dole out more for the motherboard, and a motherboard without all the overclocking options at that, for the ECC capabilities. Until AMD can begin to field its Zen based SKUs with even more integration with its GPUs, and eventually the ability to dispatch FP computations directly to the GPU from the CPU cores, things will not improve on the consumer side of the equation. Even if Zen just comes up to Sandy Bridge levels, that HSA ability to send calculations to the GPU will be what puts Intel at more than just a price disadvantage, even with only current generation HSA 1.0 compliance and without the future direct dispatching of FP workloads to the GPU. There are plenty of Ivy Bridge and Haswell parts that will be on sale, and not as many Broadwell because of the delays in 14nm, but why pay for the latest from Intel when it does not beat the previous SKUs from Intel by a wide enough margin to justify the cost of a new motherboard? Just upgrade to an earlier Ivy Bridge, Haswell, or Broadwell and wait it out.
I am still on a 3770K on Z87 board with CrossFire, so probably not time to upgrade.
How are you running an LGA 1151 chip on an lga 1150 mobo?
I guess if I was anxious to get into 4K, then I would upgrade. But I am perfectly happy with my i7-860 and GTX 660Ti right now at 1080p resolutions. How many people are still on older hardware like mine, or even older? You’d be very surprised.
I’m on a LGA771 Xeon E5450 @ 3.85ghz in an old Asus matx lga775 board with 8gb ddr2 and Geforce 560ti and I still make it work with battlefield 4 at 1080p, although I’m about to drop the coin on a new Skylake system.
I want to see what AMD does with Zen (though I’m not expecting a miracle) until then I’m sticking with my sandy bridge.
I’d rather spend the money on a gsync or freesync setup at this point.
I don’t understand. Why give the new processor twice as much RAM? Why not do the inverse and see if there are any changes?
Yeah! What’s up with the Skylake system having twice as much RAM as the Sandy Bridge system?
I would be interested to see how much of the differences are due to memory speed and PCI-e speed. It isn’t really relevant to a purchasing decision, since you can’t separate the processor from the rest of the platform improvements. It may be interesting to turn the memory clock down and run a few tests, if you have the time. I don’t know if it is possible to explicitly set PCI-e 2.0 mode though. Some of the platform power consumption differences may be due to lower DDR4 power consumption also.
I need to see GTA V tested with an equal 16GB of RAM before I can trust those results. That game chews through memory.
I would also like to echo the people asking for testing with Sandy Bridge overclocked. The whole point of the K parts is to overclock them, so it seems odd not to test that (although I completely understand the time constraints you had).
I couldn’t make a buying decision without having these questions answered to be honest.
Come on guys… why no clock for clock testing? I don’t know a single person who ran a 2600k at stock speeds, and it’s clocked quite a bit lower at stock than the 6700k. Fail review is fail… nothing but useless info here.
Run them both at 4.7+ GHz and then you’d have some meaningful information.
For what it’s worth, I am running heavily overclocked Titan X SLI at 4k. I upgraded from a 4.8 GHz 2600k to a 4.6 GHz 5930k and the differences were minimal (I did have a PLX Z68 motherboard however). Crysis 3 got a little higher FPS in the very CPU intensive sections, and GTA V got a little smoother at the same FPS, and that was about it.
I second overclocking as well. It should just be a comparison at the typical recognized air-cooled overclock for all the K processors.
And thank you for your articles; I don’t think we say that enough.