An intelligent recharge rate
Things are even more complicated than this, though, and much more interesting. The Turbo Boost technology actually saves load/thermal data to know when and how often it can apply this higher-than-standard frequency boost. While the processor cores are running under the rated TDP and at less than 100% load, they “recharge” their ability to run above the TDP for that extra boost in frequency.
At each length of time labeled ‘x’ above, the processor knows that it is running idle and thus that the temperature of the CPU is dropping or has reached its lowest idle point. Even when the processor is not completely idle, it can “recharge” at a slightly lower rate, indicated by the ‘x/2’ label above. To be fair, Intel has not divulged specifics about how quickly the processor can regenerate its ability to run above TDP, but this is something we are going to spend a lot of time on when we get our hands on Sandy Bridge samples.
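The recharge behavior described above can be pictured as a kind of “leaky bucket” of boost headroom. The sketch below is purely illustrative and assumes a simple model: headroom accumulates while the package runs under TDP and drains while boosting above it. Intel has not published the real recharge rates, so the 95 W TDP, the 25-second-derived budget cap, and the wattage figures here are all assumptions, not Intel’s actual parameters.

```python
# Hypothetical "leaky bucket" model of the Turbo Boost budget.
# All constants are illustrative assumptions, not Intel's real values.

TDP_W = 95.0           # assumed rated TDP in watts
BUDGET_MAX_J = 2375.0  # assumed max stored headroom: 95 W * 25 s

def step_budget(budget_j, power_w, dt_s=1.0):
    """Advance the boost budget by one time step: headroom grows while
    power is under TDP and drains while running above it."""
    budget_j += (TDP_W - power_w) * dt_s
    return max(0.0, min(budget_j, BUDGET_MAX_J))

def can_boost(budget_j):
    """Boost above TDP is allowed only while headroom remains."""
    return budget_j > 0.0

# Example: 30 s of near-idle recharges the budget, then a heavy
# above-TDP load steadily drains it.
budget = 0.0
for _ in range(30):
    budget = step_budget(budget, power_w=10.0)   # idle: recharging
for _ in range(40):
    if not can_boost(budget):
        break
    budget = step_budget(budget, power_w=130.0)  # boosting: draining
```

The interesting property of a model like this is exactly what the article describes: boost duration depends on the recent history of the workload, not just its instantaneous load.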
When I asked Intel how much variance there will be in this feature, and in particular how reviewers like me will be able to reliably test and evaluate performance on Sandy Bridge platforms, it seemed I wasn’t the first to ask. Intel said that its partners, in both the OEM and platform areas, were very adamant that performance be repeatable and predictable in order to keep users from “binning” systems themselves in search of the fastest machine available. In the real world, performance WILL vary based on your day-to-day usage patterns, but for all intents and purposes, Intel assured us that simply letting the system idle for a minute or so gives it more than enough time to “reset” and produce reasonably similar performance and benchmark runs.
And even though this “recharge” time is still in question, I suspect it is shorter than you might imagine. Intel mentioned that even during continuous benchmarking, the extra speed of the new Turbo Boost should kick in fairly often.
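Intel’s “pause for a minute to reset” advice suggests a simple way to structure repeatable benchmark runs. The harness below is a sketch of that methodology under the assumption stated above; the 60-second settle time comes straight from Intel’s comment, while the function name and structure are our own invention.

```python
# Sketch of a repeatable benchmark harness: idle between runs so each
# run starts with a fully recharged turbo budget (per Intel's guidance
# that ~1 minute of idle is enough to "reset" the state).
import time

def run_benchmark_repeatably(workload, runs=3, settle_s=60):
    """Time `workload` several times, sleeping between runs so every
    run begins from (approximately) the same turbo state."""
    results = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        results.append(time.perf_counter() - start)
        time.sleep(settle_s)  # let the CPU idle and the budget refill
    return results
```

Comparing the spread of the returned timings with and without the settle period would be one way to measure how much the turbo state actually influences run-to-run variance.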
Still, this complicates our job of testing and evaluating processors quite a bit going forward. If the introduction of HyperThreading and the “how a system feels” debates made things more interesting, the next-generation Turbo Boost technology has the potential to add quite a bit more complexity.
This is still NOT temperature based
While it might sound like Intel’s engineers have decided to use the temperature of the CPU in a more efficient and effective way, Intel is actually NOT using temperature sensors to determine Turbo Boost speeds. Instead, the CPU logic uses power draw (amperage) as its metric, with Intel’s standard cooling solutions serving as the reference point for these algorithms.
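This kind of energy-based accounting is what later became visible to software through the RAPL (Running Average Power Limit) counters that Linux exposes via the powercap interface. As a rough sketch of how you might sample package power on such a system (the sysfs path below is the common Linux location, but it varies by machine and requires appropriate permissions; the helper names are our own):

```python
# Sketch: estimate package power by differencing the RAPL energy
# counter exposed by the Linux powercap interface. Path and
# availability vary by system; Sandy Bridge-era chips were the first
# to expose this interface.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def energy_to_watts(e0_uj, e1_uj, interval_s):
    """Convert a delta of the microjoule energy counter to average watts."""
    return (e1_uj - e0_uj) / 1e6 / interval_s

def package_power_watts(interval_s=1.0):
    """Sample the energy counter twice and return average package power."""
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL) as f:
        e1 = int(f.read())
    return energy_to_watts(e0, e1, interval_s)
```

Watching this number during a benchmark run is one way to see the power-based (rather than temperature-based) behavior the article describes.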
This is interesting for a couple of reasons, the first of which is that the default Turbo behavior on Sandy Bridge processors will not change whether you have the most basic air cooling or the highest-end water cooling setup. For enthusiasts that is no doubt a letdown, as they will not be able to extend this “extra boost” period beyond the time Intel has decided it should run.
Secondly, this has some interesting implications for motherboard vendors and overclocking that we will discuss below.
Overclocking Is Going to Change
The Sandy Bridge processors will not have an easily adjustable base clock, as Anand mentioned in his recent architecture write-up, though Intel has said it is planning K-SKUs with unlocked multipliers at a lower price than the ones currently on sale for the Lynnfield/Clarkdale cores. Even with that assumption, there are some serious limitations on what overclocking can evolve into because of the next-generation Turbo technology.
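With the base clock effectively fixed, the multiplier becomes the only practical overclocking knob on unlocked K parts. The arithmetic is simple enough to sketch; the 100 MHz base clock is the widely reported Sandy Bridge value, and the specific targets below are just examples.

```python
# Multiplier-only overclocking arithmetic, assuming the commonly
# reported 100 MHz Sandy Bridge base clock.
BCLK_MHZ = 100.0

def core_clock_mhz(multiplier):
    """Effective core frequency is base clock times multiplier."""
    return BCLK_MHZ * multiplier

def required_multiplier(target_mhz):
    """Smallest whole multiplier that reaches the target frequency."""
    return -(-target_mhz // BCLK_MHZ)  # ceiling division

# e.g. a 34x part runs at 3.4 GHz; reaching 4.5 GHz needs a 45x multiplier.
```

The granularity is coarse: with a fixed 100 MHz base clock, every multiplier step is a full 100 MHz jump, which is part of why BIOS- and board-level tricks become interesting.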
As I am told by several people on the platform side of things, the updated power controller on Sandy Bridge monitors CPU current (not temperature, remember), so simply pushing more voltage to a processor will not produce the results we are used to seeing. Increasing the Vcore on Nehalem today is a simple BIOS setting, but with Sandy Bridge there will likely be limits once the current exceeds what Turbo Boost allows for a given period of time.
As far as I can tell, motherboard vendors are planning unique solutions to these problems that require much more engineering than in any previous generation. Imagine a motherboard that tries to “trick” the CPU into thinking it is receiving a different current than is actually being delivered, so that the “extra” Turbo frequencies would run longer than that somewhat arbitrary (or rather, temperature-independent) 25 seconds mentioned above. Or perhaps BIOS options that allow the user to raise that secondary current limit so that the natural Turbo Boost algorithm does the overclocking for you while still providing a safety net of sorts.
Unfortunately, today there are a lot of questions and options but no solid answers.
Final Thoughts
We have a lot more to talk about when it comes to the new Sandy Bridge architecture, but for now our deep dive into the complications of the new version of Turbo Boost reveals some very interesting pieces of the whole picture. With this additional functionality in Intel’s next-generation Core architecture, consumers will get a performance improvement that might be hard to describe but will be easy to feel.