A question that has been asked about the new Radeon VII is how undervolting will change the performance of the card, and now [H]ard|OCP has the answer. Making use of AMD's two tools for this, Wattman and Chill, and the 19.2.2 driver, they tested clock speed and temperature while running Far Cry 5. As it turns out, undervolting the Radeon VII has a noticeable impact on performance, increasing the average FPS from 101.5 to 105.7, while enabling Chill drops that number to 80 FPS.
Check out the full review to see what happened to the performance in other games as well as the effect on temperatures.
"Is Radeon Chill or GPU undervolting the answer? We run the Radeon VII through some real world gaming and show you exactly what Chill and Undervolting will do to, or for your gameplay."
Here are some more Graphics Card articles from around the web:
- Nvidia GeForce RTX 2060 (Laptop GPU) @ TechSpot
- Ryzen Mobile Gets Better Drivers, Finally @ TechSpot
- Zotac RTX 2060 AMP @ Modders-Inc
- GTX 1660 Ti 4-way/40 game OC Shootout @ BabelTechReviews
- GeForce GTX 1660 Ti Mega Benchmark @ TechSpot
- MSI Gaming X Geforce GTX 1660 TI @ Modders-Inc
- Palit GeForce GTX 1660 Ti GamingPRO OC @ Guru of 3D
- It's Nvidia GTX 1660 Ti time: All the cards currently available @ The Tech Report
- NVIDIA GeForce GTX 1660 Ti Linux Gaming Benchmarks @ Phoronix
Many of the folks posting about the temperature of the GPU die rising as a result of undervolting the Radeon VII do not understand the counterintuitive nature of undervolting as it applies to thermal loads/thermal headroom on a processor with automated overclocking/thermal management functionality in place and enabled.
Lower voltages lead to less heat generated, but that feeds back into the automated OC control IP: more thermal headroom is detected, which allows higher clocks to be applied! And because of those higher average clock speeds there will be higher temps on the GPU die if Radeon Chill is not also enabled.
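To make that feedback loop concrete, here is a minimal toy model in Python. Every constant in it is an illustrative assumption (the 110 °C throttle point, the cooler's thermal resistance, the simple step-up search); the real Wattman/SMU controller is far more sophisticated, but the loop shows why a lower voltage settles at a higher clock and roughly the same temperature.

```python
# Toy model of the undervolt -> headroom -> auto-OC loop described above.
# All constants are illustrative assumptions, not measured Radeon VII values.

K = 151.0        # lumped C * activity factor, scaled so P(1.05 V, 1.8 GHz) ~ 300 W
THETA = 0.23     # cooler thermal resistance in degC per watt (assumed)
AMBIENT = 40.0   # degC
T_LIMIT = 110.0  # assumed junction throttle point

def dynamic_power(v, f_ghz):
    """Classic CMOS dynamic-power approximation: P ~ C * V^2 * f."""
    return K * v**2 * f_ghz

def settled_clock(v, f=1.4, step=0.01, f_cap=2.4):
    """Raise the clock until steady-state temperature hits the limit (or the cap)."""
    while f < f_cap:
        temp = AMBIENT + THETA * dynamic_power(v, f + step)
        if temp > T_LIMIT:
            break
        f += step
    return f, AMBIENT + THETA * dynamic_power(v, f)

for v in (1.05, 0.95):  # stock vs. undervolted
    f, temp = settled_clock(v)
    print(f"V = {v:.2f} V -> settles at {f:.2f} GHz, {temp:.1f} degC")

# Both runs settle at (nearly) the same temperature, but the undervolted
# run settles at a noticeably higher clock: the feedback described above.
```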
So Radeon Chill is for limiting energy usage and thermals by overriding the direct temperature feedback to the automatic overclocking enabled in Wattman. Radeon Chill will always dial back the Wattman auto-OC feedback control loop to save power at the cost of performance.
Wattman and automated overclocking will respond to undervolting by increasing the GPU's average clock speed over unit time, and the GPU will produce heat roughly as the square of the voltage applied times the transistors' switching speed (P ≈ C·V²·f), so clocks rise up to the same threshold at which the heat sensors on the GPU begin to reach unacceptable thermal levels. Better cooling helps too if you want even higher average clock speeds, undervolting or not!
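As a quick worked example of that P ≈ C·V²·f relation (the voltages and clocks below are illustrative assumptions, not Radeon VII measurements): because voltage enters squared, a roughly 10% undervolt buys about an 18% dynamic-power saving at the same clock, which the auto-OC loop can spend on about 22% more frequency before power is back where it started.

```python
# Worked numbers for the P ~ C * V^2 * f relation (illustrative assumptions).

def rel_power(v, f, v0=1.05, f0=1.80):
    """Dynamic power relative to a (v0, f0) baseline, via P ~ V^2 * f."""
    return (v / v0) ** 2 * (f / f0)

# A ~10% undervolt cuts dynamic power by ~18% at the same clock...
print(f"{rel_power(0.95, 1.80):.2f}x power at 0.95 V, 1.80 GHz")    # ~0.82x
# ...leaving room to raise the clock by (v0/v)^2, about 22%, before
# power is back at the original level.
print(f"{rel_power(0.95, 1.80 * (1.05 / 0.95) ** 2):.2f}x power")   # ~1.00x
```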
Certainly, less voltage causes the transistors to release less heat for each clock cycle, but when automated overclocking is enabled the control software/firmware will boost the GPU's clocks until it reaches the same heat levels that require throttling the clocks.
With auto overclocking enabled, undervolting only causes the clock throttling to occur less often over a unit of time, but that equates to higher average clocks over the same unit of time, and the extra switching can cancel out the heat savings from the lower voltage if the automated overclocking software is enabled and Radeon Chill is not.
Say hello to Wattman's auto overclocking/undervolting option; it works that way if enabled! Give it thermal headroom via some stable undervolting and it will take all that thermal headroom back via auto overclocking and better gaming performance, if that's what the gamer wants. Enable Radeon ->Chill<- for the temperature/power savings to pull the reins in on Wattman's auto-OC. Whoa, horsey!

P.S. Radeon VII's transistor density per mm^2 at 7nm is a lot higher than at 14nm/12nm, and that smaller die size for Vega 20, with some millions more transistors than Vega 10, is not going to result in a cooler die. More transistors packed into a smaller and smaller area means more heat per mm^2 even if the overall GPU power usage goes down. The smaller area of the Vega 20 die, paired with more transistors per mm^2, means the TDP dissipated per mm^2 by the cooling solution has to get better or the GPU will throttle.
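To put rough numbers on that power-density point, here is a back-of-the-envelope comparison in Python. The die sizes, board powers, and transistor counts are commonly cited figures for Vega 10 and Vega 20, used here as assumptions rather than verified measurements:

```python
# Rough power-density comparison for the P.S. above. Die sizes, board power,
# and transistor counts are commonly cited figures, used here as assumptions.

dies = {
    "Vega 10 (14nm)": {"area_mm2": 495, "power_w": 295, "transistors_b": 12.5},
    "Vega 20 (7nm)":  {"area_mm2": 331, "power_w": 300, "transistors_b": 13.2},
}

for name, d in dies.items():
    density = d["transistors_b"] * 1e3 / d["area_mm2"]  # million transistors per mm^2
    flux = d["power_w"] / d["area_mm2"]                 # watts per mm^2 the cooler must move
    print(f"{name}: {density:.0f} MTr/mm^2, {flux:.2f} W/mm^2")

# On these figures Vega 20 packs ~1.6x the transistor density and presents
# ~1.5x the heat flux per mm^2, so the cooler has to work harder per unit
# area even at a similar board power.
```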
“P.S. Radeon VII’s transistor density per mm^2 at 7nm is a lot higher than at 14nm/12nm, and that smaller die size for Vega 20, with some millions more transistors than Vega 10, is not going to result in a cooler die. More transistors packed into a smaller and smaller area means more heat per mm^2 even if the overall GPU power usage goes down. The smaller area of the Vega 20 die, paired with more transistors per mm^2, means the TDP dissipated per mm^2 by the cooling solution has to get better or the GPU will throttle.”
What a poor excuse for the fake improved power efficiency of TSMC’s 7 nm process…
Actually, there are no relevant performance gains left to get from nodes significantly below 20 nm, and AMD’s GCN architecture is quite weary.
Really, you F-ing idiot, look at the number of 64-bit FP units on Vega 20; that’s where the extra power usage on Vega 20 goes. Did you even take the time to read up on Vega 20 and compute? Look at the God damn F-ing shader core count on Vega 10: Vega 20 has more DP FP units on its 4096 F-ing shader cores, plus other tweaks that require more transistors than even the Vega 10 tapeout.
That’s on AMD’s Vega 20 tapeout and not on TSMC’s 7nm process node, and God F-ing damn, you are the most egregious example of a stupid inbred peckerwood as there can ever be, chipman!
No one can be that daft; are you actually for real? Really, can you even be considered sentient, chipman?
You cannot measure a process node’s power efficiency using anyone’s GPU/CPU tapeout: if the processor is designed for more compute and has more cores/shaders, that has nothing to do with the efficiency of the process node the chip is taped out on. Nvidia’s gaming GPUs are stripped of all the excess compute and are tuned for gaming workloads, while AMD uses Vega 20 mainly for compute/AI, with gaming as a secondary usage.
You set yourself up for abuse every F-ing time, you ignorant git! You Goddamn F-ing moron!
Have you got a spare half billion dollars, chipman, to give AMD to spend on some gaming-only-focused GPU tapeouts? Because if you do not, then STFU and get to stepping! AMD is going to make loads more billions off of its Epyc server CPU SKUs than anyone makes off of GPUs, so AMD does not have to give 1/10th of a rat’s shiny red a$$ about gaming GPUs. AMD could do just fine selling its GPUs for professional compute/AI usage alongside its Epyc/Naples and Epyc/Rome server CPU SKUs and not worry one iota about winning any flagship gaming crown.
It’s just that the flagship gaming market is an excellent way to get some revenue out of those non-performant pro-market GPU die bins that do not make the grade for pro-market usage. So that’s what AMD’s flagship gaming GPUs are about, unless chipman has half a billion dollars to just give to AMD.
And chipman, STFU about TSMC’s power efficiency with respect to anyone’s CPU/GPU tapeouts, because that’s more on the processor designer and not on the fab’s process node. No one can be that F-ing daft!