Update: Did you miss the live stream? Watch the on-demand replay below and learn all about the Frame Rating system, FCAT, input latency and more!
I know, based solely on the amount of traffic and forum discussion, that our readers have really adopted and accepted our Frame Rating graphics testing methodology. Based on direct capture of GPU output via an external system and a high-end capture card, our new systems have helped users see GPU performance in a more "real-world" light than previous benchmarks would allow.
I also know that there are lots of questions about the process, the technology and the results we have shown. In order to try and address these questions and to facilitate new ideas from the community, we are hosting a PC Perspective Live Stream on Thursday afternoon.
Joining me will be NVIDIA's Tom Petersen, a favorite of the community, to talk about NVIDIA's stance on FCAT and Frame Rating, as well as just talk about the science of animation and input.
The primary part of this live stream will be about education – not about bashing one particular product line or talking up another. And part of that education is your ability to interact with us live, ask questions and give feedback. During the stream we'll be monitoring the chat room embedded on https://pcper.com/live and I'll be watching my Twitter feed for questions from the audience. The easiest way to get your question addressed, though, will be to leave a comment or inquiry in this post below. It doesn't require registration, and it will allow us to think about the questions beforehand, giving them a better chance of being answered during the stream.
Frame Rating and FCAT Live Stream
11am PT / 2pm ET – May 9th
PC Perspective Live! Page
So, stop by at 2pm ET on Thursday, May 9th to discuss the future of graphics performance and benchmarking!
Does your new office studio allow for a live audience?
Two questions for Ryan and Tom.
#1: I want to get the visually smoothest performance possible out of my SLI’ed Titans. What settings should I set in the NVIDIA Control Panel and GeForce Experience program?
#2: Are frame-rate caps (at, say, 120 Hz) a good alternative for avoiding the problems with v-sync? Can you set caps with any of the publicly available NVIDIA tools for games that don’t offer them as an option?
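For context on what a frame-rate cap does (question #2), here is a minimal sketch of a generic software frame limiter, written as an illustration only; it is not NVIDIA's implementation, and the numbers are hypothetical. The idea is simply to sleep away whatever is left of a fixed per-frame budget instead of waiting on the display's vertical sync.

```python
import time

TARGET_FPS = 120                 # hypothetical cap, e.g. matching a 120 Hz display
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allowed per frame


def render_frame():
    """Placeholder for the game's simulation and rendering work."""
    time.sleep(0.004)  # pretend the CPU/GPU work took about 4 ms


def run_capped(num_frames=240):
    """Render frames no faster than TARGET_FPS by sleeping off the leftover budget."""
    frame_times_ms = []
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # cap without blocking on v-sync
        frame_times_ms.append((time.perf_counter() - start) * 1000.0)
    return frame_times_ms


if __name__ == "__main__":
    times = run_capped()
    print("average frame time: %.2f ms" % (sum(times) / len(times)))
```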
Hi!
I’d like you guys to explore a bit how frame rating benchmarks can be somewhat misleading when considering real gaming scenarios. One can, for example, contrast the illustration pic of this article with this one: http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-11.html
http://www.radeonpro.info/features/dynamic-frame-rate-control/
Can DFC effectively deliver a much smoother experience? If so, isn’t the AMD Crossfire issue heavily exacerbated?
Thanks! 🙂
1) Nvidia has clearly been looking at and managing frame times for years now. The question is, why didn’t they raise the issue to the public, only developing or releasing FCAT now that benchmarking websites have realized the importance of frame times?
2) There seems to be a certain amount of subjectivity regarding which is considered smoother: a locked low framerate with no frametime deviation, or a substantially higher yet more variable framerate. Given this, isn’t it deceptive to imply that lower frametime deviation inherently means ‘smoother’ animation? Perhaps the issue isn’t deception, but simply ambiguity in the meaning of ‘smoothness’. Some subjective research is needed to determine how much jitter is perceivable, and a weighting is needed to determine the relative importance of framerate and jitter.
3) Looking at a frametime percentile graph, which corresponds to smoother animation: a plot that’s horizontal for the first 90% then curves drastically upwards near the end, or a plot that has a slow linear increase in frametime? (A sketch of how such a percentile curve and a jitter metric can be computed follows this comment.)
4) Analysis of frametimes with vsync on has focused on 60 Hz displays. Do things change significantly with higher refresh-rate displays, like 120 Hz?
5) How do frametimes and jitter affect quality of experience for the Oculus Rift? I know a lot of PC gamers are eager to get into virtual reality and wonder which metric most affects the subjective experience, particularly in regards to motion sickness.
Just as a note, I’d like to see frametiming of integrated graphics.
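On questions 2 and 3 above, both a percentile curve and a jitter number are straightforward to derive once per-frame times are in hand. The sketch below is a generic illustration, not the FCAT scripts, and the two sample runs are made-up data; it shows why a run with a lower average frame time can still look worse on a frame-to-frame jitter metric.

```python
# Generic sketch (not the FCAT scripts): turn a list of per-frame times
# into a percentile curve and a simple frame-to-frame jitter metric.

def percentile_curve(frame_times_ms, steps=100):
    """Return (percentile, frame time in ms) pairs suitable for plotting."""
    ordered = sorted(frame_times_ms)
    curve = []
    for i in range(1, steps + 1):
        idx = min(len(ordered) - 1, int(len(ordered) * i / steps) - 1)
        curve.append((i, ordered[idx]))
    return curve


def frame_to_frame_jitter(frame_times_ms):
    """Average absolute change between consecutive frame times, in ms."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(deltas) / len(deltas)


if __name__ == "__main__":
    # Two hypothetical runs: a locked 25 ms per frame (40 fps) versus a
    # faster but uneven run alternating 10 ms and 22 ms frames.
    steady = [25.0] * 200
    uneven = [10.0, 22.0] * 100

    for name, run in (("steady", steady), ("uneven", uneven)):
        print("%s: jitter %.1f ms, 99th percentile %.1f ms"
              % (name, frame_to_frame_jitter(run), percentile_curve(run)[98][1]))
```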
Glad to see Tom outside of a product launch. He is a lot of fun.
1) I am curious to know when NVIDIA discovered how important frametimes are, and how long it took to implement a solution.
2) As others have asked, why hasn’t NVIDIA moved to educate users on the importance of frame latency?
All BS aside, why hasn’t NVIDIA pushed this as some kind of open standard to improve the overall gaming experience, regardless of vendor? This is exactly the kind of thing we need to share in order to progress the industry as a whole.
Timing from the game engine?
Game engines holding back and pushing the timing of the simulation, refilling the queue.
Does the cause of the results not always fall on the shoulders of the GPU, but on the game engine?
Can the developers of game engines change FCAT results?
How much does the driver of these cards affect the overall FCAT results? Is the internal timing of the game engine affected by the drivers installed on that machine, thereby affecting the results of your testing?
How accurate are these results, or our understanding of them, given that we are not seeing the entire rendering pipeline?
With games that need a server to run, like WoW and other MMOs, how does this affect testing and results?
Are you really getting a true result, or also latency from waiting on a server, ping, the game engine, and the GPU?
I would expect a lot of variation in results.
Is Multi GPU really that much faster?
AFR, each GPU taking turns rendering an image.
I am thinking it goes back and forth.
Giving each GPU a turn to do one frame, hopefully while the other one is retrieving more data to show another frame.
But is this faster? Less jitters?
And with that, I believe it has been said that more memory on a GPU helps to smooth out game animation?
Nvidia’s frame metering technology for SLI puts delays in the delivery of the frames that are shown in order to make things smooth.
Is the lag between the user and visual response a big deal when smoothing out animations?
Going forward, if we accept such latency, how will it affect the future, for example the Oculus Rift?
Does AMD need frame metering to stay competitive, or is there a better path for them to go down?
Is it better to have a 60fps lock in a game?
Nvidia has an option, I believe, that filters out runt frames and dropped frames. Do you recommend this? What is your opinion on the FCAT filters and Nvidia’s FCAT tools?
Do these tools favor their own products or is it because they have a head start on implementing this into their hardware compared to AMD?
Intel and Microsoft also provide tools, which makes me question software from these companies. Will FutureMark be getting involved in this benchmarking?
Example of steps until we see a result on the screen.
User input, Windows kernel, Driver, GPU, then display.
To get a complete understanding of where the delays or latency are happening, you would need to test every step of the rendering pipeline.
Are there going to be tools to do such testing, and are they coming anytime soon?
I guess FCAT is enough of a jump for now.
Do GPU companies know from start to finish what is going on in that rendering pipeline between the user and their hardware? They do talk with the developers that make the software so they can make drivers, etc., right?
They must have some clue or testing that we are not aware of, don’t they?
What is the difference between FCAT and Fraps?
Is AMD more interested in FCAT results or Fraps?
What does it mean when FCAT and Fraps results do not relate to each other?
What I have read is that FCAT shows more frame-to-frame fluctuation and that Fraps does not capture the frame rate differences as well when using a multi-GPU configuration?
I think FCAT measures closer to the end result and Fraps measures at the beginning (a toy illustration of this follows this comment).
Fixing runts, jitters and overall smoothness in graphics in games, how does this relate to other applications, like scientific computing and supercomputers?
Being able to have multi-GPU supercomputers that don’t step on each other’s toes when working through a problem, and that would use less electricity to get results.
Less work on the GPUs and smoothed-out latency would help GPU-accelerated programs big or small, I would imagine, and not just in video games, but all things related, right?
Let’s say breaking a code or protein folding.
Sorry, my thoughts went all over the place, but I hope it can be understood and is not too confusing.
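On the Fraps vs. FCAT question above, one common way to picture the difference is that Fraps timestamps frames near the start of the pipeline (around the Present() call), while FCAT measures what actually arrives at the display. The toy model below uses entirely made-up timestamps, not real measurements, to show how the two views can disagree for an AFR setup.

```python
# Toy model (made-up numbers, not real measurements): compare frame times
# derived from "Present" timestamps (roughly what Fraps sees) with frame
# times derived from display scan-out timestamps (roughly what FCAT sees).

def frame_times(timestamps_ms):
    """Differences between consecutive timestamps, in milliseconds."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]


if __name__ == "__main__":
    # Hypothetical AFR pair: Present calls look fairly evenly spaced...
    present_ms = [0, 16, 33, 50, 66, 83, 100]

    # ...but at the display, every second frame shows up only a few
    # milliseconds after the previous one (a near-runt), then a long gap.
    display_ms = [10, 38, 42, 70, 74, 102, 106]

    print("Fraps-style frame times (ms):", frame_times(present_ms))
    print("FCAT-style frame times (ms): ", frame_times(display_ms))
```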
Will the driver improvements AMD is working on mean that the video cards will not work as hard, since they’ll be sort of pacing themselves? Will this reduce noise and temperatures? Will the driver improvements trickle down to older CrossFire generations?
I have a 6950 and a 6990 and I swore I didn’t see a difference before even though fps was higher with the 6990.
Are 3-GPU configurations affected by frametime issues as much? And do 3-GPU configurations present an easier or tougher challenge for managing frame times?
How come Nvidia didn’t bring up frametimes before, and how long have they been managing animation quality?
Are colorful bars automatically translated to and saved as frame times in ms or do you have to do it manually by studying each frame separately?
The bars are extracted from the captured AVI using a simple DSP program written by NVIDIA. Those raw dumps can be further processed using the FCAT scripts to get meaningful information about performance automatically.
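To make that a little more concrete, here is a rough sketch of the kind of post-processing involved. The input layout, color names and runt threshold below are placeholders rather than FCAT's actual output format; the idea is simply to count how many scan lines each overlay color occupied per captured refresh and convert that into per-frame on-screen time.

```python
# Rough sketch (placeholder input format, not FCAT's real output): for each
# captured refresh we record which overlay colors appeared and how many scan
# lines each occupied, then derive per-frame on-screen time and flag runts.

CAPTURE_INTERVAL_MS = 1000.0 / 60.0   # one captured frame per 60 Hz refresh
SCREEN_LINES = 1080                   # hypothetical capture height
RUNT_THRESHOLD_LINES = 21             # hypothetical runt cutoff

# Each tuple: (captured refresh index, overlay color, scan lines occupied)
captured = [
    (0, "white", 1080),
    (1, "lime",  1060), (1, "blue", 20),    # "blue" is only a sliver: a runt
    (2, "red",   1080),
    (3, "teal",   540), (3, "navy", 540),   # two frames share one refresh
]


def per_frame_display_time(records):
    """Sum each frame's scan lines across captures and convert the fraction
    of the screen it covered into milliseconds of on-screen time."""
    lines_per_frame = {}
    for _, color, lines in records:
        lines_per_frame[color] = lines_per_frame.get(color, 0) + lines
    results = {}
    for color, lines in lines_per_frame.items():
        on_screen_ms = CAPTURE_INTERVAL_MS * lines / SCREEN_LINES
        results[color] = (on_screen_ms, lines < RUNT_THRESHOLD_LINES)
    return results


if __name__ == "__main__":
    for color, (ms, is_runt) in per_frame_display_time(captured).items():
        print("%-6s %5.2f ms%s" % (color, ms, " (runt)" if is_runt else ""))
```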
Ryan was great on TWIT.
He corrected those nimrods like Laporte/Stevens that think PC/Console gaming is dead.
Yes; it was great to hear him on TWIT. Leo seems like a good guy but he and other panel members appear clueless on the reality of mobile gaming.
Mobile Gaming Revenue is roughly 4 or 5% of the global video game market. PC Gaming alone earned roughly 700% more revenue. What many pundits forget is that mobile games only cost around $5 a copy.
Ryan is great…so well spoken and backs things up with facts/science.
Does every Kepler GPU in SLI support hardware frame rate metering like the 690?
How different is a Kepler SLI setup with frame rate metering compared to a Fermi SLI setup, like the 590?
Can you quantify the additional input lag compared to a single GPU? Is it like 10, 20, 50, 100 ms?
—
Do you have any data about slower CPUs with SLI versus single GPUs in terms of frame times?
How many parts of different frames will you normally see in a single refresh at, let’s say, 80 FPS on a 60 Hz display, or does it vary?
Would a 60 Hz screen running a game at 240 FPS present newer information than a 60 Hz screen with a game at 120 FPS (always without vsync)?
How much frame latency variation would be enough to cause an unpleasant experience? Like going from 10 to 15 ms between consecutive frames, or something?
Is some sort of frame rate metering adopted for single GPUs, to protect smoothness from other bottlenecks like the CPU or I/O in general?
—
Will NVIDIA add an option to disable frame rate metering in their driver control panel?
Does it have something to do with the “Max Frames to Render Ahead” setting?
What about a dynamic framerate target (with multiple targets; say, lock at 100 if there is enough performance, otherwise lock at 60 or something)? Would it work?
Thanks.
SPBHM said:
“Does it have something to do with the ‘Max Frames to Render Ahead’ setting?”
Yes. V-sync and GPU limitations “back pressure the entire render pipeline” (quoting Tom there). You can think of maximum pre-rendered frames as a “buffer” or “queue” of frames building up in the pipeline. The more frames allowed to build up, the higher the input lag.
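As a rough illustration of that point, here is a back-of-the-envelope model with made-up numbers; it is not how the driver actually schedules work, but it shows why a deeper queue means older input on screen: if the display consumes one frame per refresh and the queue stays full, a frame's input is roughly one refresh interval older for every frame queued ahead of it.

```python
# Back-of-the-envelope model (not the driver's actual behaviour): estimate
# the input lag added by the pre-rendered frame queue when the GPU is the
# bottleneck and the queue stays full.

REFRESH_MS = 1000.0 / 60.0  # time per displayed frame at 60 Hz with v-sync


def queued_input_lag_ms(max_prerendered_frames, render_ms=5.0):
    """Rough time from sampling input for a frame until that frame appears:
    its own render time plus one refresh interval for every frame that is
    already sitting ahead of it in the queue."""
    return render_ms + max_prerendered_frames * REFRESH_MS


if __name__ == "__main__":
    for depth in (1, 2, 3, 4):
        print("max pre-rendered frames = %d -> roughly %.0f ms of lag"
              % (depth, queued_input_lag_ms(depth)))
```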
Question: Is NVIDIA making any headway on improving their Overclocking tools (such as Boost) as well as their Adaptive VSync tools?
If so what are the goals NVIDIA has and what headway have they made?
Thanks for a fantastic explanation of input latency. I can now finally explain to my console friends why I feel like I am playing in slow motion on my PS3 on the rare occasions that I game on it.
This was very interesting to watch! I love the new metrics. I’m wondering why all the technical v-sync information is so hard to come by.
I still use CRTs (I have two Compaq P1220s), and I’m wondering why, when enabling V-Sync while running 100 Hz at 1600×1200, it doesn’t switch between 50 fps (half my refresh rate) and 100 fps when it dips below 100 fps in games; this seems different from LCD displays. I suppose that this is because I’m using VGA (analog) as opposed to DVI (digital)?
I would like to know the technical explanation of why this is, and what the difference is between V-Sync on CRTs and LCDs.
n2k said:
“I’m wondering why, when enabling V-Sync, it doesn’t switch between 50 fps (half my refresh rate) and 100 fps . . . this seems different from LCD displays.”
It’s really no different, because this is hardly dependent on the display technology itself (e.g. CRT/VGA, LCD/DVI, etc.). Here’s what really happens:
The video card renders a new frame into a “back buffer” while an already completed frame is being sent to the display (from the “front buffer”). If the game only makes use of one back buffer and FPS falls below refresh rate with v-sync enabled, it must now wait every other display refresh cycle to send a new frame to the display (which divides FPS in half). Triple buffering (i.e. multiple back buffers) is one solution to that problem.
In other words V-sync and FPS behave pretty much the same between CRT and LCD monitors. LCD monitors generally create more latency though, for several reasons (buffering frames internally, etc.).
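As a toy illustration of the double- vs. triple-buffering behaviour described above, the sketch below assumes an idealized 100 Hz display (like the CRT in the question) and a made-up 11 ms render time that just misses the refresh deadline. With one back buffer and v-sync, every frame ends up waiting for the next refresh, so the effective rate snaps to 50 fps; with an extra back buffer the GPU keeps rendering while it waits, and the rate stays near the GPU's native ~91 fps.

```python
import math

REFRESH_MS = 10.0   # idealized 100 Hz display
RENDER_MS = 11.0    # hypothetical per-frame render cost (just misses 100 fps)
NUM_FRAMES = 200


def simulate_double_buffered():
    """One back buffer + v-sync: after finishing a frame the GPU must wait
    for the next refresh (the flip) before it can start the next frame."""
    t = 0.0
    flips = []
    for _ in range(NUM_FRAMES):
        t += RENDER_MS                              # render the frame
        t = math.ceil(t / REFRESH_MS) * REFRESH_MS  # stall until the vblank flip
        flips.append(t)
    return flips


def simulate_triple_buffered():
    """An extra back buffer lets the GPU start the next frame immediately;
    finished frames still only appear at refresh boundaries."""
    render_done = 0.0
    next_vblank = 0.0
    flips = []
    for _ in range(NUM_FRAMES):
        render_done += RENDER_MS
        while next_vblank < render_done:
            next_vblank += REFRESH_MS
        flips.append(next_vblank)
    return flips


def effective_fps(flips):
    shown = sorted(set(flips))  # frames landing on the same vblank overwrite each other
    return 1000.0 * (len(shown) - 1) / (shown[-1] - shown[0])


if __name__ == "__main__":
    print("double buffered: ~%.0f fps" % effective_fps(simulate_double_buffered()))
    print("triple buffered: ~%.0f fps" % effective_fps(simulate_triple_buffered()))
```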