Capture Hardware and Extracting the Data
Capture Hardware
Once we have the overlay working, the next step in our Frame Rating process is to actually capture the video for later analysis. While that sounds easy, it is far from simple. Capturing the video on the system under test is out of the question, since the capture workload would skew the very performance we are trying to measure. Instead, we use an external capture system built around an Ivy Bridge platform and a high-end capture card from Datapath, the VisionDVI-DL.
This card is capable of capturing video at 2560×1600 at 60 Hz with throughput as high as 650 MB/s. We were actually the first customer for this card in the US and have spent quite a few days on the phone with driver engineers to make sure this kind of testing process would succeed. Because the card is meant mainly for single-frame captures, getting the VisionDVI-DL to truly meet its specifications in sustained recording turned out to be more complicated than we had hoped. The good news is that we have it all figured out now and can reliably capture the necessary video.
The VisionDVI-DL essentially acts like, and shows up to the system as, a monitor. It reports an EDID (extended display identification data) block to the graphics card, and that EDID is configured through the driver on the capture system. We can emulate a 1920×1080 monitor at a 60 Hz refresh rate, and up to 2560×1440 @ 60 Hz as well. Using a Gefen dual-link DVI splitter, we connect a single display output from the NVIDIA or AMD GPU (avoiding the power consumption and GPU utilization issues that come with multiple attached monitors) and split the signal to our capture card and to a monitor on which we actually play the games.
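If you want to confirm what the emulated display is actually reporting, you can decode the EDID block itself. Below is a minimal sketch, not part of our toolchain, that parses the preferred detailed timing descriptor from a 128-byte EDID dump; the edid.bin filename is a placeholder for however you export the block from your driver.

```python
# edid_check.py - a minimal sketch that reads a 128-byte EDID dump and
# prints the preferred video mode the capture card is advertising.
# The "edid.bin" filename is hypothetical.
import struct

def parse_preferred_mode(edid: bytes):
    assert edid[0:8] == b"\x00\xff\xff\xff\xff\xff\xff\x00", "not a valid EDID header"
    dtd = edid[54:72]                                        # first detailed timing descriptor
    pixel_clock = struct.unpack("<H", dtd[0:2])[0] * 10_000  # stored in 10 kHz units
    h_active = dtd[2] | ((dtd[4] & 0xF0) << 4)
    h_blank  = dtd[3] | ((dtd[4] & 0x0F) << 8)
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)
    v_blank  = dtd[6] | ((dtd[7] & 0x0F) << 8)
    # Refresh rate = pixel clock / total pixels per full frame (active + blanking)
    refresh = pixel_clock / ((h_active + h_blank) * (v_active + v_blank))
    return h_active, v_active, refresh

with open("edid.bin", "rb") as f:
    w, h, hz = parse_preferred_mode(f.read(128))
print(f"Emulated monitor: {w}x{h} @ {hz:.2f} Hz")
```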
With a data rate of nearly 500 MB/s when capturing video at 2560×1440 @ 60 Hz, some serious storage speed was needed for this capture system. I decided to use our Pegasus R4 Thunderbolt array coupled with a set of four Corsair Force GS 240GB SSDs in a RAID 0 configuration, giving us about 900 MB/s of write capability.
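That storage requirement is easy to sanity-check. Assuming the card delivers uncompressed YUV 4:2:2 video at 2 bytes per pixel (an assumption on our part; RGB at 3 bytes per pixel would be heavier still), the arithmetic looks like this:

```python
# Back-of-the-envelope capture bandwidth for uncompressed video.
width, height, fps = 2560, 1440, 60

for name, bytes_per_pixel in [("YUV 4:2:2 (2 B/px)", 2), ("RGB24 (3 B/px)", 3)]:
    rate = width * height * bytes_per_pixel * fps   # bytes per second
    print(f"{name}: {rate / 1e6:.0f} MB/s, {rate * 60 / 1e9:.1f} GB per 60 s capture")

# -> YUV 4:2:2 (2 B/px): 442 MB/s, 26.5 GB per 60 s capture
# -> RGB24 (3 B/px):     664 MB/s, 39.8 GB per 60 s capture
```

The 442 MB/s figure lines up with the "nearly 500 MB/s" rate above, and sixty seconds of it is right around the 25 GB file size mentioned below.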
I cannot overstate the importance of a completely correct capture in this step of the Frame Rating process. If the recording deviates from the 60 Hz capture rate by even a single frame, the data becomes basically useless: a frame dropped on the capture side is indistinguishable from a frame dropped by the graphics card, so avoiding that is paramount. Other sites you see attempting this will likely use VirtualDub for capture, and that was the program we used originally too. However, the inability to automate its settings, and having to re-adjust the options every time the program loaded, became a hassle, so we developed our own application using DirectShow filters and the Windows Platform SDK.
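Whatever capture software is used, it is worth verifying every recording before analysis. One quick sanity check, sketched below with OpenCV (this is not our capture tool, and the capture.avi filename is hypothetical), is to compare the container's reported frame rate against the expected 60 Hz:

```python
# capture_check.py - sanity-check a recorded capture before analysis.
# A sketch using OpenCV (pip install opencv-python); "capture.avi" is hypothetical.
import cv2

cap = cv2.VideoCapture("capture.avi")
fps = cap.get(cv2.CAP_PROP_FPS)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

print(f"container reports {fps:.3f} fps over {frames} frames")
if abs(fps - 60.0) > 0.01:
    print("WARNING: capture deviated from 60 Hz; discard this run")
```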
Extracting the Data
Once you have the recorded raw AVI file (about 25 GB, yes GB, from a 60 second capture at 2560×1440), the next step is to actually read the colored bars and generate frame time, runt, and drop data. Using a basic DSP application, the extractor reads in each frame of the video sequentially, determines how many scanlines in height each color on the overlay bar occupies, and records that information in an XLS file.
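Our extractor itself is not published, but the logic just described can be sketched. The example below makes several assumptions of its own: the overlay bar sits in the leftmost pixel column, each pixel is matched to the nearest color in a small palette (the real overlay cycles through a longer color sequence), and the output is CSV rather than XLS for simplicity.

```python
# extract_bars.py - a rough sketch of the extraction step, not our actual tool.
# Assumes the colored overlay bar occupies the leftmost pixel column.
import csv
import cv2
import numpy as np

PALETTE = {           # hypothetical subset of the overlay colors, in BGR order
    "white": (255, 255, 255),
    "lime":  (0, 255, 0),
    "blue":  (255, 0, 0),
    "red":   (0, 0, 255),
}

def classify(pixel):
    """Return the palette color nearest to this pixel."""
    return min(PALETTE,
               key=lambda name: np.linalg.norm(np.array(PALETTE[name], dtype=float) - pixel))

cap = cv2.VideoCapture("capture.avi")
with open("scanlines.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["frame", "color", "scanlines"])
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        column = frame[:, 0].astype(float)   # leftmost pixel column, top to bottom
        runs = []                            # contiguous runs of each color
        for pixel in column:
            name = classify(pixel)
            if runs and runs[-1][0] == name:
                runs[-1][1] += 1
            else:
                runs.append([name, 1])
        for name, count in runs:
            writer.writerow([frame_no, name, count])
        frame_no += 1
cap.release()
```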
This is probably the most foolproof part of the process. I have uploaded a few example XLS files right here for you to see the output.
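For readers curious how those scanline counts become the final metrics: since each captured 60 Hz frame represents 1/60 of a second, a color's share of the scanlines is its share of on-screen time. The sketch below shows one possible reduction; the runt threshold is purely illustrative, not our published cutoff, and a complete tool would also merge color runs that span consecutive captured frames.

```python
# frame_times.py - one possible way to turn per-color scanline counts into
# per-frame display times and runt flags. The threshold is hypothetical.
import csv
from collections import defaultdict

CAPTURE_FRAME_MS = 1000.0 / 60.0       # each captured frame spans ~16.67 ms
RUNT_SCANLINES = 21                    # illustrative runt threshold

scanlines = defaultdict(int)           # (capture_frame, color) -> total lines
heights = defaultdict(int)             # capture_frame -> total lines seen
with open("scanlines.csv") as f:
    for row in csv.DictReader(f):
        frame, lines = int(row["frame"]), int(row["scanlines"])
        scanlines[(frame, row["color"])] += lines
        heights[frame] += lines

for (frame, color), lines in sorted(scanlines.items()):
    ms = CAPTURE_FRAME_MS * lines / heights[frame]
    runt = " (runt)" if lines < RUNT_SCANLINES else ""
    print(f"capture frame {frame}: {color} visible for {ms:.2f} ms{runt}")
```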