Introduction of LCD Monitors

CRT displays have been used with computers since the earliest days, replacing the arrays of light bulbs once used for information output.  In the 1980s, though, the CRT went through a revolution as capabilities improved from monochrome to 16 colors and resolutions of 640×350 (EGA) were reached.

LCD (liquid crystal display) monitors were first introduced to consumers in the mid-1990s at high prices and with relatively low image quality compared to the CRT displays of the time.  LCD manufacturers were forced to integrate a completely new display technology into an existing display controller infrastructure dominated by the CRT.  Part of that integration was the adoption of a refresh rate, even though from a technical point of view LCDs functioned very differently.

LCDs produce an image on the screen by combining a persistent backlight (CCFL or LED) with voltage applied directly to each pixel.  Those pixels can be updated without interference from, or with, other segments of the display, in any order and at any rate.  There is also no need to refresh the pixels of an LCD to work around the CRT limitation called phosphor persistence, since the pixels do not go dark between refreshes.  This means there is effectively no minimum refresh rate for LCD panels.  (Though in extreme cases, below roughly 15 FPS, a refresh might be necessary to avoid visible luminance variance.)

ASUS VG248QE – a 144 Hz Refresh LCD Monitor

Because LCD monitors don’t have a collection of electron beams that “paint” the screen and don’t need to reset to a starting position, they have no technical reliance on a fixed update rate.  Instead, LCD monitors are limited by the update rate of the crystal and silicon used in manufacturing.  Today we call that period the response time.  There is plenty of debate about exactly how to measure it (grey-to-grey, etc.), but this rate is completely independent of the refresh rate as it stands today.

At the time, a display, whether LCD or CRT, required an analog-to-digital converter responsible for taking in the signal from a computer's graphics system and translating it into the data that would be painted on the display.  The controllers inside these displays received data from the computer at a fixed rate tied to the refresh rate of the CRT.  Early LCDs used very similar electronics for the analog conversion and adopted the refresh rate along with them.  It wasn’t until 2003 that LCD displays first outsold their CRT competitors, and by that time refresh rates on LCDs were fully entrenched.


Integration of Refresh Rates Today

Those of us old enough to remember CRTs as the dominant display type will remember the idea of having different refresh rate options for your monitor.  High-end monitors could run at 60 Hz, 75 Hz, 100 Hz, etc. to improve visual quality, all while running at different resolutions (CRTs had no “native” resolution).  A technology called EDID (Extended Display Identification Data) was created in 1994 to allow a monitor to pass information to the computer indicating the resolutions and refresh rates the monitor supports.
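Because EDID is just a small binary blob handed back over the display cable, you can peek at it yourself.  Below is a minimal Python sketch that decodes the first 18-byte Detailed Timing Descriptor of a base EDID block into a resolution and refresh rate.  The /sys/class/drm path is an assumption (Linux exposes raw EDID there, but the connector name varies by system), and a real parser would also validate the checksum and any extension blocks.

```python
# A minimal sketch: pull the preferred mode out of a monitor's EDID.

def parse_first_dtd(edid: bytes):
    """Decode the first 18-byte Detailed Timing Descriptor (bytes 54-71)
    of a base EDID block into active size and refresh rate."""
    dtd = edid[54:72]
    pixel_clock_hz = int.from_bytes(dtd[0:2], "little") * 10_000  # stored in 10 kHz units
    h_active = dtd[2] | ((dtd[4] & 0xF0) << 4)   # upper 4 bits live in byte 4
    h_blank  = dtd[3] | ((dtd[4] & 0x0F) << 8)
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)
    v_blank  = dtd[6] | ((dtd[7] & 0x0F) << 8)
    h_total, v_total = h_active + h_blank, v_active + v_blank
    refresh_hz = pixel_clock_hz / (h_total * v_total)
    return h_active, v_active, refresh_hz

if __name__ == "__main__":
    # Hypothetical connector name; adjust to whatever your system exposes.
    with open("/sys/class/drm/card0-DP-1/edid", "rb") as f:
        edid = f.read()
    w, h, hz = parse_first_dtd(edid)
    print(f"Preferred mode: {w}x{h} @ {hz:.2f} Hz")
```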

With that information, it is up to the graphics card to set and signal the refresh rate back to the display.  (This explains how you can create custom resolutions and timings in a GPU driver that your monitor may not accept.)  In fact, the scan rate is paced by a signal called “vertical blanking” that is sent to the monitor, used with CRTs to give the electron guns time to move from their finishing position (bottom right) back to the starting position (top left), ready to draw another frame on the screen.
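To put a number on that interval, here is a quick back-of-the-envelope calculation in Python using the standard 1080p60 timing (a 148.5 MHz pixel clock over a 2200×1125 total raster): the 45 blank lines per frame add up to roughly two thirds of a millisecond in which no new pixel data is sent.

```python
# Back-of-the-envelope: how long the vertical blanking interval lasts for
# standard 1080p60 timing (148.5 MHz pixel clock, 2200x1125 total raster).

pixel_clock_hz = 148_500_000
h_total = 2200                   # 1920 active + 280 blanking pixels per line
v_active, v_total = 1080, 1125   # 45 blank lines per frame

line_time_s   = h_total / pixel_clock_hz            # time to scan one full line
vblank_time_s = (v_total - v_active) * line_time_s  # time the "beam" would spend returning

print(f"Line time:   {line_time_s * 1e6:.2f} us")
print(f"VBLANK time: {vblank_time_s * 1e3:.3f} ms per frame")
# -> roughly 14.81 us per line and about 0.67 ms of blanking every 16.7 ms frame
```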

LCDs today do not need this signal, as there is no electron gun to reposition.  So why have LCD manufacturers continued to implement static refresh rates of 60 Hz, 120 Hz or even 144 Hz?

The answer lies in the connectivity options and the controllers powering today's displays.  As the analog signals of VGA were replaced by the likes of DVI and HDMI, these higher-speed, digital connections had to conform to existing interface patterns to maintain compatibility with older output types.  How many DVI-I to VGA passive adapters do you have lying around with your graphics card?

DVI, HDMI and early versions of DisplayPort transmit display data in multiples of a refresh rate.  A pixel clock is simply a timing circuit used to divide an incoming scan line of video into individual pixels.  The simplicity of the math might surprise you:

2560×1600 resolution:
2720 pixels per line × 1646 lines × 60 Hz ≈ 268.63 MHz pixel clock

Breakdown of current Display Timing Variables

The extra pixels in each dimension (160 horizontal, 46 vertical) are actually part of the legacy CRT standard: they include room for the sync width and the front/back porch, which space out signal and synchronization communications and give the electron guns time to reposition.
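The same arithmetic applies to any mode: total pixels per line, times total lines, times refresh rate equals the pixel clock the link must carry.  A small sketch that reproduces the 2560×1600 example above, using the blanking figures from the timing breakdown:

```python
# Pixel clock = horizontal total * vertical total * refresh rate.
# Reproduces the 2560x1600 @ 60 Hz example with the blanking figures quoted above.

def pixel_clock_hz(h_active, v_active, h_blank, v_blank, refresh_hz):
    """Total raster size (active + blanking) swept refresh_hz times per second."""
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz

clock = pixel_clock_hz(2560, 1600, h_blank=160, v_blank=46, refresh_hz=60)
print(f"{clock / 1e6:.2f} MHz")   # 2720 * 1646 * 60 = 268,627,200 Hz ~ 268.63 MHz
```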

If you look at DVI and HDMI specifications you will often see performance rated by maximum pixel clock.  Single-link DVI is limited to 165 MHz, the equivalent of 1920×1200 @ 60 Hz or 1280×1024 @ 85 Hz.  More recently, HDMI 2.0 raised the maximum pixel clock to 600 MHz, enough for 3840×2160 @ 60 Hz.
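As a sanity check, you can plug any mode's total raster into the same formula and compare the result against an interface's pixel clock ceiling.  A rough sketch, assuming the commonly quoted limits of 165 MHz for single-link DVI and 600 MHz for HDMI 2.0, and approximate blanking totals for each mode:

```python
# Rough check of whether a mode's pixel clock fits under an interface's limit.
# Limits and raster totals are approximations, not spec-exact timings.

LINK_LIMITS_MHZ = {"single-link DVI": 165, "HDMI 2.0": 600}

modes = {
    # name: (horizontal total, vertical total, refresh rate)
    "1920x1200 @ 60 Hz (reduced blanking)": (2080, 1235, 60),
    "3840x2160 @ 60 Hz":                    (4400, 2250, 60),
}

for name, (h_total, v_total, hz) in modes.items():
    clock_mhz = h_total * v_total * hz / 1e6
    verdicts = ", ".join(
        f"{'fits' if clock_mhz <= limit else 'exceeds'} {link}"
        for link, limit in LINK_LIMITS_MHZ.items()
    )
    print(f"{name}: {clock_mhz:.1f} MHz -> {verdicts}")
```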

DisplayPort introduced a self-clocking feature while also supporting HDMI and DVI pixel clocks for backwards compatibility, but readers of PC Perspective are also aware that active adapters are required for conversion to DVI/HDMI connectivity.  These adapters communicate with the graphics controller on the computer to negotiate speeds and convert the protocol and signal.

So even though DisplayPort and HDMI can technically support some impressive bandwidth (17.28 Gbps of effective data rate for DP 1.2), they are still working with legacy logic that makes them slow and “lazy.”  The connectivity is there to push much higher refresh rates than we are accustomed to, without some of the complications that occur with modern graphics cards and monitors.
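To see how much headroom that is, here is a rough sketch of the share of DP 1.2's 17.28 Gbps effective data rate a few modes would consume at 24 bits per pixel.  The timings use total rasters (active plus blanking), so treat the percentages as approximations rather than spec-exact figures.

```python
# Rough share of DisplayPort 1.2's effective bandwidth (17.28 Gbps after 8b/10b
# encoding) consumed by a few modes at 24 bits per pixel.

DP12_EFFECTIVE_GBPS = 17.28
BITS_PER_PIXEL = 24

modes = {
    # name: pixel clock in Hz (horizontal total * vertical total * refresh rate)
    "2560x1600 @ 60 Hz":  2720 * 1646 * 60,
    "1920x1080 @ 144 Hz": 2200 * 1125 * 144,
    "3840x2160 @ 60 Hz":  4400 * 2250 * 60,
}

for name, pixel_clock_hz in modes.items():
    gbps = pixel_clock_hz * BITS_PER_PIXEL / 1e9
    print(f"{name}: {gbps:5.2f} Gbps ({gbps / DP12_EFFECTIVE_GBPS:5.1%} of DP 1.2)")
```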

The question you should be asking yourself now is: what happens if you completely re-write the logic of communications between a GPU and a monitor?
