Understanding the FX Chipset
Before beginning our investigation into this video card, we need to understand what NVIDIA is attempting to achieve, so that we have better perspective for our evaluations. From my research on the web, I have identified three main features that distinguish the FX from the previous GeForce 4 series:

Single Optimized Shader
NVIDIA redesigned its shader from scratch. Instead of the two parallel shaders found on the GeForce 4, there is a single shader that is “optimized”. But what exactly is being optimized here? From what I can gather, the shader has been tuned to run specifications such as Pixel Shader 2.0 and Vertex Shader 2.0 faster, which means the FX chipset’s shader is optimized for DirectX 9.0. This optimized shader is what NVIDIA refers to as the “CineFX Engine”.
Given the optimized nature of the shader, it is clear that NVIDIA is focusing a lot on future DirectX 9 performance. But will the current FX chipset be obsolete by the time DirectX 9 games hit the market? Only time will tell.
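To make the idea of a per-pixel shader program more concrete, here is a rough CPU-side sketch in C of the kind of math a pixel shader runs for every pixel it draws: simple diffuse lighting. The types and function names are my own, purely for illustration; a real Pixel Shader 2.0 program would express the same calculation in shader assembly or HLSL and execute it on the GPU.

```c
#include <math.h>
#include <stdio.h>

/* Made-up helper types, just for this illustration. */
typedef struct { float r, g, b, a; } Color;
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* CPU-side analogy of a per-pixel program: scale the surface color by how
 * directly the surface faces the light (simple diffuse lighting). */
static Color shade_pixel(Color albedo, Vec3 normal, Vec3 light_dir) {
    float n_dot_l = dot3(normal, light_dir);
    if (n_dot_l < 0.0f) n_dot_l = 0.0f;          /* light is behind the surface */
    Color out = { albedo.r * n_dot_l, albedo.g * n_dot_l,
                  albedo.b * n_dot_l, albedo.a };
    return out;
}

int main(void) {
    Color red   = { 1.0f, 0.0f, 0.0f, 1.0f };
    Vec3  up    = { 0.0f, 1.0f, 0.0f };
    Vec3  light = { 0.0f, 1.0f, 0.0f };          /* light directly overhead */
    Color lit   = shade_pixel(red, up, light);
    printf("lit pixel: %.2f %.2f %.2f\n", lit.r, lit.g, lit.b);  /* 1.00 0.00 0.00 */
    return 0;
}
```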
Eight 128-Bit Floating-Point Rendering Pipelines
Rendering pipelines are responsible for feeding the renderer the information it needs to display a 3D scene, including color information. With 128 bits of width in the rendering pipe, NVIDIA is able to allocate 32 bits to each of the red, green, blue, and alpha channels (or 4,294,967,296 possible values per channel). The ability to render such a range of colors is important for lighting and special effects, where you want subtlety and realism.
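As a minimal sketch of what this means in practice, the C snippet below contrasts a conventional 32-bit integer color (8 bits per channel) with a 128-bit floating-point color (one 32-bit float per channel). The struct names are my own and purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Conventional 32-bit color: 8 bits per channel, 256 levels each. */
typedef struct { uint8_t r, g, b, a; } Color32;

/* 128-bit floating-point color: a 32-bit float per channel, giving
 * 2^32 possible bit patterns per channel instead of 256 levels. */
typedef struct { float r, g, b, a; } Color128;

int main(void) {
    printf("Color32:  %zu bits per pixel\n", sizeof(Color32)  * 8);  /* 32  */
    printf("Color128: %zu bits per pixel\n", sizeof(Color128) * 8);  /* 128 */
    return 0;
}
```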

Intellisample
Intellisample is the umbrella term NVIDIA has coined for the technology it developed for color compression, gamma correction, texture filtering, and new anti-aliasing modes.
- Color compression:
NVIDIA is now implementing an algorithm that compresses color information to a quarter of its size before it heads to the card’s memory, so 128 bits of color are effectively compressed into 32 bits. This quadruples your effective memory bandwidth for color, since that information now occupies only 32 bits instead of the full 128 bits. However, this technology applies to color information only; whether it can benefit other types of data heading into memory, I am not sure.
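NVIDIA has not published the compression scheme itself, so the sketch below only works through the 4:1 arithmetic described above, using a made-up physical bandwidth figure.

```c
#include <stdio.h>

int main(void) {
    const double raw_bits        = 128.0;  /* uncompressed color per pixel */
    const double compressed_bits = 32.0;   /* after the claimed 4:1 compression */
    const double physical_gbps   = 16.0;   /* hypothetical memory bandwidth, GB/s */

    double ratio = raw_bits / compressed_bits;
    printf("compression ratio: %.0f:1\n", ratio);                       /* 4:1 */
    printf("effective color bandwidth: %.0f GB/s (physical: %.0f GB/s)\n",
           physical_gbps * ratio, physical_gbps);                       /* 64 vs 16 */
    return 0;
}
```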
- Gamma correction:
From what I can gather about this feature, images output by the hardware are re-sampled to account for the gamma difference between the human eye and the display. Since the eye does not perceive hardware-generated intensities linearly, a gamma correction function is applied to the output so that what we see on the monitor is what was intended.
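As an illustration, the usual gamma-correction curve raises a linear intensity to the power 1/gamma before display; gamma = 2.2 is the conventional figure for PC monitors, though whether the FX hardware uses exactly this curve is my assumption.

```c
#include <math.h>
#include <stdio.h>

/* Standard gamma-correction curve (assumed here, not confirmed for the FX):
 * map a linear intensity in [0, 1] to output = intensity^(1/gamma). */
static float gamma_correct(float linear, float gamma) {
    if (linear < 0.0f) linear = 0.0f;
    if (linear > 1.0f) linear = 1.0f;
    return powf(linear, 1.0f / gamma);
}

int main(void) {
    float linear = 0.5f;   /* mid-gray in linear light */
    printf("linear %.2f -> corrected %.3f\n",
           linear, gamma_correct(linear, 2.2f));   /* about 0.73 */
    return 0;
}
```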
- Adaptive texture filtering:
According to various sources, NVIDIA has implemented a new technique for sampling textures per pixel. The algorithm is reported to have no negative effect on performance when the hardware is left to choose the sampling itself: it intelligently selects between different sampling levels in anisotropic and trilinear filtering modes. From the documentation I’ve seen online, this technique appears to kick in automatically whenever anisotropic or trilinear filtering is enabled, though none of the documentation is very clear on this point, so I’m assuming it is all done automatically.
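Since NVIDIA has not disclosed how the hardware makes this choice, the following is only a generic sketch of what “adaptive” level selection could look like: the more a pixel’s texture footprint is stretched, the more anisotropic samples it is given, while nearly square footprints fall back to plain trilinear filtering.

```c
#include <math.h>
#include <stdio.h>

/* Generic illustration only -- not NVIDIA's actual algorithm. Pick an
 * anisotropy level from how elongated the pixel's texture footprint is. */
static int choose_aniso_level(float footprint_major, float footprint_minor,
                              int max_level) {
    float stretch = footprint_major / footprint_minor;
    int level = (int)ceilf(stretch);
    if (level < 1) level = 1;                  /* 1 == plain trilinear */
    if (level > max_level) level = max_level;  /* clamp to, say, 8x */
    return level;
}

int main(void) {
    printf("square footprint:    %dx\n", choose_aniso_level(1.0f, 1.0f, 8)); /* 1x */
    printf("stretched footprint: %dx\n", choose_aniso_level(6.0f, 1.0f, 8)); /* 6x */
    return 0;
}
```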
- New anti-aliasing modes:
As a result of these improvements, NVIDIA is able to implement two new anti-aliasing modes based on this technology: 4XS and 6XS anti-aliasing (I assume “XS” means “eXtended Sampling”?). XS sampling is apparently supposed to improve image quality while maintaining the appropriate anti-aliasing level. Exactly what it is and how it is accomplished remains pretty obscure; I suspect it is all trade secrets, and we consumers just have to take the official statements at face value.
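For what it is worth, the basic idea behind any supersampled anti-aliasing scheme is easy to show; the sketch below averages a handful of sub-pixel samples into one final pixel. This is only the generic principle, not a description of NVIDIA’s 4XS or 6XS modes.

```c
#include <stdio.h>

/* Generic supersampling illustration -- not NVIDIA's 4XS/6XS algorithm.
 * Average several sub-pixel samples into one final pixel value so that
 * edges blend smoothly instead of stair-stepping. */
static float resolve_pixel(const float samples[], int count) {
    float sum = 0.0f;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return sum / (float)count;
}

int main(void) {
    /* Four sub-samples of one pixel straddling a black/white edge. */
    float subsamples[4] = { 1.0f, 1.0f, 0.0f, 0.0f };
    printf("resolved pixel intensity: %.2f\n", resolve_pixel(subsamples, 4)); /* 0.50 */
    return 0;
}
```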