Dronez Tests

This content was originally featured on Amdmb.com and has been converted to PC Perspective’s website. Some color changes and flaws may appear.

[Benchmark chart: Visiontek GeForce 3 Review, image 22]

DroneZ GeForce 3 Bump vs. GeForce 2 Bump
If you run a DX8 benchmark on a GeForce 2, the system will EMULATE the nFiniteFX engine in software, because the GeForce 2 does not support vertex shaders in hardware. When you run the DroneZ benchmark in its "GeForce 2 Bump" and "GeForce 3 Bump" configurations, you get exactly the same visuals, but with different implementations of the lighting system. In the first case the work is split between the video card and the CPU. In the second case it is done through vertex programs, which are executed in hardware if the nFiniteFX engine is present and otherwise emulated by the CPU. In fact, both the DX8 and OpenGL drivers include an nFiniteFX emulator to perform this vertex shader/program emulation.

The DroneZ benchmark clearly demonstrates the performance advantage of the optimized nFiniteFX code over the GeForce 2 CPU code: approximately 150% in some areas. This translates into higher frame rates at higher resolutions.

[Benchmark chart: Visiontek GeForce 3 Review, image 23]

DroneZ GeForce 2 Bump and GeForce 3 Bump on the GeForce 3
The above DroneZ benchmark illustrates the differences in performance when running GeForce 2 Bump and GeForce 3 Bump on the GeForce 3.

Running the GeForce 3 in GeForce 2 Bump mode leaves the nFiniteFX engine unused. This mode approximates the performance you can expect when playing older games such as Quake 3. Running the GeForce 3 in GeForce 3 Bump mode, by contrast, unleashes the full performance of the nFiniteFX engine (vertex and pixel shaders).

[Benchmark chart: Visiontek GeForce 3 Review, image 24]

DroneZ GeForce 2 Bump and GeForce 3 Bump on the GeForce 2
The above DroneZ benchmark shows the difference in performance between GeForce 2 Bump and GeForce 3 Bump running on a GeForce 2. The frame-rate gap between the two modes on a GeForce 2 system is the overhead of the emulation. As you can see, it would not be a fair comparison to run the test through the nFiniteFX software emulator: the CPU handles the code in both cases, but in the first case it runs optimized native code, while in the second it runs (slower) emulation code. This is not because the emulation code is poorly written; it is simply emulating hardware that the CPU is not. So what happens if you run the "GeForce 3 Bump" configuration on a GeForce 2 system? The same thing that happens when you run any DX8 benchmark: the nFiniteFX engine is emulated. The visuals are the same, but the speed is much slower, as shown.
