The 8800 GTX and 8800 GTS
After talking about architecture and technology for what seems like forever, let’s get down to the actual product you can plug into your system. NVIDIA is releasing two separate graphics cards today: the GeForce 8800 GTX and the 8800 GTS.
The new flagship is the 8800 GTX card, coming in at an expected MSRP of $599 with a hard launch; you should be able to find these cards for sale today. The core clock on the card is 575 MHz, but remember that the 128 stream processors run at 1.35 GHz, which NVIDIA labels the “shader” clock. The GDDR3 memory is clocked at 900 MHz, and you’ll be getting 768MB of it, thanks to the memory configuration we talked about before. There are dual dual-link DVI ports and an HDTV output as well.
The runner-up today is the 8800 GTS, though it is still a damn fast card. Expected to sell for $449 and also launching today, the 8800 GTS runs at a core clock speed of 500 MHz and has 96 SPs that run at 1.2 GHz, compared to the 1.35 GHz on the GTX model. The 640MB of GDDR3 memory runs at 800 MHz, and the same dual dual-link DVI ports and HDTV output connections are included.
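For a rough sense of what those memory clocks buy you, peak bandwidth for a double-data-rate memory like GDDR3 is just the clock times two transfers per cycle times the bus width. The bus widths used below (384-bit on the GTX, 320-bit on the GTS) come from the unusual memory configuration discussed earlier in the article; the Python sketch is purely illustrative:

```python
def peak_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for double-data-rate memory.

    GDDR3 transfers data twice per clock cycle, moving
    bus_width_bits / 8 bytes per transfer.
    """
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * 2  # DDR: two transfers per clock
    return transfers_per_sec * bytes_per_transfer / 1e9

# 8800 GTX: 900 MHz GDDR3 on a 384-bit bus
print(peak_bandwidth_gbs(900, 384))  # 86.4 GB/s
# 8800 GTS: 800 MHz GDDR3 on a 320-bit bus
print(peak_bandwidth_gbs(800, 320))  # 64.0 GB/s
```

That 86.4 GB/s figure is a healthy step up from the previous generation, and the wider bus is exactly why the memory sizes come out to the odd-looking 768MB and 640MB totals.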
Those of you looking for HDCP support will be glad to know that it is built into the chip, but an external CryptoROM is still necessary (and needs to be included by the board partner) for HDCP to actually function. At first glance, it does look like most of the first run of 8800 GTX cards, and GTS cards as well, are going to support HDCP.
The Reference Sample
The NVIDIA GeForce 8800 GTX card is big; no getting around that. It’s about an inch and a half longer than the 7900 GTX cards, though it’s not really noticeably heavier; the X1950 XTX definitely has it beat there.
The heatsink on the card is big as well, hiding most of the PCB behind it, save for the far right edge. The fan on the dual-slot cooler is pretty quiet, and I didn’t have any issues with fan noise like I did with the X1900-series of cards last year.
All 768MB of GDDR3 memory is located on the front of the card, and the rear of the PCB is pretty empty.
Here you can see the gun-metal color of the case bracket on the card, though I don’t know if this is going to be carried over to the retail cards. There are two dual-link DVI connections here, so you can support two of the Dell 30″ monitors if you are so inclined! The TV output port there supports HDTV as well with a dongle that most retailers will include.
Yeah, you’ll notice that there are TWO PCIe power connectors on the GeForce 8800 GTX card (though the 8800 GTS will only require one). NVIDIA says that since the card can pull more than the 150 watts that a single PCIe power connection would technically allow (75 watts from the PCIe slot and 75 watts from the single 6-pin connector), they erred on the side of caution by including two. They did allude to another graphics card that pulled more than 150 watts but did NOT have two connections for power…I think that card starts with “X19” and ends with “50 XTX”. You are required to connect both power cables though, as leaving one empty will cause the system to beep incessantly.
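The math NVIDIA is working from here is simple: the PCIe spec allows 75 watts through the slot itself and 75 watts through each 6-pin auxiliary connector, so the board’s in-spec power budget is just the slot plus the connectors. A quick illustrative sketch:

```python
SLOT_WATTS = 75      # power deliverable through the PCIe x16 slot
SIX_PIN_WATTS = 75   # power per 6-pin PCIe auxiliary connector

def power_budget(num_six_pin: int) -> int:
    """Maximum in-spec board power for a given 6-pin connector count."""
    return SLOT_WATTS + num_six_pin * SIX_PIN_WATTS

print(power_budget(1))  # 150 W -- not enough headroom for the 8800 GTX
print(power_budget(2))  # 225 W -- why the GTX carries two connectors
```

So with a single connector the card would be capped at 150 watts on paper, and the second connector lifts the ceiling to 225 watts, which is where the GTX’s “erring on the side of caution” comes from.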
The 8800 GTX is not without an SLI connection, though NVIDIA wasn’t ready for SLI at the time of this launch; they are working on final drivers. But, we did notice that there are two SLI connections here, much like we saw on the ATI X1950 Pro cards last month. This is probably going to be used for chaining more than two cards together in a system.
Removing the heatsink from the card reveals this mammoth chip under the hood…simply…huge. The G80 die is surrounded by a heatsink leveling plate for protection and then by twelve 64MB GDDR3 memory chips.
Using the quarter test, I can officially say the G80 is the biggest chip EVAR!!! Or maybe just the biggest I have tested, but you get the point. How else did you expect 681 million transistors to look? For comparison, below is the ATI X1950 XTX chip next to the SAME quarter.
No camera tricks here!!!
Finally, this little chip off to the side is the NVIDIA TMDS display logic put into a custom ASIC. NVIDIA claims this was done to “simplify package and board routing as well as for manufacturing efficiencies”; sounds like 681 million was the limit to me!