Another retail card reveals the results
It looks like AMD might still have some issues on their hands with the R9 290 series of cards
Since the release of the new AMD Radeon R9 290X and R9 290 graphics cards, we have been very curious about the latest implementation of AMD's PowerTune technology and its scaling of clock frequency as a result of the thermal levels of each graphics card. In the first article covering this topic, I addressed the questions from AMD's point of view – is this really a "configurable" GPU as AMD claims or are there issues that need to be addressed by the company?
The biggest problems I found were in the highly variable clock speeds from game to game and from a "cold" GPU to a "hot" GPU. This affects the way much of the industry tests and benchmarks graphics cards, since running a game for just a couple of minutes can produce average and reported frame rates that are much higher than what you see 10-20 minutes into gameplay. This was rarely something that had to be dealt with before (especially on AMD graphics cards), so it caught many off guard.
Because of the new PowerTune technology, as I have discussed several times before, clock speeds start out quite high on the R9 290X (at or near the quoted 1000 MHz) and then slowly drift down over time.
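To make the warm-up effect concrete, here is a minimal toy model – purely illustrative constants of my own, not AMD's actual PowerTune algorithm – showing why a short benchmark run reports a much higher average clock than a long gaming session:

```python
import math

# Toy model of thermally driven clock drift. This is NOT AMD's PowerTune
# algorithm; every constant here is an illustrative assumption chosen only to
# show why a 2-minute benchmark averages higher than a 25-minute session.

MAX_CLOCK   = 1000.0   # advertised "up to" clock (MHz)
FLOOR_CLOCK = 860.0    # throttled clock once the thermal limit is hit (MHz)
START_TEMP  = 45.0     # cold GPU temperature (C)
STEADY_TEMP = 100.0    # temperature the cooler would settle at if unthrottled (C)
LIMIT_TEMP  = 94.0     # thermal limit (C)
TAU         = 200.0    # warm-up time constant (s)

def clock_at(t):
    temp = STEADY_TEMP - (STEADY_TEMP - START_TEMP) * math.exp(-t / TAU)
    return MAX_CLOCK if temp < LIMIT_TEMP else FLOOR_CLOCK

def average_clock(duration_s):
    samples = [clock_at(t) for t in range(duration_s)]
    return sum(samples) / len(samples)

print(f"2-minute run:  {average_clock(120):.0f} MHz average")   # stays at 1000
print(f"25-minute run: {average_clock(1500):.0f} MHz average")  # drifts well below
```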
Another wrinkle occurred when Tom's Hardware reported that retail graphics cards they had seen were showing markedly lower performance than the reference samples sent to reviewers. As a result, AMD quickly released a new driver that attempted to address the problem by normalizing to fan speeds (RPM) rather than fan voltage (percentage). The result was consistent fan speeds on different cards and thus much closer performance.
However, with all that being said, I have continued testing retail AMD Radeon R9 290X and R9 290 cards that were PURCHASED rather than sampled, to keep tabs on the situation.
After picking up a retail, off the shelf Sapphire branded Radeon R9 290X, I set out to do more testing. This time though, rather than simply game for a 5 minute window, I decided to loop gameplay in Metro: Last Light for 25 minutes at a resolution of 2560×1440 with Very High quality settings. The results, as you'll see, are pretty interesting. The "reference" card labeled here is the original R9 290X sampled to me from AMD directly.
Our first set of tests shows the default, Quiet mode on the R9 290X.
For nearly the first 3 minutes of gameplay, both cards perform identically and are able to stick near the 1.0 GHz clock speed advertised by AMD and its partners. At that point though, the blue line, representing the Sapphire R9 290X retail card, starts to drop its clock speed, settling around the 860 MHz mark.
The green line lasts a bit longer at 1000 MHz, until around 250 seconds (just over 4 minutes) have elapsed, then it too starts to drop in clock speed. But the decrease is not nearly as dramatic – clocks seem to hover in the mid-930 MHz range.
In fact, over the entire 25 minute period (1500 seconds) shown here, the retail R9 290X card averaged 869 MHz (including the time at 1.0 GHz at the beginning) while the reference card sent to us by AMD averaged 930 MHz. That is a 6.5% clock speed deficit, which should translate almost directly into a performance difference in games that are GPU limited (most of them).
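For reference, here is roughly how those averages and the delta can be computed from per-second clock logs (for example, exported from GPU-Z's sensor logging); the file names and the column header below are placeholders I've assumed, not a fixed log format:

```python
import csv

# Sketch: average the per-second core clock from two CSV logs and compute the
# percentage deficit of the retail card versus the press sample.
# "GPU Clock [MHz]" and the file names are assumptions for illustration.

def average_clock_mhz(log_path, column="GPU Clock [MHz]"):
    with open(log_path, newline="") as f:
        rows = csv.DictReader(f)
        clocks = [float(row[column]) for row in rows if row[column].strip()]
    return sum(clocks) / len(clocks)

retail = average_clock_mhz("sapphire_290x_retail.csv")    # ~869 MHz in our run
press  = average_clock_mhz("amd_290x_press_sample.csv")   # ~930 MHz in our run

delta = (press - retail) / press * 100.0
print(f"Retail: {retail:.0f} MHz, press: {press:.0f} MHz, deficit: {delta:.1f}%")
# (930 - 869) / 930 ≈ 6.5%, which should track performance in GPU-limited games.
```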
The fan speed adjustment made by AMD with the 13.11 V9.2 driver was functioning as planned though – both cards were running at the expected 2200 RPM levels and ramped up nearly identically as well.
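Conceptually, the driver change looks something like the sketch below – a simplified model of the idea, not AMD's actual driver code – where the fan loop targets a tachometer RPM instead of a fixed duty-cycle percentage:

```python
# Conceptual sketch (NOT AMD's driver code) of percentage-based vs RPM-based
# fan control. A fixed "40%" duty can mean ~2000 RPM on one card and ~2400 RPM
# on another; closing the loop on the tachometer normalizes speed across cards.
# The Fan class below is a made-up stand-in for real hardware accessors.

class Fan:
    def __init__(self, rpm_per_percent):
        self.rpm_per_percent = rpm_per_percent   # varies from card to card
        self.duty = 0.0
    def set_duty(self, percent):
        self.duty = max(0.0, min(100.0, percent))
    def read_rpm(self):
        return self.duty * self.rpm_per_percent

def old_behavior(fan, duty_percent=40.0):
    fan.set_duty(duty_percent)                   # same %, different RPM per card

def new_behavior(fan, target_rpm=2200.0, gain=0.01, steps=200):
    for _ in range(steps):                       # simple proportional loop
        fan.set_duty(fan.duty + gain * (target_rpm - fan.read_rpm()))

for slope in (50.0, 60.0):                       # two cards with different fans
    a, b = Fan(slope), Fan(slope)
    old_behavior(a); new_behavior(b)
    print(f"old: {a.read_rpm():.0f} RPM   new: {b.read_rpm():.0f} RPM")
```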
But what changes if we switch over to Uber mode on the R9 290X, the setting that enables a 55% fan speed, and with it more noise?
You only see the blue line here from the Sapphire results because it is overwriting the green line of the reference card – both are running at essentially the same performance levels and nearly maintain the 1000 MHz frequency across the entire 25 minute gaming period. The retail card averages 996 MHz while the reference card averages 999 MHz – pretty damn close.
However, what I found very interesting is that these cards did this at different fan speeds. It would appear that the 13.11 V9.2 driver did NOT normalize fan speeds for Uber mode, as the 55% reported on both cards results in fan speeds that differ by about 200 RPM. That means the blue line, representing our retail card, is going to run louder than the reference card, and not by a tiny margin.
The Saga Continues…
As we approach the holiday season, I am once again left with information that casts the retail R9 290X cards in a worse light than the sampled ones, but without enough data to draw any definitive conclusions. In reality, I would need dozens of R9 290X or R9 290 cards to make a concrete statement about the methods AMD is employing, but unfortunately my credit card wouldn't appreciate that.
Even though we are only showing a single retail card against a single R9 290X sampled directly from AMD, these reports continue to pile up. The 6.5% clock speed difference we are seeing is large enough to warrant concern, but not large enough to start a full-on battle over.
My stance on the Hawaii architecture and the new PowerTune technology remains the same even after this new data: AMD needs to define a "base" clock and a "typical" clock that users can expect. Otherwise, we will continue to see reports on the variance that exists between retail units. The quick fix AMD's driver team implemented, normalizing fan speed on RPM rather than percentage, clearly helped, but it has not addressed the issue entirely.
Here's hoping AMD comes back from the holiday with some new ideas in mind!
- AMD Radeon R9 290X 4GB – $549 (Newegg.com)
- AMD Radeon R9 290 4GB – $399 (Newegg.com)
- AMD Radeon R9 280X 3GB – $299 (Newegg.com)
- NVIDIA GeForce GTX TITAN 6GB – $999 (Newegg.com)
- NVIDIA GeForce GTX 780 Ti 3GB – $699 (Newegg.com)
- NVIDIA GeForce GTX 780 3GB – $499 (Newegg.com)
- NVIDIA GeForce GTX 770 2GB – $329 (Newegg.com)
The thing is, it's the same with GK110. Depending on the quality of your chip, you will get different amounts of throttling (which I'm calling the drop from max boost because of temp/power limits). Since no two chips are ever equal, both companies' power balancing technologies will unavoidably result in variance in sustainable clock rates.
True, at least Nvidia isn't selling their cards by shouting out their max boost clocks, but if the review card has a better chip than the one you end up buying, the effect is the same – worse performance than expected. Both companies sell their cards with reviews, and now both companies have a "review boost" technology. It's not cool and it makes objective GPU testing a pain, but I suppose it is what it is from now on.
While you are correct that variance occurs on the GeForce GPUs, in my experience it isn't this extreme.
You just don't test this with hundreds of cards. That is what I'm doing day by day, so I can tell you that I have found some GTX 780 Ti cards that are 8% slower than the defined reference speed. In my database the average variance is 5% for the GTX 780 Ti and 3% for the R9 290X. But this is normal; this is how these cards are designed to work.
Oddly, I have done similar work and I get exactly the opposite. The 780 Ti gets no more than 2% variance (I define variance as the full spread in measured frequency about the mean, divided by the mean), averaged over 72 cards. On the R9 290X I am seeing well over 13.4%, averaged over 56 cards.
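To be concrete, here's a quick sketch of that metric; the numbers below are made up, not my actual measurements:

```python
# Spread-over-mean variance metric described above: the full spread of measured
# sustained clocks, divided by the mean. Sample values are hypothetical.

def clock_variance(clocks_mhz):
    return (max(clocks_mhz) - min(clocks_mhz)) / (sum(clocks_mhz) / len(clocks_mhz))

sustained_290x_clocks = [1000, 940, 925, 866]        # made-up sustained clocks (MHz)
print(f"{clock_variance(sustained_290x_clocks):.1%}")  # -> 14.4%
```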
The R9 290X is clearly running too hot, smack dab at the very limit of its junction temperature. This card will likely run much slower in Phoenix, Arizona than it does in Quebec, Canada.
AMD should dial back the specs on the card; whether this is intentional or not, it is not putting them in a good light.
Just another Nvidia-sponsored article. Sales are down, so we'll see a few of these from sites like PCP that can be bought to repeat the same old story over and over again.
It's very simple: if the ambient room temps are not the same in Quebec and Arizona, just adjust the fan speed. Most enthusiasts buying a $400-and-up graphics card know that, and actually very few, if any, bought the card with the intention of using the reference cooler.
Facts cannot be bought… dear AMD fanboy who’s unable to handle facts.
Can you explain why it is that you have had so many of these video cards in your hands for testing? That’s over $50,000 of video cards.
I have to commend you on a thorough and honest evaluation of the ATI graphics card. It's extremely hard to find unbiased reviews anymore. Your ability to translate high-tech information into easy-to-understand language is quite high! It's extremely important to me, especially in this economy, to only support major manufacturers who go out of their way to show integrity in their product.
Thanks for your work!
That's not really true. Kepler has a base clock for every model, and the cards will NEVER go below the base clock (unless you take the cooler off them, or your PSU is faulty). NVIDIA said from day one that some cards will BOOST a bit better than others.
But since they don't set the clocks so close to the limit of the GPU, absolutely all cards will achieve the preset BOOST clocks, and actually go well beyond that. Some will go a step higher, but it's a small step; performance is about the same and well within the margin of error for the benchmarking process.
AMD, on the other hand, says the cards have an "up to 1GHz" clock speed. The retailers sell the cards as having a "1GHz" clock speed, losing the "up to" moniker. And in reality the cards will ALWAYS clock lower than 1GHz, in some cases A LOT LOWER, around 650-750MHz, because they throttle so heavily.
And the fact that it's so easy to find retail cards underperforming review cards raises an important point, because it suggests that it's not just a small-batch problem. It also raises the question of whether AMD sent reviewers HAND PICKED cards that performed better than what consumers will get in retail.
But to summarize, NVIDIA sells you a baseline performance that ALL cards will achieve and that's guaranteed, and maybe your store-bought one will be a bit faster if you're lucky, while AMD sells you the dream of a top-performing card that, it seems, will NEVER be the case for your store-bought card because, well, you're unlucky.
And of course they didn’t send golden samples to reviewers, because they’re AMD and only NVIDIA is capable of bad things. And in case you didn’t detect it, this was sarcasm.
I agree on the clock rate advertisement aspect, as I already wrote. What I don’t wholeheartedly agree on is the extent to which advertised clock rates make a difference for sales. I’d say that for such high-end GPUs, the review performance is way more important than stated clock rates on the box. Both Nvidia and AMD have set up their new cards (GPU Boost 2.0 and PowerTune 2.0) so that they perform better on a short benchmark than on a sustained gaming session.
IMO, the biggest issue those technologies raise is in regards to their customizable nature. Based on user choices, one can get wildly different performance levels for these cards. For example, I think it’s unfair to test the 290X on über mode (or even worse, with uncapped fan speed) and leave the GK110 cards on their stock power and temperature limits. One can make the choice to get sustainable full performance of the Hawaii cards by making the fan spin faster to keep them from throttling. One can also get the GK110 cards to sustain their maximum boost clock constantly by upping their power and temp limits. So for a comparison test to get the baseline performance differences for these chips on even terms is a really big challenge.
Ryan,
I assume you took the sample card apart for pictures? I am also assuming you reapplied the TIM?
Did you do that with the retail card? The stock TIM application on GPUs usually isn't good.
I didn't do any of that on any cards we have tested.
There are two big questions I think are on everybody's mind; they're certainly on mine. 1) Are these truly flaws in the core, or a simple hiccup on the first iteration? Is this the kind of thing that (much like the first Bulldozer, Llano and FX chips) shows promise now, but by the time of the R9 390X, or whatever the next one will be called, all the problems will be worked out? 2) Are we expecting to see this Hawaii architecture used in the rest of the next-generation AMD GPUs, like the R7 340, R9 370X, etc.?
Unfortunately, cards such as the Sapphire Vapor-X should have been available from day one, not the crappy reference cooler with TIM that looks like it was literally thrown on the GPU…
Lack of attention to detail and quality indicates that AMD does NOT care about its customers [and I have been one for years].
Sorry to spam the comments, but what is the ASIC quality on the GPUs?
I'm not sure I understand the question?
GPU-Z has an ASIC Quality option, but it doesn't really tell you anything about the quality of the chip, and nobody actually knows what it means. I have two identical 680s with consecutive serial numbers. One has 70-ish ASIC Quality and one has 90-ish ASIC Quality, and the one with the lower number overclocks a lot better.
ASIC quality, AFAICT, as reported by GPU-z, deals mostly with impedance of the circuitry.
There are minor differences in the thickness of the metal layers and completeness of the doping of the silicon as well as other factors that alter resistance, conductivity, and capacitance. Lower resistance generates lower heat (of course) and requires lower voltage to operate. This makes for better overclocking – at least at room temperature… Higher maximum conductivity can mean higher resistance at lower voltages, and capacitance can really be ignored – but the relationship of them together is often referred to as ‘voltage leakage.’
Higher resistance, as mentioned, can be a sign that the chip can carry higher current and handle higher voltages. This is great for being able to push the chip to the max when cooling isn’t a factor… BUT, if cooling is a factor (which it is in all cases – except LN2 or Helium…), then you want lower resistance – and LOWER ASIC quality…
However, if you are using sub-ambient cooling, you want higher ASIC quality – so you can push more voltage and current and achieve higher clocks.
Sorry, when I said you wanted "LOWER ASIC quality" for air and HIGHER for LN2, I got those backwards, LOL!
Higher ASIC quality = lower resistance, thus lower voltage, less heat, higher clocks on air, but less voltage & current tolerance.
This is further complicated by chip binning. For example, the 7970 has four known bins with different default voltages, based on their ASIC quality number.
up to 2F90 (up to 75% quality) – 1.1750V
up to 34D0 (up to 80% quality) – 1.1125V
up to 3820 (up to 85% quality) – 1.0500V
up to 3A90 (up to 90% quality) – 1.0250V
The pure ASIC quality number is useless if you don't know the bins, at least for non-extreme OC.
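For what it's worth, reading that bin table (assuming it's accurate) is just a threshold lookup; here's a rough sketch, with the bins taken from the list above rather than from any AMD documentation:

```python
# Hedged sketch of the 7970 bin table quoted above: a default VID is picked by
# thresholding the ASIC quality percentage. The bins come from the comment and
# are not verified against AMD documentation.

BINS_7970 = [          # (max ASIC quality %, default voltage)
    (75.0, 1.1750),
    (80.0, 1.1125),
    (85.0, 1.0500),
    (90.0, 1.0250),
]

def default_vid(asic_quality_percent):
    for limit, voltage in BINS_7970:
        if asic_quality_percent <= limit:
            return voltage
    return BINS_7970[-1][1]   # above 90%: assume the lowest-voltage bin

print(default_vid(78.3))   # -> 1.1125
```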
So what are the ASIC numbers?
Hey Ryan, thanks for all the great info about all these cards. If you have the time, you might like to check out two of my videos demonstrating my setup and how my R9 290X (BF4 Limited Edition from Sapphire, the same one you are using in your article) is running. Thanks again for all the hard work and info.
Here are the links to my videos; sorry for them being kind of rough, I don't make a lot of videos lol.
1. https://www.youtube.com/watch?v=VicPlkKdRGk
2. https://www.youtube.com/watch?v=Ht7cDpu_PwE
Strange, we can all be sure NV just randomly grabs GPUs off a shelf/production line and sends them to reviewers ;/
The variations are too wide for sure.
I'm sure that NVIDIA does some testing before sending out review samples, but it's all about how they react to that data. Do they send out 'average' cards, 'above average' cards, or 'f'ing amazing' cards?
I look forward to your retail vs press sample AMD/Nvidia shoot out review.
F'ing amazing. If not, they would be extremely foolish for not doing so.
They certainly aren't saints.
That's why consumers can never trust reviewers' OC results.
Since PC Per and other tech sites don't want to be labeled as Nvidia-leaning fanboys, I'm just gonna call this what it is.
AMD, despite delivering a good product, has apparently pulled some shady shit in order to paint their performance numbers better than what can be expected in retail, by delivering cherry-picked GPUs for preview and review.
I think Gordon Mah Ung at Maximum PC brought up a very damning point about all of this.
To his knowledge, there are numerous reports about retail cards having variance but NOT A SINGLE REPORT of any variance in any of the handpicked cards shipped for review from AMD.

If this turns out to be true, AMD had better own up to this crap, fire the people responsible, and be wholeheartedly transparent about expected performance for the average user. I like AMD, but if this is the way they are going to do business, they can kiss my a.. I know someone at AMD is reading this. Wise up, own up, and be transparent. You have a lot more on the line than you think.
I don’t believe AMD is doing anything untoward in this case. Many people in the owner’s forum have reported that they have much lower throttling than even the review samples.
I just think, if anything, AMD may have been more closely monitoring the first produced cards for production issues, then as production ramped up stopped paying enough attention – something that usually isn't a problem… a card running 5-10C hotter, but still well within tolerances, isn't a problem on other cards…
I think one solution would be to change the TIM they're using and try to weed out the variability in its application. Others have done this with surprisingly good results. A little higher-quality TIM has allowed some cards to not throttle at all with stock fan settings – but most just gain 50MHz or so on average.
Let's not forget he went on a tirade about how Uber Mode was invalid because it's not a "default" out-of-the-box setting.
He's probably making out with Tom Petersen in exchange for the Nvidia ads on his site.
Use chapstick, you don't want your lips drying up.
I'm certain Ryan is trembling at the thought of more of your vicious insults! Are you seriously 12 years old or something? That's pretty weak and pathetic.
Nvidia has the right idea here. They advertise a base clock and a typical boost clock that most cards should be able to attain. I have a factory overclocked 780 and without any action on my part the card regularly exceeds the advertised boost speeds by around 100MHz. This means, to me at least, that Nvidia is being reasonably cautious with the boost clocks they choose to advertise. AMD only advertises the maximum clocks that their cards can achieve. They also need to define a base clock so that people can be certain of at least a certain level of performance.
As far as reviewers go, I only see Ryan bitching about the base clock clarification.
It's simple to find out. Once the card initiates 3D clocks, that's your base. Duh!!!
What other reviewers are crying like babies, like Ryan is, about "oh, I don't understand what up to 1GHz means"?
At the end of the day it's about what FPS it can push, not whether I can understand what GHz it's operating at.
Other reviewers have pointed out AMD's issues, but it seems Ryan is fully on board with the Nvidia smear campaign:
“Nvidia is misrepresenting the issue. Nvidia is pushing this as a GPU frequency problem where users “expect” certain clocks. What users actually expect is certain levels of performance. Offering a switch that puts the GPU in a mode where it trades performance and temperature is not the problem. The problem is failing to offer that both on both cards. The GPU clock speed oscillates back and forth when the R9 290X is in Quiet mode because that’s what the GPU is designed to do.”
There are several other review sites that aren't blowing Nvidia's horn and are professional enough to explain both sides without whining about base clocks.
Wake me up when there are 3rd-party aftermarket coolers.
Maybe you should get another retail card and write another article rehashing this "fan gate" issue every week.
I eagerly await the retail cards as well. But AMD is in fact selling these cards with reference coolers, so the points are still 100% valid.
Futuremark and Unigine should add a 30-minute anti-boost, anti-PowerTune mode to their benchmarks.
The figures AMD should be advertising are the absolute minimum their card can produce and an average figure. Any card a consumer receives should be able to beat the minimum, end of story.
The variance in the performance of their chips isn’t up to standard.
For the record, I just purchased an MSI N770 but have purchased AMD in the past. I was building an HTPC, so noise was a major factor considering I have spent a large amount on a surround sound system.
ExtremeTech: AMD's Radeon R9 290 has a problem, but Nvidia's smear attack is heavy-handed
http://www.extremetech.com/gaming/170542-amds-radeon-r9-290-has-a-problem-but-nvidias-smear-attack-is-heavy-handed
Seems like Ryan is repeating Nvidia PR word for word to me.
And I'm left once again scratching my head as to why, with Nvidia having counterpunched and with the GTX 780 Ti released, even in aftermarket-cooler versions, AMD still isn't allowing 3rd-party models with aftermarket coolers to be sold. Which would effectively solve the damn problem.
I mean, why not get it over with ASAP?
I really wonder…
AMD could have largely dodged all the criticism over the high temps, the noise levels and these frequency variations if the AIB partners had had aftermarket cards at launch or soon after.
Instead they released a product that has these variations, is much louder and hotter and is up to ~10% slower than what it could have been.
Tom's Hardware has shown what a decent cooler can do for the 290, and it'll be a similar story with the 290X. While remaining much quieter than the stock cards, the GPU didn't exceed 60C, hence the chip will run at 100% stock frequencies all the time and can be overvolted and overclocked without throttling. They won't be running at "up to 1GHz" anymore; they will just run at 1GHz.
It's all very well showing graphs of clock speed, but without showing the real impact on performance this whole exercise is rather pointless.
Most consumers wouldn’t really care (or notice) if their clockspeeds were slightly lower than the maximum advertised. They only care about whether their new card can play the latest games with smooth gameplay.
I agree entirely; the lack of actual performance data here puzzles me.
I guess it's all gone a bit quiet on the hardware review front, so may I suggest reviewers test tri/quadfire on these 290s (I have yet to see any FCAT data on this) instead of reporting on every defective card that's sent to them.
I presume you made a mistake on that fan speed graph. It should be RPM, not MHz, on the Y-axis.
Did you check the ASIC quality of those cards (can GPU-Z even read it from Hawaii)?
PCper is most assuredly a pro-nvidia shill. He’s shown time and time again that he will pick up anything remotely damaging to AMD and run with it.
And for the anonymous comment : “I like AMD, but if this is the way they are going to do business, they can kiss my a..”
If you had such morals, you’d have said that to nvidia years and years ago. Whatever you think AMD has done, multiply that by 100 and that’s what nvidia is. A predatory, monopolist anti-competitive a**hole corporation. Whatever technical assets they have, are entirely overshadowed by their unethical business methods. nvidia has been on my sh*tlist for over a decade and that goes double for intel.
Stop supporting these bastard corps that will not compete fairly and ethically. Your money and mindshare is paying for this garbage.
Yo mates, just stop wasting your nerves with these flames about AMD. You have to admit that the R9 290 is an amazing GPU, speaking in terms of price/performance. You cannot say that Nvidia has graphics cards as cheap as AMD does. But there is one point where you are right to flame AMD: they absolutely failed in designing that card. The actual cooling solution is just crap; no wonder the chip is dropping in MHz. My opinion is just to wait for the partner cards and their cooling solutions, and I can promise everyone you won't be disappointed in this card.
greets!
Looks like meaningful discussion is only happening here. So I ported my thread from https://pcper.com/news/Graphics-Cards/Controversy-continues-erupt-over-AMDs-new-GPU to here.
=======
Well, I am just wondering why there is so much pressure on AMD. Look at Nvidia too, please.
Yes, Nvidia only promises base/boost speeds, not the upper bound. However, remember that when you do your benchmark, you are effectively running at whatever actual speed your card provides. Let's call it the "real" frequency. Your review is based on this "real" frequency. It will stabilize at some point after warming up for a given application. It's the same as AMD's cards; the only difference is how they market their speed.
Now if you want to argue that AMD is a bit shady or inconsistent about “real” frequency, take a look at what I have on hand.
Card 0: EVGA GTX780 SC (941/993MHz, 69.4%)
Card 1: EVGA GTX780 non-SC (863/902MHz, 70.5%)
Card 2: EVGA GTX780 non-SC (863/902MHz, 63.7%)
All from Newegg, marked with (base/boost, ASIC quality).
At stock settings, running the Heaven benchmark for several minutes, the "real" freqs are stuck at 862/862/1019-1045 MHz, with fan speeds of 51%/60%/63% and temperatures of 79/83/84C. I picked Heaven because it's easier to run, and I've observed similar results across games.
What do you see?
1) The non-SC versions are effectively running at their base clock, while the SC version is running much higher than its boost clock, all under stock settings.
2) All the review sites I saw stated that the GTX 780 stays at 80C with some boost while running games. That is definitely not the case for me. The regular GTX 780 has to hit 83/84C to maintain its base clock with the stock fan profile.
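Using the same spread-over-mean definition from the earlier comment on those stock numbers (a rough illustration, taking the low end of the SC card's 1019-1045MHz range to be conservative):

```python
# Applying the spread-over-mean metric from the earlier comment to the stock
# sustained clocks quoted above. Illustrative only; one of the three cards is
# a factory-overclocked SC model, so this mixes bins.

sustained_mhz = [862, 862, 1019]   # card 1, card 2, card 0 at stock settings
spread = (max(sustained_mhz) - min(sustained_mhz)) / (sum(sustained_mhz) / len(sustained_mhz))
print(f"{spread:.1%}")   # -> about 17% across three GTX 780s
```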
Now, let me change my perspective. Sure, I don't want SLI to run at different speeds and introduce stuttering and all that other crap. So how much overclock do I need to make them equal? I unlocked the fan speed with my own curve, set the power/thermal limits to 106%/85C, and also overclocked the memory +300MHz. Here is the result:
Card 0: +0MHz, 79% fan, 67C, 1071MHz
Card 1: +79MHz, 97% fan, 78C, 1071MHz
Card 2: +129MHz, 97% fan, 78C, 1071MHz
I need to OC 79MHz for card 1 and 129MHz for card 2 so that all three cards reach the same 1071MHz "real" freq during gaming. That's consistent across all games I've played so far, including Dirt 3, Bioshock Infinite, BF4, FC3, the Heaven benchmark, 3DMark, etc.
Again, you can argue that overclocking tweaks the characteristics a bit, but Nvidia and AMD have different architectures, and running at 1GHz for the R9 might well be a similar operating point to the GTX 780 at 1071MHz, on the same process. If not convinced, just compare my stock-setting data with what you guys got during your review.
The variation is definitely there for Nvidia, and it's equally HUGE, if not bigger – especially if you consider that the OC version is just a product of binning. Sure, they have different BIOSes, but when overclocked to equal grounds, the difference between cards shows up immediately. Nvidia might as well have shipped you a better bin for review, or at least filtered out the worst ones.
If you have a problem with AMD stating a max frequency instead of a base frequency, which might mislead consumers, fair. However, if your point is inconsistency between retail and review samples, make sure you check Nvidia too. It's no better, based on my limited sample. The fact that they state base/boost speeds doesn't make them immune to hand-picking review samples for you guys.
I am a huge Nvidia fan, but at this point, I do think you are biased on this matter, unless you guys do a similar study on Nvidia cards. Similar sample size, warm them up, and check the actual speeds during benchmarks.
This is my data and my thoughts. Now please show yours. You have many more resources than me, after all.
Edit: My cards were all bought in early June, so they must be the old stepping instead of the new one. GPU-Z says revision A1.