GP106 Specifications
NVIDIA’s GeForce GTX 1060 6GB card is here and we start our testing with the Founders Edition!
Twelve days ago, NVIDIA announced its competitor to the AMD Radeon RX 480, the GeForce GTX 1060, based on a new Pascal GPU: GP106. Though that story was just a brief preview of the product and a pictorial of the GTX 1060 Founders Edition card we were initially sent, it set the community ablaze with discussion around which mainstream enthusiast platform was going to be the best for gamers this summer.
Today we are allowed to show you our full review: benchmarks of the new GeForce GTX 1060 against the likes of the Radeon RX 480, the GTX 970 and GTX 980, and more. Starting at $250, the GTX 1060 has the potential to be the best bargain in the market today, though much of that will be decided based on product availability and our results on the following pages.
Does NVIDIA’s third consumer product based on Pascal make enough of an impact to dissuade gamers from buying into AMD Polaris?
All signs point to a bloody battle this July and August and the retail cards based on the GTX 1060 are making their way to our offices sooner than even those based around the RX 480. It is those cards, and not the reference/Founders Edition option, that will be the real competition that AMD has to go up against.
First, however, it’s important to find our baseline: where does the GeForce GTX 1060 find itself in the wide range of GPUs?
GeForce GTX 1060 Specifications
Let’s start with a dive into the rated / reference specifications of the GTX 1060 provided by NVIDIA.
| | GTX 1060 | RX 480 | R9 390 | R9 380 | GTX 980 | GTX 970 | GTX 960 | R9 Nano | GTX 1070 |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP106 | Polaris 10 | Grenada | Tonga | GM204 | GM204 | GM206 | Fiji XT | GP104 |
| GPU Cores | 1280 | 2304 | 2560 | 1792 | 2048 | 1664 | 1024 | 4096 | 1920 |
| Rated Clock | 1506 MHz | 1120 MHz | 1000 MHz | 970 MHz | 1126 MHz | 1050 MHz | 1126 MHz | up to 1000 MHz | 1506 MHz |
| Texture Units | 80 | 144 | 160 | 112 | 128 | 104 | 64 | 256 | 120 |
| ROP Units | 48 | 32 | 64 | 32 | 64 | 56 | 32 | 64 | 64 |
| Memory | 6GB | 4GB / 8GB | 8GB | 4GB | 4GB | 4GB | 2GB | 4GB | 8GB |
| Memory Clock | 8000 MHz | 7000 MHz / 8000 MHz | 6000 MHz | 5700 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 8000 MHz |
| Memory Interface | 192-bit | 256-bit | 512-bit | 256-bit | 256-bit | 256-bit | 128-bit | 4096-bit (HBM) | 256-bit |
| Memory Bandwidth | 192 GB/s | 224 GB/s / 256 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s | 196 GB/s | 112 GB/s | 512 GB/s | 256 GB/s |
| TDP | 120 watts | 150 watts | 275 watts | 190 watts | 165 watts | 145 watts | 120 watts | 275 watts | 150 watts |
| Peak Compute | 3.85 TFLOPS | 5.1 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 2.3 TFLOPS | 8.19 TFLOPS | 5.7 TFLOPS |
| Transistor Count | 4.4B | 5.7B | 6.2B | 5.0B | 5.2B | 5.2B | 2.94B | 8.9B | 7.2B |
| Process Tech | 16nm | 14nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 16nm |
| MSRP (current) | $249 | $199 | $299 | $199 | $379 | $329 | $279 | $499 | $379 |
Much of the commentary on this page is duplicated from our GTX 1060 preview posted last week.
The GeForce GTX 1060 will sport 1280 CUDA cores with a GPU base clock of 1506 MHz and a Boost clock rated at 1708 MHz. Though the card will (initially) be available only in a 6GB variety, the reference / Founders Edition will ship with 6GB of GDDR5 memory running at 8.0 GHz / 8 Gbps. With 1280 CUDA cores, the GP106 GPU is essentially one half of a GP104 in terms of compute capability. NVIDIA decided not to cut the memory interface in half though, instead going with a 192-bit design compared to the GP104 and its 256-bit option.
The rated GPU clock speeds paint an interesting picture for peak performance of the new card. At its rated boost clock, the GeForce GTX 1070 produces 6.46 TFLOPS of performance. The GTX 1060, by comparison, will hit 4.35 TFLOPS, a 48% difference. The GTX 1080 offers nearly the same delta of performance above the GTX 1070; clearly NVIDIA has settled on a consistent scaling for Pascal product segmentation.
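These peak-compute figures fall straight out of shader count and clock: each CUDA core can retire two single-precision FLOPs per cycle (one fused multiply-add). A quick sketch of the arithmetic, assuming the GTX 1070's rated 1683 MHz boost clock (the helper function is ours, not an NVIDIA tool):

```python
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: cores x clock x 2 FLOPs (one FMA) per cycle."""
    # clock_mhz * 1e6 gives Hz; dividing by 1e12 yields TFLOPS, so net /1e6
    return cuda_cores * clock_mhz * 2 / 1e6

# GTX 1070: 1920 cores at its rated 1683 MHz boost
print(round(peak_tflops(1920, 1683), 2))  # 6.46
# GTX 1060: 1280 cores at its rated 1708 MHz boost
print(round(peak_tflops(1280, 1708), 2))  # 4.37 (the ~4.35 quoted above
# implies a slightly lower effective clock)
```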
NVIDIA wants us to compare the new GeForce GTX 1060 to the GeForce GTX 980 in gaming performance, but the peak theoretical performance results don’t really match up. The GeForce GTX 980 is rated at 4.61 TFLOPS at BASE clock speed, while the GTX 1060 doesn’t hit that number at its Boost clock. Pascal improves on performance with memory compression advancements, but the 192-bit memory bus is only able to run at 192 GB/s, compared to the 224 GB/s of the GTX 980.
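The bandwidth gap is simple arithmetic: bus width in bytes times the effective data rate. A minimal sketch using the figures from the spec table (the helper name is ours):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth = (bus width / 8 bits per byte) x effective Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(192, 8.0))  # GTX 1060: 192.0 GB/s
print(bandwidth_gb_s(256, 7.0))  # GTX 980:  224.0 GB/s
```

This is why the 1060's faster 8 Gbps GDDR5 still cannot catch the 980's wider 256-bit bus; Pascal leans on memory compression to make up the rest.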
The GTX 1060 Founders Edition card has a TDP of just 120 watts and will have a single 6-pin power connection. With all of the controversy and debate surrounding the Radeon RX 480 and its power delivery system, this is going to be looked at more closely than ever. NVIDIA has set the TDP 30 watts lower than the combined 6-pin + PCI Express slot power rating, so this definitely gives them room for overclocking and slight power target adjustments within those boundaries. In recent history NVIDIA has tended to be conservative with its power targets; I expect the GTX 1060 to fall well within the 120 watt level at stock settings.
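The 30-watt figure comes from the PCI Express power budget: the x16 slot is specified for 75 W and a 6-pin auxiliary connector for another 75 W. A quick sanity check (spec limits per PCI-SIG; the variable names are ours):

```python
PCIE_SLOT_W = 75       # PCIe x16 slot power limit
SIX_PIN_W = 75         # one 6-pin PCIe auxiliary connector
GTX_1060_TDP_W = 120

available = PCIE_SLOT_W + SIX_PIN_W    # 150 W total in-spec budget
headroom = available - GTX_1060_TDP_W  # 30 W left for overclocking
print(headroom)  # 30
```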
The starting MSRP for the GeForce GTX 1060 partner cards will be $249. The Founders Edition card, designed by NVIDIA and the one we were sent for our initial reviews, will cost $299 and will be available ONLY at NVIDIA.com. NVIDIA is listing this one as “limited edition” so I would assume that means we will not see the Founders Edition throughout the entirety of the life of the GTX 1060.
At $249, the GTX 1060 partner cards, available and shipping today, will compete very well with the 8GB variant of the Radeon RX 480, which at reference prices is only $10 less expensive. NVIDIA itself proclaims the GTX 1060 is “on average 15 percent faster and over 75 percent more power efficient than the closest competitive product”, which obviously refers to the aforementioned RX 480.
DOOM Numbers
http://www.hardocp.com/images/articles/1468921254mrv4f5CHZE_4_3.gif
GTX 1060: 64 fps
RX 480: 80 fps
GTX 980: 71 fps
hypocrite. cries about hardocp being nv biased then posts link from their site showing good results for amd. LMFAO get over yourself loser
Actually the irony both ways is great, because even the by-the-Nvidia-benchmarking-manual folks at hardocp cannot fudge the Doom results. GCN has arrived, even for the older GCN SKUs. So Maxwell is showing Max-Less improvements under DX12/Vulkan compared to GCN/Polaris and the older GCN SKUs!
So now Nvidia will try its hardest to do the Pascal two-step and throw some gimp-works into the benchmarks, but this time around it’s not going to work, with the developers getting access to more of the GPU’s metal! There are fewer places in the drivers for Nvidia to Gimpvidia with the newer graphics APIs, as the game developers will be more in charge of the optimizations for their games.
I think the true definition of irony is thinking that it’s ironic that HardOCP can’t fake the Doom results to help Nvidia, rather than just realizing that they were never biased in the first place.
Exactly.
And it might work even this time for Nvidia. Time Spy shows that there is a way to make Pascal cards look better and AMD cards less great.
You’re still digging, huh? That tin-foil hat gotta come off sometimes.
Then stop wearing it.
Yes I do think that hardocp is Nvidia biased and that’s why I believe that those numbers are NOT AMD biased.
AMD trolls strike again!!
Seriously, keep a level head.
The amount of confirmation bias in these comments is astounding. You’ve got AMD fans calling Nvidia fans biased for calling AMD fans biased for calling Nvidia fans biased. The circle is never ending and I can’t imagine how frustrated Ryan and Allyn must be with the state of their comment section.
Believe me, when you write articles there is nothing more frustrating than having no to very few comments.
Dude this whole community is dysfunctional.. I cannot for the life of me understand why people spend so much time and energy crying and whining about “but, but, but, but… launch day prices, availability, drivers will make it faster, uses cheap marketing tricks..” etc..
Why don’t you all just take a breath, relax, and WAIT for a month or two for everything to settle.
Then you can make a more informed decision on which card…
is more appropriate…
for…
YOUR PERSONAL REQUIREMENTS.
I don’t know who is worse: GPU fan boys fighting or truck fan boys pissing on each other.
truck fan boys with laptops?
The fanboys are starting to heat up. consoles are better anyway Kappa
True, you can always rely on consoles providing *consistent* and *predictable* crappy performance and sub-standard graphics.
=)
True but consoles get the magical up to 30% benefit from async. It’s the AMD “holy grail” of async. 30% added to crap frame rates is still crap frame rates. Needs to get 100% benefit from async to be viable at even 1080p at decent settings.
I’m seriously thinking of getting an AMD card to put alongside my 980 Ti just for DX12 games, to get the best of both worlds. If you think about it, it’s just like buying another console 🙂
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73040-nvidia-gtx-1060-6gb-review-16.html
Tomb Raider shows a ~19% NV victory, while Quantum Break is about 13% AMD. It depends on the game IMHO, and I don’t really think NV has concentrated much on DX12 yet since 85% of the market is DX11 or less. I would rather win DX11 for the next year or two, as that is what devs will concentrate on. People are drawing conclusions based on <5 games, while AMD moved to DX12 pretty much totally just because of a lack of funding.
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73040-nvidia-gtx-1060-6gb-review-15.html
Ashes is basically a statistical tie and Hitman is ~15% AMD. Hitman appears to depend on which benchmark you’re running:
https://www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X/16.html
NV win.
One more point: NV’s chips all hit 2050+ MHz (many reviews hitting 2100), giving another ~10-14% in all benchmarks, while AMD gets a meager 50 MHz on some 480 chips. I’d rather be NV, and doing it with far fewer watts even when OC’ed.
http://www.techspot.com/review/1209-nvidia-geforce-gtx-1060/page7.html
12%
https://www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X/27.html
15.1%! WOW, partner cards rock already; note they got 14% from the Founders Edition.
For people wanting Doom:
http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Benchmark-Test-1195242/
With the 980 Ti smashing the 1070, it’s clearly not optimized for Pascal yet. Also note the odd library behavior shown at hardocp:
http://www.hardocp.com/article/2016/07/19/nvidia_geforce_gtx_1060_founders_edition_review/4
“We found that the AMD Radeon RX 480 uses the newest 1.0.17.0 Vulkan API libraries in-game. When we brought up the console (shown above) on RX 480, it showed “1.0.17.0” next to Vulkan API where NVIDIA GPUs show 1.0.8. Just something worth noting.”
Yeah, and just as the German site shows, Pascal is not working right yet. A 980 Ti should certainly not be beating a 1070 by 16%! Clearly Doom isn’t seeing Pascal with the latest library, which is probably just a flag setting or something that got missed.
Good write up. I probably couldn’t have done a better job myself.
AMD fanboys are always in a rush because AMD gets better support out of DX12 and Vulkan, APIs practically designed around AMD’s architecture. They know the clock is ticking. Eventually Nvidia gets a handle on new APIs and then they start proving their dominance.
It happened with DX11 and it will happen again. AMD doesn’t make bad cards; Nvidia’s are just better. I’m with you: even if Nvidia does lose by a few frames in games optimally coded for AMD, they still at times use way less wattage doing it.
Notice all these AMD trolls never bring up the VR performance of the “designed for VR” AMD cards. Because Nvidia is by far ahead in VR: higher fidelity and smooth-as-silk frame rates even on Maxwell.
This does not even take into account the new VR features of Pascal which would probably put performance way off the scale.
Can’t wait for the mobile versions! Just imagine $700 laptops with 980 perf and 100w envelopes, OMG!
So I bought a 980 for $260; 1080p G-Sync, SLI capability, and an RM750i PSU made that choice the best for me.
I wonder what the performance delta will be with both the 1060 and this 980 at max OC.
From what I gather, the 1060 is way faster than the 480 in most DX11 stuff, and slightly slower than the 480 in DX12 stuff. With the current drivers. Add in the lower power requirements and promised VR improvements, and it seems like a no brainer. An extra 2GB would be nice, but 6GB should be sufficient.
I usually never quote hardocp, but their GTX 1060 review is telling:
The 1060 is about 25% to 32% slower in Doom (Vulkan),
and 18% slower in DX12 Hitman.
This is also against the reference RX 480, which is power/thermally limited.
I would expect custom RX 480s to be 10 to 20% faster than the reference card all web sites use for reviews.
So it’s very possible that even the GTX 1060’s gain in DX11 will evaporate. And in DX12/Vulkan, the RX 480 will be untouchable.
So it’s hard to think the GTX 1060 will deliver better performance in any titles released later this year and next year.
http://www.hardocp.com/article/2016/07/19/nvidia_geforce_gtx_1060_founders_edition_review/4
“We found that the AMD Radeon RX 480 uses the newest 1.0.17.0 Vulkan API libraries in-game. When we brought up the console (shown above) on RX 480, it showed “1.0.17.0” next to Vulkan API where NVIDIA GPUs show 1.0.8. Just something worth noting. ”
http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Benchmark-Test-1195242/
The 980 Ti beating the 1070 by 16%-20% (~20% over the 4 resolutions tested), showing the same as hardocp. Pascal isn’t being identified correctly by the latest library yet. You apparently didn’t READ the article at hardocp, or you would have seen this. A 980 Ti should not be beating a 1070, and no way by such a large margin. It’s easy to see something is wrong currently. People seem to be forgetting Khronos is headed by Nvidia’s guy… LOL. Your comment is comic: based on one game and a first driver release, company X will never get better in any title this year… ROFLMAO. Yeah… OK. Neil Trevett was a senior VP at 3Dlabs for a decade, and has been VP of NV’s mobile group for over a decade now. He also created the OpenGL ES working group and chairs the OpenCL group, both at Khronos, was president of the Web3D Consortium, and has been top dog at Khronos since 2001. The idea that NV will be outdone when their guy LEADS Khronos is comic.
You are judging a brand new GPU, misidentified, on a brand new game, the single game that is Vulkan based, and even then on a basically beta library just getting off the ground. We’re in the 1st or 2nd inning of game 1, literally. Even at hardocp they showed they had 3 different Vulkan library versions… LOL. How much more beta can you get? 1.0.11.1 installed by NV, hardocp updated it manually to 1.0.17.0, but the game IDs the card with 1.0.8… ROFL. Back the truck up a bit, pal. Also note that in Tomb Raider DX12 the AMD 480 is already smacked around by the 1060, so even there you can see that once NV spends time on DX12 like AMD has (totally ignoring DX11, BTW), at worst things will end up as wins for each side depending on the game.
https://community.bethesda.net/thread/54585?tstart=0
Bethesda thread:
“Does DOOM support asynchronous compute when running on the Vulkan API?
Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.
Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.”
So as I said, work is in progress and not done. Vulkan by bethesda’s own words, is only partially working on NV right now. I think we’re done here 😉 That took two searches to find…jeez. You can see from their faq there are still a LOT of issues even with this single game so far for a lot of cards etc.
Ok, so lots of aggression here… Anywho… Nice review, Ryan. I think you did a nice job of demonstrating the abilities of Nvidia’s new mid-range powerhouse. I also appreciate that you commented on the fact that the RX 480 may have more potential in the future. It feels like an unbiased review despite the agitation in the comments. I think at this point, for best performance TODAY, it would have to be the 1060 over the 480. That said, I think the cards are actually better matched than how it appears in this review. As you put it, the higher bandwidth and larger RAM capacity make the RX 480 more “future proof”. Anyway, thanks for the review!
PS: currently rocking an EVGA GTX 770 SC…
Buy the best Perf/$….
I think your GTA V bench is broken, or run with older drivers????
https://www.computerbase.de/2016-07/geforce-gtx-1060-test/3/#diagramm-gta-v-1920-1080
Why is the RX 480 at its base clock of 1120 MHz and not the normal boost???? Is this why the GTX 1060 is so far ahead in some games??? Why are you not testing all the cards the same way?
There is no boost clock for the RX 480 in any benchmark, why? It’s disappointing that you bench the cards this way. I knew you were an Nvidia-biased site, but not by this much…
Oh, shut the hell up you blithering pseudo-science loving idiot.
Seriously, get better at intelligent trolling or get out.
ALL your comments are personal attacks. Clear sign of intelligence.
Oh, yes, because every one of your comments ISN’T a swing at Nvidia or PCPer.
Your pathetic blindness is matched only by your conspiracy seeking mammalian brain.
Hey now, that’s a clear insult to mammals.
Do you own shares in nvidia?
Do you work at pcper?
More proofs of intelligence.
Proof*.
So much for “intelligence”.
Technically there are multiple posts …
Either way, you both need to be more creative in your insults or more compelling in your arguments lest I start finding the deletion of posts more entertaining than reading them.
You have to admit that one-line posts like these are better for the moderator than posts, 10 paragraphs each, full of “arguments” and “facts”, only to conclude that the other person is a moron. 😀
Not the one I was looking for but …
@Cyclops nice language, thank you for your constructive critique… What a nice person!!!! Thank you so much!!!
And another one: GTA V shows a 10% difference between the GTX 1060 and RX 480
http://www.gamersnexus.net/hwreviews/2518-nvidia-gtx-1060-review-and-benchmark-vs-rx-480/page-5
Sadly the reference RX 480 is hurting from its low-TDP cooler / factory-set high voltage. But it’s what AMD set by default, so that’s what reviews will show 🙁
Because of that you rarely sustain the “boost” of 1266 MHz.
Right now I settle for 1205 MHz at 930 mV, and everything I bench so far is faster. I don’t think I ever saw lower than 1.2 GHz, even in Time Spy or Valley.
Just sharing some tips:
- Don’t leave the memory voltage on auto if you raise the clock
- The cooler gets less effective as the RPM goes up
For me, 2800 vs 4000 RPM got me no better performance, just lots of noise. (The fan at high speed also uses amperage.)
- The cooler is most effective at higher temps (more heat removed for the same airflow)
- The memory system uses a big chunk of the power limit; lower voltage can actually be better for performance. So it’s worth hunting for the sweet spot rather than maxing the MHz/voltage.
I want to get a GTX 1060 for my machines to compare, because so far I have been super impressed with this $199 RX 480.
But right now the cheapest model with a blower is $299 (Founders Edition) and that doesn’t make sense.
But I will say this, quote from hardocp (a place I see as biased toward Nvidia):
“At 1440p using the same playable settings found on the GeForce GTX 1060 we find the AMD Radeon RX 480 simply blows the GeForce GTX 1060 out of the water in terms of performance”
25% faster… and that’s with the reference 480; the upcoming models will be 10 to 20% faster on top of that.
All that to say, the GTX 1060 seems to be an excellent DX11 card,
but is losing steam with DX12/Vulkan.
Can you source this comment please?
“the upcoming model will be 10 to 20% faster on top of that.”
The number is mainly from the sustained boost clock, thanks to better cooling and a higher power limit.
Example: in FurMark I get ~85 fps at default settings.
The card just can’t do 1266 MHz; heat and the power limit seem to push the core to ~800 MHz.
But when you LOWER the boost clock and set the voltage lower,
you get less heat and more power headroom… voila, a rock-steady 120 fps with a 1.2 GHz sustained clock.
This is exactly what custom cards will be able to achieve by default:
8-pin vs 6-pin (power limit), and a much better heatsink/fan (no thermal throttling).
I love Crossfire/SLI too much to ever consider getting a card that does not support such technology. If this card supported SLI it would have cut the legs out from underneath those high, high, HIGH margin 1070/1080 parts; they had a business decision to make and they made it. Our loss.
I think that might be part of the reason they are losing SLI, but I think the main reason is relatively small demand and the issues that arise when trying to use SLI in games. Personally, I buy the best bang-for-the-buck single card I can get, usually in the ~$350 price range, and SLI is irrelevant to me.
I also want to throw this out there with regard to Ryan and/or PCPer being NVIDIA fanboys. If you actually spend enough time listening to or watching their podcasts, you know that they want AMD to succeed, and I feel like they genuinely like and respect Raja Koduri. They want AMD to succeed because that means more competition, and that is better for PC hardware, PC gaming, consumers, and the known universe. That being said, I think if AMD were to release a product that blew them away, they would say exactly that and be super happy about it.
Why would you spread such horrible lies about us?!?!
Thanks, always nice to hear. 😉
Anyone want to comment on this quote from an amd employee (in response to someone saying that polaris is a technical failure and pascal is far superior):
https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/885824-nvidia-geforce-gtx-1060-offers-great-performance-on-linux?p=885857#post885857
It’s a failure in the sense that they were not able to get past 1.4 GHz… and even at 1.3 GHz the card demands so much voltage.
In contrast, Nvidia is able to clock at 2+ GHz, which is a huge advantage. If Nvidia was using 14nm and capped at 1.2 GHz, they would need a chip almost 50% bigger.
Can you imagine if Polaris could run at 1.8 GHz and 1.1 V!!!
It would be 42% faster, almost GTX 1080 class.
But even with those downfalls of the 14nm process and AMD’s lack of refined power optimization, at $199 the RX 480 is a great deal.
I think Nvidia would have priced the GTX 1060 at $329 if AMD was not around.
Now that is what I have been waiting for; going to snatch one of these up soon 😀
Posting about how the reference RX 480 sucks doesn’t do any good. All the AMD reviews also pit it against Nvidia’s reference cooler, which sucked even worse than AMD’s. The FE cooler is better than their old one but costs $100 more. All AIB cards are going to get you more performance than reference ones. That’s a given. Comparing an AIB 480 against a reference 1060 is comparing apples to limes. Compare stock to stock and AIB to AIB.
Doom’s Vulkan path is not working right for Pascal yet due to the new architecture, but trolls are hard at work pointing out “we win.” Yes, you do at this moment, but once Doom is patched properly; probably not.
I love PCPer and usually read their reviews first, but this one smells a little. The card has one big massive problem: the missing SLI connector. Their explanation for not including it is laughable at best. I think it’s pretty clear that if this card had SLI, the 1070 would be null and void and the 1080 would look really bad from a value position.
Additionally, if the future is truly DirectX 12 and Vulkan, the 10XX series will be a paperweight in the next year or so. Most other reviews were not as slanted as this, as they at least tried to mix in a few benchmarks with these new APIs. The fact Doom was left out, FCAT or not, is very telling here.
I am getting two RX 480s because I believe they will be more future proof, and I guess I am one of the 2 people left on the planet that love multi-GPU setups.
Odd review guys, odd review.
For years and years almost every tech site has been advocating getting the single fastest card you can afford, because most support for SLI and Crossfire is either lackluster or slow in coming. Each game has to have a profile for it to be utilized. I hope you have good luck with your 2 480s and the massive amount of heat exhausted and power you’ll need for them compared to a like-priced 1070.
Before SLI usage stats stopped being collected a few years ago, we knew that less than 0.5% of gamers used SLI, most of them being high-end (think 1080) type gamers. The average X60 gamer will almost never utilize SLI, and when he does he won’t see a difference because most games don’t support it properly (or at all). SLI is not needed for a midrange card, and that’s not really a downside of the card itself.
It is questionable if the future is DX12 and Vulkan. It has been 9 years since DX10 came out and there are still games coming out with DX9. The market adjusts more slowly than future-oriented gamers would like. The majority of AAA games coming out are still utilizing DX11. By the time most use DX12/Vulkan there are going to be 11XX and 5XX cards out that may or may not improve on it.
Also, even if we assume DX12 now and only DX12, that’s still around the same performance, with some games going slower and some faster for AMD.
Doom was left out because it does not play nice with the custom overlays that PCPer is using. Blame Bethesda for this one.
Good luck with your 2 480s. I hope you have a lot of nerves to spare for malfunctioning SLI profiles.
As much as I love this website, I just can’t continue reading when there are so many idiots commenting in threads like these. Why isn’t someone pruning all the fanboy bullshit and leaving the actual sensible comments? It really does spoil the whole thing, unfortunately.
The more time they spend deleting fanboy comments, the less time they have reviewing hardware. :/
My hat goes off to Ryan for acknowledging Nvidia’s assbag marketing tactics in his conclusion.
I love the comment section on this site; it’s pure comedy.