Take these numbers with a grain of salt, but WCCFTech has published what it claims are leaked GeForce GTX 1170 benchmarks, found “on Polish hardware forums”. If true, the results show that the graphics card, which would sit below the GTX 1180 in performance, still lands above the enthusiast-tier GTX 1080 Ti (at least in 3DMark FireStrike). They also suggest that both the GPU core and the 16GB of memory are running at ~2.5 GHz.
Image Credit: “Polish Hardware Forums” via WCCFTech
So not only would the GTX 1180 be above the GTX 1080 Ti… but the GTX 1170 apparently is too? Also… 16GB on the second-tier card? Yikes.
Beyond the raw performance, new architectures also give NVIDIA the chance to add new features directly to the silicon. That said, FireStrike is an old enough benchmark that it won’t take advantage of tweaks for new features like NVIDIA RTX, so any gains from those should come above and beyond the increase seen in this score.
Don’t trust every screenshot you see…
Again, if this is true. The source is a picture of a computer monitor, which raises the question, “Why didn’t they just screenshot it?” Beyond that, it’s easy to make a website say whatever you want with the F12 developer tools of any mainstream web browser these days… as I’ve demonstrated in the image above.
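For the curious, faking a number like this takes seconds. Below is a minimal sketch of the trick, assuming a hypothetical “.result-score” element; the real selector would vary by site:

```ts
// Paste into the F12 developer tools console on any result page.
// ".result-score" is a made-up selector for illustration; use the
// inspector to find the real element, then overwrite its text.
const scoreEl = document.querySelector(".result-score");
if (scoreEl) {
  scoreEl.textContent = "28,000"; // the page now displays whatever you typed
}
```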
So the GTX 1080 Ti has GPU Clock: 1481 MHz, Boost Clock: 1582 MHz, Memory Clock: 1376 MHz (11008 MHz effective), according to TPU’s GPU database, while this new 1170, as the article suggests, has the GPU core and 16GB of memory running at ~2.5 GHz.
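As a quick sanity check on those TPU figures, the effective memory clock multiplied by the 1080 Ti’s 352-bit bus width reproduces the card’s advertised ~484 GB/s of bandwidth. A sketch:

```ts
// GTX 1080 Ti memory bandwidth from TPU's figures: 11008 MHz effective
// (11 Gbps per pin) GDDR5X on a 352-bit bus.
const effectiveMHz = 11008;
const busWidthBits = 352;
const bandwidthGBps = (effectiveMHz * 1e6) * (busWidthBits / 8) / 1e9;
console.log(`${bandwidthGBps.toFixed(0)} GB/s`); // ~484 GB/s
```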
So if the GTX 1070 were able to clock at 2.5 GHz and had 16GB of GDDR6 memory, what would its performance be? Is Nvidia able to get the majority of its performance from higher clocks, or are there also new hardware features compared to Pascal? And is that 2.5 GHz on the purported 1170 the boost clock? As for Nvidia’s RTX (ray tracing), that may just be software; DX12 has DXR, and Vulkan is getting some DXR-like extensions as well.
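For a rough feel of that first question, here is a naive back-of-the-envelope sketch that assumes performance scales linearly with core clock, which is a generous assumption since memory bandwidth and power limits always eat into it:

```ts
// Clock-only scaling from a stock GTX 1070 (1683 MHz boost, per Nvidia's
// published spec) to the rumored ~2.5 GHz. Real gains would be smaller.
const gtx1070BoostMHz = 1683;
const rumoredMHz = 2500;
const naiveSpeedup = rumoredMHz / gtx1070BoostMHz;
console.log(`Clock-only speedup: ${naiveSpeedup.toFixed(2)}x`); // ~1.49x
```

A ~1.5x uplift over a 1070 is roughly the FireStrike gap to a 1080 Ti, so under that generous assumption, clocks alone could account for most of the claimed score.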
Ray tracing needs FP math, so AMD’s shader-heavy GPU designs cannot be overlooked, unless Nvidia has added some actual in-hardware ray-tracing-specific units to its upcoming 1100-series GPU microarchitecture. Ray tracing has normally been a CPU workload, what with all the programming logic needed to simulate rays and ray interactions, but OpenCL/CUDA ray tracing acceleration on the GPU has been around for a good while, used for non-gaming professional graphics/animation rendering where FPS is not necessary and the ray tracing sample rates are turned all the way up.
Nvidia appears to be mixing some ray tracing with standard rasterization methods, because full-on real-time ray tracing for games at high FPS is still not achievable. AMD’s Radeon GPUs, with their excess shader cores, have no problem accelerating ray tracing on the GPU for professional graphics workloads. So it’s just a matter of support for DXR, and whatever ray tracing extensions Vulkan gets, before AMD’s GPUs can also do limited ray tracing mixed with rasterization for games.
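To make the FP-math point concrete, here is a toy version of the cheapest test a ray tracer runs, a ray against a sphere. It is nothing but floating-point multiplies, adds, and a square root, executed millions of times per frame; a sketch only, not how any particular engine implements it:

```ts
type Vec3 = { x: number; y: number; z: number };
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// Distance along the ray to the nearest hit, or null on a miss.
// Assumes `dir` is normalized.
function raySphere(origin: Vec3, dir: Vec3, center: Vec3, radius: number): number | null {
  const oc = sub(origin, center);
  const b = dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - c;        // discriminant of the quadratic
  if (disc < 0) return null;     // the ray misses the sphere entirely
  const t = -b - Math.sqrt(disc);
  return t >= 0 ? t : null;      // hit must be in front of the ray origin
}
```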
To date, the only GPU/graphics IP with actual dedicated ray tracing hardware is Imagination Technologies’ PowerVR GPUs.
It sure looks like any software/graphics-API-based ray tracing (DXR, Vulkan’s ray tracing extensions) may benefit from more VRAM as well. Really, all this ray tracing that Nvidia is talking about means Nvidia is going to have to increase either its clocks or its shader core counts on its 1100-series gaming GPUs. AMD is ready for ray tracing with its lead in shader core counts on its current Vega SKUs.
Nvidia’s current GP102-based GTX 1080 Ti is not built on a gaming-only base die tapeout, as GP102 is used for the Quadro line also, while the GP104-based GTX 1080/1070 actually is built on the more gaming-focused GP104 tapeout. GP102 has the lead in pixel fill rates for sure, with the GTX 1080 Ti making use of 88 of GP102’s 96 available ROPs.
Nvidia’s Titan V, based on GV100, offers 96 ROPs, so it looks like Nvidia has not increased the available ROP count over what Pascal/GP100 offers. The Titan V sure has the shader core count up there at 5120, so 16GB of VRAM may well be needed to service that many shader cores.
It looks like Nvidia is trying to do more with higher clocks than with additional on-die compute resources on its upcoming GPUs, and the custom TSMC process that Nvidia is using may be exactly why the clocks can go above 2 GHz.
I would say the photo of a screen rather than a screenshot is done for the very reason you demonstrate. It is harder to edit an image without detection than to just rewrite the original.
I’m not sure I understand your argument. You can edit the webpage and either take a photo or a screenshot.
I would guess you take a photo when you don’t have access to that computer, maybe?
Yeah you’re right, I didn’t think it through. Thanks for pointing it out 🙂
The 2.5 GHz isn’t outside the realm of possibility. Pascal got over 2 GHz, and close to 2.2 GHz with exotic cooling. 2.5 GHz for a die shrink plus exotic cooling is viable.
16GB of memory on midrange is outside the realm of believability. We aren’t even close to maxing out the 8GB 1080, let alone needing 16GB. That makes no sense, especially at midrange. This could be a glitch in the benchmark due to prerelease hardware.
The x70 range beating the previous-gen x80 Ti range is unlikely, but certainly possible in certain circumstances.
The overall picture is suspect and I await the actual announcement.
Pascal got close to 2.2 GHz on air and/or watercooling; that’s not what I would call exotic cooling! Under LN2 (which is exotic cooling) Pascal went as high as 3 GHz. So honestly, on 12nm, 2.5 GHz might be possible on golden samples, even on air.
On the memory side I would agree that 16GB seems a hell of a lot for a “mid range” card, but then again maybe it has something to do with ray tracing, so who knows.
Play Rise of the Tomb Raider at 4K with the highest-res textures on two 1080s in SLI. You will constantly get hitching and stuttering due to running out of VRAM. I dropped down to “high” textures instead for this reason. If Nvidia and AMD are serious about high-res gaming next gen, they have to be serious about VRAM too.
I don't think the VRAM is to blame here. Remember, the VRAM usage scales with texture size FAR more than it scales with resolution (which only grows the framebuffer).
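Some rough, illustrative numbers behind that (sizes assumed for the example, not measured from any game):

```ts
// A 4K RGBA8 framebuffer vs. one uncompressed 4096x4096 RGBA texture
// with a full mip chain (~1.33x the base size).
const bytesPerPixel = 4;
const framebufferMB = (3840 * 2160 * bytesPerPixel) / 2 ** 20;    // ~32 MB
const textureMB = (4096 * 4096 * bytesPerPixel * 1.33) / 2 ** 20; // ~85 MB
console.log(framebufferMB.toFixed(0), textureMB.toFixed(0));
```

A few dozen such textures dwarf the framebuffer, even counting several render targets in flight.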
Aw feck I forgot it was still HTML rather than BBcode over here.
fixed for you
I wonder what Josh thinks the price will be.
The allegedly leaked benchmark was, according to the picture, from an EVGA card… not an ASUS one. :p
Nvidia is using TSMC’s 12nm node, but maybe these VRAM numbers are not registering properly with the benchmarking software. Vega has that virtual VRAM addressing ability (HBCC/HBC), so maybe Nvidia will be using something similar instead of actually including that much physical VRAM.
Scott, what do you make of MS’s DXR ray tracing API, and whatever Vulkan will have to match DXR’s capabilities?
According to Anandtech:
“In conjunction with Microsoft’s new DirectX Raytracing (DXR) API announcement, today NVIDIA is unveiling their RTX technology, providing ray tracing acceleration for Volta and later GPUs. Intended to enable real-time ray tracing for games and other applications, RTX is essentially NVIDIA’s DXR backend implementation. For this NVIDIA is utilizing a mix of software and hardware – including new microarchitectural features – though the company is not disclosing further details. Alongside RTX, NVIDIA is also announcing their new GameWorks ray tracing tools, currently in early access to select development partners.
“With NVIDIA working with Microsoft, RTX is fully supported by DXR, meaning that all RTX functionality is exposed through the API. And while only Volta and newer architectures have the specific hardware features required for hardware acceleration of DXR/RTX, DXR’s compatibility mode means that a DirectCompute path will be available for non-Volta hardware. Beyond Microsoft, a number of developers and game engines are supporting RTX, with DXR and RTX tech demos at GDC 2018.” (1)
So there appears to be some hardware in Volta and later to support ray tracing acceleration on Nvidia’s newest GPUs, but there is still a software path for ray tracing acceleration on older GPUs. I’m thinking that AMD’s Vega nCUs, and the GCN async compute that has been around for a while on more than just Vega GPUs, are probably already letting ray tracing workloads run more on the hardware of AMD’s GPUs. So it’s just a matter of what level of DXR support AMD says its GPUs are compatible with.
Testing done by Techgage(2) shows Vega 64 (8GB) outperforming the Titan Xp on some non-gaming, graphics-intensive workloads, namely the Blender 2.79b BMW render, and also beating the GTX 1080 Ti on all but one of the other Blender test renders. And these are all heavily ray-traced renders, with higher ray tracing settings than would be used for gaming.
(1)
“NVIDIA Announces RTX Technology: Real Time Ray Tracing Acceleration for Volta GPUs and Later”
https://www.anandtech.com/show/12546/nvidia-unveils-rtx-technology-real-time-ray-tracing-acceleration-for-volta-gpus-and-later
(2)
“Workstation GPU Performance Testing: Redshift, Blender & MAGIX Vegas”
https://techgage.com/article/performance-testing-redshift-blender-magix-vegas/
I’d take anything from WCCFTech with a huge grain of salt.
I think PCPer is, hence the article title starting with RUMOR:
Nothing here is presented as fact.
WCCFTech was 24 hours late on this rumor, FYI; they are not the source. I’ve seen it on my Google News feed and YouTube channels well before.
I’m gonna say 100% fake, more so because of the screenshot; it just does not look right.
Besides that, a 1170 being faster than a 1080 Ti? I am going to say probably not, because that would be a huge jump for the lowest-end high-end card from the past 1070, and it would not leave much room to go up from there. I know Nvidia has had 2 years to work on this, but even still it seems too far-fetched.
My best guess would be that a 1170 will be above a 1080 and lower than a 1080 Ti by 3%-4%. A 1180 will be faster than a 1080 Ti, but not by much, maybe 3%-5%. The big gun will be the 1180 Ti, which will probably be 35%-40% faster than a stock, non-partner-board 1180. The only thing that seems about right here is the 2.5 GHz core speed, mainly because Nvidia’s CEO said at the Pascal launch that the chips could reach up to 2.5 GHz, but even then, that was for regular Pascal chips (1070/1080 cards).
The 16GB of memory also seems a bit far-fetched for a regular card; we do not need that now or any time soon, and it would drive up the cost of mainstream cards. My best guess would be 8GB or 12GB GDDR6 for the mainstream cards and 16GB for the 1180 Ti high-end cards, but hey, Nvidia could be stupid and put all of its eggs in one basket out of the gate, who knows.
True, but let’s compare the 1070 vs. the 980 Ti, and the 970 vs. the 780 Ti. The x070 of each generation has traded performance with the previous-gen x080 Ti cards. Only time will tell.
You actually make a valid point about past x070 cards, so this may be close to right.
Nvidia has no reason to release a 16GB card until the next-gen consoles are announced.
I have yet to see VRAM stay stagnant from generation to generation. Every generation thus far has had more VRAM than the last. Why should this gen be any different?
16 gigs does seem overkill, just like it would be overkill on a 1080 Ti. 12 gigs of VRAM makes more sense.
I agree: increase it, but save room to charge people more once developers have access to more memory and more incentive to use it, then make users buy the cards again. Plus, with memory prices where they are, they have an excuse to screw people over so they have to buy twice. $$$
As the saying goes… only buy computer hardware when you absolutely need it, or ELSE.
They could have 8GB and 16GB versions. I suspect they want the upcoming series to be a big jump to get people to upgrade, even though there might be a lot of last-generation cards floating around from cryptocurrency mining. I don’t really care too much at this point. I am more waiting for a laptop with an HBM AMD GPU and preferably an 8-core Ryzen processor.
The more current rumor is the launch schedule for the 1160, 1170, 1180, and mysterious 1180+: August through October 30th.
The same rumour also went in on the “OEMs returning GPUs” silliness (cards still sell for well above RRP, let alone the normal sale price; returns would just be leaving money on the table), so I wouldn’t rate it as anything of substance.
Well, now that GDDR5/5X memory prices are not as costly, owing to the bottom dropping out of GPU demand, Nvidia has no excuses for keeping Pascal prices as high. So for all those GPU dies that the AIBs sent back, Nvidia could source its own GDDR5/5X memory and start selling a new Pascal-based FSE (Fire Sale Edition) with non-blower-style cooling, in order to clear all those unsold stocks of Pascal-based GPU dies.
I’m sure some may be looking to build dual-GPU systems if there were some more affordable 1070/1080 options.
“Nvidia has no excuses for keeping Pascal prices as high.”
Nvidia don’t set the price, retailers do (and to some extent OEMs). Nvidia aren’t buying the GDDR dies, after all.
Then what about the Founders Edition? And there has to be a whole lot of unsold GDDR5/5X that was to be paired with each of those Pascal dies that the AIB board makers have returned to Nvidia.
The retailers now have excess stocks of Nvidia Pascal SKUs, as folks are not buying as much as before. Nvidia can lower its MSRP, and then customers will expect the retailers to lower their Pascal GPU pricing, or they will not be selling as much.
Nvidia sets its wholesale prices, and retailers price on demand in order not to be left holding excess stocks of Pascal SKUs. And now, more than before, there is the awaited release of Pascal’s gaming successor, and retailers will be stuck with loads of Pascal SKUs that they cannot move at even wholesale pricing. That supply-and-demand pricing cuts both ways, and retailers can find themselves bleeding red.
Nvidia has more control over its AIB partners than AMD does, owing to the volume of gaming GPU sales that keeps Nvidia in control of that market. If Nvidia lowers its MSRP and wholesale pricing, the AIBs had better respond, or they will find it very difficult to get Pascal’s successor. Nvidia has ways to make the AIBs dance to whatever tune Nvidia desires!
I’ll bet the AIBs have some excess stocks of GDDR5/5X, and that the GPU VRAM makers will be willing to cut some deals or be stuck with writedowns come accounting time. Pascal represents such a large chunk of GPU sales that lots of suppliers are affected by such a drastic drop in demand, given the large volume of remaining inventory of GPU dies, GDDR5/5X dies, and other PCIe gaming card parts.
There is a large supply chain of parts for each of those unsold Pascal dies that will also go unsold; all sorts of components are affected!
Make sure y’all have a backup plan to pay your rent or mortgage on time. I’m seeing people “blow” $500+ on a GPU for some simple eye candy.
So people deciding to “blow” 2-3 days’ wages every 10 months on their hobby is a bad thing?