We are expecting news of the next NVIDIA graphics card this spring, and as usual whenever an announcement is imminent we have started seeing some rumors about the next GeForce card.
(Image credit: NVIDIA)
Pascal is the name we've all been hearing about, and along with this next-gen core we've been expecting HBM2 (second-gen High Bandwidth Memory). This makes today's rumor all the more interesting, as VideoCardz is reporting (via BenchLife) that a card called either the GTX 1080 or GTX 1800 will be announced, using the GP104 GPU core with 8GB of GDDR5X – and not HBM2.
The report also claims that NVIDIA CEO Jen-Hsun Huang will have an announcement for Pascal in April, which leads us to believe a shipping product based on Pascal is finally in the works. Taking in all of the information from the BenchLife report, VideoCardz has created this list to summarize the rumors (taken directly from the source link):
- Pascal launch in April
- GTX 1080/1800 launch on May 27th
- GTX 1080/1800 has GP104 Pascal GPU
- GTX 1080/1800 has 8GB GDDR5X memory
- GTX 1080/1800 has one 8pin power connector
- GTX 1080/1800 has 1x DVI, 1x HDMI, 2x DisplayPort
- First Pascal board with HBM would be GP100 (Big Pascal)
Rumored GTX 1080 Specs (Credit: VideoCardz)
The alleged single 8-pin power connector with this GTX 1080 would place the power limit at 225W, though it could very well require less power. The GTX 980 is only a 165W part, with the GTX 980 Ti rated at 250W.
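That 225W figure follows from the PCIe power-delivery limits: 75W from the x16 slot plus 150W from a single 8-pin connector. A minimal sketch of the arithmetic, assuming the standard PCIe connector limits and the reference connector layouts of the cards mentioned above:

```python
# PCIe board-power ceilings: the x16 slot supplies up to 75 W, a 6-pin aux
# connector up to 75 W, and an 8-pin aux connector up to 150 W.
SLOT_W = 75
AUX_W = {"6pin": 75, "8pin": 150}

def power_ceiling(aux_connectors):
    """Maximum board power from the slot plus the listed aux connectors."""
    return SLOT_W + sum(AUX_W[c] for c in aux_connectors)

print(power_ceiling(["8pin"]))          # 225 W -> the rumored single 8-pin GTX 1080
print(power_ceiling(["6pin", "8pin"]))  # 300 W -> GTX 980 Ti reference layout (rated 250 W)
print(power_ceiling(["6pin", "6pin"]))  # 225 W -> GTX 980 reference layout (rated 165 W)
```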
As always, only time will tell how accurate these rumors are. VideoCardz points out that "BenchLife stories are usually correct," though they are skeptical of this report because of the GTX 1080 name (even though that name would follow the current GeForce naming scheme).
GTX Ten-Eighty Snowboarding
I couldn't get it out of my head, so I made it real.
i.imgur.com/3KAT2jv.png
Haha nice one! =D
If correct, I think this is a bit earlier than expected?
Also, it will make an interesting contrast with AMD's (mostly rumoured?) strategy of a strong mid-range push at 14nm rather than going after the high end.
With 14 nm being such a big jump, mid-range-sized parts may compete with large-die 28 nm GPUs. The R9 390X is 438 square mm and the R9 Fury X is 596 square mm. I suspect good yields on 14 or 16 nm at even the 400 square mm die size range may be hard to reach. It will be interesting to see how big the large-die part ends up being.
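One way to frame that: if the new node roughly doubles transistor density versus 28 nm, a mid-sized FinFET die holds about as many transistors as a much larger 28 nm die. A back-of-the-envelope sketch, where the ~2x density factor is an illustrative assumption (only the 438 mm² and 596 mm² die areas come from the comment above):

```python
# Back-of-the-envelope: die area on a new node that holds the same transistor
# count as a 28 nm die, assuming a given density improvement.
def equivalent_area_mm2(area_28nm_mm2, density_gain):
    return area_28nm_mm2 / density_gain

ASSUMED_DENSITY_GAIN = 2.0  # illustrative ~2x for 14/16 nm FinFET vs 28 nm

for name, area in [("R9 390X", 438), ("R9 Fury X", 596)]:
    eq = equivalent_area_mm2(area, ASSUMED_DENSITY_GAIN)
    print(f"{name}: {area} mm^2 at 28 nm ~ {eq:.0f} mm^2 at ~2x density")
```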
Is it confirmed that Pascal is going to be using 16nm FinFET?
I thought it was a true 16nm manufacturing process?
I also think it’s a crock that anyone thinks Pascal is going to ship any sooner than mid-year without being a total disaster, unless the first GPUs are just 28nm rebrands.
Nvidia hasn't hinted at or teased it, and the only time they "showed" a Pascal GPU, it was actually just a dual GTX 980M.
If Nvidia isn't flaunting it by now, I get the feeling something has taken a wrong turn.
I’m probably wrong but I’m feeling wood screws 2.0
Sorry, but are you implying that 16nm FF is not a "true 16nm process," whatever that is supposed to mean? There are no <20nm processes that aren't FF processes, and if there were, they would be inherently less efficient than any equivalent FF process. Intel, the market leader in IC manufacturing, has been using a FF process since 22nm. Samsung's 14nm process is FF.
I've heard people refer to the 14nm FF process as not a 'true 14nm process' and claim that Intel was the only company to have a true 14nm manufacturing process, and that Samsung's 14nm FinFET manufacturing process was more similar to a 20nm manufacturing process.
The only time I've heard people use the term "FinFET" was specifically with the Samsung 14nm FF process; I wasn't aware that all sub-20nm processes are FinFET.
Intel did not invent FinFETs!
“The term FinFET was coined by University of California, Berkeley researchers (Profs. Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) to describe a nonplanar, double-gate transistor built on an SOI substrate,[8] based on the earlier DELTA (single-gate) transistor design.[9] The distinguishing characteristic of the FinFET is that the conducting channel is wrapped by a thin silicon “fin”, which forms the body of the device. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device. The Wrap-around gate structure provides a better electrical control over the channel and thus helps in reducing the leakage current and overcoming other short-channel effects.”(+)
Intel's process is a little more mature, and the gate pitch (distance between gates) differs between fab processes, but there is no "true" Intel FinFET relative to the other fabs! Intel just had more money at the time to license the technology and pay for the expensive third-party chip fab equipment to make, tweak, and use the FinFET process.
(+) https://en.wikipedia.org/wiki/Multigate_device
Ah, okay.
Thank you for the insight.
He wasn't really talking about the fins, but about the actual lack of a shrink between TSMC's 20nm process and their 16nm FF process.
TSMC claims that 20nm is 1.9x more dense than the 28nm process, and the 16nm process is 2.0x more dense than the 28nm process, so pretty much the same as 20nm.
You are correct, the 16nm FF process that TSMC is using is based on the 20nm process; it uses a new FEOL (front end of line) but the same BEOL (back end of line) as the 20nm process, so it has almost no shrink relative to 20nm. Intel's 14nm process was indeed a true shrink from their 22nm, with a new FEOL and BEOL, and everything did scale down. I am not 100% sure about GF/Samsung's 14nm process; it is a bit smaller than TSMC's 16nm process, but I do not think it achieves the same transistor density as Intel's 14nm process.
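Plugging in the TSMC multipliers quoted a couple of comments up (20nm at ~1.9x the density of 28nm, 16FF at ~2.0x), the implied gain from 20nm to 16FF is tiny, which is exactly the "almost no shrink" point. A quick sanity check:

```python
# Relative density implied by TSMC's published multipliers versus 28 nm
# (figures as quoted in the comments above).
density_vs_28nm = {"20nm": 1.9, "16FF": 2.0}

gain_16ff_over_20nm = density_vs_28nm["16FF"] / density_vs_28nm["20nm"]
print(f"16FF vs 20nm: {gain_16ff_over_20nm:.2f}x density")  # ~1.05x, barely a shrink
```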
The rumours I've seen say the launch will be at Computex (31st May), so I don't expect availability to be reasonable until mid-June.
A “reveal” could take place at the GTC (GPU Tech conf) in April.
Makes sense: launch GP104 in May, then follow up with Big Daddy Pascal in the fall. Think the 780 launch followed by the 780 Ti/Titan Black in the fall of 2013. It depends on pricing; I could see the 1080, or whatever it is called, hitting 980 Ti performance and beyond for a cheaper price. Think 980 vs 780 Ti: they are close, but the 980 uses less power and overclocks better. It all depends on price. If they offer 980 Ti performance for something similar to the 970's price, then we have a winner for everyone.
1080 seems like a slightly outdated name… This is the era of 4K 😀
GeForce 4K UHD would be more apt. 🙂
You forgot "VR".
The GeForce VR UHD 4K 2K7, Featuring Derek Jeter (c) SEGA Sports
If it doesn't bundle Lee Carvallo's Putting Challenge then I'm not interested.
Micron confirmed that GDDR5X isn't going into mass production until the summer, so I highly doubt we'll see any cards using the memory until at least Q3, more likely Q4 – if at all this year. If GP104 is going to launch in May, it will use standard GDDR5. The BenchLife article that Videocardz used as their source actually clearly states that GP104 could use either GDDR5 OR GDDR5X, so I'm not sure why Videocardz neglected to mention the older memory in their article.
I doubt that it will be GDDR5X. The announcement of GDDR5X seemed to indicate that it was not available for this design cycle. Perhaps they are using it for slightly higher clock speeds, but it will be operating in some kind of GDDR5 compatibility mode. The significantly higher clock speed interface will not be possible without the memory controller being designed for it.
Actually, that article says it could be GDDR5 or GDDR5X, not that GDDR5X is guaranteed to be used.
I ran the BenchLife page through Google Translate:
https://benchlife.info/gp104-aka-nvidia-geforce-gtx-1080-will-ship-in-may-and-no-hbm2-031112016/
"… Although earlier news mentioned that Pascal would adopt HBM2 memory, the latest data show that GP104, i.e. the GeForce GTX 1080, will stick with GDDR5 or the faster GDDR5X memory, with a capacity of 8GB. Of course, we can expect the higher-end GP100 to appear, and that is the GPU expected to bring HBM2 memory. …"
Confirmed for April 1 release date
I think waiting for the next **80 series will be more worth it. But this 8GB memory is nice. Sad it’s not HBM. But the most impressive is the single 8-pin. That is amazing.
Nvidia has been releasing the small 104 chip first since they completely restructured their products with the 680. The 680 would normally have been the successor to the 560 Ti, not the 580.
They did the same thing with the 980 and 980ti.
I don't know why anyone would expect this to be different?
GP100 will probably be out this year as a Titan, Quadro P6000 and P20, 40 or whatever accelerator for supercomputers. I don't expect a GP100 GeForce until 2017.
I don't know if there will be that much crossover between desktop and HPC devices anymore. The recent consumer-level GPUs have all had their FP64 (64-bit floating point) capabilities cut out. It doesn't make sense to include that much FP64 hardware on consumer GPUs; it is a waste of power and die space. I suspect we will be seeing separate consumer and HPC parts going forward.
There will be the same level of crossover. GM200 had no FP64 in the Quadro and M40 because the chip had no FP64 at all. GM200 was never supposed to be 28nm, but when they had to make it 28nm they stripped all the FP64 out to make room for more FP32 cores. Even then it required new color compression to get 50% better performance in games compared to GK110. GF110 to GK110 was more like 2-3x the performance, making GM a big disappointment in reality.
The entire GM200 architecture was a compromise, none of the products using it have FP64, and Nvidia rebranded it as being for "machine learning". Nvidia still sells the GK110 and 210 based K40, K80 and K6000 as its REAL highest end chips.
Nvidia also released the GK210 based K80 AFTER GM architecture GPUs were out. Why? Because people who need FP64 were stuck with the option of buying an ancient GK110 accelerator or switching to an Intel Xeon Phi and changing their software.
In order to keep customers happy until GP100, they modified GK110 into GK210.
With the process shrink and new architecture, GP100 will work for ALL markets like GK110 did. It can have 3 TFLOPS DP and 6-8 TFLOPS SP with its 17 billion transistor budget and be used in 32GB accelerators, Quadros, Titans and GeForces just like GK110, without GM200's issue of lacking DP cores.
Expect a GP100 Titan this year and a Geforce next year.
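For context on the TFLOPS figures in the comment above: peak shader throughput is typically quoted as cores × clock × 2 (a fused multiply-add counts as two operations per cycle), with double precision at some fraction of that depending on how much FP64 hardware the chip carries. A rough sketch of the arithmetic; the core count, clock and DP ratio below are placeholder assumptions to show the formula, not leaked GP100 specs:

```python
# Peak-throughput arithmetic behind "X TFLOPS SP / Y TFLOPS DP" figures.
# TFLOPS ~= cores * clock_GHz * 2 / 1000 (one FMA = 2 floating-point ops).
def peak_tflops(cores, clock_ghz, ops_per_clock=2):
    return cores * clock_ghz * ops_per_clock / 1000.0

sp_cores = 4096    # hypothetical FP32 core count (assumption, not a spec)
clock_ghz = 1.0    # hypothetical clock speed in GHz (assumption)
dp_ratio = 1 / 2   # hypothetical 1:2 DP:SP rate (GK110-class chips used 1:3)

print(f"SP: {peak_tflops(sp_cores, clock_ghz):.1f} TFLOPS")
print(f"DP: {peak_tflops(sp_cores, clock_ghz) * dp_ratio:.1f} TFLOPS")
```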
I’ll be waiting on the GTX 1440 since I’m buying a QHD gaming monitor this year.
I was looking forward to HBM; we will see what it brings, maybe on the Ti editions.
My prediction: Nvidia will launch this to compete with Polaris 11 but will likely fall short, then go all-in with an overpriced Titan-line GPU, and then launch a Ti version a few months later. Just like the Kepler refresh and Maxwell 2.0.
Nvidia's async compute deficiency had better be improved by the time Pascal is released, as both Vulkan and DX12 games will be using async compute more and more! Nvidia has been mostly wood screws and Elmer's Glue lately. VR is just about here, and doing more compute on the GPU will reduce a game's overall latency for AMD's GCN-based GPUs and the new graphics APIs. Nvidia only has some early Linux-based driver advantages, which AMD had better not ignore, lest there be no AMD Steam OS/Linux offerings from OEMs!
AMD had better realise that many will want to move towards Linux-based gaming; AMD cannot afford to ignore the Linux market any longer, for both server and consumer Linux (Steam OS, Mint, other) based systems!
Those DX11 benchmarks are not going to hold much weight against the DX12 and Vulkan benchmarks as the gaming market begins to move over to the newest graphics APIs. Hopefully Vulkan will allow many to remain on Windows 7 and begin the process of migrating over to Linux for their gaming needs. I'm willing to hold out for more Linux-based gaming systems, and I really want to avoid Intel for my next system, so I'll wait and see about the Pascal/Polaris benchmarks once there are more Linux-based benchmarks available. The Windows UWP-based benchmarking software is not going to be trusted by me this early in the game, and I really want to migrate over to Linux gaming and move away from M$-based gaming lock-in!
Let's hope that the number of CUDA cores is zero and that instead they have a large number of general-purpose ARM processors running in parallel, each with a wide vector math processor. That would allow ray tracing to finally take off, not to mention that some cores could be used to run the game itself and possibly Linux or Android. The GPU would then be fully self-contained. You could connect it to your desktop or laptop (even a Mac) with a Thunderbolt 2 cable. I think this is too much to ask for Pascal, but at some point I hope it happens.
The only reason to have any CPU cores would be for hosting a dedicated gaming OS on the PCIe card itself, and it would definitely be possible for both AMD and Nvidia to offer a complete gaming platform on a PCIe card. AMD's current console APU SKUs already ship on their own boards, so it would not be hard for an OEM to put one of those AMD workstation APUs on an interposer and make a PCIe-based variant for the consumer market. AMD will be making a 16 Zen-core APU on an interposer with a very powerful Polaris/other GPU, so a third-party card maker could build such a SKU. Nvidia could do an interposer-based SoC with its Denver cores, pair that with its Pascal GPU products, and run some Linux-kernel-based OS on Nvidia's custom Denver IP.
With the direction M$ is taking with Windows 10, I'd much rather the entire gaming/GPU/APU/SoC market move towards a standardized, industry-supported Linux gaming build and forget about M$ and the new and shiny hoops it adds with every OS release. Linux and Vulkan will probably be the only truly open OS/API option, so why suffer M$'s shenanigans any more? It's that interposer technology both AMD and Nvidia are using that will probably make it easy to create APU/SoC-based systems on an interposer and remove the need for any expensive motherboard-bound CPU/APU/SoC for gaming. Nvidia may need to get a POWER8 license and use those core designs instead of its Denver cores, but Nvidia or anyone else can get a POWER8 license from OpenPOWER; the x86 licensees are limited to AMD, Intel, and a few others.
HBM2 vs GDDR5X
If we look at HBM vs GDDR5, we saw a performance gain at higher resolutions due to the extra bandwidth.
(While it was about a 10% gain going from 2560×1440 to 4K versus the GTX 980, which also had 4GB of VRAM for an apples-to-apples comparison, much of that was at unplayably low frame rates, so the real-world benefit was mostly not there.)
However, with GDDR5X having more bandwidth than GDDR5, it's difficult to say how much difference there would be (depending on the GPU, of course).
My poor reading skills suggest GDDR5X can reach almost 2x the bandwidth of GDDR5. Even with a faster GPU, wouldn't this likely close the gap with HBM for practical purposes on the near-top-tier GPUs?
HBM2 is likely going to be reserved for the very top couple of cards (similar to the current GTX 980 Ti and Titan series).
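For a rough sense of the gap being discussed: peak memory bandwidth is roughly bus width × per-pin data rate ÷ 8. The sketch below uses ballpark per-pin rates (GDDR5 around 7 Gbps, GDDR5X targeting roughly 10 Gbps and up, first-gen HBM around 1 Gbps on a 4096-bit bus, HBM2 around 2 Gbps); these are illustrative assumptions, not the rumored card's actual memory configuration:

```python
# Peak memory bandwidth (GB/s) ~= bus width (bits) * per-pin rate (Gbps) / 8.
def peak_bandwidth_gb_s(bus_width_bits, gbit_per_pin):
    return bus_width_bits * gbit_per_pin / 8

configs = {
    # (bus width in bits, per-pin data rate in Gbps) -- ballpark values only
    "GDDR5,  256-bit @ 7 Gbps":  (256, 7.0),
    "GDDR5X, 256-bit @ 10 Gbps": (256, 10.0),
    "HBM,   4096-bit @ 1 Gbps":  (4096, 1.0),
    "HBM2,  4096-bit @ 2 Gbps":  (4096, 2.0),
}
for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gb_s(width, rate):.0f} GB/s")
```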