A delivery of GPUs and related test equipment from Taiwan to Bangalore has led to speculation about NVIDIA's upcoming GP104 Pascal GPU.
Image via Zauba.com
How much information can be gleaned from an import shipping manifest (linked here)? The data indicates a chip with a 37.5 x 37.5 mm package and 2152 pins, which is being attributed to the GP104 based on knowledge of “earlier, similar deliveries” (or possible inside information). This has prompted members of the 3dcenter.org forums (German language) to speculate on the use of GDDR5 or GDDR5X memory based on the likelihood of HBM being implemented on a die of this size.
Of course, NVIDIA has stated that Pascal will implement 3D memory, and the upcoming GP100 will reportedly be on a 55 x 55 mm package using HBM2. Could this be a new, lower-cost part using the existing GDDR5 standard or the faster GDDR5X instead? VideoCardz and WCCFtech have posted stories based on the 3DCenter report, and to quote directly from the VideoCardz post on the subject:
"3DCenter has a theory that GP104 could actually not use HBM, but GDDR5(X) instead. This would rather be a very strange decision, but could NVIDIA possibly make smaller GPU (than GM204) and still accommodate 4 HBM modules? This theory is not taken from the thin air. The GP100 aka the Big Pascal, would supposedly come in 55x55mm BGA package. That’s 10mm more than GM200, which were probably required for additional HBM modules. Of course those numbers are for the whole package (with interposer), not just the GPU."
All of this is a lot to take from a shipping record that might not even be related to an NVIDIA product, but the report has made the rounds at this point so now we’ll just have to wait for new information.
I doubt we will see any HBM2 GPU anytime soon, Q4 at best. This year is starting to look disappointing; the only news coming out is low- to mid-range.
There’s nothing disappointing about HBM1 – it provides more bandwidth than any GPU can use.
It also adds a large cost to the GPU while giving almost no performance gains, and it locks the card to 4GB of memory.
Same with HBM2, but Nvidia can deploy GameWorks in games to eat up all that memory while decreasing performance.
Problem solved!!!
I can actually see them doing that. Hell, I could see any company in a monopoly or near-monopoly position doing that.
It’s why I’m buying AMD right now, to try to get them to 50% marketshare. Not to say that AMD wouldn’t do that if they had 80% or more marketshare, just that they don’t have that marketshare right now, so they can’t and won’t do that.
Anyhow, it's why I vote with my dollars for the company that's not almost a monopoly whenever I can. In PC gaming, that's AMD for GPUs and CPUs.
Well that’s dumb. I buy Coca-Cola because I like it more than Pepsi, without regard to which is more profitable or leads in sales. This dedication to corporate structures and products beyond “what can it do for me” is just dumb.
AMD is in the situation it is in due entirely to its own business decisions, not because you bought an AMD card or NVIDIA card. If you bought an AMD card for any reason other than “it was the best GPU for the money for the games I play”, then you’re an idiot.
Yeah, AMD hasn’t been affected by anticompetitive practices by their opponents…
What bullshit. What makes his preference less meaningful than yours? Buying a product as an endorsement of that company’s business practices is just as valid a reason to act on as buying a product for its performance or aesthetics.
What’s idiotic is for you to assume everyone should have the same criteria as you do.
I try to convince people to buy Radeon GPUs for a multitude of reasons (with the multitude of reasons included) but if they’re adamant that they want or need to stick with Nvidia for whatever reason then I direct them to EVGA. I do THAT because of EVGA’s business practices. Suddenly in a pro-Nvidia light it becomes an acceptable basis for recommendation huh?
You can’t apply universal principles to this kind of thing. This isn’t the realm of the philosophy of ethics – these are just plain old preferences for personal use. There is no need to seek to control that with insults.
Well look at you, trying to use logic and reason with a fanboy.
I applaud the effort, I only regret that it will be wasted.
Without getting nasty, we are just discussing points of view; be open.
Personally I prefer AMD products. Their GPUs are fine, and their CPUs were fine in the Athlon period and were OK when 48nm was new.
One of the main reasons I support AMD is that I like having an alternative when buying a product. If AMD dies, all you get is a monopoly on CPUs by Intel and on GPUs by Nvidia, and why would any user in his right mind want that? Competition is the key to advancement, affordability, and even accountability. I wouldn't suggest anyone buy an FX CPU now because it's crap, but I can strongly recommend a Radeon GPU, because it's a good product, some even better than the competition.
Please tell me what kind of car you drive and how much you approve of their shitty business practices.
AMD needs to fix up a few things – like reliable stable drivers and better game support.
Showing your age a bit?
Anyway, with that kind of thinking you’re likely to finance your own demise one day.
Watch them deliver some GDDR5 variant and try to PR spin it as 3D Enhanced Nforce memory or something.
Yeah, so when is PCPerspective going to do a follow-up on the 120Hz issue they said they were going to solve but haven't, or on the multi-monitor high-refresh issue that has been open for over a year now?
How about this latest one.
How Nvidia breaks Chrome Incognito
https://charliehorse55.wordpress.com/2016/01/09/how-nvidia-breaks-chrome-incognito/
Nvidia GPU driver bug could expose your Chrome Incognito porn browsing
http://betanews.com/2016/01/10/nvidia-gpu-driver-bug-could-expose-your-chrome-incognito-porn-browsing/
Seems Nvidia always keeps issuing driver fixes only to cause worse problems.
And AMD keeps not releasing good drivers.
“Yeah so when is PCPerspective going to do a follow up on the 120hz issue they said they were going to solve…”
NVIDIA solved that with the 361.43 driver. Go back to Herping and Derping.
That was for 144Hz, doh!!! And only on Maxwell mid-range cards.
At least read the driver release notes.
Since that fix I have not had that issue on my 144Hz G-Sync monitor and GTX 980 Ti.
It was solved pretty quickly, IMO, and that is why they get my money.
Have fun waiting almost a year or longer on anything AMD GPU related 🙂
Funny how Nvidia hasn't fixed it, and acknowledges it hasn't fixed it, but the troll warriors swear they did.
Baffling!!!
It’s fixed so stop your trolling…
Not fixed. Check GeForce Forums.
No. You’re wrong. Batismul says so and he knows everything.
Lol, the majority of people there don't even know how to uninstall and install drivers, let alone report anything back properly. Nice try though, lmfao.
This comment tells you everything you’ll ever need to know about Nvidia fanboys.
AMD devs don’t work with game developers to get the best performance out of their cards, while nVidia works with any devs that request it.
Besides AMD’s terrible driver support.
What's the point of a card's specs when the software needed to make it work with anything isn't worth a turd?
One other thing to add to this story: it's now being reported that the so-called Pascal Drive PX 2 was actually running Maxwell GTX 980 MXM modules instead.
http://wccftech.com/nvidia-pascal-trouble/
I heard about that. I dunno. I didn’t see the presentation so I don’t know what was said. But if it’s true, there’s still two ways to look at it. If that part was a working prototype, using 980 MXM chips for development while Pascal is still getting on its wheels, that’s totally fine. I expect that. If he’s waving it around saying, “This is it, the final working product, the exact thing we’ll be shipping soon,” well that’s completely different. That’s the Fermi unveil all over again.
The 14 nm process node doesn’t seem to be going that smoothly, even for Intel. Skylake is actually a rather small part on 14 nm; a CPU core is only 10 to 12 square mm or so. High clocked Skylake processors seem to still be in short supply. I have to wonder if we will get small die GPUs for low to midrange parts, but have to wait quite a while for the large die parts.
If this is true, and it does seem to be, Nvidia could be facing some *SERIOUS* fines for LYING to journalists and investors.
CEO outright lying… Wow… just… wow…
It’s not the first time they’ve done this. And the last few times, they got away with it. Everybody forgave them. Because Nvidia.
http://www.xbitlabs.com/news/graphics/display/20091002130844_Nvidia_Admits_Showing_Dummy_Fermi_Card_at_GTC_Claims_First_Graphics_Cards_on_Track_for_Q4_2009.html
I’m not sure, and I’m happy to admit that I could be wrong, but I don’t think this is an Nvidia GPU. I mean, unless I’m missing something, I don’t see anything in that Zauba chart that suggests any affiliation with Nvidia at all. (The VideoCardz page also showed alleged AMD Polaris listings in the Zauba database, but they at least pointed out how the part numbers and descriptions followed previous AMD product part numbers. They didn’t do that with the purported Nvidia listings.)
But even then, these listings don't sound right. There's no actual chip or board listed. There's undoubtedly a chip involved; the whole BGA 37.5×37.5 gives that away. But everything listed here is something FOR that chip. A socket base FOR it, a retainer FOR it, and a guide plate FOR it. Socket and support – and watercooling even – FOR the chip? Sure.
But we still don’t know what the chip is, or whose chip it is.
But assuming it’s Nvidia’s chip, take a look at the very first item. “650 WATTS TEC” for this chip. WTF GPU is Nvidia building that needs a 650W Peltier cooler?
I don’t think it’s what we think it is.
(edit to add) It does occur to me after I hit Submit, that if this is in fact an Nvidia chip, it could very well be a hefty TEC assembly for, say, the Drive PX 2? I think the Signal probe and the Power and Ground probe likely could point to an automotive application.
https://pcper.com/news/General-Tech/CES-2016-NVIDIA-Launches-DRIVE-PX-2-Dual-Pascal-GPUs-Driving-Deep-Neural-Network
What a bunch of mad cunts you all are. sheesshhhh…
We love you, too, princess.
Regardless of what this mystery chip is, has anyone seen pin count numbers for AMD Fury GPUs? Without an external memory interface, it should just be the PCIe link, the video outputs, and power/ground. It would be interesting to know the actual pin counts.
The first 14 nm GPUs are almost certainly going to be the smaller die parts. Making a large die part will not happen until they get yields up to acceptable levels. Also, the first small die part will almost certainly not have HBM. The small die part will not be able to make use of the bandwidth offered by HBM so there is little point in using it for a desktop part. In mobile, where size and power constraints are more important than price, it may make more sense. They could use a smaller number of stacks for a lower performance GPU. Even HBM1 offers 128 GB/s per stack. One or two stacks would be sufficient for low end and mid range mobile parts.
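A minimal back-of-the-envelope sketch of that per-stack figure (assuming HBM1's 1024-bit interface per stack at 1 Gb/s per pin, which is where 128 GB/s comes from; the constant and helper names are just illustrative):

```python
# Rough HBM1 bandwidth arithmetic: 1024-bit interface per stack at 1 Gb/s per pin.
BITS_PER_STACK = 1024   # HBM1 interface width per stack (bits)
GBPS_PER_PIN = 1.0      # HBM1 per-pin data rate (Gb/s)

def hbm1_bandwidth_gbs(stacks: int) -> float:
    """Aggregate peak bandwidth in GB/s for a given number of HBM1 stacks."""
    return stacks * BITS_PER_STACK * GBPS_PER_PIN / 8

for n in (1, 2, 4):
    print(f"{n} stack(s): {hbm1_bandwidth_gbs(n):.0f} GB/s")
# -> 128 GB/s, 256 GB/s, 512 GB/s (four stacks is the Fury X configuration)
```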
As for the comparison with Xeon parts (from one of the podcast discussions), Intel can make those at much lower yields and still make a profit. They can get thousands of dollars for a high-end Xeon, while a high-end GPU is much cheaper, less than $1000 for the whole card, so profit margins are lower for GPUs. Nvidia can get really high, Xeon-like margins on the Tesla compute cards, so I could see them actually making those on 14 nm before the consumer large-die parts.
Intel can also get much more money for salvage parts. When they make an 18-core part, they can sell it with defective cores disabled in a range of configurations: a varying number of active cores, varying clock speeds, and varying multi-processor support. Going from 16 cores to 14 cores only changes the price by a small amount. Newegg lists 79 Haswell parts under server processors, and that isn't even all of the available products. Nvidia and AMD can't have 80 different GPUs between $500 and $1000; most GPUs are only sold in a small number of variants, maybe 2 or 3, which means any salvage parts are priced much lower than the full part. This reduces the profit margin significantly. Even if they could sell salvage parts with greater granularity, they would still need to design the GPU with that level of granularity in mind, which would have its own cost.
Not a mystery… Semiaccurate explained VERY well what this chip is:
http://semiaccurate.com/2016/01/11/nvidia-pascal-over-a-year-ahead-of-1416nm-competition/
Like him or hate him, Charlie is BANG ON with this one.
His “bang on” assessment is based on the assumption that the GPUs are 16nm Pascal – yet he offers zero proof of that assumption, other than that Huang said so and it would be illegal for him to lie about it. But there’s way more evidence out there to suggest that they’re 980 MXM modules.
If it turns out Huang was holding up a development model with 980 MXM modules instead of Pascal, do you think Charlie will issue a clarification/retraction?
Charlie’s entire argument, which you claim is “bang on”, hinges on Huang not lying about what he was holding. But he’s done that before, hasn’t he? (Fermi.)
Just because you agree with what he says doesn’t mean he’s correct.
Either it is a Pascal GPU or it isn't. Either the CEO of a multinational corporation blatantly and purposefully lied, or he didn't. Very simple; time will tell.
Normally I wouldn’t accuse the CEO of a multinational corporation of blatantly and purposefully lying, but this one has a history.
You wouldn't? Normally I think that's what CEOs do best.
AFAIK, using GDDR5X as a mid-term solution is not a bad idea.
I.e. 1750MHz GDDR5X on a 256-bit bus gives 14 Gbps × 256 / 8 = 448 GB/s of bandwidth; that's shy of HBM1 (512 GB/s) but plenty for a midrange GPU. And high bandwidth is kind of useless if the card isn't powerful enough to use it.
Of course there are benefits to using HBM/HBM2, like lower power usage and a smaller package. But with GDDR5X you get easier implementation, probably cheaper chips, and probably better chip supply.
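The same bus-width times per-pin-rate arithmetic, as a small sketch comparing the setups mentioned above (the 14 Gb/s GDDR5X rate is the poster's figure; 7 Gb/s GDDR5 and four-stack HBM1 are included for reference, and the function name is just illustrative):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gb/s) / 8.
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak DRAM bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits * gbps_per_pin / 8

print(peak_bandwidth_gbs(256, 7))    # 256-bit GDDR5  @ 7 Gb/s  -> 224.0 GB/s
print(peak_bandwidth_gbs(256, 14))   # 256-bit GDDR5X @ 14 Gb/s -> 448.0 GB/s (the figure above)
print(peak_bandwidth_gbs(4096, 1))   # 4 x 1024-bit HBM1 stacks @ 1 Gb/s -> 512.0 GB/s (Fury X)
```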
I will be buying AMD. I don't like how Nvidia and Intel treat their customers.
I just think AMD fans need to start buying rather than talking trash on forums.
I will be buying AMD from now on too. Not because of being a fan, but because of the terrible experiences I’ve had with my 970s over the past year. I’ve had more driver crashes in the past year (even after doing a clean install of windows) than I did for all the years I had GTX 280s and 660tis. My laptop with an AMD apu and discrete AMD gpu hasn’t had any in two years. I’m also pissed that (in my opinion) SLI support with new titles now is way worse than it was from 2008-2014.
I just think Intel fans and Nvidia fans need to go play their games rather than talking trash on forums and comment threads. Even in the most civilized of tech sites, the comments are awash with Nvidia fans trying to troll AMD fans, and AMD fans fighting back.
And that's what most of them are actually doing. Only recently do you see more nvidiots in action. In the past, for every nvidiot you would probably see 2 or 3 more amdiots in a thread; the rest were pretty much neutral. And I have seen a few amdiots making a propaganda effort, posting their preaching across several forums.
You must not see the same forums and tech site comment threads that I see, because I see an overwhelming majority of Nvidia bandwagon fanboys. Everywhere. Every article about Nvidia stuff is flooded with Nvidia fanboys bashing AMD. Every article about AMD stuff is flooded with Nvidia fanboys bashing AMD. Every article about Intel stuff is flooded with Intel and Nvidia fanboys bashing AMD.
I don’t see why so many people think it’s their business what other people buy for their computers, but I’ve never seen AMD fans tell Nvidia fans that they don’t deserve to breathe and that they should go commit suicide rather than admit to owning something made by Nvidia. I’ve seen that from lots of Nvidia fans though.
You know when this will finally end?
When there is just one of these companies left standing. Then the fanboys will look for something else to fanboy over…
Lol. AMD fans are 'better' than Nvidia fans? There is no such thing; both are just as bad. Go to WCCFTech and see it for yourself. Plus, just because some people point out negative things about AMD, they're suddenly Intel/Nvidia fanboys.
I stopped going to WCCF because of how bad the fanboying gets there. But even there, the Nvidia fanboys are way worse. Like I said, I’ve never seen an AMD fanboy tell someone that they deserve to be dead and should go commit suicide because it’s better than admitting to spending money on Nvidia products. Nvidia fanboys, especially at WCCF, do that daily.
As if they are any different from those AMDiots. One time an nvidiot used Lisa Su as his avatar; those AMDiots got angry about it and even asked the WCCFTech mods to ban that user from posting, while for years AMDiots have been using JHH as their avatar with names like nvidiot and evilhuang. When they did the same, no one complained.
A, I remember that, they were asking for a ban because of stuff that person was posting, not because he used Lisa Su for his avatar. Nobody cared about that avatar.
B, funny that you don’t seem to mind when Nvidia fanboys beg for certain AMD fanboys to be banned.
C, doesn’t really matter either way, Hassan has said in the comments multiple times that he’s the only one who moderates the comments and he chooses not to moderate the comments at all. He said he likes the bickering, it amuses him. Some time later he deleted a BUNCH of comments and threw down some bans that I thought were unfair (on both sides of the divide) and I called him out on it, and THEN his story changed to something about avoiding censorship and preserving freedom of speech. There’s an argument to be made that comments = clicks and having an article about an AMD graphics card get 6000+ comments (5800 of them being people bashing each other, 175 of them being people trying to get people to stop bashing each other, and 25 of them being about the story) is a LOT of ad revenue clicks, but of course Hassan would deny that has anything to do with his refusal to moderate.
D, nothing at all that you just posted addresses the fact that I’ve never seen an AMD fanboy tell an Nvidia fanboy to go commit suicide because they don’t deserve to live, based on their brand preference, and I’ve seen Nvidia fanboys do it LOTS AND LOTS OF TIMES. And not just on WCCF. The fact that you had nothing to say about that suggests that YOU’VE never seen an AMD fanboy do that either.
“funny that you don’t seem to mind when Nvidia fanboys beg for certain AMD fanboys to be banned.”
I'm specifically talking about the avatar use. Some AMDiots got upset about an nvidiot using Lisa Su as an avatar and how it tarnishes Lisa Su's reputation, bla bla bla, but they were fine with AMDiots using JHH as an avatar for years. They never complained about those AMDiots' use of a JHH avatar tarnishing JHH's reputation.
“The fact that you had nothing to say about that suggests that YOU’VE never seen an AMD fanboy do that either.”
Why insist on this specific issue? The fact still stands that both AMDiots and nvidiots are stupid. Compared to what I have seen in the past, this is nothing.
Oh, I don't use Windows, it's a monopoly.
I couldn't care less if AMD goes under; it's their own fault.
Lastly, Charlie is being sarcastic because his sweet loving with Nvidia went bad.
nothing to see here
Could Pascal plus HBM2 be a 2017 thing, and is the GDDR5 on Pascal that 'GDDR5 plus' that was confused with GDDR6 a few months ago?