Ryan and the rest of the crew here at PC Perspective are excited about AMD's new memory architecture and the fact that they will be first to market with it. However, as any intelligent reader knows, a second opinion on the topic is worth finding. Look no further than The Tech Report, who have also been briefed on AMD's new memory architecture. Read on to see what they learned from Joe Macri, along with their thoughts on HBM as the successor to GDDR5 and on HBM2, which is already in the works.
"HBM is the next generation of memory for high-bandwidth applications like graphics, and AMD has helped usher it to market. Read on to find out more about HBM and what we've learned about the memory subsystem in AMD's next high-end GPU, code-named Fiji."
Here are some more Graphics Card articles from around the web:
- AMD HBM High Bandwidth Memory Technology Unveiled @ [H]ard|OCP
- Diamond Wireless Video Stream HD 1080P HDMI @ eTeknix
- KFA2 GeForce GTX 980 ‘8Pack Edition’ 4096MB @ Kitguru
- Gigabyte GTX 960 OC 2 GB @ techPowerUp
- GeForce GTX TITAN X Video Card Review @ Hardware Secrets
I was wondering about the included picture. There was no way the memory stack would stick up above the GPU die. The article confirms that the memory dies are ridiculously thin, so the stacks end up the same height as the GPU die. It would be interesting to see a wafer of memory chips polished down to paper-thin and floppy.
I am a bit suspicious that any significant amount of heat can be transferred out of the GPU, through the interposer, and out through the memory dies. It would have to pass through the interposer, which is thin silicon. There could also be some underfill, but for micro-bumps that layer would be very thin.
If I'm understanding their implementation correctly, there will be a heat sink sitting directly on top of the GPU and HBM modules for heat dissipation, so the interposer as a heat transfer layer should not come into play…
Whitepaper/slide presentation from SK Hynix.
I was referring specifically to this bit from the tech report article:
“Fortunately, Macri told us the power density situation was “another beautiful thing” about HBM. He explained that the DRAMs actually work as a heatsink for the GPU, effectively increasing the surface area for the heatsink to mate to the chips. That works out because, despite what you see in the “cartoon diagrams” (Macri’s words), the Z height of the HBM stack and the GPU is almost exactly the same. As a result, the same heatsink and thermal interface material can be used for both the GPU and the memory.”
I don’t see how the DRAM would work as much of a heatsink for the GPU. The interposer is so thin that little heat will transfer through it. I guess they could use some thermally conductive fill material between the dies, but this would still move a very small amount of heat compared to the direct contact with the lid.
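As a rough sanity check on that intuition, here is a back-of-the-envelope estimate of how much heat could conduct sideways through the interposer between the GPU and an adjacent HBM stack. All dimensions, temperatures, and material values below are my own illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope estimate of lateral heat conduction through a
# silicon interposer from the GPU toward an adjacent HBM stack.
# All numbers are illustrative assumptions, not vendor specifications.

k_si = 150.0        # W/(m*K), approximate thermal conductivity of silicon
thickness = 100e-6  # m, assumed interposer thickness (~100 um)
edge_len = 20e-3    # m, assumed length of the GPU edge facing the stack
delta_t = 20.0      # K, assumed temperature difference, GPU to DRAM
gap = 1e-3          # m, assumed lateral distance between the dies

# Cross-section available for lateral conduction: edge length x thickness
area = edge_len * thickness

# Fourier's law for steady-state conduction: Q = k * A * dT / L
q_watts = k_si * area * delta_t / gap

print(f"Lateral heat flow through interposer: {q_watts:.1f} W")
```

Under these assumptions only a handful of watts flow sideways through the thin interposer, tiny next to a high-end GPU's power budget, which is consistent with the point above that the shared heatsink contact on top, not conduction through the interposer, would do the real work.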
Pascal……AMD thanks for playing
Yes, and by that time Nvidia will have stripped out all the DP capabilities and begun reducing more floating-point resources; the cards will only be good for one thing, and screw any other uses. Nvidia is aggressively segmenting the feature set of its GPU lines in order to charge more for adding the features back. Ever wonder why AMD's GPUs were so popular with the coin-mining crowd? At least AMD's cards will still have other uses besides gaming, and not suck up the Benjamins. With Nvidia, your banker (payday loans) will say thanks for the "second mortgage" as you hand over the title on the double-wide.
At least the Nvidia card will work without having to ask popular websites to test fixes multiple times until they just barely get a resolution to a problem they denied, or didn't know about, for months in the first place (the SLI frame-pacing and broken Enduro cover-ups).
Like everything AMD as of late: their not-so-"Free"Sync being half-assed, and the fact that AMD hasn't released a WHQL driver in over 160 days… I'm glad I have a job and make good money so I can have nice things. It pays not to go cheap or half-assed.
You live in the root cellar under the hatch at the back of the double-wide, and it's all mom's stripper money that you pilfered from the secret Ball jar, jars used more to store moonshine than for preserves, that mom thought she had hidden properly. This Ball jar had a slight crack at the rim, and thus was not suitable for keeping mom's shine from evaporating and its vapors from blowing the double-wide straight across the Peabody coal mine and damn near to orbit.
It’s the only logical example from someone who professes a pathological love of anyone’s brand of product like it was a sports team. Fanboytosis is a condition, and the very reason that WccF Tech. can remain in business! As for the usual consumer it’s more the performance per dollar that counts, that and overall usability of the GPU for other tasks without resorting to unnecessary spending to get a feature set that used to come with any standard GPU.
So, you are talking about tech that won’t be released for at least another year?
490x… Nvidia thanks for playing
and then the next Nvidia release reverses that, then the next AMD release reverses that, and so on and so on. I have to laugh at just how ridiculous your statement is.
The point is that when you post comments like the one you just did, you make yourself look like an idiot.
That picture is just marketing. Charlie over at S/A has a good explanation of the whats and whys of HBM memory, and it's an interesting read. Adding more lanes of bus/channel traces at a lower clock rate, 1024 lanes/bits per stack (clocked at 500MHz-1GHz) for HBM, as opposed to 32 lanes/bits (clocked at 7GHz+) for GDDR5, saves a whole bunch of energy. Also, the interposer's traces can be very thin (as thin as any silicon traces can be on the current process node), which allows all of the complexity of running the data paths to be moved onto the module, simplifies the PCB, and reduces the pin count off module. Charlie goes into great detail about why, from an engineering standpoint, interposers and HBM are the only way to meet the need for more bandwidth without the memory subsystem consuming almost more power than the GPU or CPU itself. Wider is definitely better, and those lower clocks allow for fewer timing errors, simplified PCB designs, and lower latency, because the memory is literally right next to the processor and the die stacks are connected to each other through vertical pathways (vias). Hopefully APUs will be getting HBM ASAP; that would definitely be a good thing for laptops (8GB at least).
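To put those lane/clock figures in perspective, here is a quick per-device bandwidth comparison using the numbers quoted above; the assumption that HBM's 500 MHz clock is double-data-rate (giving 1 Gbps per pin) is mine, not from the comment:

```python
# Rough per-device peak bandwidth comparison using the lane counts and
# clock rates from the comment above. The double-data-rate assumption
# for HBM (500 MHz clock -> 1 Gbps per pin) is an assumption.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# HBM: 1024-bit stack, 1 Gbps per pin
hbm_per_stack = bandwidth_gb_s(1024, 1.0)

# GDDR5: 32-bit chip at an effective 7 Gbps per pin
gddr5_per_chip = bandwidth_gb_s(32, 7.0)

print(f"HBM per stack:   {hbm_per_stack:.0f} GB/s")    # 128 GB/s
print(f"GDDR5 per chip:  {gddr5_per_chip:.0f} GB/s")   # 28 GB/s
print(f"Four HBM stacks: {4 * hbm_per_stack:.0f} GB/s")
```

The wide-and-slow HBM interface wins on raw bandwidth per device by a large margin even at a fraction of GDDR5's clock, which is exactly the trade-off the comment describes.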
There is nothing preventing them from using both technologies at the same time. With HBM in the package, there are few off-package connections. For an APU in a laptop, you would probably integrate almost everything into the package. Technically, you could put things other than memory chips on the interposer: a separate GPU and CPU, separate IO chips, etc. You wouldn't need to run a PCI-e link for a GPU, so you would really only need PCI-e for storage connections, and PCI-e is relatively low pin count. You would have some other low-speed IO interfaces plus power and ground, leaving plenty of off-chip interconnect area available for an external memory interface. This would allow the HBM to act as a giant L4 cache, just like Intel's Crystalwell chip with on-package eDRAM. I don't know if we will get graphics cards with HBM and external GDDR5, but it could be useful for a compute product to provide large memory sizes.
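A quick sketch of why the "giant L4 cache" idea can pay off: weight the fast and slow memory latencies by the cache hit rate, the standard average-memory-access-time calculation. All latencies and hit rates below are illustrative assumptions, not measured numbers for any real product:

```python
# Average memory access time (AMAT) when HBM caches an external DRAM pool.
# All latencies and hit rates are illustrative assumptions.

def amat_ns(hit_rate, hbm_latency_ns, ext_latency_ns):
    """Average access time when the HBM cache hits at the given rate."""
    return hit_rate * hbm_latency_ns + (1 - hit_rate) * ext_latency_ns

hbm_ns = 60.0    # assumed HBM (on-interposer) access latency
ext_ns = 120.0   # assumed external GDDR5/DDR access latency

for hit_rate in (0.5, 0.8, 0.95):
    avg = amat_ns(hit_rate, hbm_ns, ext_ns)
    print(f"hit rate {hit_rate:.0%}: AMAT = {avg:.1f} ns")
```

The higher the hit rate, the closer the whole pool behaves like HBM while still offering the external memory's capacity, which is the same trade Crystalwell's eDRAM makes in front of DDR3.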
Awesome. Now if AMD could convince Microsoft to raise the just-in-time capability of its interrupt handling by a lot. Its maximum throughput is nice, but its just-in-time capability is so bad that everything waits on LAPIC just-in-time capacity, lol.