AMD looks to merge GPU, DRAM stack into single device

Posted: 21 May 2015

Keywords: AMD, GPU, DRAM, SK Hynix, Nvidia

AMD has announced plans to combine one of its graphics processors and SK Hynix's high bandwidth memory (HBM) DRAM stack on a single device. The company said the technology will beat similar devices described by rival Nvidia to market, although product specifics have yet to be revealed.

The approach will deliver more than 100GB/s of memory bandwidth, up from 28GB/s using external GDDR5 DRAMs on today's boards. The GPU die and DRAM stack will sit side by side on a silicon interposer in a so-called 2.5D stack, a technique pioneered in FPGAs by Xilinx.

Although the HBM stack runs at a slower clock rate than GDDR5 chips (500MHz compared to 1,750MHz), the HBM chips sit on a 1,024-bit link compared to a 32-bit interface for GDDR5. The HBM stack also runs at a lower voltage than GDDR5 (1.3V versus 1.5V).
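As a rough sketch of where those figures come from, peak bandwidth follows from clock rate times interface width. The per-pin transfer rates below are assumptions not stated in the article (HBM double-pumped, GDDR5 effectively quad-pumped at its I/O clock):

    # Back-of-the-envelope peak-bandwidth comparison (Python).
    # Assumptions not from AMD's announcement: HBM transfers data on both
    # clock edges (DDR); GDDR5 is quad-pumped at its 1,750MHz I/O clock.
    def peak_bandwidth_gb_s(clock_mhz, transfers_per_clock, bus_width_bits):
        pin_rate_gbit_s = clock_mhz * transfers_per_clock / 1000.0  # Gbit/s per pin
        return pin_rate_gbit_s * bus_width_bits / 8.0               # GB/s

    hbm_stack = peak_bandwidth_gb_s(500, 2, 1024)   # ~128 GB/s per HBM stack
    gddr5_chip = peak_bandwidth_gb_s(1750, 4, 32)   # ~28 GB/s per GDDR5 chip
    print(f"HBM stack:  {hbm_stack:.0f} GB/s")
    print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")

Under those assumptions a single HBM stack lands at roughly 128GB/s, consistent with the "more than 100GB/s" claim, while a GDDR5 chip works out to the 28GB/s cited above.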

As a result, HBM can deliver 35GB/s of bandwidth per watt, more than three times the 10.66GB/s/W of GDDR5. In addition, the HBM stack fits into a 35mm² footprint, 94 per cent smaller than the area of the GDDR5 chips needed to deliver the same capacity.
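Taking the quoted per-watt figures at face value, the efficiency ratio and the implied DRAM power for a given bandwidth can be checked the same way. The 128GB/s target below reuses the assumption from the previous sketch; AMD did not disclose power numbers:

    # Power-efficiency comparison using the figures quoted in the article.
    hbm_gb_s_per_w = 35.0
    gddr5_gb_s_per_w = 10.66
    print(f"Efficiency ratio: {hbm_gb_s_per_w / gddr5_gb_s_per_w:.1f}x")  # ~3.3x

    # Implied DRAM power to sustain ~128 GB/s (illustration only).
    target_gb_s = 128.0
    print(f"HBM:   ~{target_gb_s / hbm_gb_s_per_w:.1f} W")    # ~3.7 W
    print(f"GDDR5: ~{target_gb_s / gddr5_gb_s_per_w:.1f} W")  # ~12.0 W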

AMD hasn't announced which GPU chip it will use on the device or when it will ship. For its part, the SK Hynix HBM stack will be available in the next couple of months. It will likely be one to two years before price points for the approach are low enough to push the technology to a broad set of applications.

(Figure: AMD's plans for a 2.5D GPU/DRAM stack)

AMD provided a general description of its plans for a 2.5D GPU/DRAM stack.

AMD engineers "rebalanced DRAM versus logic power consumption to protect future GPU performance growth," the company said.

"We live in a fixed-power world. Using GDDR5 to hit the bandwidth [goal] we'd be taking away power from the GPU compute budget," said Joe Macri, an AMD corporate fellow and product CTO.

The HBM stack took seven years to build from whiteboard to availability, Macri noted. By comparison, the GDDR5 interface took four years to come to market.

Nvidia previously announced work with SK Hynix's competitor Micron on a next-generation memory stack called the hybrid memory cube (HMC). Nvidia said its GPU stack will be made using a TSMC process called CoWoS (chip on wafer on substrate). Nvidia has sent mixed signals about exactly which of its graphics chips will use HMC or when such parts will be ready.

"Nvidia been talking about this in Powerpoint, [and] while they were drawing their Powerpoint, we were doing the work," Macri said. "I'd be surprised if Nvidia used HMC 1 at all; I think they'll wait for HMC 2. They are very far behind," he added.

DRAM stacks will represent just 3-4 per cent of the total DRAM market in the next five years, with the first products shipping in the next year, said Mike Howard, a memory analyst with IHS Inc.

Nvidia/Micron, AMD/SK Hynix, and memory giant Samsung have similar technologies, Howard said. In the near term, the recent introduction of DDR4 DRAMs for general-purpose computers will mean the DRAM stacks will be used in smaller markets such as high-end graphics, he added.

The graphics market may not be large enough to sustain DRAM stacks from all three major memory makers, Howard said. "But with the recent launch of DDR4 and LPDDR4...there's still too much fog of war to figure out what's going to replace these technologies," he added.

- Jessica Lipsky
  EE Times




