
How to compute GPU memory bus?

I'm learning OpenCL/CUDA for GPU computing. When I study the GDDR5 architecture, I'm told that

memory bus width = number of memory channels * memory channel width

I see an AMD GPU with 16 memory channels, each 32 bits wide, so I get a memory bus width of 16 * 32 = 512 bits. But I found that mainstream graphics cards have only a 256/384-bit memory bus.

What's going wrong with it?

For GPUs, the number of memory channels usually is not explicitly stated; instead, the total bus width (in bits) for all channels combined is given. The bus width varies greatly depending on how many memory modules are on the PCB and the bus width per module. GPUs with a 256-bit total bus width typically have 8 memory modules of 1GB capacity each, and GPUs with a 384-bit bus have 12.
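That relationship can be sketched as follows; the 32-bit-per-module figure is an assumption based on typical GDDR5 parts:

```python
def total_bus_width_bits(num_modules, width_per_module_bits=32):
    """Total GPU memory bus width: one 32-bit channel per module (assumed)."""
    return num_modules * width_per_module_bits

print(total_bus_width_bits(8))   # 8 modules  -> 256-bit card
print(total_bus_width_bits(12))  # 12 modules -> 384-bit card
```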

For CPUs or integrated GPUs which share main memory:

  • memory bus width per channel = 64bit
  • number of memory channels = 2 (mainstream platforms) / 4 or 8 (high-end desktop / workstation)
  • effective memory clock = 1600MT/s (DDR3) - 3200+MT/s (DDR4)
  • memory bandwidth = 0.125 * memory bus width per channel * number of memory channels * effective memory clock
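The formula above can be checked with a short sketch (the factor 0.125 converts bits to bytes; the example configuration is an assumption, not taken from a specific machine):

```python
def cpu_bandwidth_gbs(bus_width_bits, channels, effective_clock_mtps):
    """Peak bandwidth in GB/s: bits->bytes (0.125), MT/s -> MB/s, /1000 -> GB/s."""
    return 0.125 * bus_width_bits * channels * effective_clock_mtps / 1000

# Hypothetical dual-channel DDR4-3200 desktop:
print(cpu_bandwidth_gbs(64, 2, 3200))  # 51.2 GB/s
```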

For dedicated GPUs:

  • total memory bus width = 64bit (GDDR3) - 256bit (GDDR5) - 5120bit (HBM2)
  • effective memory clock = <5GHz (GDDR5) - 19.5GHz (GDDR6X)
  • memory bandwidth = 0.125 * total memory bus width * effective memory clock
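The same arithmetic applies to dedicated GPUs; here is a sketch, with the 256-bit / 8GT/s example chosen as a plausible GDDR5 configuration rather than a specific card:

```python
def gpu_bandwidth_gbs(total_bus_width_bits, effective_clock_gtps):
    """Peak bandwidth in GB/s: 0.125 converts bits to bytes, GT/s gives GB/s."""
    return 0.125 * total_bus_width_bits * effective_clock_gtps

# Hypothetical 256-bit GDDR5 card at 8GT/s effective:
print(gpu_bandwidth_gbs(256, 8))  # 256.0 GB/s
```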

