

Why are overlapping data transfers in CUDA slower than expected?

When I run the simpleMultiCopy sample from the SDK (4.0) on a Tesla C2050, I get the following output:

[simpleMultiCopy] starting...
[Tesla C2050] has 14 MP(s) x 32 (Cores/MP) = 448 (Cores)
> Device name: Tesla C2050
> CUDA Capability 2.0 hardware with 14 multi-processors
> scale_factor = 1.00
> array_size   = 4194304


Relevant properties of this CUDA device
(X) Can overlap one CPU<>GPU data transfer with GPU kernel execution (device property "deviceOverlap")
(X) Can overlap two CPU<>GPU data transfers with GPU kernel execution
    (compute capability >= 2.0 AND (Tesla product OR Quadro 4000/5000)

Measured timings (throughput):
 Memcpy host to device  : 2.725792 ms (6.154988 GB/s)
 Memcpy device to host  : 2.723360 ms (6.160484 GB/s)
 Kernel         : 0.611264 ms (274.467599 GB/s)

Theoretical limits for speedup gained from overlapped data transfers:
No overlap at all (transfer-kernel-transfer): 6.060416 ms 
Compute can overlap with one transfer: 5.449152 ms
Compute can overlap with both data transfers: 2.725792 ms

Average measured timings over 10 repetitions:
 Avg. time when execution fully serialized  : 6.113555 ms
 Avg. time when overlapped using 4 streams  : 4.308822 ms
 Avg. speedup gained (serialized - overlapped)  : 1.804733 ms

Measured throughput:
 Fully serialized execution     : 5.488530 GB/s
 Overlapped using 4 streams     : 7.787379 GB/s
[simpleMultiCopy] test results...
PASSED

This suggests that the expected run time is 2.7 ms, yet it actually takes 4.3 ms. What exactly causes this discrepancy? (I also posted this question at http://forums.developer.nvidia.com/devforum/discussion/comment/8976 .)
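For reference, the "theoretical limits" printed above appear to follow directly from the three measured timings (a plain worked-out check, assuming the kernel is shorter than either transfer):

 No overlap (transfer-kernel-transfer) : t_H2D + t_kernel + t_D2H     = 2.726 + 0.611 + 2.723 ≈ 6.060 ms
 Compute overlaps one transfer         : t_H2D + t_D2H                = 2.726 + 2.723         ≈ 5.449 ms
 Compute overlaps both transfers       : max(t_H2D, t_kernel, t_D2H)  = t_H2D                 ≈ 2.726 ms

So "expected" here means the 2.726 ms lower bound, which assumes the copy engines and the compute engine are kept busy 100% of the time.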

The first kernel launch cannot start until the first memcpy has finished, and the last memcpy cannot start until the last kernel has finished. So there is an "overhang" that introduces some of the overhead you are observing. You can reduce the size of this overhang by increasing the number of streams, but the inter-engine synchronization between streams has overhead of its own.
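To make that pipeline concrete, here is a minimal sketch (my own illustration, not the simpleMultiCopy source) of issuing an H2D copy, a kernel, and a D2H copy per stream with cudaMemcpyAsync; NSTREAMS, CHUNK, and incKernel are placeholder names:

// sketch_overlap.cu -- hypothetical N-stream overlap example
#include <cuda_runtime.h>

__global__ void incKernel(int *d, int n)            // placeholder kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1;
}

int main()
{
    const int NSTREAMS = 4;                         // assumed stream count
    const int CHUNK    = 1 << 20;                   // assumed elements per stream
    const size_t bytes = CHUNK * sizeof(int);

    int *h_in, *h_out, *d_buf[NSTREAMS];
    cudaStream_t stream[NSTREAMS];

    // Pinned host memory is required for truly asynchronous copies.
    cudaMallocHost(&h_in,  NSTREAMS * bytes);
    cudaMallocHost(&h_out, NSTREAMS * bytes);
    for (int s = 0; s < NSTREAMS; ++s) {
        cudaMalloc(&d_buf[s], bytes);
        cudaStreamCreate(&stream[s]);
    }

    // Each stream runs its own copy-kernel-copy chain; chains from different
    // streams can overlap, but the first H2D and the last D2H are never hidden.
    for (int s = 0; s < NSTREAMS; ++s) {
        cudaMemcpyAsync(d_buf[s], h_in + s * CHUNK, bytes,
                        cudaMemcpyHostToDevice, stream[s]);
        incKernel<<<(CHUNK + 255) / 256, 256, 0, stream[s]>>>(d_buf[s], CHUNK);
        cudaMemcpyAsync(h_out + s * CHUNK, d_buf[s], bytes,
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < NSTREAMS; ++s) {
        cudaFree(d_buf[s]);
        cudaStreamDestroy(stream[s]);
    }
    cudaFreeHost(h_in);
    cudaFreeHost(h_out);
    return 0;
}

With more streams the unhidden first and last transfers shrink relative to the total, which is why increasing the stream count reduces the overhang at the cost of more synchronization.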

It is important to note that overlapping compute and transfers does not always benefit a given workload. Beyond the overhead issues described above, the workload itself must spend roughly equal amounts of time computing and transferring data. Per Amdahl's Law, the potential 2x or 3x speedup falls off as the workload becomes transfer-bound or compute-bound.
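Stated as a bound (a sketch of the limit this refers to), the best-case speedup from full overlap is the serialized time divided by the longest single stage:

 S_max = (t_H2D + t_kernel + t_D2H) / max(t_H2D, t_kernel, t_D2H)

For this run that is about 6.060 / 2.726 ≈ 2.2x. The ideal 3x requires all three stages to take equal time; as any one stage comes to dominate (compute-bound, or transfer-bound in one direction), S_max falls back toward 1.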
