
Differences between VexCL, Thrust, and Boost.Compute

With just a cursory understanding of these libraries, they look to be very similar. I know that VexCL and Boost.Compute use OpenCL as a backend (although the v1.0 release of VexCL also supports CUDA as a backend) and Thrust uses CUDA. Aside from the different backends, what's the difference between them?

Specifically, what problem space do they address, and why would I want to use one over the other?

Also, on the Thrust FAQ it is stated that

The primary barrier to OpenCL support is the lack of an OpenCL compiler and runtime with support for C++ templates

If this is the case, how is it possible that VexCL and Boost.Compute even exist?

I am the developer of VexCL, but I really like what Kyle Lutz, the author of Boost.Compute, had to say on the same subject on the Boost mailing list. In short, from the user standpoint Thrust, Boost.Compute, AMD's Bolt, and probably Microsoft's C++ AMP all implement an STL-like API, while VexCL is an expression-template-based library that is closer in nature to Eigen. I believe the main difference between the STL-like libraries is their portability:

  1. Thrust only supports NVIDIA GPUs, but may also work on CPUs through its OpenMP and TBB backends.
  2. Bolt uses AMD extensions to OpenCL which are only available on AMD GPUs. It also provides Microsoft C++ AMP and Intel TBB backends.
  3. The only compiler that supports Microsoft C++ AMP is Microsoft Visual C++ (although work on bringing C++ AMP beyond Windows is being done).
  4. Boost.Compute seems to be the most portable of them, as it is based on standard OpenCL.

Again, all of those libraries are trying to implement an STL-like interface, so they have very broad applicability. VexCL was developed with scientific computing in mind. If Boost.Compute had been developed a bit earlier, I could probably have based VexCL on top of it :). Another library for scientific computing worth looking at is ViennaCL, a free open-source linear algebra library for computations on many-core architectures (GPUs, MIC) and multi-core CPUs. Have a look at [1] for a comparison of VexCL, ViennaCL, MTL4, and Thrust in that field.

Regarding the quoted inability of the Thrust developers to add an OpenCL backend: Thrust, VexCL, and Boost.Compute (I am not familiar with the internals of the other libraries) all use metaprogramming techniques to do what they do. But since CUDA supports C++ templates, the job of the Thrust developers is probably a bit easier: they have to write metaprograms that generate CUDA programs with the help of the C++ compiler. The VexCL and Boost.Compute authors write metaprograms that generate programs that generate OpenCL source code. Have a look at the slides where I tried to explain how VexCL is implemented. So I agree that Thrust's current design prevents them from adding an OpenCL backend.

[1] Denis Demidov, Karsten Ahnert, Karl Rupp, Peter Gottschling, Programming CUDA and OpenCL: A Case Study Using Modern C++ Libraries, SIAM J. Sci. Comput., 35(5), C453-C472. (An arXiv version is also available.)

Update: @gnzlbg commented that there is no support for C++ functors and lambdas in the OpenCL-based libraries. And indeed, OpenCL is based on C99 and is compiled from sources stored in strings at runtime, so there is no easy way to fully interact with C++ classes. But to be fair, the OpenCL-based libraries do support user-defined functions and even lambdas to some extent.

Having said that, CUDA-based libraries (and maybe C++ AMP) have the obvious advantage of an actual compile-time compiler (can you even say that?), so the integration with user code can be much tighter.
