
Lapackpp vs Boost BLAS


For a start, I am a newbie in C++.

I am writing a program for my Master's thesis, part of which is supposed to solve a regression problem recursively.

I would like to solve:

Ax = y

In my case computation speed is not negligible, which is why I would like to know whether computing

x = (A^T A)^{-1} A^T y

with Boost::BLAS will require less computation time than Lapackpp (I am using Gentoo).

PS: I was able to find class documentation on the Lapackpp project site, but no examples. Could someone provide me with some examples, in case Lapack is faster than Boost::BLAS?

Thanks 谢谢

From a numerical analysis standpoint, you never want to write code that:

  • Explicitly inverts a matrix, or
  • Forms the matrix of normal equations ( A^T A ) for a regression

Both of these are more work and less accurate (and likely less stable) than the alternatives that solve the same problem directly.

Whenever you see some math showing a matrix inversion, that should be read to mean "solve a system of linear equations", or factor the matrix and use the factorization to solve the system. Both BLAS and Lapack have routines for this.

Similarly, for the regression, call a library function that computes a regression, or read up on how to do it yourself. The normal equations method is the textbook example of the wrong way to do it.

Do you really need to implement this in C++? Would, for example, Python/NumPy be an alternative for you? And for recursive regression (least squares) I recommend looking at MIT Professor Strang's lectures on linear algebra and/or his books.

Armadillo wraps BLAS and LAPACK in a nice C++ interface, and provides the following Matlab-like functions directly related to your problem:

  • solve() , to solve a system of linear equations
  • pinv() , pseudo-inverse (which uses SVD internally)

High level interface and low level optimizations are two different things.

LAPACK and uBLAS provide a high level interface and an un-optimized low level implementation. Hardware-optimized low level routines (or bindings to them) have to come from somewhere else. Once bindings are provided, LAPACK and uBLAS can use optimized low level routines instead of their own un-optimized implementations.

For example, ATLAS provides optimized low level routines, but only a limited high level interface (level 3 BLAS, etc.). You can bind ATLAS to LAPACK; then LAPACK uses ATLAS for the low level work. Think of LAPACK as a senior manager who delegates technical work to experienced engineers (ATLAS). The same goes for uBLAS: you can bind uBLAS to MKL, and the result is an optimized C++ library. Check the documentation and use Google to figure out how to do it.
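In build terms, "binding" usually just means linking against the optimized implementation. A sketch of what the compile line might look like on a Linux system (library names and paths vary by distribution, so these are illustrative, not definitive):

```shell
# Reference (un-optimized) BLAS/LAPACK:
g++ regression.cpp -o regression -llapack -lblas

# Same program, but with ATLAS supplying the optimized BLAS kernels;
# LAPACK delegates its low level work to ATLAS at link time:
g++ regression.cpp -o regression -llapack -lf77blas -lcblas -latlas
```

On Gentoo in particular, the eselect blas/lapack mechanism can switch which implementation the generic -lblas/-llapack names resolve to, so the source code does not change at all.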
