
Making C++ Eigen LU faster (my tests show 2x slower than GSL)

I'm comparing LU decomposition/solve in Eigen to GSL, and I find Eigen to be on the order of 2x slower with -O3 optimizations on g++/OS X. I isolated the timing of the decompose and the solve, and both are substantially slower than their GSL counterparts. Am I doing something silly, or does Eigen not perform well for this use case (e.g. very small systems)? Built with Eigen 3.2.8 and an older GSL 1.15. The test case is contrived, but it mirrors the results in some nonlinear-fitting software I'm writing: Eigen is anywhere from 1.5x to 2x+ slower for the total LU/solve operation.

#define NDEBUG

#include "sys/time.h"
#include "gsl/gsl_linalg.h"
#include <Eigen/LU>

// Ax=b is a 3x3 system for which soln is x=[8,2,3]
//
double avals_col[9] = { 4, 2, 2, 7, 5, 5, 7, 5, 9 };
    // col major
double avals_row[9] = { 4, 7, 7, 2, 5, 5, 2, 5, 9 };
    // row major
double bvals[3] = { 67, 41, 53 };

//----------- helpers

void print_solution( double *x, int dim, const char *which ) {
    printf( "%s solve for x:\n", which );
    for( int j=0; j<dim; j++ ) {
        printf( "%g ", x[j] );
    }
    printf( "\n" );
}

struct timeval tv;
struct timezone tz;
double timeNow() {
    gettimeofday( &tv, &tz );
    int _mils = tv.tv_usec/1000;
    int _secs = tv.tv_sec;
    return (double)_secs + ((double)_mils/1000.0);
}

//-----------

void run_gsl( double *A, double *b, double *x, int dim, int reps ) {
    gsl_matrix_view gslA;
    gsl_vector_view gslB;
    gsl_vector_view gslX;
    gsl_permutation *gslP;
    int sign;

    gslA = gsl_matrix_view_array( A, dim, dim );
    gslP = gsl_permutation_alloc( dim );
    gslB = gsl_vector_view_array( b, dim );
    gslX = gsl_vector_view_array( x, dim );

    int err;
    double t, elapsed;
    t = timeNow();
    for( int i=0; i<reps; i++ ) {
        // gsl overwrites A during decompose, so we must copy the initial A each time.
        memcpy( A, avals_row, sizeof(avals_row) );
        err = gsl_linalg_LU_decomp( &gslA.matrix, gslP, &sign );
    }
    elapsed = timeNow() - t;
    printf( "GSL decompose (%dx) time = %g\n", reps, elapsed );

    t = timeNow();
    for( int i=0; i<reps; i++ ) {
        err = gsl_linalg_LU_solve( &gslA.matrix, gslP, &gslB.vector, &gslX.vector );
    }
    elapsed = timeNow() - t;
    printf( "GSL solve (%dx) time = %g\n", reps, elapsed );

    gsl_permutation_free( gslP );
}

void run_eigen( double *A, double *b, double *x, int dim, int reps ) {
    Eigen::PartialPivLU<Eigen::MatrixXd> eigenA_lu;

    Eigen::Map< Eigen::Matrix < double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor > > ma( A, dim, dim );
    Eigen::Map<Eigen::MatrixXd> mb( b, dim, 1 );

    int err;
    double t, elapsed;
    t = timeNow();
    for( int i=0; i<reps; i++ ) {
        // This memcpy is not necessary for Eigen, which does not overwrite A in the
        // decompose, but do it so that the time is directly comparable to GSL. 
        memcpy( A, avals_col, sizeof(avals_col) );
        eigenA_lu.compute( ma );
    }
    elapsed = timeNow() - t;
    printf( "Eigen decompose (%dx) time = %g\n", reps, elapsed );

    t = timeNow();
    Eigen::VectorXd _x;
    for( int i=0; i<reps; i++ ) {
         _x = eigenA_lu.solve( mb );
    }
    elapsed = timeNow() - t;
    printf( "Eigen solve (%dx) time = %g\n", reps, elapsed );

    // copy soln to passed x
    for( int i=0; i<dim; i++ ) {
        x[i] = _x(i);
    }
}

int main() {
    // solve a 3x3 system many times

    double A[9], b[3], x[3];
    int dim = 3;
    int reps = 1000000;

    memcpy( b, bvals, sizeof(bvals) );
        // init b vector, A is copied multiple times in run_gsl/run_eigen

    run_eigen( A, b, x, dim, reps );
    print_solution( x, dim, "Eigen" );  

    run_gsl( A, b, x, dim, reps );
    print_solution( x, dim, "GSL" );

    return 0;
}

This produces, for example:

Eigen decompose (1000000x) time = 0.242
Eigen solve (1000000x) time = 0.108
Eigen solve for x:
8 2 3 
GSL decompose (1000000x) time = 0.049
GSL solve (1000000x) time = 0.075
GSL solve for x:
8 2 3 

Your benchmark is not really fair: you are copying the input matrix twice in the Eigen version, once manually through memcpy and once internally within PartialPivLU. You should also let Eigen know that mb is a vector by declaring it as a Map<Eigen::VectorXd>. With those two changes (a sketch of the adjusted run_eigen follows the timings) I get (GCC 5, -O3, Eigen 3.3):

Eigen decompose (1000000x) time = 0.087
Eigen solve (1000000x) time = 0.036
Eigen solve for x:
8 2 3
GSL decompose (1000000x) time = 0.032
GSL solve (1000000x) time = 0.062
GSL solve for x:
8 2 3
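
For reference, here is a minimal sketch of run_eigen with those two changes applied. It assumes the same globals and helpers as in the question; the per-iteration memcpy is dropped because PartialPivLU::compute already copies A into its own storage:

void run_eigen( double *A, double *b, double *x, int dim, int reps ) {
    Eigen::PartialPivLU<Eigen::MatrixXd> eigenA_lu;

    Eigen::Map<Eigen::MatrixXd> ma( A, dim, dim );
    Eigen::Map<Eigen::VectorXd> mb( b, dim );     // declare b as a vector, not an n x 1 matrix

    memcpy( A, avals_col, sizeof(avals_col) );    // copy the input once, outside the timed loop

    double t = timeNow();
    for( int i=0; i<reps; i++ ) {
        eigenA_lu.compute( ma );                  // compute() copies ma into its own storage
    }
    double elapsed = timeNow() - t;
    printf( "Eigen decompose (%dx) time = %g\n", reps, elapsed );

    Eigen::VectorXd _x;
    t = timeNow();
    for( int i=0; i<reps; i++ ) {
        _x = eigenA_lu.solve( mb );
    }
    elapsed = timeNow() - t;
    printf( "Eigen solve (%dx) time = %g\n", reps, elapsed );

    for( int i=0; i<dim; i++ ) x[i] = _x(i);      // copy the solution to the caller's buffer
}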

Moreover, Eigen's PartialPivLU is not really designed for such extremely tiny matrices (see below). For 3x3 matrices it is better to explicitly compute the inverse (for matrices up to 4x4 this is usually OK, but not for larger ones!). In this case you must fix the sizes at compile time:

Eigen::PartialPivLU<Eigen::Matrix3d> eigenA_lu;
Eigen::Map<Eigen::Matrix3d> ma(avals_col);
Eigen::Map<Eigen::Vector3d> mb(b);
Eigen::Matrix3d inv;
Eigen::Vector3d _x;
double t, elapsed;
t = timeNow();
for( int i=0; i<reps; i++ ) {
    inv = ma.inverse();   // for fixed sizes up to 4x4, Eigen uses an explicit cofactor-based formula
}
elapsed = timeNow() - t;
printf( "Eigen decompose (%dx) time = %g\n", reps, elapsed );

t = timeNow();
for( int i=0; i<reps; i++ ) {
  _x.noalias() = inv * mb;   // noalias() skips the temporary for the matrix-vector product
}
elapsed = timeNow() - t;
printf( "Eigen solve (%dx) time = %g\n", reps, elapsed );

which gives me:

Eigen inverse and solve (1000000x) time = 0.0209999
Eigen solve (1000000x) time = 0.000999928

which is much faster.
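
If you would rather keep an LU factorization than form the explicit inverse, the fixed-size eigenA_lu declared above can be used exactly like the dynamic-size one; the factorization then lives on the stack, so no heap allocation happens per iteration. A minimal sketch (my addition, not part of the original answer):

t = timeNow();
for( int i=0; i<reps; i++ ) {
    eigenA_lu.compute( ma );      // factor the fixed-size 3x3 matrix
    _x = eigenA_lu.solve( mb );   // back-substitute for x
}
elapsed = timeNow() - t;
printf( "Eigen fixed-size LU+solve (%dx) time = %g\n", reps, elapsed );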

Now if we try a much larger problem, say 3000 x 3000, we get more than an order of magnitude of difference in favor of Eigen (a reproduction sketch follows the timings):

Eigen decompose (1x) time = 0.411
GSL decompose (1x) time = 6.073
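
For anyone wanting to reproduce that comparison, here is a rough sketch of a large-size timing run built on the question's timeNow() helper. The function name run_large and the use of random data are my assumptions, not part of the original posts:

#include <cstdio>
#include "gsl/gsl_linalg.h"
#include <Eigen/LU>

double timeNow();   // the timer helper from the question

void run_large( int dim ) {
    // Random dense system; only the decomposition is timed here.
    Eigen::MatrixXd A = Eigen::MatrixXd::Random( dim, dim );
    Eigen::MatrixXd A_gsl = A;          // GSL overwrites its input, so keep a separate copy
    // (the storage-order difference between Eigen and GSL is ignored here;
    //  it does not affect the timing of a random matrix)

    double t = timeNow();
    Eigen::PartialPivLU<Eigen::MatrixXd> lu( A );
    printf( "Eigen decompose (1x) time = %g\n", timeNow() - t );

    gsl_matrix_view gslA = gsl_matrix_view_array( A_gsl.data(), dim, dim );
    gsl_permutation *p = gsl_permutation_alloc( dim );
    int sign;
    t = timeNow();
    gsl_linalg_LU_decomp( &gslA.matrix, p, &sign );
    printf( "GSL decompose (1x) time = %g\n", timeNow() - t );
    gsl_permutation_free( p );
}

// e.g. run_large( 3000 );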

It is precisely the optimizations that enable this performance on large problems that also introduce some overhead for very tiny matrices.
