
Partitioned Matrix-Vector Multiplication

Given a very sparse n×n matrix A with nnz(A) non-zeros, and a dense n×n matrix B, I would like to compute the matrix product A×B. Since n is very large, the dense matrix B cannot fit in memory if the computation is carried out naively. I have the following two options, but I am not sure which one is better. Could you give some suggestions? Thanks.

Option 1. I partition the matrix B into n column vectors [b1, b2, ..., bn]. Then I can put the matrix A and any single vector bi into memory, and compute A*b1, A*b2, ..., A*bn in turn.
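Option 1 can be sketched roughly as below, using pure Python with A stored as (row, col, value) triplets and the columns of B streamed in one at a time, so only A and a single length-n vector are resident. All names here are illustrative, not from the question:

```python
def spmv(n, triplets, b):
    """Sparse matrix-vector product y = A*b, with A given as (row, col, value) triplets."""
    y = [0.0] * n
    for i, j, v in triplets:
        y[i] += v * b[j]          # cost is O(nnz(A)) per column of B
    return y

def spmm_by_columns(n, triplets, column_source):
    """Multiply sparse A by dense B, streaming one column of B at a time."""
    for b in column_source:       # each b is one column of B, length n
        yield spmv(n, triplets, b)

# Tiny example: A = [[2, 0], [0, 3]] (sparse), B = 2x2 identity.
A = [(0, 0, 2.0), (1, 1, 3.0)]
cols = iter([[1.0, 0.0], [0.0, 1.0]])
result_cols = list(spmm_by_columns(2, A, cols))
# result_cols == [[2.0, 0.0], [0.0, 3.0]], i.e. the columns of A itself
```

Since each call to `spmv` is independent of the others, the per-column products are trivially parallelizable across workers, which is the property the question asks about.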

Option 2. I partition the matrices A and B each into four (n/2)×(n/2) blocks, and then use block matrix-matrix multiplication to compute A*B.
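Option 2 amounts to computing C[I][J] = A[I][0]*B[0][J] + A[I][1]*B[1][J] for each of the four output blocks, so only one pair of operand blocks needs to be resident at a time. A minimal dense sketch (in practice the blocks of A would stay in a sparse format; the helper names are hypothetical):

```python
def matmul(X, Y):
    """Plain dense product of two equally sized square blocks."""
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def matadd(X, Y):
    """Elementwise sum of two equally sized square blocks."""
    m = len(X)
    return [[X[i][j] + Y[i][j] for j in range(m)] for i in range(m)]

def block_matmul(A_blocks, B_blocks):
    """2x2-block product: C[I][J] = sum over K of A[I][K] * B[K][J]."""
    C = [[None, None], [None, None]]
    for I in range(2):
        for J in range(2):
            C[I][J] = matadd(matmul(A_blocks[I][0], B_blocks[0][J]),
                             matmul(A_blocks[I][1], B_blocks[1][J]))
    return C

# Example with 1x1 blocks, i.e. A = [[1,2],[3,4]], B = [[5,6],[7,8]]:
A_blocks = [[[[1.0]], [[2.0]]], [[[3.0]], [[4.0]]]]
B_blocks = [[[[5.0]], [[6.0]]], [[[7.0]], [[8.0]]]]
C = block_matmul(A_blocks, B_blocks)
# C unpacks to [[19, 22], [43, 50]], matching the full product A*B
```

Note that each output block C[I][J] requires reading a full block row of A and block column of B, so the blocks of B are each loaded twice here, unlike the single pass over B in Option 1.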

Which of the above is the better choice? Can I say that Option 1 has higher performance in parallel computation?

See a discussion of both approaches, though for two dense matrices, in this document from the ScaLAPACK documentation. ScaLAPACK is one of the reference tools for distributed linear algebra.
