
Is it possible to perform sparse-dense matrix multiplication in TensorFlow for rank 3 matrices?

I am trying to perform sparse matrix - dense matrix multiplication in TensorFlow, where both matrices have a leading batch dimension (i.e., rank 3). I am aware that TensorFlow provides the tf.sparse.sparse_dense_matmul function for rank 2 matrices, but I am looking for a method that handles rank 3 matrices. Is there a built-in function or method in TensorFlow that can handle this case efficiently, without the need for expensive reshaping or slicing operations? Performance is critical in my application.

To illustrate my question, consider the following example code:

import tensorflow as tf

# Define sparse and dense matrices with leading batch dimension
sparse_tensor = tf.SparseTensor(indices=[[0, 1, 1], [0, 0, 1], [1, 1, 1], [1, 2, 1], [2, 1, 1]],
                                values=[1, 1, 1, 1, 1],
                                dense_shape=[3, 3, 2])
dense_matrix = tf.constant([[[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]],
                            [[0.9, 0.10, 0.11, 0.12], [0.13, 0.14, 0.15, 0.16]],
                            [[0.17, 0.18, 0.19, 0.20], [0.21, 0.22, 0.23, 0.24]]], dtype=tf.float32)
        
# Perform sparse x dense matrix multiplication
result = tf.???(sparse_tensor, dense_matrix)  # Result should have shape [3, 3, 4]
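
For reference, the result I am after is what densifying the sparse operand and running a batched dense matmul would give (a sketch for checking correctness only, not the efficient solution I am looking for):

# dense reference (for correctness checks only; densifying defeats the purpose)
expected = tf.matmul(
    tf.sparse.to_dense(tf.sparse.reorder(tf.cast(sparse_tensor, tf.float32))),
    dense_matrix)
print(expected.shape)  # (3, 3, 4)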

In TF, element-wise multiplication between a sparse and a dense tensor only broadcasts the dense operand to the sparse one. Otherwise, a batched sparse-dense matmul could be done simply by:

tf.sparse.reduce_sum(tf.sparse.expand_dims(sparse_tensor, -1) * tf.expand_dims(dense_matrix, 1), 2)
# [3,3,2,1] * [3,1,2,4], then reduce_sum along dim=2
# the above throws an error
# because the last dim of the sparse tensor [1] cannot be broadcast to [4]

To fix the above issue, we need to tile the last dimension of the sparse tensor to make it 4. Since tf.sparse has no tile op, the tiling can be done by concatenating the expanded sparse tensor with itself k times:

k = 4
tf.sparse.concat(-1,[tf.sparse.expand_dims(sparse_tensor, -1)]*k)
##[3,3,2,4]

Putting it all together:

# cast the sparse values to float32 so the element-wise multiply dtypes match
tf.sparse.reduce_sum(tf.sparse.concat(-1, [tf.sparse.expand_dims(tf.cast(sparse_tensor, tf.float32), -1)]*k) * tf.expand_dims(dense_matrix, 1), 2)
# timeit: 1.99 ms
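
As a sanity check (a small sketch; it densifies the sparse tensor, so use it only on toy-sized inputs), the result can be compared against a plain dense batched matmul:

batched = tf.sparse.reduce_sum(
    tf.sparse.concat(-1, [tf.sparse.expand_dims(tf.cast(sparse_tensor, tf.float32), -1)]*k)
    * tf.expand_dims(dense_matrix, 1), 2)
reference = tf.matmul(
    tf.sparse.to_dense(tf.sparse.reorder(tf.cast(sparse_tensor, tf.float32))),
    dense_matrix)
print(tf.reduce_max(tf.abs(batched - reference)).numpy())  # 0.0 if they match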

Another way would be to use tf.map_fn:

tf.map_fn(
    lambda x: tf.sparse.sparse_dense_matmul(x[0], x[1]),
    elems=(tf.sparse.reorder(tf.cast(sparse_tensor, tf.float32)), dense_matrix),
    fn_output_signature=tf.float32
)
# timeit: 4.42 ms
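
To reproduce the timings above, here is a rough sketch using the standard timeit module (numbers will vary with hardware, TF version, and eager vs. graph execution):

import timeit

sparse_f = tf.sparse.reorder(tf.cast(sparse_tensor, tf.float32))

def concat_reduce():
    return tf.sparse.reduce_sum(
        tf.sparse.concat(-1, [tf.sparse.expand_dims(sparse_f, -1)]*4)
        * tf.expand_dims(dense_matrix, 1), 2)

def map_fn_matmul():
    return tf.map_fn(
        lambda x: tf.sparse.sparse_dense_matmul(x[0], x[1]),
        elems=(sparse_f, dense_matrix), fn_output_signature=tf.float32)

print(timeit.timeit(concat_reduce, number=100) / 100)   # seconds per call, concat + reduce_sum
print(timeit.timeit(map_fn_matmul, number=100) / 100)   # seconds per call, tf.map_fn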
