
Matmul with different rank

I have 3 tensors:
X shape (1, c, h, w), assume (1, 20, 40, 50)
Fx shape (num, w, N), assume (1000, 50, 10)
Fy shape (num, N, h), assume (1000, 10, 40)

What I want to compute is Fy * (X * Fx) (* means matmul):
X * Fx shape (num, c, h, N), assume (1000, 20, 40, 10)
Fy * (X * Fx) shape (num, c, N, N), assume (1000, 20, 10, 10)

I am using tf.tile and tf.expand_dims to do it, but I think it uses a lot of memory (tile copies the data, right?) and is slow. I am trying to find a better way that is faster and uses less memory.

# X: (1, c, h, w)
# Fx: (num, w, N)
# Fy: (num, N, h)

c = X.get_shape().as_list()[1]  # static channel count

X = tf.tile(X, [tf.shape(Fx)[0], 1, 1, 1])  # (num, c, h, w)
Fx_ex = tf.expand_dims(Fx, axis=1)  # (num, 1, w, N)
Fx_ex = tf.tile(Fx_ex, [1, c, 1, 1])  # (num, c, w, N)
tmp = tf.matmul(X, Fx_ex)  # (num, c, h, N)

Fy_ex = tf.expand_dims(Fy, axis=1)  # (num, 1, N, h)
Fy_ex = tf.tile(Fy_ex, [1, c, 1, 1])  # (num, c, N, h)
res = tf.matmul(Fy_ex, tmp)  # (num, c, N, N)
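
Incidentally, `np.matmul` (and `tf.matmul` in newer TensorFlow versions) broadcasts over leading batch dimensions, so the tiling can be avoided entirely by inserting singleton axes. A minimal numpy sketch of the same two-step computation (the TF behavior may differ by version, so treat this as an illustration of the shapes, not a drop-in fix):

```python
import numpy as np

num, c, h, w, N = 100, 20, 40, 50, 10
X = np.random.rand(1, c, h, w)    # (1, c, h, w)
Fx = np.random.rand(num, w, N)    # (num, w, N)
Fy = np.random.rand(num, N, h)    # (num, N, h)

# Insert a singleton channel axis so the batch dims broadcast:
# (1, c, h, w) @ (num, 1, w, N) -> (num, c, h, N)
tmp = np.matmul(X, Fx[:, None])

# (num, 1, N, h) @ (num, c, h, N) -> (num, c, N, N)
res = np.matmul(Fy[:, None], tmp)

print(res.shape)  # (100, 20, 10, 10)
```

No data is copied for the broadcast itself; only the two matmul outputs are materialized.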

A case for the mythical einsum, I guess:

>>> import numpy as np
>>> X = np.random.rand(1, 20, 40, 50)
>>> Fx = np.random.rand(100, 50, 10)
>>> Fy = np.random.rand(100, 10, 40)
>>> np.einsum('nMh,uchw,nwN->ncMN', Fy, X, Fx).shape
(100, 20, 10, 10)

It should work almost the same in tf as in numpy (though I've seen that uppercase indices aren't allowed in some tf versions). Although this admittedly exceeds a regex in unreadability if you've never seen the notation before.

For anyone else who may be interested:
I think the answer from @phg may work, but in my case num, h, and w are dynamic, i.e. None, so tf.einsum in tensorflow r1.0 raises an error, since there is more than one None shape in one tensor.

Fortunately, there is an issue and a pull request that seem to handle the situation where there is more than one None shape. You need to build from source (master branch). I will report the result after I re-build tensorflow.

BTW, tf.einsum only accepts lowercase indices.
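
The uppercase indices in the earlier equation can simply be renamed to unused lowercase letters. A sketch in numpy (the same subscript string should work for `tf.einsum`, assuming the version restriction above):

```python
import numpy as np

X = np.random.rand(1, 20, 40, 50)    # (1, c, h, w)
Fx = np.random.rand(100, 50, 10)     # (num, w, N)
Fy = np.random.rand(100, 10, 40)     # (num, N, h)

# Same contraction as 'nMh,uchw,nwN->ncMN', with M -> m and N -> k
# (n is already taken by the num axis, so N becomes k)
res = np.einsum('nmh,uchw,nwk->ncmk', Fy, X, Fx)

print(res.shape)  # (100, 20, 10, 10)
```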

Report
Yes, the newest version of tensorflow (master branch) accepts dynamic shapes for tf.einsum, and it gives a huge speed improvement after switching to tf.einsum. Really awesome.
