
How to do an "element-by-element in-place inverse" with PyTorch?

Given an array a:

import numpy as np

a = np.arange(1, 11, dtype='float32')

With numpy, I can do the following:

np.divide(1.0, a, out = a)

Resulting in:

array([1.        , 0.5       , 0.33333334, 0.25      , 0.2       ,
       0.16666667, 0.14285715, 0.125     , 0.11111111, 0.1       ],
      dtype=float32)

Assuming that a is instead a PyTorch tensor, the following operation fails:

torch.div(1.0, a, out = a)

The first parameter of div is expected to be a tensor of matching length/shape.

If I substitute 1.0 with an array b filled with ones, its length equal to the length of a, it works. The downside is that I have to allocate memory for b. I can also do something like a = 1.0 / a, which will yet again allocate extra (temporary) memory.
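
For illustration, a minimal sketch of that workaround (the names here just mirror the description above):

import torch

a = torch.arange(1, 11, dtype=torch.float32)

# Workaround: allocate a tensor of ones just so div receives a tensor argument.
# The result 1/a is written into a, but b is still an extra allocation.
b = torch.ones_like(a)
torch.div(b, a, out=a)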

How can I do this operation efficiently "in-place" (without the allocation of extra memory), ideally with broadcasting?

PyTorch follows the convention of using a trailing _ for in-place operations, e.g.

add -> add_  # in-place equivalent
div -> div_  # in-place equivalent
etc
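
For example, a quick sketch of the difference (the tensor x here is only illustrative):

import torch

x = torch.tensor([1.0, 2.0, 3.0])

y = x.div(2)   # out-of-place: returns a new tensor, x stays [1., 2., 3.]
x.div_(2)      # in-place: x itself becomes tensor([0.5000, 1.0000, 1.5000])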

Element-by-element in-place inverse:

>>> import torch
>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.pow_(-1)  # in-place: raise every element to the power -1, i.e. 1/a
>>> a
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.div_(a ** 2)  # a / a**2 == 1/a; note that a ** 2 still creates a temporary tensor
>>> a
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])
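
As a side note (not part of the original answer), PyTorch also provides a dedicated in-place method for exactly this, Tensor.reciprocal_(), which avoids even the temporary created by a ** 2:

import torch

a = torch.arange(1, 11, dtype=torch.float32)
a.reciprocal_()  # in-place 1/a for every element, no extra allocation
print(a)
# tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])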
