NumPy: calculate the square of the 2-norm of a vector
I have a vector a and I want to calculate np.inner(a, a), but I wonder whether there is a prettier way to compute it.
[The disadvantage of this way is that if I want to calculate it for a - b, or a slightly more complex expression, I need one extra line: c = a - b followed by np.inner(c, c), instead of something like somewhat(a - b).]
To calculate the 2-norm:
numpy.linalg.norm(x, ord=2)
and for its square:
numpy.linalg.norm(x, ord=2)**2
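A quick sanity check (with an illustrative example vector, assuming NumPy is imported as np) that squaring the 2-norm matches the inner product:

```python
import numpy as np

x = np.array([3.0, 4.0])

# The 2-norm of [3, 4] is 5, so its square is 25.
norm_sq = np.linalg.norm(x, ord=2) ** 2

# Same value as the inner product of x with itself: 9 + 16 = 25.
print(norm_sq)            # 25.0 (up to floating-point rounding)
print(np.inner(x, x))     # 25.0
```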
Honestly, there's probably not going to be anything faster than np.inner or np.dot. If you find intermediate variables annoying, you could always create a lambda function:
sqeuclidean = lambda x: np.inner(x, x)
np.inner and np.dot leverage BLAS routines, and will almost certainly be faster than standard elementwise multiplication followed by summation.
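For example, the lambda can be applied directly to an expression like a - b without naming an intermediate variable (the arrays here are just illustrative values):

```python
import numpy as np

sqeuclidean = lambda x: np.inner(x, x)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 2.0, 1.0])

# Squared Euclidean distance between a and b: 1 + 0 + 4 = 5
print(sqeuclidean(a - b))  # 5.0
```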
In [1]: %%timeit -n 1 -r 100 a, b = np.random.randn(2, 1000000)
((a - b) ** 2).sum()
....:
The slowest run took 36.13 times longer than the fastest. This could mean that an intermediate result is being cached
1 loops, best of 100: 6.45 ms per loop
In [2]: %%timeit -n 1 -r 100 a, b = np.random.randn(2, 1000000)
np.linalg.norm(a - b, ord=2) ** 2
....:
1 loops, best of 100: 2.74 ms per loop
In [3]: %%timeit -n 1 -r 100 a, b = np.random.randn(2, 1000000)
sqeuclidean(a - b)
....:
1 loops, best of 100: 2.64 ms per loop
np.linalg.norm(..., ord=2) uses np.dot internally, and gives very similar performance to using np.inner directly.
I don't know if the performance is any good, but (a**2).sum() calculates the right value and has the non-repeated argument you want. You can replace a with some complicated expression without binding it to a variable; just remember to use parentheses as necessary, since ** binds more tightly than most other operators: ((a - b)**2).sum()
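A small check (with made-up example values) that the sum-of-squares form agrees with np.inner, while only mentioning the expression a - b once:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([0.0, 0.0, 0.0])

# ((a - b)**2).sum() = 1 + 4 + 4 = 9, same as np.inner(a - b, a - b).
result = ((a - b) ** 2).sum()
print(result)  # 9.0
print(np.inner(a - b, a - b))  # 9.0
```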
In case you end up here looking for a fast way to get the squared norm, these tests show distances = np.sum((descriptors - desc[None])**2, axis=1) to be the quickest:
import timeit
setup_code = """
import numpy as np
descriptors = np.random.rand(3000, 512)
desc = np.random.rand(512)
"""
norm_code = """
np.linalg.norm(descriptors - desc[None], axis=-1)
"""
norm_time = timeit.timeit(stmt=norm_code, setup=setup_code, number=100, )
einsum_code = """
x = descriptors - desc[None]
sqrd_dist = np.einsum('ij,ij -> i', x, x)
"""
einsum_time = timeit.timeit(stmt=einsum_code, setup=setup_code, number=100, )
norm_sqrd_code = """
distances = np.sum((descriptors - desc[None])**2, axis=1)
"""
norm_sqrd_time = timeit.timeit(stmt=norm_sqrd_code, setup=setup_code, number=100, )
print(norm_time) # 0.7688689678907394
print(einsum_time) # 0.29194538854062557
print(norm_sqrd_time) # 0.274090813472867
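As a sanity check (on smaller arrays than the benchmark above, for speed), all three formulations produce the same squared distances up to floating-point rounding:

```python
import numpy as np

descriptors = np.random.rand(30, 8)
desc = np.random.rand(8)

x = descriptors - desc[None]

via_norm = np.linalg.norm(x, axis=-1) ** 2      # norm then square
via_einsum = np.einsum('ij,ij -> i', x, x)      # row-wise inner products
via_sum = np.sum(x ** 2, axis=1)                # elementwise square and sum

# All three agree up to floating-point rounding.
assert np.allclose(via_norm, via_einsum)
assert np.allclose(via_einsum, via_sum)
```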