
Dealing with zeros in numpy array normalization

I have a numpy array of 2D vectors, which I am trying to normalize as below. The array can have vectors with magnitude zero.

import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.array([np.linalg.norm(a) for a in x])

>>> x/norms
array([[ nan,   0.],
       [ inf,   0.]])

>>> nonzero = norms > 0.0
>>> nonzero
array([False,  True], dtype=bool)

Can I somehow use nonzero to apply the division only to the rows x[i] for which nonzero[i] is True? (I could write a loop for this; I'm just wondering whether there is a numpy way of doing it.)

Or is there a better way of normalizing the array of vectors, skipping all zero vectors in the process?

If you can do the normalization in place, you can use your boolean indexing array like this:

nonzero = norms > 0
x[nonzero] /= norms[nonzero][:, None]  # reshape the selected norms to a column so each row is divided by its own norm
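
A minimal, self-contained sketch of this in-place approach, using the same x as in the question (the norms are computed with axis/keepdims here so the division broadcasts row by row):

import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])

# Per-row norms as a column vector, shape (2, 1)
norms = np.linalg.norm(x, axis=1, keepdims=True)

nonzero = norms[:, 0] > 0      # 1-D boolean mask over the rows
x[nonzero] /= norms[nonzero]   # only the rows with a non-zero norm are divided

# x is now array([[ 0.,  0.],
#                 [ 1.,  0.]])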

Here's one possible way of doing this:

norms = np.sqrt((x**2).sum(axis=1, keepdims=True))
x[:] = np.where(norms != 0, x/norms, 0.)

This uses np.where to do the substitution you need.

Note: in this case x is modified in place.
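
One caveat: x/norms is still evaluated for every row, so NumPy emits a RuntimeWarning for the zero-norm rows even though np.where discards those results. If that matters, a possible variation (my own sketch, not part of the answer above) is to let np.divide write only where the norm is non-zero:

import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.sqrt((x**2).sum(axis=1, keepdims=True))

# Divide only where the norm is non-zero; the other rows keep the 0.0 from `out`,
# and no warning is raised for them.
y = np.divide(x, norms, out=np.zeros_like(x), where=norms != 0)

# y = array([[ 0.,  0.],
#            [ 1.,  0.]])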

It's probably easiest just to do the calculation and then modify the results to be what you'd like:

y = x / norms[:, None]            # norms reshaped to a column so each row is divided by its own norm
y[np.isnan(y) | np.isinf(y)] = 0  # zero out whatever the zero-norm rows produced

#y = array([[ 0.,  0.],
#           [ 1.,  0.]])
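
Another option along the same lines (my own suggestion, not part of the answer above) is np.nan_to_num, which since NumPy 1.17 accepts explicit replacement values for nan and the infinities:

import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.array([np.linalg.norm(a) for a in x])

# Compute first, then map nan/inf back to 0 in a single call.
y = np.nan_to_num(x / norms[:, None], nan=0.0, posinf=0.0, neginf=0.0)

#y = array([[ 0.,  0.],
#           [ 1.,  0.]])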
