
Counting the number of non-NaN elements in a numpy ndarray in Python

I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:

import numpy as np

def numberOfNonNans(data):
    count = 0
    for i in data:
        if not np.isnan(i):
            count += 1
    return count 

Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis.

Thanks for any help!

np.count_nonzero(~np.isnan(data))

~ inverts the boolean matrix returned from np.isnan.

np.count_nonzero counts values that are not 0/False. .sum should give the same result, but it may be clearer to use count_nonzero.
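As a quick illustration (a minimal sketch; the small data array below is made up for the example), both expressions return the same count:

import numpy as np

data = np.array([[1.0, np.nan, 3.0],
                 [np.nan, 5.0, 6.0]])

mask = ~np.isnan(data)            # True where the element is not NaN
print(np.count_nonzero(mask))     # 4
print(mask.sum())                 # 4 -- summing the booleans gives the same count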

Testing speed:

In [23]: data = np.random.random((10000,10000))

In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan

In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop

In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop

In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop

data.size - np.count_nonzero(np.isnan(data)) seems to barely be the fastest here. Other data might give different relative speed results.

Quick-to-write alternative

Even though it is not the fastest choice, if performance is not an issue you can use:

sum(~np.isnan(data))

Performance:

In [7]: %timeit data.size - np.count_nonzero(np.isnan(data))
10 loops, best of 3: 67.5 ms per loop

In [8]: %timeit sum(~np.isnan(data))
10 loops, best of 3: 154 ms per loop

In [9]: %timeit np.sum(~np.isnan(data))
10 loops, best of 3: 140 ms per loop
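One caveat worth noting (a hedged aside, not part of the original answer): for a 2-D array, Python's built-in sum iterates over rows and adds them elementwise, so sum(~np.isnan(data)) yields per-column counts rather than a single number, while np.sum reduces over all elements:

import numpy as np

data = np.array([[1.0, np.nan],
                 [3.0, 4.0]])

print(sum(~np.isnan(data)))     # [2 1] -- per-column counts
print(np.sum(~np.isnan(data)))  # 3     -- total over all elements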

To determine if the array is sparse, it may help to get the proportion of NaN values:

np.isnan(ndarr).sum() / ndarr.size

If that proportion exceeds a threshold, then use a sparse array, e.g. https://sparse.pydata.org/en/latest/
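For example (a minimal sketch; the 1000x1000 array and the 0.9 threshold are illustrative choices, not part of the original answer):

import numpy as np

ndarr = np.ones((1000, 1000))
ndarr[::2, :] = np.nan                            # make half of the values NaN

nan_fraction = np.isnan(ndarr).sum() / ndarr.size
print(nan_fraction)                               # 0.5

if nan_fraction > 0.9:                            # illustrative threshold
    # only then would it be worth converting to a sparse representation,
    # e.g. sparse.COO from the library linked above (not done here)
    pass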

An alternative, though a bit slower, is to do it via indexing:

np.isnan(data)[np.isnan(data) == False].size

In [30]: %timeit np.isnan(data)[np.isnan(data) == False].size
1 loops, best of 3: 498 ms per loop 

The double use of np.isnan(data) and the == operator might be a bit overkill, so I posted this answer only for completeness.
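A slightly simpler variant of the same boolean-mask indexing idea (my own rewrite, not part of the original answer) indexes data directly instead of the isnan result:

import numpy as np

data = np.array([1.0, np.nan, 3.0, np.nan, 5.0])

# keep only the non-NaN elements and take the size of the result
count = data[~np.isnan(data)].size
print(count)   # 3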

len([i for i in data if not np.isnan(i)])
