## Question

Asked By – jjepsuomi

I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:

```
import numpy as np

def numberOfNonNans(data):
    count = 0
    for i in data:
        if not np.isnan(i):
            count += 1
    return count
```

Is there a built-in function for this in numpy? Efficiency is important because I’m doing Big Data analysis.

Thanks for any help!


## Answer

```
np.count_nonzero(~np.isnan(data))
```

`~` inverts the boolean mask returned by `np.isnan`, and `np.count_nonzero` counts the values that are not 0/`False`. Calling `.sum()` on the mask gives the same result, but using `count_nonzero` is arguably clearer.
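The equivalence of the two approaches can be checked on a small array (a minimal sketch):

```
import numpy as np

data = np.array([1.0, np.nan, 2.0, np.nan, 3.0])
mask = ~np.isnan(data)           # True where the value is not NaN

print(np.count_nonzero(mask))    # counts True entries -> 3
print(mask.sum())                # equivalent: each True counts as 1 -> 3
```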

Testing speed:

```
In [23]: data = np.random.random((10000,10000))
In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan
In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop
In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop
In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop
```

`data.size - np.count_nonzero(np.isnan(data))` seems to be marginally the fastest here; other data might give different relative speeds.
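To reproduce the comparison on your own machine, the same three expressions can be timed with the standard `timeit` module (a sketch using a smaller array; absolute times will of course differ):

```
import timeit

import numpy as np

data = np.random.random((1000, 1000))
data[::7, ::11] = np.nan  # sprinkle some NaNs into the array

# Time each counting strategy over the same data
for expr in (
    "data.size - np.count_nonzero(np.isnan(data))",
    "np.count_nonzero(~np.isnan(data))",
    "data.size - np.isnan(data).sum()",
):
    t = timeit.timeit(expr, globals=globals(), number=10)
    print(f"{expr}: {t:.4f} s for 10 runs")
```

All three expressions compute the same count, so whichever is fastest on your hardware and NumPy version is safe to use.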

This question is answered By – M4rtini

**This answer is collected from Stack Overflow and reviewed by FixPython community admins; it is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.**