I have written a user defined function that loops through a row of values in order to give the number of zeros between values (distance between values). Those distances are appended into a list and then averaged for a final value of average distance between values. The function works great when I load in a CSV file with just one row of values. However, I would like to be able to apply the function to a file with multiple rows, and then report the output of each row into a dataframe.
This is all being run with Python 3.7. I attempted to create a nested loop to apply the function manually. I have tried numpy.apply_along_axis, and I have also tried reading the file in as a pandas DataFrame and then using .apply(). However, I am a bit unfamiliar with pandas, and when I replaced the numpy indexing in the function with pandas indexing, I began to generate multiple errors.
When I load in a larger CSV file and try to apply it to file[0] for example, the function does not work. It seems to work only when I load in a file with one row of values.
import statistics as st

def avg_dist(n):
    dist = []
    ctr = 0
    # distances between events
    for i in range(len(n)):
        if n[i] > 0 and i < (len(n) - 1):
            if n[i + 1] == 0:
                i += 1
                while n[i] == 0 and i < (len(n) - 1):
                    ctr += 1
                    i += 1
                dist.append(ctr)
                ctr = 0
            else:
                i += 1
        else:
            i += 1
    # average distance between events
    aved = st.mean(dist)
    return aved
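To show the row-wise application being asked about, here is a hedged sketch: avg_gap is my own compact reimplementation of the idea (using np.flatnonzero and np.diff instead of the loop above), and the data array is made up for illustration; np.apply_along_axis then runs it over every row.

```python
import numpy as np

def avg_gap(row):
    """Mean run length of zeros between nonzero values of a 1-D array.
    Returns 0.0 when the row has fewer than two nonzero values."""
    idx = np.flatnonzero(row)               # positions of the nonzero values
    if idx.size < 2:
        return 0.0
    return float(np.mean(np.diff(idx) - 1))  # zeros strictly between events

data = np.array([[1, 0, 0, 2, 0, 3],
                 [0, 5, 0, 0, 0, 5],
                 [0, 0, 7, 0, 0, 0]])
row_means = np.apply_along_axis(avg_gap, 1, data)
# row_means -> array([1.5, 3. , 0. ])
```

The result can be dropped straight into a DataFrame, e.g. pd.DataFrame({'avg_dist': row_means}).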
The latest response is at the end of the answer. There have been several edits.
The very end (4th edit) of the answer has a completely new approach.
I'm not certain what you're trying to do but hopefully this can help.
import numpy as np
# Generate some events
events = np.random.rand(3,12)*10.
events *= np.random.randint(5, size=(3,12))<1
events
Out[36]:
array([[ 0. , 0. , 0. , 0. , 0. ,
0. , 5.35598205, 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 6.65094145, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 6.04581361],
[ 6.88119682, 4.31178109, 0. , 0. , 0. ,
0. , 0. , 1.16999289, 0. , 0. ,
0. , 0. ]])
# generate a boolean array of events. (as int for a compact print.)
an_event = (events != 0).astype(int)
an_event
Out[37]:
array([[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])
def event_range(arr):
    from_start = arr.cumsum(axis=1)
    from_end = np.flip(arr, axis=1).cumsum(axis=1)
    from_end = np.flip(from_end, axis=1)
    return np.logical_and(from_start, from_end).astype(int)
The event_range function, step by step.
from_start is the cumsum of an_event: zero before any event, >0 after that.
from_start = an_event.cumsum(axis=1) # cumsum the event count. zeros before the first event.
from_start
Out[40]:
array([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], # zeroes before the first event.
[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2],
[1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]], dtype=int32)
from_end is the cumsum of an_event taken from right to left, and is therefore zero after the last event.
from_end = np.flip(an_event, axis=1).cumsum(axis=1) # cumsum of reversed arrays
from_end = np.flip(from_end, axis=1) # reverse the result.
from_end
Out[41]:
array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], # zero after the last event.
[2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[3, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]], dtype=int32)
Logically ANDing these together gives zeroes before the first event, ones from the first event through the last, and zeroes after the last event.
ev_range = np.logical_and(from_start, from_end).astype(int)
ev_range
Out[42]:
# zero before first and after last event, one between the events.
array([[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]])
n_range = ev_range.sum(axis=1)
n_range
Out[43]: array([ 1, 11, 8])
n_events = an_event.sum(axis=1)
n_events
Out[44]: array([1, 2, 3])
avg = n_range / n_events
avg
Out[45]: array([ 1. , 5.5 , 2.66666667])
Should avg be n_range/ (n_events-1)? ie count the gaps, not the events.
What would you expect for only one event in a row? What for zero events in a row?
Edit following comments
Counting only gaps longer than zero gets a bit involved. The easiest approach is probably to take the differences of consecutive columns: wherever the difference is -1, a 1 is followed by a zero. You need to append a final zero column to your data in case the last column has an event in it.
np.random.seed(10)
test = 1*(np.random.randint(4, size=(4,12))<1)
test
Out[24]:
array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
[0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
temp = np.diff(test, axis=-1)
temp
Out[26]:
array([[ 0, 1, -1, 1, -1, 0, 1, -1, 0, 1, -1],
[ 0, 1, -1, 1, -1, 1, -1, 1, -1, 1, 0],
[ 0, 0, 1, -1, 0, 0, 0, 1, 0, -1, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0]])
np.where(temp<0, 1,0)
Out[28]:
array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]])
In [29]: np.where(temp<0, 1,0).sum(axis=-1)-1
Out[29]: array([3, 3, 1, 0]) # should be [3, 4, 1, 0]
Add a column of zeros to test.
test = np.hstack((test, np.zeros((4, 1), dtype=int)))
test
Out[31]:
array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]])
temp=np.diff(test, axis=-1)
temp
Out[35]:
array([[ 0, 1, -1, 1, -1, 0, 1, -1, 0, 1, -1, 0],
[ 0, 1, -1, 1, -1, 1, -1, 1, -1, 1, 0, -1], # An extra -1 here.
[ 0, 0, 1, -1, 0, 0, 0, 1, 0, -1, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0]])
np.where(temp<0, 1,0).sum(axis=-1)-1
Out[36]: array([3, 4, 1, 0])
As I said, a bit involved. It may be easier to loop through the rows, but this should be faster, if harder to understand.
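For comparison, the loop-through alternative can be quite short when each row is handled with np.flatnonzero; count_gaps and the surrounding code below are my own sketch, not code from the answer, reproducing the same per-row gap counts on the test data above.

```python
import numpy as np

def count_gaps(row):
    """Number of zero-runs strictly between nonzero values of a 1-D array."""
    idx = np.flatnonzero(row)                   # positions of the events
    if idx.size < 2:
        return 0
    return int(np.count_nonzero(np.diff(idx) > 1))  # spacing > 1 means a gap

test = np.array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
                 [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
                 [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
                 [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
n_gaps = np.array([count_gaps(r) for r in test])
# n_gaps -> array([3, 4, 1, 0])
```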
2nd Edit following another idea.
import numpy as np
np.random.seed(10)
test = 1*(np.random.randint(4, size=(4,12))<1)
test
Out[2]:
array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
[0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
temp = np.diff(test, axis=-1)
np.where(temp<0, 1, 0).sum(axis=-1)+test[:,-1]-1
# +test[:,-1] adds the last column to include any 1's from there.
Out[4]: array([3, 4, 1, 0])
3rd Edit
Thinking this through, I created two functions; I also show a do_divide that copes with divide by zero.
import numpy as np

def zero_after_last_event(arr):
    """
    Returns an array set to zero in all cells after the last event.
    """
    from_end = np.flip(arr, axis=-1).cumsum(axis=-1)  # cumsum of reversed arrays
    from_end = np.flip(from_end, axis=-1)  # reverse the result
    from_end[from_end > 0] = 1  # anything greater than zero set to 1
    return from_end

def event_range(arr):
    """ event_range is zero before the first event,
    zero after the last event and 1 elsewhere. """
    return np.logical_and(arr.cumsum(axis=-1), zero_after_last_event(arr)).astype(int)

def do_divide(a, b):
    """ Does a protected divide. Returns zero for divide by zero """
    with np.errstate(divide='ignore', invalid='ignore'):  # suppress divide-by-zero warnings
        result = a / b
    result[~np.isfinite(result)] = 0.
    return result
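A quick stand-alone check of the protected divide. The function is repeated here so the snippet runs on its own; note I also pass divide='ignore', since a nonzero value divided by zero raises a 'divide' warning rather than an 'invalid' one.

```python
import numpy as np

def do_divide(a, b):
    """Protected divide: returns zero wherever the result is not finite."""
    with np.errstate(divide='ignore', invalid='ignore'):
        result = a / b
    result[~np.isfinite(result)] = 0.
    return result

out = do_divide(np.array([5., 0., 3.]), np.array([0., 0., 2.]))
# 5/0 -> inf and 0/0 -> nan are both replaced by 0, giving [0., 0., 1.5]
```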
Set up a test array
np.random.seed(10)
events = 1*(np.random.randint(4, size=(4,12))<1)
events
Out[15]:
array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
[0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
With the functions and data above, the following works.
# Count gap lengths
gaps = 1 - events # invert the values in events (1->0, 0->1)
gaps = np.logical_and(gaps, event_range(events)).astype(int)
gaps
Out[19]:
array([[0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
sumgaps = gaps.sum(axis = -1)
sumgaps
Out[22]: array([5, 4, 4, 0])
# Count how many gaps
temp = np.diff(events, axis=-1) # temp is -1 when an event isn't immediately followed by another event.
n_gaps = np.where(temp<0, 1, 0).sum(axis=-1) + events[:,-1] - 1
# +events[:,-1] adds the last column to include any 1's from there.
n_gaps
Out[23]: array([3, 4, 1, 0])
do_divide(sumgaps, n_gaps)
Out[21]: array([1.66666667, 1. , 4. , 0. ])
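The steps of this edit can be gathered into a single function. This is my own consolidation under the same conventions as above (mean_gap_length is a made-up name, and I clamp the gap count at zero so rows with no events cannot produce a negative divisor):

```python
import numpy as np

def mean_gap_length(events):
    """Mean length of the zero-runs between events, per row; 0 where a row has no gaps."""
    # 1 between the first and last event of each row, 0 outside that range.
    fwd = events.cumsum(axis=-1)
    bwd = np.flip(np.flip(events, axis=-1).cumsum(axis=-1), axis=-1)
    in_range = np.logical_and(fwd, bwd)
    # zeros inside that range are gap cells
    gap_cells = np.logical_and(1 - events, in_range)
    tot_gaps = gap_cells.sum(axis=-1)
    # count distinct gaps: a -1 in the diff marks an event followed by a zero
    steps = np.diff(events, axis=-1)
    n_gaps = np.where(steps < 0, 1, 0).sum(axis=-1) + events[:, -1] - 1
    n_gaps = np.maximum(n_gaps, 0)   # a row with no events would otherwise give -1
    with np.errstate(divide='ignore', invalid='ignore'):
        avg = tot_gaps / n_gaps
    avg[~np.isfinite(avg)] = 0.
    return avg

events = np.array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
                   [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
                   [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
avg_rows = mean_gap_length(events)
# avg_rows -> array([1.66666667, 1.        , 4.        , 0.        ])
```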
4th Edit - using np.bincount
import numpy as np
def do_divide(a, b):
    """ Does a protected divide. Returns zero for divide by zero """
    with np.errstate(divide='ignore', invalid='ignore'):  # suppress divide-by-zero warnings
        result = a / b
    result[~np.isfinite(result)] = 0.
    return result
np.random.seed(10)
events = 1*(np.random.randint(4, size=(4,12))<1)
cumulative = events.cumsum(axis=1)
cumulative
Out[2]:
array([[0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4],
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 6],
[0, 0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]])
bin_count_len = 1 + cumulative.max()  # biggest bincount length required.
result = np.zeros((cumulative.shape[0], bin_count_len), dtype=int)
for ix, row in enumerate(cumulative):
    result[ix] = np.bincount(row, minlength=bin_count_len)
result
Out[4]:
array([[2, 2, 3, 3, 2, 0, 0],
[2, 2, 2, 2, 2, 1, 1],
[3, 5, 1, 3, 0, 0, 0],
[9, 3, 0, 0, 0, 0, 0]])
Lose column 0 (it counts cells before any event) and the last column (always after the last event). Each remaining bin includes the opening event itself, so subtract 1 to get the gap size.
temp = result[:, 1:-1] - 1
temp
Out[6]:
array([[ 1, 2, 2, 1, -1],
[ 1, 1, 1, 1, 0],
[ 4, 0, 2, -1, -1],
[ 2, -1, -1, -1, -1]])
Zero any cell temp[r, n] whose following bin in result is empty; those runs fall after the row's last event.
temp_lag = (result[:, 2:]>0)*1
temp_lag
Out[8]:
array([[1, 1, 1, 0, 0],
[1, 1, 1, 1, 1],
[1, 1, 0, 0, 0],
[0, 0, 0, 0, 0]])
temp *= temp_lag
temp
Out[10]:
array([[1, 2, 2, 0, 0],
[1, 1, 1, 1, 0],
[4, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
tot_gaps = temp.sum(axis=1)
n_gaps = np.count_nonzero(temp, axis=1)
tot_gaps, n_gaps
Out[13]: (array([5, 4, 4, 0]), array([3, 4, 1, 0]))
do_divide(tot_gaps, n_gaps)
Out[14]: array([1.66666667, 1. , 4. , 0. ])
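The bincount recipe can likewise be wrapped up. gap_stats is my own consolidated sketch of the steps above, returning the same (tot_gaps, n_gaps) pair for the same test data:

```python
import numpy as np

def gap_stats(events):
    """Per-row (total gap length, number of gaps) via np.bincount."""
    cum = events.cumsum(axis=1)
    width = 1 + cum.max()
    counts = np.vstack([np.bincount(row, minlength=width) for row in cum])
    # bin k holds event k plus the zeros that follow it; drop bin 0 (before
    # any event) and the last bin (after the last event), then subtract the
    # event itself from each run length.
    runs = counts[:, 1:-1] - 1
    # zero any run whose next bin is empty: it lies after the row's last event
    runs *= counts[:, 2:] > 0
    return runs.sum(axis=1), np.count_nonzero(runs, axis=1)

events = np.array([[0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
                   [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
                   [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])
tot_gaps, n_gaps = gap_stats(events)
# tot_gaps -> array([5, 4, 4, 0]); n_gaps -> array([3, 4, 1, 0])
```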
HTH