
Pandas DataFrame: group indexes matching a list of boundaries - times greater than list[i] and smaller than or equal to list[i+1]

I have a DataFrame Times_df with times in a single column and a second DataFrame End_df with specific end times for each group indexed by group name.

import random

import numpy as np
import pandas as pd

Times_df = pd.DataFrame({'time': np.unique(np.cumsum(np.random.randint(5, size=(100,))), axis=0)})

End_df = pd.DataFrame({'end time':np.unique(random.sample(range(Times_df.index.values[0], Times_df.index.values[-1]), 10))})
End_df.index.name = 'group'

I want to add a group index for all times in Times_df that are smaller than or equal to each consecutive end time in End_df but greater than the previous one. For example, with end times [3, 10], a time of 7 would get group 1 because 3 < 7 <= 10.

For now I can only do it with a loop, which takes forever :(

lis = []
for t in Times_df['time'].values:
    # index of the first end time that is >= t
    # (times beyond the last end time fall back to group 0 here)
    lis.append((End_df['end time'] >= t).idxmax())

Then I add the list lis as a new column to Times_df

Times_df['group'] = lis

Another solution that sadly still uses a loop is this:

test_df = pd.DataFrame()
prev_end = -np.inf
for group, row in End_df.iterrows():
    # keep the times greater than the previous end time and up to this group's end time
    mask = (Times_df['time'] > prev_end) & (Times_df['time'] <= row['end time'])
    test = Times_df.loc[mask].copy()
    test['group'] = group
    test_df = pd.concat([test_df, test], axis=0, ignore_index=True)
    prev_end = row['end time']
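
If End_df['end time'] is sorted, a loop-free way to get the same assignment (a sketch using np.searchsorted, not one of the original attempts) is to look up, for each time, the position of the first end time that is greater than or equal to it; that position is the group:

# sketch: vectorised group lookup, assuming End_df['end time'] is sorted ascending
ends = End_df['end time'].to_numpy()
# side='left' returns the first position i with ends[i] >= t, i.e. ends[i-1] < t <= ends[i];
# times beyond the last end time get len(ends)
Times_df['group'] = np.searchsorted(ends, Times_df['time'].to_numpy(), side='left')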

I think what you are looking for is pd.cut to bin your values into the groups.

bins = [0, 3, 10, 20, 53, 59, 63, 65, 68, 74, np.inf]
groups = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Times_df["group"] = pd.cut(Times_df["time"], bins, labels=groups)

print(Times_df)
   time group
0     2     0
1     3     0
2     7     1
3    11     2
4    15     2
5    16     2
6    18     2
7    22     3
8    25     3
9    28     3
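
If you don't want to hard-code the bins, they can be derived from End_df directly (a sketch, assuming End_df['end time'] is sorted and each group ends at its 'end time'; times beyond the last end time would become NaN):

# sketch: build the bin edges from End_df instead of hard-coding them
bins = [-np.inf] + End_df['end time'].tolist()   # right-closed intervals (prev, end]
Times_df['group'] = pd.cut(Times_df['time'], bins=bins, labels=End_df.index)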
