Faster way to apply custom function to each row in pandas dataframe?
I have two dataframes, ad_df and x_df. I created a function find_ad_ids that takes an ID ad_id and a date ad_date from ad_df.
The function filters x_df by the matching ID and by date ranges around ad_date (see the code below).
Then I append the resulting dataframe to a global dataframe res_df that keeps track of these rows.
I call the function by using the line below:
ad_df.apply(lambda x: find_ad_ids(x['SerialNo'], x['Audit Date']), axis=1)
Is there a faster way to do this? ad_df has about 1M rows, so hopefully there is. The code for the function is shown below.
from datetime import timedelta
import pandas as pd

def find_ad_ids(ad_id, ad_date):
    # Rows of x_df for this ID
    id_specific_df = x_df.loc[x_df['ID'] == ad_id]
    beg_range_date = ad_date - timedelta(days=2)
    end_range_date = ad_date + timedelta(days=15)
    # Rows in the 2 days before and the 15 days after the audit date
    beg_df = id_specific_df[(id_specific_df['Last_Date'] > beg_range_date) & (id_specific_df['Last_Date'] < ad_date)]
    end_df = id_specific_df[(id_specific_df['Last_Date'] > ad_date) & (id_specific_df['Last_Date'] < end_range_date)]
    if not beg_df.empty and not end_df.empty:
        if ('1' in beg_df['Geo_Label'].array) and ('1' in end_df['Geo_Label'].array):
            # Note: res_df needs to be a plain Python list for this to work;
            # DataFrame.append is deprecated and would not modify res_df in place
            res_df.append(pd.concat([beg_df, end_df], ignore_index=True))
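A row-wise apply over 1M rows repeats the ID filter 1M times. Usually the faster route is a single merge on the ID followed by vectorised date masks. Below is a minimal sketch with tiny made-up stand-ins for ad_df and x_df (the per-audit-row Geo_Label check from the question would still need a groupby step and is omitted here):

```python
import pandas as pd
from datetime import timedelta

# Hypothetical stand-ins for the question's ad_df and x_df
ad_df = pd.DataFrame({
    "SerialNo": [1, 1, 2],
    "Audit Date": pd.to_datetime(["2023-01-10", "2023-02-01", "2023-01-10"]),
})
x_df = pd.DataFrame({
    "ID": [1, 1, 2],
    "Last_Date": pd.to_datetime(["2023-01-09", "2023-01-20", "2023-03-01"]),
    "Geo_Label": ["1", "1", "0"],
})

# One merge instead of 1M row-wise filters: pair each audit row with
# every x_df row sharing its ID, then apply the date windows vectorised.
merged = ad_df.merge(x_df, left_on="SerialNo", right_on="ID")
before = (merged["Last_Date"] > merged["Audit Date"] - timedelta(days=2)) & \
         (merged["Last_Date"] < merged["Audit Date"])
after = (merged["Last_Date"] > merged["Audit Date"]) & \
        (merged["Last_Date"] < merged["Audit Date"] + timedelta(days=15))
res_df = merged[before | after]
```

If x_df is large, a plain merge can blow up memory; in that case merging in chunks of ad_df keeps the same vectorised logic.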
One of the fastest ways to append data to a DataFrame is through a dict:
import time
import numpy as np
import pandas as pd

numOfRows = 1000  # numOfRows was defined elsewhere in the original answer

startTime = time.perf_counter()
row_list = []
for i in range(0, 5):
    row_list.append(dict((a, np.random.randint(100)) for a in ['A', 'B', 'C', 'D', 'E']))
for i in range(1, numOfRows - 4):
    dict1 = dict((a, np.random.randint(100)) for a in ['A', 'B', 'C', 'D', 'E'])
    row_list.append(dict1)
df4 = pd.DataFrame(row_list, columns=['A', 'B', 'C', 'D', 'E'])
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df4.shape)
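The same point can be seen by timing the list-of-dicts approach against growing a DataFrame with repeated pd.concat, which is what appending inside the applied function effectively does. A self-contained comparison with made-up column names:

```python
import time
import numpy as np
import pandas as pd

cols = ['A', 'B', 'C', 'D', 'E']
n = 2000

# Fast path: collect plain dicts in a list, build the DataFrame once
start = time.perf_counter()
rows = [{c: np.random.randint(100) for c in cols} for _ in range(n)]
df_fast = pd.DataFrame(rows, columns=cols)
t_fast = time.perf_counter() - start

# Slow path: grow a DataFrame one row at a time with pd.concat,
# which copies all existing rows on every iteration
start = time.perf_counter()
df_slow = pd.DataFrame(columns=cols)
for _ in range(n):
    row = pd.DataFrame([{c: np.random.randint(100) for c in cols}])
    df_slow = pd.concat([df_slow, row], ignore_index=True)
t_slow = time.perf_counter() - start

print(f'list-of-dicts: {t_fast:.3f}s  repeated concat: {t_slow:.3f}s')
```

Applied to the question, that means collecting the matching slices in a Python list inside find_ad_ids and calling pd.concat once after the apply finishes, rather than appending to a global DataFrame on every call.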