
Is there an efficient way to avoid a nested for loop?

I've got a nested for loop, and I'm wondering if there's a more efficient way to do this, code-wise:

My data looks similar to the following.

  ID  | DEAD     | 2009-10 | ...    | 2016-10
 -----------------------------------------
  1   | 2018-11  | 5.4     | ...    | 6.5 
  2   | 2014-01  | 0.5     | ...    | 5.2
  ...                      
  N   | 2008-11  | 8.6     | ...    | 1.3

The goal is to replace the values with np.NaN as soon as a product expires (i.e. when 'DEAD' < the column date); otherwise the values should stay the same.

  ID  | DEAD     | 2009-10 | ...    | 2016-10
 -----------------------------------------
  1   | 2018-11  | 5.4     | ...    | 6.5 
  2   | 2014-01  | 0.5     | ...    | NaN
  ...                      
  N   | 2008-11  | 8.6     | ...    | NaN

My initial idea was to use a nested for loop to check whether the condition 'DEAD' < date is met. This works for small N, but since my data has over 20,000 rows and 400 columns it takes far too long.

import numpy as np
import pandas as pd

time = df.columns[2:] # take the date headers
time = pd.DataFrame(time)
time.columns = ['Dummy']
time['Dummy'] = pd.to_datetime(time.Dummy) # convert the headers to datetime

df['DEAD'] = pd.to_datetime(df.DEAD) # convert column 'DEAD' to datetime

lists = []
for i in range(397):                # one pass per date column
    row = []
    for j in range(20000):          # one pass per product
        if time.iloc[i, 0] <= df.iloc[j, 1]:  # date <= DEAD: keep the value
            newlist = df.iloc[j, i + 2]       # offset past the ID and DEAD columns
        else:
            newlist = np.NaN
        row.append(newlist)
    lists.append(row)

lists = pd.DataFrame(lists)
lists = lists.transpose()

Appreciate any suggestions!

You can try to iterate through each column instead:

for column_name in df.drop(['ID', 'DEAD'], axis=1):
    column_date = pd.to_datetime(column_name)
    df[column_name] = df[column_name].mask(df['DEAD'] < column_date)

By default, the mask method replaces the values where the condition is True with NaN, which is exactly what you want here.
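Here is a minimal runnable sketch of the per-column approach on made-up toy data (the IDs, dates, and values are illustrative, not from your frame):

```python
import numpy as np
import pandas as pd

# Toy frame with the same shape as the question: an ID, a DEAD date,
# and one column per month (the column names are date strings).
df = pd.DataFrame({
    'ID': [1, 2],
    'DEAD': ['2018-11', '2014-01'],
    '2009-10': [5.4, 0.5],
    '2016-10': [6.5, 5.2],
})
df['DEAD'] = pd.to_datetime(df['DEAD'])

# Mask each month column where the product was already dead by that date.
for column_name in df.columns[2:]:
    column_date = pd.to_datetime(column_name)
    df[column_name] = df[column_name].mask(df['DEAD'] < column_date)
```

This loops only over the ~400 columns; each `mask` call is vectorized over all rows, so it avoids the 20,000-iteration inner loop entirely.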

If your columns are ordered - for example, ascending order by date - then you could avoid some of the looping and checking.

  • For each row, find the first column that meets your condition
    • You could do this with a binary search if you really want to optimize
  • Get the index of this column; call it i
  • Update all the subsequent columns with index >= i to the NaN value

The update itself is still being done cell-by-cell, which might not perform particularly well.
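The steps above can be sketched as follows, assuming the date columns are sorted ascending; this is one possible implementation using `searchsorted` for the binary search and a NumPy slice assignment for the bulk update (the toy data is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 2],
    'DEAD': pd.to_datetime(['2018-11', '2014-01']),
    '2009-10': [5.4, 0.5],
    '2016-10': [6.5, 5.2],
})

# Dates of the value columns, assumed to be in ascending order.
col_dates = pd.to_datetime(df.columns[2:])

values = df.iloc[:, 2:].to_numpy()
for r, dead in enumerate(df['DEAD']):
    # Binary search: index of the first column strictly after DEAD.
    i = col_dates.searchsorted(dead, side='right')
    values[r, i:] = np.nan          # blank everything from there on
df.iloc[:, 2:] = values
```

The per-row update is a single slice assignment rather than cell-by-cell writes, which avoids the performance concern mentioned above.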

You might get better performance if you create a second dataframe with the same dimensions to use as a bitmask: 0 and 1 values indicating whether each value in the underlying dataframe should be kept or removed.
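A boolean mask like that can be built in one shot by broadcasting the column dates against the 'DEAD' column; a minimal sketch on toy data (names and values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 2],
    'DEAD': pd.to_datetime(['2018-11', '2014-01']),
    '2009-10': [5.4, 0.5],
    '2016-10': [6.5, 5.2],
})

col_dates = pd.to_datetime(df.columns[2:])

# keep[r, c] is True while the column's date has not passed DEAD yet.
keep = col_dates.values[np.newaxis, :] <= df['DEAD'].values[:, np.newaxis]

# where() keeps values where the mask is True and puts NaN elsewhere.
df.iloc[:, 2:] = df.iloc[:, 2:].where(keep)
```

This removes the Python-level loop entirely: the comparison produces the whole rows-by-columns mask in one vectorized operation.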

If this data is stored in a database, you should just use SQL directly; it will be faster.
