
pandas DataFrame: replace NaN values with a previous value based on a key column

I have a pandas DataFrame that looks like this:

key_value    a    b    c    d    e
value_01     1    10   x   NaN  NaN
value_01    NaN   12  NaN  NaN  NaN
value_01    NaN   7   NaN  NaN  NaN
value_02     7    4    y   NaN  NaN 
value_02    NaN   5   NaN  NaN  NaN
value_02    NaN   6   NaN  NaN  NaN
value_03     19   15   z   NaN  NaN
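For reference, a reproducible version of this frame (columns 'd' and 'e' start out entirely NaN):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'key_value': ['value_01'] * 3 + ['value_02'] * 3 + ['value_03'],
    'a': [1, np.nan, np.nan, 7, np.nan, np.nan, 19],
    'b': [10, 12, 7, 4, 5, 6, 15],
    'c': ['x', None, None, 'y', None, None, 'z'],
    'd': np.nan,   # to be filled from the previous row's b
    'e': np.nan,   # to be filled with a running sum of b
})
```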

Now, based on the key_value:

For columns 'a' and 'c', I want to fill each NaN with the last non-NaN value from the same column, within the same key_value.

For column 'd', I want to copy the value of column 'b' from row i - 1 into row i.

For column 'e', I want to put the running sum of the previous rows' 'b' values into row i.

For every key_value, columns 'a', 'b' and 'c' have a value in their first row; the later values are filled or computed from those first rows.

key_value    a    b    c    d    e
value_01     1    10   x   NaN  NaN
value_01     1    12   x    10   10
value_01     1    7    x    12   22
value_02     7    4    y   NaN  NaN
value_02     7    5    y    4    4
value_02     7    6    y    5    9
value_03     19   15   z   NaN  NaN

My current approach:

size = df.key_value.size
for i in range(1, size):  # start at 1 so that i - 1 is a valid row
    if pd.isna(df.a[i]) and df.key_value[i] == df.key_value[i - 1]:
        df.loc[i, 'a'] = df.loc[i - 1, 'a']
        df.loc[i, 'c'] = df.loc[i - 1, 'c']
        df.loc[i, 'd'] = df.loc[i - 1, 'b']
        prev_e = df.loc[i - 1, 'e']
        df.loc[i, 'e'] = (0 if pd.isna(prev_e) else prev_e) + df.loc[i - 1, 'b']

For columns like 'a' and 'c', the NaN values are all at the same row indexes.

My approach works but takes very long, since my dataframe has over 50,000 records. Is there a different way to do this? I have multiple columns like 'a' and 'c' where values need to be copied over based on 'key_value', and some columns whose values are computed from a column like 'b'.

pd.concat with groupby and assign

pd.concat([
    g.ffill().assign(d=lambda d: d.b.shift(), e=lambda d: d.d.cumsum())
    for _, g in df.groupby('key_value')
])

  key_value     a   b  c     d     e
0  value_01   1.0  10  x   NaN   NaN
1  value_01   1.0  12  x  10.0  10.0
2  value_01   1.0   7  x  12.0  22.0
3  value_02   7.0   4  y   NaN   NaN
4  value_02   7.0   5  y   4.0   4.0
5  value_02   7.0   6  y   5.0   9.0
6  value_03  19.0  15  z   NaN   NaN
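Note that assign evaluates its keyword arguments left to right (pandas ≥ 0.23), which is why e can reference the freshly created d column. A minimal illustration:

```python
import pandas as pd

# assign evaluates kwargs in order, so later columns can use earlier ones
out = pd.DataFrame({'b': [1, 2, 3]}).assign(
    d=lambda t: t.b.shift(),      # d: previous row's b -> [NaN, 1, 2]
    e=lambda t: t.d.cumsum(),     # e: running sum of the new d -> [NaN, 1, 3]
)
```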

groupby and apply

def h(g):
    return g.ffill().assign(
        d=lambda d: d.b.shift(), e=lambda d: d.d.cumsum())

df.groupby('key_value', as_index=False, group_keys=False).apply(h)

You can use groupby + ffill for the group-wise filling. The other columns come from a group-wise shift and cumsum.

In general, note that many common operations have been implemented efficiently in Pandas.

g = df.groupby('key_value')

df['a'] = g['a'].ffill()
df['c'] = g['c'].ffill()
df['d'] = g['b'].shift()
df['e'] = df.groupby('key_value')['d'].cumsum()

print(df)

  key_value     a   b  c     d     e
0  value_01   1.0  10  x   NaN   NaN
1  value_01   1.0  12  x  10.0  10.0
2  value_01   1.0   7  x  12.0  22.0
3  value_02   7.0   4  y   NaN   NaN
4  value_02   7.0   5  y   4.0   4.0
5  value_02   7.0   6  y   5.0   9.0
6  value_03  19.0  15  z   NaN   NaN
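Putting the pieces together, one way to sanity-check the vectorised version against the question's sample data (the helper name below is illustrative, not from the answers):

```python
import numpy as np
import pandas as pd

# Sketch: the group-wise steps wrapped into one reusable function
def fill_groupwise(df):
    out = df.copy()
    g = out.groupby('key_value')
    out['a'] = g['a'].ffill()
    out['c'] = g['c'].ffill()
    out['d'] = g['b'].shift()                           # previous row's b, per key
    out['e'] = out.groupby('key_value')['d'].cumsum()   # running sum of those b's
    return out

df = pd.DataFrame({
    'key_value': ['value_01'] * 3 + ['value_02'] * 3 + ['value_03'],
    'a': [1, np.nan, np.nan, 7, np.nan, np.nan, 19],
    'b': [10, 12, 7, 4, 5, 6, 15],
    'c': ['x', None, None, 'y', None, None, 'z'],
})
res = fill_groupwise(df)
```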
