
Remove Columns And Create Unique Row For Each Removed Column Pandas Dataframe

This is a really tricky issue I've run into that is hammering my memory usage; here's the setup:

I have a dataframe with the following column setup:

Unique1 Unique2 Unique3 d_1 d_2 d_3..... d_2000
   A       B      C      1   4   0         100

I want to remove the d_1...d_2000 columns and instead have a unique row for each entry:

Unique1 Unique2 Unique3 d_index d_value
   A       B       C      d_1     1
   A       B       C      d_2     4
   A       B       C      d_3     0
   .
   .
   .
   A       B       C      d_2000  100

The following code gives me a two-dimensional series that can be zipped back up into a dataframe, but because I need a few working variables along the way, it quickly exhausts 32 GB of RAM on Linux (it works in a Windows environment, but is very slow):

def convert_timeseries_to_rows(row):
    d_idx = 1
    rows_to_return = []
    for day_count in row[6:]: ### d columns start from 6
        new = list(row[:6]) ### keep first 6 columns
        day_string = "d_"+str(d_idx)
        new.append(day_string)
        new.append(day_count)
        rows_to_return.append(new)
        d_idx = d_idx + 1
    return rows_to_return ### return all rows generated


two_dim_series = df.apply(convert_timeseries_to_rows, axis=1)
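
For reference, here's a minimal illustration (a toy frame of my own, assuming six leading identifier columns as in the real data; the u1..u6 names are made up) of what that apply call produces:

import pandas as pd

# Toy frame: six made-up identifier columns followed by three d_ columns
demo = pd.DataFrame([['A', 'B', 'C', 'D', 'E', 'F', 1, 4, 0]],
                    columns=['u1', 'u2', 'u3', 'u4', 'u5', 'u6',
                             'd_1', 'd_2', 'd_3'])

nested = demo.apply(convert_timeseries_to_rows, axis=1)
print(nested[0])
# [['A', 'B', 'C', 'D', 'E', 'F', 'd_1', 1],
#  ['A', 'B', 'C', 'D', 'E', 'F', 'd_2', 4],
#  ['A', 'B', 'C', 'D', 'E', 'F', 'd_3', 0]]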


data = []
columns = ['unique1', 'unique2', ..., 'date_index', 'units']  # all the unique columns, then the two new ones
for each in two_dim_series:
    for row in each:
        data.append(dict(zip(columns, row)))
data = pd.DataFrame(data)
data.to_csv('save_to_disk.csv')

Can any of the pros think of a better way to do this (in Python)?

Thanks!

Example Input:

Unique1 Unique2 Unique3 d_1 d_2 d_3
   A       B      C      1   4   0 
   D       E      F      5   9   12 

Example Output:

Unique1 Unique2 Unique3 d_index d_value
   A       B       C      d_1     1
   A       B       C      d_2     4
   A       B       C      d_3     0
   D       E       F      d_1     5
   D       E       F      d_2     9
   D       E       F      d_3     12

Pandas has a solution for this: melt

(df.melt(id_vars=['Unique1', 'Unique2', 'Unique3'],
         var_name='d_index',
         value_name='d_value')
   .sort_values('Unique1', ignore_index=True))


  Unique1 Unique2 Unique3 d_index d_value
0       A       B       C     d_1       1
1       A       B       C     d_2       4
2       A       B       C     d_3       0
3       D       E       F     d_1       5
4       D       E       F     d_2       9
5       D       E       F     d_3      12
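
Since the real frame apparently has six identifier columns and around 2,000 d_ columns, the id_vars list can be built programmatically instead of typed out, and the result written straight to disk as in the original code. A sketch, assuming the identifier columns are exactly the ones whose names don't start with 'd_':

id_cols = [c for c in df.columns if not c.startswith('d_')]  # assumption about the column naming

long_df = df.melt(id_vars=id_cols, var_name='d_index', value_name='d_value')
long_df.to_csv('save_to_disk.csv', index=False)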

I recreated the dataframe like this:

import numpy as np
import pandas as pd

n = 2000
df = pd.DataFrame(columns=['Unique' + str(i) for i in range(1, 4)]
                          + ['d_' + str(i) for i in range(1, n + 1)],
                  data=[['A', 'B', 'C'] + np.random.randint(0, 100, n).astype(str).tolist()],
                  index=[0])

Then identified the columns you're working with:

d_cols = df.columns[df.columns.str.contains('d_')]
u_cols = df.columns[df.columns.str.contains('Unique')]

Then generated a second dataframe:

# one row per d_ column, with its value taken from the single source row
df2 = pd.DataFrame({'d_index': d_cols,
                    'd_value': df[d_cols].values.flatten()})
# broadcast the identifier values onto every row
for col in u_cols:
    df2[col] = df[col][0]
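
For the single-row frame built above, this gives the same long layout that melt would produce, just with a different column order; a quick sanity check under those assumptions:

print(df2.shape)             # (2000, 5): one row per d_ column
print(df2.columns.tolist())  # ['d_index', 'd_value', 'Unique1', 'Unique2', 'Unique3']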
