Good day,
we're getting CSV exports from one of our clients that look something like this:
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
1 abc object_1 12 none none none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
2 def object_2 7 object_3 19 none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
3 ghi object_4 25 none none none none
Now I really only care about the object/amount pairs (object name and amount). In each data set the maximum number of pairs is always the same, but they are filled at random positions. My question: is it possible to load them all into a dataframe and convert them into something like this:
object | amount
object_1 12
object_2 7
object_3 19
object_4 25
Loading all these CSV exports into a single dataframe isn't the problem, but does pandas contain a solution for this kind of problem?
Thanks for all your help!
First concat all the csvs, and then use pd.wide_to_long:
import numpy as np
import pandas as pd

csv_paths = ["your_csv_paths..."]
df = pd.concat([pd.read_csv(p) for p in csv_paths]).replace("none", np.nan)
print(pd.wide_to_long(df, stubnames=["object", "amount"],
                      i=["id", "name"], j="Hi", suffix=r"\w*",
                      sep="_").dropna())
object amount
id name Hi
1 abc a object_1 12
2 def a object_2 7
b object_3 19
3 ghi a object_4 25
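If the goal is the exact two-column table from the question, the same wide_to_long call can be followed by reset_index() and a column selection. A minimal sketch (the input frame is rebuilt by hand here instead of read from the CSVs, with the 'none' slots already replaced by NaN):

```python
import numpy as np
import pandas as pd

# the question's three one-row exports, already concatenated into one frame
df = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["abc", "def", "ghi"],
    "object_a": ["object_1", "object_2", "object_4"],
    "amount_a": [12, 7, 25],
    "object_b": [np.nan, "object_3", np.nan],
    "amount_b": [np.nan, 19, np.nan],
    "object_c": [np.nan, np.nan, np.nan],
    "amount_c": [np.nan, np.nan, np.nan],
})

# reshape wide -> long, drop the empty slots, keep only the two wanted columns
long_df = (pd.wide_to_long(df, stubnames=["object", "amount"],
                           i=["id", "name"], j="slot",
                           suffix=r"\w*", sep="_")
           .dropna()
           .reset_index()[["object", "amount"]])
print(long_df)
```

The i columns ("id", "name") must uniquely identify each input row, otherwise wide_to_long raises an error.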
parser.py
import pandas as pd

df = pd.read_csv('test.csv')

# each (object, amount) column pair that may hold a filled slot
fields = (
    ('object_a', 'amount_a'),
    ('object_b', 'amount_b'),
    ('object_c', 'amount_c'),
)

print(df, '\n')

newDf = pd.DataFrame(columns=('object', 'amount'))
for idx, row in df.iterrows():
    for objectCol, amountCol in fields:
        if row[objectCol] != 'none':
            newDf.loc[len(newDf)] = (row[objectCol], row[amountCol])

print(newDf, '\n')
test.csv
id,name,object_a,amount_a,object_b,amount_b,object_c,amount_c
1,abc,object_1,12,none,none,none,none
1,abc,object_2,15,object_3,42,none,none
1,abc,none,none,none,none,object_4,16
output
id name object_a amount_a object_b amount_b object_c amount_c
0 1 abc object_1 12 none none none none
1 1 abc object_2 15 object_3 42 none none
2 1 abc none none none none object_4 16
object amount
0 object_1 12
1 object_2 15
2 object_3 42
3 object_4 16
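Since iterating with iterrows() gets slow on larger frames, the same pairing can also be done without an explicit loop by stacking each (object, amount) column pair with pd.concat. A sketch using the same data as test.csv above (the frame is built inline instead of read from disk, so the amounts stay strings just as read_csv would leave them in a column that also contains 'none'):

```python
import pandas as pd

# same shape as test.csv above; 'none' marks empty slots
df = pd.DataFrame({
    "id": [1, 1, 1],
    "name": ["abc", "abc", "abc"],
    "object_a": ["object_1", "object_2", "none"],
    "amount_a": ["12", "15", "none"],
    "object_b": ["none", "object_3", "none"],
    "amount_b": ["none", "42", "none"],
    "object_c": ["none", "none", "object_4"],
    "amount_c": ["none", "none", "16"],
})

pairs = [("object_a", "amount_a"),
         ("object_b", "amount_b"),
         ("object_c", "amount_c")]

# stack each (object, amount) column pair, then drop the 'none' placeholders
new_df = pd.concat(
    [df[[o, a]].rename(columns={o: "object", a: "amount"}) for o, a in pairs],
    ignore_index=True)
new_df = new_df[new_df["object"] != "none"].reset_index(drop=True)
print(new_df)
```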
It's maybe not the best way, but if all the .csv files contain only one line, you can do it like this:
import pandas as pd

def append_df(df, result_df):
    # collect every filled (object, amount) pair from a one-row dataframe
    rows = []
    for column in df.columns:
        if column.startswith('object_'):
            if df[column].values[0] != 'none':
                suffix = column.replace('object_', '')
                amount_col = 'amount_' + suffix
                rows.append({'object': df[column].values[0],
                             'amount': df[amount_col].values[0]})
    # DataFrame.append was removed in pandas 2.0, so use pd.concat instead
    return pd.concat([result_df, pd.DataFrame(rows)], ignore_index=True)

result_df = pd.DataFrame()

data = {'id': [1], 'name': ['abc'], 'object_a': ['Obj1'], 'amount_a': [17],
        'object_b': ['none'], 'amount_b': ['none'], 'object_c': ['none'], 'amount_c': ['none']}
result_df = append_df(pd.DataFrame(data), result_df)

data = {'id': [2], 'name': ['def'], 'object_a': ['Obj2'], 'amount_a': [24],
        'object_b': ['Obj3'], 'amount_b': [18], 'object_c': ['none'], 'amount_c': ['none']}
result_df = append_df(pd.DataFrame(data), result_df)

data = {'id': [3], 'name': ['ghi'], 'object_a': ['Obj4'], 'amount_a': [40],
        'object_b': ['none'], 'amount_b': ['none'], 'object_c': ['Obj5'], 'amount_c': [70]}
result_df = append_df(pd.DataFrame(data), result_df)

# reorder columns
result_df = result_df[['object', 'amount']]
print(result_df)
result:
  object amount
0   Obj1     17
1   Obj2     24
2   Obj3     18
3   Obj4     40
4   Obj5     70
Here is an approach that uses pd.read_fwf() to read the fixed-width file. The delimiter positions are found programmatically. The wide_to_long() call presented by @HenryYik is also used.
# original data
from io import StringIO
import pandas as pd
data = '''id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
1 abc object_1 12 none none none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
2 def object_2 7 object_3 19 none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
3 ghi object_4 25 none none none none
'''
# get location of delimiters '|' from first line of file
first_line = next(StringIO(data)).rstrip('\n')
delimiter_pos = (
[-1] + # we will add 1 to this, to get 'real' starting location
[idx for idx, c in enumerate(first_line) if c == '|'] +
[len(first_line)])
# convert delimiter positions to start/end positions for each field
# zip() terminates when the shortest sequence is exhausted
colspecs = [ (start + 1, end)
for start, end in zip(delimiter_pos, delimiter_pos[1:])]
# import fixed width file
df = pd.read_fwf(StringIO(data), colspecs=colspecs)
# drop repeated header rows
df = df[ df['id'] != df.columns[0] ]
# convert wide to long
df = pd.wide_to_long(
    df, stubnames=['object', 'amount'],
    i=['id', 'name'], j='group',
    suffix=r'\w*', sep='_').reset_index()
# drop rows with no info
mask = (df['object'] != 'none') & (df['amount'] != 'none')
t = df.loc[mask, ['object', 'amount']].set_index('object')
print(t)
amount
object
object_1 12
object_2 7
object_3 19
object_4 25
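One small refinement worth noting: because the empty slots are the literal string 'none', the amount columns come in as strings. If the exports are plain comma-separated files (like test.csv above), passing na_values=["none"] to read_csv turns those slots into NaN at read time, so the amounts parse as numbers and no replace() step is needed. A sketch:

```python
from io import StringIO
import pandas as pd

# hypothetical comma-separated version of the exports
csv_data = """id,name,object_a,amount_a,object_b,amount_b,object_c,amount_c
1,abc,object_1,12,none,none,none,none
2,def,object_2,7,object_3,19,none,none
"""

# treating 'none' as NaN at read time lets the amount columns parse as numeric
df = pd.read_csv(StringIO(csv_data), na_values=["none"])

long_df = (pd.wide_to_long(df, stubnames=["object", "amount"],
                           i=["id", "name"], j="slot",
                           suffix=r"\w*", sep="_")
           .dropna()
           .reset_index()[["object", "amount"]])
print(long_df.dtypes)
```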