
Extracting specific rows (separately) from multiple csv files and combining them into new files

I have a number of csv files. I need to extract the corresponding row from each file and save the collected rows as a new file, i.e. the first output file must contain the first row of every input file, and so on.

I have done the following:

import pandas as pd
import os
import numpy as np
data = pd.DataFrame('', columns =['ObjectID', 'SPI'], index = np.arange(1,100))
path = r'C:\Users\bikra\Desktop\Pandas'
i = 1
for files in os.listdir(path):
    if files[-4:] == '.csv': 
        for j in range(0, 10, 1):
            # print(files)
            dataset = pd.read_csv(r'C:\Users\bikra\Desktop\Pandas'+'\\'+files)
            spi1 = dataset.loc[j,'SPI'] 
            data.loc[i]['ObjectID'] = files[:]
            data.loc[i]['SPI'] = spi1
            data.to_csv(r'C:\Users\bikra\Desktop\Pandas\output\\'+str(j)+'.csv') 
            i + 1

It works well when the index (i.e. 'j') is specified manually. But when I loop, each output csv file contains only the first row. Where am I wrong?

The main bug is `i + 1`: that expression computes a value and throws it away, so `i` stays 1 and every iteration overwrites the same row. You need `i += 1`. A second problem is the chained indexing `data.loc[i]['ObjectID'] = ...`, which may assign to a temporary copy rather than the DataFrame itself; write `data.loc[i, 'ObjectID'] = ...` instead. You could also append each row as you go:

data = data.append(row)

(Note that `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0; in current pandas, collect the rows in a list and build the frame with `pd.DataFrame` or `pd.concat`.)
