
How to read multiple CSV files faster using Python pandas

My program needs to read ~400,000 CSV files, and it takes a very long time. The code I use is:

        for file in self.files:
            size = 2048
            # Keep only 10 rows from the middle of each ~2048-row file.
            csvData = pd.read_csv(file, sep='\t', names=['acol', 'bcol'], header=None,
                                  skiprows=range(0, size // 2), skipfooter=size // 2 - 10)

            # Average the first 10 values of 'bcol'; reset s for each file.
            s = 0
            for index in range(10):
                s = s + float(csvData['bcol'][index])
            s = s / 10
            averages.append(s)

            # Extract the numeric timestamp from the file name.
            time = file.rpartition('\\')[2]
            time = int(re.search(r'\d+', time).group())
            times.append(time)

Is there a way to increase the speed?

You can use threading. I took the following code from here and modified it for your use case:

import re
from threading import Thread

import pandas as pd

times = []
averages = []

def my_func(file):
    size = 2048
    csvData = pd.read_csv(file, sep='\t', names=['acol', 'bcol'], header=None,
                          skiprows=range(0, size // 2), skipfooter=size // 2 - 10)

    # Average the first 10 values of 'bcol'.
    s = 0
    for index in range(10):
        s = s + float(csvData['bcol'][index])
    s = s / 10
    averages.append(s)

    # Extract the numeric timestamp from the file name.
    time = file.rpartition('\\')[2]
    time = int(re.search(r'\d+', time).group())
    times.append(time)

threads = []
# 'self.files' is the list of files to be read; we start one thread per file.
for file in self.files:
    process = Thread(target=my_func, args=[file])
    process.start()
    threads.append(process)

# Pause the main thread by joining all of the started threads.
# This ensures that each one has finished processing its file.
for process in threads:
    process.join()
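
Note that starting one thread per file will not scale to ~400,000 files; the OS runs out of thread resources long before that. Below is a minimal sketch of the same idea using a bounded pool via concurrent.futures.ThreadPoolExecutor. The names read_one and files and the value max_workers=16 are illustrative, and it assumes each file holds about 2048 rows, as the size variable suggests. Using nrows instead of skipfooter also lets pandas keep its faster C parser (skipfooter forces the slower Python engine).

import re
from concurrent.futures import ThreadPoolExecutor

import pandas as pd

def read_one(file):
    # Hypothetical helper: read the 10 middle rows of one file and
    # return (timestamp, average). Assumes ~2048 rows per file.
    size = 2048
    # An int skiprows plus nrows=10 reads the same 10 rows as the
    # original skiprows/skipfooter pair, but stays on the C parser.
    csvData = pd.read_csv(file, sep='\t', names=['acol', 'bcol'], header=None,
                          skiprows=size // 2, nrows=10)
    average = float(csvData['bcol'].mean())
    time = int(re.search(r'\d+', file.rpartition('\\')[2]).group())
    return time, average

files = [r'C:\data\run_001.csv']  # placeholder; pass self.files in your code

# A bounded pool of worker threads; 16 is an arbitrary starting point.
with ThreadPoolExecutor(max_workers=16) as executor:
    results = list(executor.map(read_one, files))

times = [t for t, _ in results]
averages = [a for _, a in results]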
