I am having a hard time learning how to use multiprocessing with my Python code. I am processing CSV files that are several gigabytes and tens of millions of records, on Windows, and I am running into a massive processing speed bump. I have the following code:
import numpy as np
import pandas as pd
import datetime as dt
df = pd.read_csv(r'C:...\2017_import.csv')
df['FinalActualDate'] = pd.to_datetime(df['FinalActualDate'])
df['StartDate'] = pd.to_datetime(df['StartDate'])
df['DaysToInHome'] = (df['FinalActualDate'] - df['StartDate']).abs() / np.timedelta64(1, 'D')
df.to_csv(r'C:...\2017_output4.csv', index=False)
The data is in a file that is 3.6 GB. The data looks like:
Class,OwnerCode,Vendor,Campaign,Cycle,Channel,Product,Week,FinalActualDate,State,StartDate
3,ECM,VendorA,000206,06-17,A,ProductB,Initial,2017-06-14 02:01:00,NE,06-01-17 12:00:00
3,ECM,VendorB,000106,06-17,A,ProductA,Initial,2017-06-14 00:15:00,NY,06-01-17 12:00:00
3,ECM,AID,ED-17-0002-06,06-17,B,ProductB,Secondary,2017-06-13 20:30:00,MA,06-08-17 12:00:00
3,ECM,AID,ED-17-0002-06,06-17,C,ProductA,Third,2017-06-15 02:13:00,NE,06-15-17 12:00:00
This code works on small data sets, but on the actual, large data set it takes several hours. I have tried several iterations with concurrent.futures and multiprocessing, with no success. I am so lost that it is not worth posting what I have tried. I do realize that other factors affect speed, but obtaining new hardware is not an option. Any guidance would be appreciated.
Before you go off into multiprocessing, I would consider dealing with some low-hanging fruit (which you'll want to do regardless):
Consider:
In [15]: df
Out[15]:
Class OwnerCode Vendor Campaign Cycle Channel Product \
0 3 ECM VendorA 000206 06-17 A ProductB
1 3 ECM VendorB 000106 06-17 A ProductA
2 3 ECM AID ED-17-0002-06 06-17 B ProductB
3 3 ECM AID ED-17-0002-06 06-17 C ProductA
Week FinalActualDate State StartDate
0 Initial 2017-06-14 02:01:00 NE 06-01-17 12:00:00
1 Initial 2017-06-14 00:15:00 NY 06-01-17 12:00:00
2 Secondary 2017-06-13 20:30:00 MA 06-08-17 12:00:00
3 Third 2017-06-15 02:13:00 NE 06-15-17 12:00:00
Since your date-time formats are regular, just pass the format argument. Doing a simple test:
In [18]: %timeit pd.to_datetime(df.StartDate)
1000 loops, best of 3: 866 µs per loop
In [19]: %timeit pd.to_datetime(df.StartDate, format="%m-%d-%y %H:%M:%S")
10000 loops, best of 3: 106 µs per loop
I got an 8x increase in speed. Unless you are working with well over 8 cores, this is a much greater speed-up than you would expect from parallelizing it.
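Applied to your script, the change is just the two format arguments (note that FinalActualDate and StartDate use different formats). A minimal sketch, using a couple of the sample rows inline so it is self-contained — in practice you would keep your pd.read_csv call on the real file:

```python
import io
import numpy as np
import pandas as pd

# Two sample rows from the question; replace this with
# pd.read_csv on the real 3.6 GB file.
csv_data = """Class,OwnerCode,Vendor,Campaign,Cycle,Channel,Product,Week,FinalActualDate,State,StartDate
3,ECM,VendorA,000206,06-17,A,ProductB,Initial,2017-06-14 02:01:00,NE,06-01-17 12:00:00
3,ECM,VendorB,000106,06-17,A,ProductA,Initial,2017-06-14 00:15:00,NY,06-01-17 12:00:00
"""
df = pd.read_csv(io.StringIO(csv_data))

# Explicit formats let pandas use its fast C-level parser instead of
# inferring the format element by element.
df['FinalActualDate'] = pd.to_datetime(df['FinalActualDate'],
                                       format='%Y-%m-%d %H:%M:%S')
df['StartDate'] = pd.to_datetime(df['StartDate'],
                                 format='%m-%d-%y %H:%M:%S')

# Same vectorized day-difference computation as in the original script.
df['DaysToInHome'] = ((df['FinalActualDate'] - df['StartDate']).abs()
                      / np.timedelta64(1, 'D'))

print(df[['FinalActualDate', 'StartDate', 'DaysToInHome']])
```

The heavy cost in the original script is the per-element format inference inside pd.to_datetime; everything after that (the subtraction and division) is already vectorized, so those lines stay as they were.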