
Cleaning big data using Python

I have to clean an input data file in Python. Due to typos, a data field may contain strings instead of numbers. I would like to identify all fields which hold a string and fill them with NaN using pandas. I would also like to log the indices of those fields.

One of the crudest ways is to loop through every field and check whether it is a number, but this takes a lot of time if the data is big.
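For reference, that crude check might look like the following sketch, assuming the data has already been loaded into a DataFrame df (the answers below show faster, vectorized alternatives):

import numpy as np

# Naive approach: visit every Sales cell and test whether it parses as a number.
for i, value in enumerate(df['Sales']):
    try:
        float(value)
    except (TypeError, ValueError):
        print('row', i, 'has a non-numeric Sales value:', value)
        df.loc[i, 'Sales'] = np.nan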

My CSV file contains data similar to the following table:

Country  Count  Sales
USA         1   65000
UK          3    4000
IND         8       g
SPA         3    9000
NTH         5   80000

.... Assume that I have 60,000 such rows in the data.

Ideally I would like to identify that the IND row has an invalid value in the Sales column. Any suggestions on how to do this efficiently?

There is a na_values argument to read_csv:

na_values : list-like or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values

df = pd.read_csv('city.csv', sep=r'\s+', na_values=['g'])

In [2]: df
Out[2]:
  Country  Count  Sales
0     USA      1  65000
1      UK      3   4000
2     IND      8    NaN
3     SPA      3   9000
4     NTH      5  80000
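The dict form of na_values mentioned in the docstring scopes the replacement to particular columns; a small sketch, assuming the same city.csv:

# Treat 'g' as NaN only when it appears in the Sales column.
df = pd.read_csv('city.csv', sep=r'\s+', na_values={'Sales': ['g']})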

Using pandas.isnull, you can select only those rows with NaN in the 'Sales' column, or just the 'Country' series:

In [3]: df[pd.isnull(df['Sales'])]
Out[3]: 
  Country  Count  Sales
2     IND      8    NaN

In [4]: df[pd.isnull(df['Sales'])]['Country']
Out[4]: 
2    IND
Name: Country
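Since the question also asks to log where the bad fields are, the same boolean mask yields the row indices; a minimal sketch:

# Index labels of the rows whose Sales value could not be parsed.
bad_idx = df.index[pd.isnull(df['Sales'])]
print(list(bad_idx))  # [2] for the sample data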

If the data is already in the DataFrame, you could use apply to convert those strings which are numbers into integers, using str.isdigit:

df = pd.DataFrame({'Count': {0: 1, 1: 3, 2: 8, 3: 3, 4: 5}, 'Country': {0: 'USA', 1: 'UK', 2: 'IND', 3: 'SPA', 4: 'NTH'}, 'Sales': {0: '65000', 1: '4000', 2: 'g', 3: '9000', 4: '80000'}})

In [12]: df
Out[12]: 
  Country  Count  Sales
0     USA      1  65000
1      UK      3   4000
2     IND      8      g
3     SPA      3   9000
4     NTH      5  80000

In [13]: df['Sales'] = df['Sales'].apply(lambda x: int(x) 
                                                  if str.isdigit(x)
                                                  else np.nan)

In [14]: df
Out[14]: 
  Country  Count  Sales
0     USA      1  65000
1      UK      3   4000
2     IND      8    NaN
3     SPA      3   9000
4     NTH      5  80000
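Note that str.isdigit returns False for negative numbers and decimals, so those would also become NaN. Newer pandas versions (0.17+) provide pd.to_numeric, which does the same coercion in a single vectorized call; a sketch, assuming such a version is available:

# Vectorized: any value that fails to parse as a number becomes NaN.
df['Sales'] = pd.to_numeric(df['Sales'], errors='coerce')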

import os
import numpy as np
import pandas as pd

filename = os.path.expanduser('~/tmp/data.csv')
# names=True takes the column names from the header row; the dtype string
# maps the three columns to object, int32 and float64, so any Sales value
# that cannot be parsed as a float becomes NaN.
df = pd.DataFrame(
        np.genfromtxt(
            filename, delimiter='\t', names=True, dtype='O,<i4,<f8'))
print(df)

yields

  Country  Count  Sales
0     USA      1  65000
1      UK      3   4000
2     IND      8    NaN
3     SPA      3   9000
4     NTH      5  80000

and to find the country with NaN sales, you could compute

print(df['Country'][np.isnan(df['Sales'])])

which yields the pandas.Series:

2    IND
Name: Country
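If you also want the offending row positions for logging, numpy can pull them from the same NaN mask; a small sketch:

# Positions of the rows whose Sales value failed to parse.
bad_rows = np.flatnonzero(np.isnan(df['Sales']))
print(bad_rows)  # [2] for the sample data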

Try to convert the Sales string to an int: if it is well formed, processing goes on; if it is not, a ValueError is raised, which we catch, replacing the value with a placeholder.

bad_lines = []

with open(fname) as f:
    header = f.readline()
    for j, l in enumerate(f):
        country, count, sales = l.split()
        try:
            sales_count = int(sales)
        except ValueError:
            sales_count = 'NaN'
            bad_lines.append(j)
        # shove into your data structure
        print(country, count, sales_count)

You might need to edit the line that splits each row (your example pasted with spaces, not tabs). Replace the print line with whatever you want to do with the data. You will probably need to replace the 'NaN' string with the pandas NaN as well.
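Putting that together, a minimal sketch of the loop feeding pandas, with np.nan as the placeholder (fname is still the placeholder path from the snippet above):

import numpy as np
import pandas as pd

bad_lines = []
records = []

with open(fname) as f:
    columns = f.readline().split()  # header row
    for j, l in enumerate(f):
        country, count, sales = l.split()
        try:
            sales = int(sales)
        except ValueError:
            sales = np.nan  # the placeholder pandas understands
            bad_lines.append(j)
        records.append((country, int(count), sales))

df = pd.DataFrame.from_records(records, columns=columns)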

filename = open('file.csv')
filename.readline()  # skip the header

for line in filename:
    currentline = line.split(',')
    try:
        int(currentline[2][:-1])  # [:-1] strips the trailing newline
    except ValueError:
        print(currentline[0], currentline[2][:-1])

IND g
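To also log the row number, as the question asks, enumerate can supply it; a minimal variation of the loop above:

with open('file.csv') as f:
    f.readline()  # skip the header
    for i, line in enumerate(f):
        fields = line.rstrip('\n').split(',')
        try:
            int(fields[2])
        except ValueError:
            print(i, fields[0], fields[2])  # row index, country, bad value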

I propose to use a regex:

import re

ss = '''Country  Count  Sales
USA   ,      3  , 65000
UK    ,      3  ,  4000
IND   ,      8  ,     g
SPA   ,     ju  ,  9000
NTH   ,      5  , 80000
XSZ   ,    rob  ,    k3'''

with open('fofo.txt','w') as f:
    f.write(ss)

print(ss)
print()

delimiter = ','

regx = re.compile('(.+?(?:{0}))'
                  r'(( *\d+?)| *.+?)'
                  '( *(?:{0}))'
                  r'(( *\d+?)| *.+?)'
                  r'( *\r?\n?)$'.format(delimiter))

def READ(filepath, regx=regx):
    with open(filepath, 'r') as f:
        yield f.readline()  # pass the header through untouched
        for line in f:
            if None in regx.match(line).group(3, 6):
                g2, g3, g5, g6 = regx.match(line).group(2, 3, 5, 6)
                # right-align 'NaN' in a field as wide as the bad value
                tr = ('%%%ds' % len(g2) % 'NaN' if g3 is None else g3,
                      '%%%ds' % len(g5) % 'NaN' if g6 is None else g6)
                modified_line = regx.sub(r'\g<1>%s\g<4>%s\g<7>' % tr, line)
                print('------------------------------------------------\n'
                      '%r with aberration\n'
                      '%r modified line'
                      % (line, modified_line))
                yield modified_line
            else:
                yield line

with open('modified.txt', 'w') as g:
    g.writelines(READ('fofo.txt'))

result

Country  Count  Sales
USA   ,      3  , 65000
UK    ,      3  ,  4000
IND   ,      8  ,     g
SPA   ,     ju  ,  9000
NTH   ,      5  , 80000
XSZ   ,    rob  ,    k3

------------------------------------------------
'IND   ,      8  ,     g\n' with aberration
'IND   ,      8  ,   NaN\n' modified line
------------------------------------------------
'SPA   ,     ju  ,  9000\n' with aberration
'SPA   ,    NaN  ,  9000\n' modified line
------------------------------------------------
'XSZ   ,    rob  ,    k3' with aberration
'XSZ   ,    NaN  ,   NaN' modified line
