
How to preprocess and load a “big data” tsv file into a python dataframe?

I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python. Naturally I am using a pandas DataFrame, though I am open to other options.

This file is several GB in size, and is not a standard tsv file: it is malformed, i.e. the rows have different numbers of columns. One row may have 25 columns, another 21.

Here is an example of the data:

Col_01: 14 .... Col_20: 25    Col_21: 23432    Col_22: 639142
Col_01: 8  .... Col_20: 25    Col_22: 25134    Col_23: 243344
Col_01: 17 .... Col_21: 75    Col_23: 79876    Col_25: 634534    Col_22: 5    Col_24: 73453
Col_01: 19 .... Col_20: 25    Col_21: 32425    Col_23: 989423
Col_01: 12 .... Col_20: 25    Col_21: 23424    Col_22: 342421    Col_23: 7    Col_24: 13424    Col_25: 67
Col_01: 3  .... Col_20: 95    Col_21: 32121    Col_25: 111231

As you can see, some of these columns are not in the correct order...

Now, I think the correct way to import this file into a dataframe is to preprocess the data so that it can be output as a dataframe with NaN values, e.g.

Col_01 .... Col_20    Col_21    Col_22    Col_23    Col_24    Col_25
8      .... 25        NaN       25134    243344   NaN      NaN
17     .... NaN       75        5        79876    73453    634534
19     .... 25        32425     NaN      989423   NaN      NaN
12     .... 25        23424     342421   7        13424    67
3      .... 95        32121     NaN      NaN      NaN      111231

To make this even more complicated, this is a very large file, several GB in size.

Normally, I try to process the data in chunks, e.g.

import pandas as pd

for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
    # place chunks into a dataframe or HDF
    pass

However, I see no way to "preprocess" the data first in chunks and then use chunks to read the data into pandas.read_table(). How would you do this? What sort of preprocessing tools are available? Perhaps sed or awk?

This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help appreciated.
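As a sketch of the kind of chunked preprocessing being asked about (the helper names `parse_line` and `frames_in_chunks` are invented for illustration, and it assumes tab-separated `Col_xx: value` fields):

```python
import pandas as pd

def parse_line(line):
    """Parse one 'Col_xx: value' row into a dict keyed by column name."""
    return {k.strip(): v.strip()
            for k, _, v in (field.partition(':')
                            for field in line.rstrip('\n').split('\t'))
            if k.strip()}

def frames_in_chunks(lines, chunksize=10**6):
    """Yield DataFrames of at most `chunksize` parsed rows each."""
    rows = []
    for line in lines:
        rows.append(parse_line(line))
        if len(rows) == chunksize:
            yield pd.DataFrame(rows)
            rows = []
    if rows:  # flush the final, possibly short, chunk
        yield pd.DataFrame(rows)
```

Each yielded chunk could then be appended to an HDF store; pandas fills missing columns with NaN when it builds each frame from the dicts.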

$ cat > pandas.awk
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc" # traversal order for for(i in a)                  
}
NR==1 {       # the header cols is in the beginning of data file
              # FORGET THIS: header cols from another file replace NR==1 with NR==FNR and see * below
    split($0,a," ")                  # mkheader a[1]=first_col ...
    for(i in a) {                    # replace with a[first_col]="" ...
        a[a[i]]
        printf "%6s%s", a[i], OFS    # output the header
        delete a[i]                  # remove a[1], a[2], ...
    }
    # next                           # FORGET THIS * next here if cols from another file UNTESTED
}
{
    gsub(/: /,"=")                   # replace key-value separator ": " with "="
    split($0,b,FS)                   # split record on FS (whitespace)
    for(i in b) {
        split(b[i],c,"=")            # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]                 # b[key]=value
    }
    for(i in a)                      # go thru headers in a[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}

Data sample ( pandas.txt ):

Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
Col_01: 14  Col_20: 25    Col_21: 23432    Col_22: 639142
Col_01: 8   Col_20: 25    Col_22: 25134    Col_23: 243344
Col_01: 17  Col_21: 75    Col_23: 79876    Col_25: 634534    Col_22: 5    Col_24: 73453
Col_01: 19  Col_20: 25    Col_21: 32425    Col_23: 989423
Col_01: 12  Col_20: 25    Col_21: 23424    Col_22: 342421    Col_23: 7    Col_24: 13424    Col_25: 67
Col_01: 3   Col_20: 95    Col_21: 32121    Col_25: 111231

$ awk -f pandas.awk pandas.txt
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
    14     25  23432 639142    NaN    NaN 
     8     25    NaN  25134 243344    NaN 
    17    NaN     75      5  79876 634534 
    19     25  32425    NaN 989423    NaN 
    12     25  23424 342421      7     67 
     3     95  32121    NaN    NaN 111231 

All needed cols should be in the data file header. It's probably not a big job to collect the headers while processing: just keep the data in arrays and print at the end, maybe in version 3.
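Once the awk pass has produced an aligned, NaN-padded table, loading it into pandas is straightforward. A minimal sketch (using `io.StringIO` with a slice of the output above standing in for a real redirected file):

```python
import io
import pandas as pd

# first three lines of the awk output shown above
awk_output = """\
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
    14     25  23432 639142    NaN    NaN
     8     25    NaN  25134 243344    NaN
"""
# sep=r"\s+" collapses the variable-width padding; "NaN" parses as missing
df = pd.read_csv(io.StringIO(awk_output), sep=r"\s+")
print(df.shape)  # (2, 6)
```

In the real pipeline you would redirect the awk output to a file and pass its path to `read_csv` instead of the StringIO buffer.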

If you read the headers from a different file ( cols.txt ) than the data file ( pandas.txt ), execute the script ( pandas.awk ):

$ awk -f pandas.awk cols.txt pandas.txt

Another version takes a separate column file as a parameter, or uses the first record. Run it either way:

awk -f pandas2.awk pandas.txt # first record as header
awk -f pandas2.awk cols.txt pandas.txt # first record from cols.txt
awk -v cols="cols.txt" -f pandas2.awk pandas.txt # read cols from cols.txt

Or even:

awk -v cols="pandas.txt" -f pandas2.awk pandas.txt # separates keys from pandas.txt for header

Code:

$ cat > pandas2.awk
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc" # traversal order for for(i in a)
    if(cols) {                           # if -v cols="column_file.txt" or even "pandas.txt"
        while ((getline line < cols) > 0) { # read it in line by line
            gsub(/: [^ ]+/,"",line)      # remove values from "key: value"
            split(line,a)                # split to temp array
            for(i in a)                  # collect keys to column array
                col[a[i]]
        }
        for(i in col)                    # output columns
            printf "%6s%s", i, OFS
        print ""
    }
}
NR==1 && cols=="" {                      # if the header cols are in the beginning of data file
                                         # if not, -v cols="column_file.txt"
    split($0,a," +")                     # split header record by spaces
    for(i in a) {
        col[a[i]]                        # set them to array col
        printf "%6s%s", a[i], OFS        # output the header
    }
    print ""
}
NR==1 {
    next
}
{
    gsub(/: /,"=")                       # replace key-value separator ": " with "="
    split($0,b,FS)                       # split record from separator FS
    for(i in b) {
        split(b[i],c,"=")                # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]                     # b[key]=value
    }
    for(i in col)                        # go thru headers in col[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}

You can do this completely, and more cleanly, in pandas.

Suppose you have two independent data frames with only one overlapping column:

>>> df1
   A  B
0  1  2
>>> df2
   B  C
1  3  4

You can use .concat to concatenate them together:

>>> pd.concat([df1, df2])
    A  B   C
0   1  2 NaN
1 NaN  3   4

You can see that NaN is created for row values that do not exist.

This can easily be applied to your example data without any preprocessing at all:

import pandas as pd

# fn holds the path to the tab-delimited data file
frames = []
with open(fn) as f_in:
    for i, line in enumerate(f_in):
        # parse the "Col_xx: value" fields of one line into a dict
        frames.append(pd.DataFrame({k.strip(): v.strip()
                      for k, _, v in (e.partition(':')
                            for e in line.split('\t'))}, index=[i]))
df = pd.concat(frames)  # concatenate once; pd.concat inside the loop is quadratic

>>> df
  Col_01 Col_20 Col_21  Col_22  Col_23 Col_24  Col_25
0     14     25  23432  639142     NaN    NaN     NaN
1      8     25    NaN   25134  243344    NaN     NaN
2     17    NaN     75       5   79876  73453  634534
3     19     25  32425     NaN  989423    NaN     NaN
4     12     25  23424  342421       7  13424      67
5      3     95  32121     NaN     NaN    NaN  111231

Alternatively, if your main issue is establishing the desired order of the columns across a multi-chunk build, just read all the column names first (not tested):

# based on the alphanumeric sort of column names of the form [ALPHA]_[NUM]
headers = set()
with open(fn) as f:
    for line in f:
        for record in line.split('\t'):
            head, _, _ = record.partition(':')
            headers.add(head)
# sort as you wish, e.g. numerically by the part after the underscore:
cols = sorted(headers, key=lambda e: int(e.partition('_')[2]))

Pandas will use the order of the list for the column order if it is given at the initial creation of the DataFrame.
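To illustrate that last point (the row dicts below are invented sample data): passing `columns=` when building the DataFrame fixes both the selection and the order of the columns, and keys missing from a row come out as NaN.

```python
import pandas as pd

rows = [{"Col_01": "14", "Col_22": "639142"},
        {"Col_01": "8", "Col_23": "243344"}]
cols = ["Col_01", "Col_22", "Col_23"]   # e.g. the sorted header list from above
df = pd.DataFrame(rows, columns=cols)   # columns come out in this exact order
print(list(df.columns))  # ['Col_01', 'Col_22', 'Col_23']
```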
