
Python - Speed up converting a categorical variable to its numerical index

I need to convert a column of categorical variables in a Pandas dataframe into a numerical value that corresponds to that variable's index into an array of the column's unique categorical values (long story!). Here is a code snippet that accomplishes that:

import pandas as pd
import numpy as np

d = {'col': ["baked","beans","baked","baked","beans"]}
df = pd.DataFrame(data=d)
uniq_lab = np.unique(df['col'])

# replace each label with its index in uniq_lab, one label at a time
for lab in uniq_lab:
    df['col'].replace(lab, np.where(uniq_lab == lab)[0][0].astype(float), inplace=True)

It converts this dataframe:

    col
 0  baked
 1  beans
 2  baked
 3  baked
 4  beans

into this dataframe:

    col
 0  0.0
 1  1.0
 2  0.0
 3  0.0
 4  1.0

as expected. My problem, though, is that when I try to run similar code on a big data file, my silly little loop (the only way I could think of to do this) is slow as molasses. I was just curious whether anyone has any ideas on how to do this more efficiently. Thanks in advance for any thoughts.

Use factorize:

df['col'] = pd.factorize(df.col)[0]
print (df)
   col
0    0
1    1
2    0
3    0
4    1

Documentation
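Note that, unlike np.unique, factorize numbers the labels in order of first appearance rather than alphabetically. If the codes need to follow the sorted unique labels (as in the question's loop), factorize also takes a sort flag - a small sketch, with the float cast only added to match the question's 0.0/1.0 output:

codes, uniques = pd.factorize(df['col'], sort=True)   # uniques is the sorted label array
df['col'] = codes.astype(float)                        # cast to float to match the original output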

EDIT:

As Jeff mentioned in the comments, the best option is to convert the column to categorical, mainly because of lower memory usage:

df['col'] = df['col'].astype("category")
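With the categorical dtype, the numeric index is already available as the category codes and the label array as the categories - a small illustration of pulling them out (the variable names here are just for the example):

df['col'] = df['col'].astype("category")
codes = df['col'].cat.codes        # per-row integer index into the categories
labels = df['col'].cat.categories  # the sorted unique labels, e.g. Index(['baked', 'beans'])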

Timings

Interestingly, on a large df pandas is faster than numpy - I couldn't believe it at first. A likely explanation is that factorize builds the codes with a hash table in a single pass, while np.unique has to sort the array first.

len(df)=500k

In [29]: %timeit (a(df1))
100 loops, best of 3: 9.27 ms per loop

In [30]: %timeit (a1(df2))
100 loops, best of 3: 9.32 ms per loop

In [31]: %timeit (b(df3))
10 loops, best of 3: 24.6 ms per loop

In [32]: %timeit (b1(df4))
10 loops, best of 3: 24.6 ms per loop  

len(df)=5k

In [38]: %timeit (a(df1))
1000 loops, best of 3: 274 µs per loop

In [39]: %timeit (a1(df2))
The slowest run took 6.71 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 273 µs per loop

In [40]: %timeit (b(df3))
The slowest run took 5.15 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 295 µs per loop

In [41]: %timeit (b1(df4))
1000 loops, best of 3: 294 µs per loop

len(df)=5

In [46]: %timeit (a(df1))
1000 loops, best of 3: 206 µs per loop

In [47]: %timeit (a1(df2))
1000 loops, best of 3: 204 µs per loop

In [48]: %timeit (b(df3))
The slowest run took 6.30 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 164 µs per loop

In [49]: %timeit (b1(df4))
The slowest run took 6.44 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 164 µs per loop

Test code

d = {'col': ["baked","beans","baked","baked","beans"]}
df = pd.DataFrame(data=d)
print (df)
df = pd.concat([df]*100000).reset_index(drop=True)
#test for 5k
#df = pd.concat([df]*1000).reset_index(drop=True)


df1,df2,df3, df4 = df.copy(),df.copy(),df.copy(),df.copy()

def a(df):
    # pandas factorize, assigned directly
    df['col'] = pd.factorize(df.col)[0]
    return df

def a1(df):
    # pandas factorize, unpacked first
    idx, _ = pd.factorize(df.col)
    df['col'] = idx
    return df

def b(df):
    # numpy unique with return_inverse, assigned directly
    df['col'] = np.unique(df['col'], return_inverse=True)[1]
    return df

def b1(df):
    # numpy unique with return_inverse, unpacked first
    _, idx = np.unique(df['col'], return_inverse=True)
    df['col'] = idx
    return df

print (a(df1))    
print (a1(df2))   
print (b(df3))   
print (b1(df4))  

You can use np.unique's optional argument return_inverse to ID each string based on its position among the unique values and set those IDs in the input dataframe, like so -

_,idx = np.unique(df['col'],return_inverse=True)
df['col'] = idx

Please note that the IDs correspond to the alphabetically sorted array of unique strings. If you need to get that unique array as well, you can just replace _ with it, like so -

uniq_lab,idx = np.unique(df['col'],return_inverse=True)
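
Since idx indexes into uniq_lab, indexing back with it recovers the original column - a quick sanity check that the mapping is consistent (assuming df['col'] still holds the original strings at this point):

uniq_lab, idx = np.unique(df['col'], return_inverse=True)
original = uniq_lab[idx]   # array(['baked', 'beans', 'baked', 'baked', 'beans'], dtype=object)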

Sample run -

>>> d = {'col': ["baked","beans","baked","baked","beans"]}
>>> df = pd.DataFrame(data=d)
>>> df
     col
0  baked
1  beans
2  baked
3  baked
4  beans
>>> _,idx = np.unique(df['col'],return_inverse=True)
>>> df['col'] = idx
>>> df
   col
0    0
1    1
2    0
3    0
4    1
