
Can pandas SparseSeries store values in the float16 dtype?

The reason why I want to use a smaller data type in the sparse pandas containers is to reduce memory usage. This is relevant when working with data that originally uses bool (e.g. from get_dummies) or small numeric dtypes (e.g. int8), which are all converted to float64 in sparse containers.

DataFrame creation

The provided example uses a modest 20k x 145 dataframe. In practice I'm working with dataframes on the order of 1e6 x 5e3.

In []: bool_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: bool(145)
memory usage: 2.7 MB

In []: bool_df.memory_usage(index=False).sum()
Out[]: 2878105

In []: bool_df.values.itemsize
Out[]: 1
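The dense figure follows directly from the shape and the itemsize: each bool element occupies one byte, so a quick sanity check with plain arithmetic (the variable names below are just illustrative) reproduces the reported total:

```python
import numpy as np

n_rows, n_cols = 19849, 145          # shape of bool_df above
itemsize = np.dtype(bool).itemsize   # 1 byte per bool element

dense_bytes = n_rows * n_cols * itemsize
print(dense_bytes)  # → 2878105, matching bool_df.memory_usage(index=False).sum()
```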

A sparse version of this dataframe needs less memory, but is still much larger than needed, given the original dtype.

In []: sparse_df = bool_df.to_sparse(fill_value=False)

In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float64(145)
memory usage: 1.1 MB

In []: sparse_df.memory_usage(index=False).sum()
Out[]: 1143456

In []: sparse_df.values.itemsize
Out[]: 8

Even though this data is fairly sparse, the dtype conversion from bool to float64 causes non-fill values to take up 8x more space.

In []: sparse_df.memory_usage(index=False).describe()
Out[]:
count      145.000000
mean      7885.903448
std      17343.762402
min          8.000000
25%        640.000000
50%       1888.000000
75%       4440.000000
max      84688.000000

Given the sparsity of the data, one would hope for a more drastic reduction in memory size:

In []: sparse_df.density
Out[]: 0.04966184346992205
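The reported density is consistent with the byte counts above: the float64 sparse storage holds 1,143,456 bytes at 8 bytes per stored value, and dividing the resulting count of non-fill values by the total number of cells recovers the same ratio. This is plain arithmetic, no pandas required:

```python
sparse_bytes = 1143456             # float64 sparse storage from above
stored_values = sparse_bytes // 8  # 8 bytes per float64 → 142932 non-fill values
total_cells = 19849 * 145          # dense shape of the dataframe

density = stored_values / total_cells
print(density)  # ≈ 0.0497, matching sparse_df.density above
```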

Memory footprint of underlying storage

The columns of SparseDataFrame are SparseSeries, which use SparseArray as a wrapper for the underlying numpy.ndarray storage. The number of bytes that are used by the sparse dataframe can (also) be computed directly from these ndarrays:

In []: col64_nbytes = [
.....:     sparse_df[col].values.sp_values.nbytes
.....:     for col in sparse_df
.....: ]

In []: sum(col64_nbytes)
Out[]: 1143456

The ndarrays can be converted to use smaller floats, which allows one to calculate how much memory the dataframe would need when using e.g. float16s. This would result in a 4x smaller dataframe, as one might expect.

In []: col16_nbytes = [
.....:     sparse_df[col].values.sp_values.astype('float16').nbytes
.....:     for col in sparse_df
.....: ]

In []: sum(col16_nbytes)
Out[]: 285864
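The 4x factor is exact: only the per-item width changes (8 bytes down to 2), while the number of stored values stays the same, so the two totals above differ by exactly that ratio:

```python
float64_bytes = 1143456  # sparse storage at 8 bytes per value (from above)
float16_bytes = 285864   # sparse storage at 2 bytes per value

print(float64_bytes / float16_bytes)  # → 4.0
```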

By using the more appropriate dtype, the memory usage can be reduced to 10% of the dense version, whereas the float64 sparse dataframe reduces to 40%. For my data, this could make the difference between needing 20 GB and 5 GB of available memory.

In []: sum(col64_nbytes) / bool_df.memory_usage(index=False).sum()
Out[]: 0.3972947477593764

In []: sum(col16_nbytes) / bool_df.memory_usage(index=False).sum()
Out[]: 0.0993236869398441

Issue

Unfortunately, dtype conversion of sparse containers has not been implemented in pandas:

In []: sparse_df.astype('float16')
---------------------------------------------------
[...]/pandas/sparse/frame.py in astype(self, dtype)
    245
    246     def astype(self, dtype):
--> 247         raise NotImplementedError
    248
    249     def copy(self, deep=True):

NotImplementedError:

How can the SparseSeries in a SparseDataFrame be converted to use the numpy.float16 data type, or another dtype that uses fewer than 64 bits per item, instead of the default numpy.float64?

The SparseArray constructor can be used to convert the dtype of its underlying ndarray. To convert all sparse series in a dataframe, one can iterate over the df's series, convert their arrays, and replace the series with converted versions.

import pandas as pd
import numpy as np

def convert_sparse_series_dtype(sparse_series, dtype):
    dtype = np.dtype(dtype)
    if 'float' not in str(dtype):
        raise TypeError('Sparse containers only support float dtypes')

    # The SparseArray constructor performs the actual conversion
    # of the underlying ndarray's dtype.
    sparse_array = sparse_series.values
    converted_sp_array = pd.SparseArray(sparse_array, dtype=dtype)

    converted_sp_series = pd.SparseSeries(converted_sp_array)
    return converted_sp_series


def convert_sparse_columns_dtype(sparse_dataframe, dtype):
    # Replace each sparse column with a converted copy, in place.
    for col_name in sparse_dataframe:
        if isinstance(sparse_dataframe[col_name], pd.SparseSeries):
            sparse_dataframe.loc[:, col_name] = convert_sparse_series_dtype(
                sparse_dataframe[col_name], dtype
            )

This achieves the stated purpose of reducing the sparse dataframe's memory footprint:

In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float64(145)
memory usage: 1.1 MB

In []: convert_sparse_columns_dtype(sparse_df, 'float16')

In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float16(145)
memory usage: 279.2 KB

In []: bool_df.equals(sparse_df.to_dense().astype('bool'))
Out[]: True

It is, however, a somewhat lousy solution, because the converted dataframe behaves unpredictably when it interacts with other dataframes. For instance, when converted sparse dataframes are concatenated with other dataframes, all contained series become dense series. This is not the case for unconverted sparse dataframes, which remain sparse series in the resulting dataframe.
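As a side note for readers on a later pandas release (0.24 and up, where the subclassed SparseSeries/SparseDataFrame were replaced by the SparseDtype extension type), this conversion is supported directly through astype. A minimal sketch, assuming such a version:

```python
import pandas as pd

# Build a sparse float64 series (modern API: SparseDtype instead of SparseSeries).
s = pd.Series([0.0, 1.0, 0.0, 2.0]).astype(pd.SparseDtype('float64', fill_value=0.0))

# astype with a narrower SparseDtype converts the stored values,
# which is exactly the operation that raised NotImplementedError above.
s16 = s.astype(pd.SparseDtype('float16', fill_value=0.0))
print(s16.dtype)  # Sparse[float16, 0.0]
```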
