
Dask "Column assignment doesn't support type numpy.ndarray"

I'm trying to use Dask instead of pandas since the data size I'm analyzing is quite large. I wanted to add a flag column based on several conditions.

import dask.array as da
data['Flag'] = da.where(
    (data['col1'] > 0)
    & ((data['col2'] > data['col4']) | (data['col3'] > data['col4'])),
    1, 0,
).compute()

But then I got the following error message. The same code works fine with np.where on a pandas DataFrame, but fails with dask.array.where.

Column assignment doesn't support type numpy.ndarray

If numpy works and the operation is row-wise, then one solution is to use .map_partitions, which applies a function to each underlying pandas partition:

import numpy as np

def create_flag(data):
    # Each partition is a regular pandas DataFrame, so np.where works here.
    data['Flag'] = np.where(
        (data['col1'] > 0)
        & ((data['col2'] > data['col4']) | (data['col3'] > data['col4'])),
        1, 0,
    )
    return data

ddf = ddf.map_partitions(create_flag)

You can use dask.dataframe.Series.where to achieve the same result without calling .compute(). Or better yet, you can make use of the fact that True/False values convert directly into 1/0 by promoting the type to int (see below).

Both of these options have the advantage of keeping all operations native to dask.dataframe and thereby giving the scheduler more visibility into the operation (and thus more freedom to optimize, manage memory, etc.) than non-dask operations called with map_partitions or directly assigning a computed result.

data['Flag'] = (
    (data['col1']>0)
    & ((data['col2']>data['col4']) | (data['col3']>data['col4']))
).astype(int)

