
PicklingError: Can't pickle <class 'ctypes.c_char_Array_X'>: attribute lookup c_char_Array_X on ctypes failed

There is a problem using ctypes structures with multiprocessing.

I can pass simple ctypes variables to functions with multiprocessing, but when I pass ctypes structures there is a problem pickling them.

Here is some code that demonstrates the problem:

import concurrent.futures
import time
from ctypes import *


def test_c_val(c_val):
    print(c_val.value)
    return c_val.value

test_int = c_int(55)
test_char = c_char(str(6).encode())
arr = [str(i).encode() for i in range(4)]
test_c_array = (c_char * len(arr))(*arr)

futures = []
with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
    futures.append(executor.submit(test_c_val, test_int))
    futures.append(executor.submit(test_c_val, test_char))
    futures.append(executor.submit(test_c_val, test_c_array))
    time.sleep(5)
    print(futures[2])
    
print(futures)
print(futures[2].exception())

How can I solve this?

ctypes pointers can point to memory of any size or location that Python knows nothing about (they may even be null), so it is unsafe for Python to attempt to pickle them, and pickle refuses to.
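You can see the asymmetry directly: a scalar like `c_int` pickles fine because its class is a named attribute of the `ctypes` module, while an array type such as `c_char_Array_4` is created on the fly and pickle cannot look it up. A minimal sketch:

```python
import ctypes
import pickle

# a simple scalar round-trips: c_int is a real attribute of ctypes
n = pickle.loads(pickle.dumps(ctypes.c_int(55)))
print(n.value)  # 55

# an array class like c_char_Array_4 is generated dynamically, so
# pickle's attribute lookup on the ctypes module fails
arr = (ctypes.c_char * 4)(b"0", b"1", b"2", b"3")
try:
    pickle.dumps(arr)
    pickling_failed = False
except Exception as exc:
    pickling_failed = True
    print(type(exc).__name__)
```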

the closest thing is to either have a shared-memory array using multiprocessing.Array, or to use a Python array.array, which can be pickled.
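For the multiprocessing.Array route, here is a minimal single-process sketch of the object itself; in real use it would be handed to a multiprocessing.Process as an argument (inherited), since it cannot be sent through executor.submit:

```python
import ctypes
import multiprocessing as mp

# a process-shared c_char buffer; unlike a plain ctypes array it lives
# in shared memory, but it must be inherited by child processes
# (e.g. passed in mp.Process(..., args=(shared_arr,))), not pickled
shared_arr = mp.Array(ctypes.c_char, b"0123")

with shared_arr.get_lock():           # the Array carries its own lock
    raw = shared_arr.get_obj().value  # get_obj() is the underlying ctypes array
print(raw)
```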

This is how it would be done using array.array: since it is a plain Python object it can be pickled, so the array itself is sent, and on the worker side its buffer address is used to build a ctypes object.

import concurrent.futures
import ctypes
import array
import time

def test_c_val(c_val):
    print(c_val.value)
    return c_val.value

def test_py_array(py_arr: array.array):
    # buffer_info() returns (address, length) of the buffer backing the array
    address, c_array_size = py_arr.buffer_info()
    c_array = (ctypes.c_char * c_array_size).from_address(address)
    print(c_array.value)
    return py_arr

if __name__ == "__main__":
    test_int = ctypes.c_int(55)
    test_char = ctypes.c_char(str(6).encode())
    arr = [str(i).encode() for i in range(4)]
    test_c_array = (ctypes.c_char * len(arr))(*arr)
    # test_py_c_array = array.array('b', b''.join(arr))
    test_py_c_array = array.array('b', test_c_array.value)
    futures = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        futures.append(executor.submit(test_c_val, test_int))
        futures.append(executor.submit(test_c_val, test_char))
        futures.append(executor.submit(test_py_array, test_py_c_array))
        time.sleep(1)
        print(futures[2])

    print(futures)
    print(futures[2].exception())

if you want to use shared memory, then you can use multiprocessing.shared_memory, which is more flexible than multiprocessing.Array. Note that you can't pickle multiprocessing.Array; unlike a Python array, it can only be inherited.
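A minimal sketch of the shared_memory route: the block is created once, and a worker only needs its name (a short, pickleable string) to attach. Here the attach happens in the same process for brevity; a worker submitted to an executor would do the same thing with the name it receives:

```python
from multiprocessing import shared_memory

def read_block(name: str, size: int) -> bytes:
    # a worker would attach to the block by name like this;
    # only the name string travels through pickle, not the buffer
    shm = shared_memory.SharedMemory(name=name)
    try:
        return bytes(shm.buf[:size])
    finally:
        shm.close()

payload = b"0123"
owner = shared_memory.SharedMemory(create=True, size=len(payload))
owner.buf[:len(payload)] = payload

result = read_block(owner.name, len(payload))
print(result)

owner.close()
owner.unlink()  # free the block once every handle is done with it
```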

alternatively there's numpy, whose arrays are serializable C arrays that are easier to work with than Python arrays.
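A sketch of that route, assuming numpy is installed (it is a third-party dependency, not stdlib): the numpy array travels through pickle, and the worker can still view its memory through ctypes without copying:

```python
import ctypes
import pickle
import numpy as np

# a numpy byte array pickles cleanly, unlike a raw ctypes array
arr = np.frombuffer(b"0123", dtype=np.uint8).copy()  # copy() makes it writable
restored = pickle.loads(pickle.dumps(arr))

# on the worker side it can be viewed as ctypes memory, zero-copy
c_view = (ctypes.c_char * restored.size).from_buffer(restored)
print(c_view.value)
```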
