
How do I convert a flat RGBA array into a NumPy array to use in OpenCV?

Blender returns texture images as a flat array of pixel values (RGBA, with each channel stored as a single value in a flat array of size width * height * 4).
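
For example (a sketch of the indexing, assuming Blender's usual layout: channels interleaved as RGBA, rows stored bottom-to-top, values as floats in [0, 1]):

# channel c (0=R, 1=G, 2=B, 3=A) of pixel (x, y), with y counted from the bottom row
value = img.pixels[(y * img.size[0] + x) * 4 + c]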

How can I transform that into a Numpy array and then load that into an OpenCV image?

I am currently trying this:

import bpy
import cv2
import numpy as np

i = 1

for img in bpy.data.images:
    print(img)
    print(img.name, img.users, img.file_format)

    print('load start')
    img_arr = np.array(img.pixels)

    print(img_arr.shape)
    img_arr = img_arr.reshape([img.size[1], img.size[0], 4])
    print(img_arr.shape)

    print('load end')

    cv2.imwrite('out_cv2_' + str(i) + '.png', img_arr)
    i = i + 1

But I get blank images of the right size.

This is similar to this question but for OpenCV in Python.

I am aware that I could save the images to file like this:

img.filepath = 'out' + str(i) + '.png'
img.file_format = 'PNG'
img.save()

but what I'm trying to get to is an intermediate step so that I can manipulate the images in OpenCV, which I'd like to do in memory.

I've also seen this answer but unfortunately it crashes Blender.

You need to specify the dtype when you create the NumPy array. You can check yours with:

print(img_arr.dtype)

I don't know what bit-depth you have initially, but you need to have dtype=np.uint8 or dtype=np.uint16 if you want to store in a PNG.

I mean:

img_arr = np.array(img.pixels, np.uint8)

You should then look at your scaling to make sure you have a decent range with some contrast. So if your dtype is np.uint8, you want img_arr.max() to be over, say, 150 for your brightnesses to be perceptible.
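
Putting that together: Blender's img.pixels values are floats in the range [0, 1], so a straight cast to np.uint8 collapses almost everything to 0, which is consistent with the blank output. Below is a minimal sketch of one way to get the data into an OpenCV-ready array in memory, assuming 8-bit output; the vertical flip and the RGBA-to-BGRA conversion are there because Blender stores rows bottom-up in RGBA order, while OpenCV conventionally works top-down in BGR(A):

import bpy
import cv2
import numpy as np

for i, img in enumerate(bpy.data.images, start=1):
    w, h = img.size
    # img.pixels are floats in [0, 1]; scale to 0-255 before casting to uint8
    img_arr = np.array(img.pixels[:], dtype=np.float32).reshape(h, w, 4)
    img_arr = (img_arr * 255).clip(0, 255).astype(np.uint8)
    # Blender stores rows bottom-to-top; OpenCV expects top-to-bottom
    img_arr = np.flipud(img_arr)
    # Blender uses RGBA channel order; OpenCV works with BGR(A)
    img_bgr = cv2.cvtColor(img_arr, cv2.COLOR_RGBA2BGRA)
    cv2.imwrite('out_cv2_' + str(i) + '.png', img_bgr)

From there img_bgr is an ordinary NumPy array, so it can be manipulated with the usual OpenCV calls entirely in memory before (or instead of) writing it out.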
