How to convert a PIL Image into a numpy array?

Alright, I'm toying around with converting a PIL image object back and forth to a numpy array so I can do some faster pixel-by-pixel transformations than PIL's PixelAccess object would allow. I've figured out how to place the pixel information in a useful 3D numpy array by way of:

import numpy
from PIL import Image

pic = Image.open("foo.jpg")
# note: pic.size is (width, height); for the usual (rows, cols, 3) layout the two sizes would need to be swapped
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)

But I can't seem to figure out how to load it back into the PIL object after I've done all my awesome transforms. I'm aware of the putdata() method, but can't quite seem to get it to behave.

You're not saying how exactly putdata() is not behaving. I'm assuming you're doing

>>> pic.putdata(a)
Traceback (most recent call last):
  File "...blablabla.../PIL/Image.py", line 1185, in putdata
    self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple

This is because putdata expects a sequence of tuples and you're giving it a numpy array. This

>>> data = [tuple(pixel) for pixel in pix.reshape(-1, 3)]  # flatten to a sequence of (R, G, B) tuples
>>> pic.putdata(data)

will work, but it is very slow.

As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply

>>> pix = numpy.array(pic)

although the resulting array is in a different format than yours (a 3-d array of rows/columns/rgb in this case).

Then, after you make your changes to the array, you should be able to either do pic.putdata(pix) or create a new image with Image.fromarray(pix).
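
As a minimal sketch of that round trip (foo.jpg and the colour inversion are just placeholders for your own image and transform):

import numpy
from PIL import Image

pic = Image.open("foo.jpg")
pix = numpy.array(pic)              # shape (height, width, 3), dtype uint8

pix = 255 - pix                     # placeholder transform: invert the colours

Image.fromarray(pix).save("foo_out.jpg")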

Open I as an array:

>>> import numpy
>>> import PIL.Image
>>> I = numpy.asarray(PIL.Image.open('test.jpg'))

Do some stuff to I, then convert it back to an image:

>>> im = PIL.Image.fromarray(numpy.uint8(I))

Source: Filter numpy images with FFT, Python

If you want to do it explicitly for some reason, there are pil2array() and array2pil() functions using getdata() on this page, in correlation.zip.
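
Those helpers are not reproduced here, but a rough sketch of what getdata()-based conversion functions for RGB images might look like (the names follow the ones mentioned above; this is an approximation, not the code from correlation.zip):

import numpy
from PIL import Image

def pil2array(img):
    # getdata() yields a flat sequence of (R, G, B) tuples; reshape to (height, width, 3)
    return numpy.array(img.getdata(), dtype=numpy.uint8).reshape(img.size[1], img.size[0], 3)

def array2pil(arr):
    # putdata() wants a flat sequence of pixel tuples, so flatten the array back out
    img = Image.new("RGB", (arr.shape[1], arr.shape[0]))
    img.putdata([tuple(int(v) for v in px) for px in arr.reshape(-1, 3)])
    return img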

I am using Pillow 4.1.1 (the successor of PIL) in Python 3.5. The conversion between Pillow and numpy is straightforward.

from PIL import Image
import numpy as np
im = Image.open('1.jpg')
im2arr = np.array(im) # im2arr.shape: height x width x channel
arr2im = Image.fromarray(im2arr)

One thing that needs noticing is that Pillow-style im is column-major while numpy-style im2arr is row-major. However, the function Image.fromarray already takes this into consideration. That is, arr2im.size == im.size and arr2im.mode == im.mode in the above example.

We should take care of the HxWxC data format when processing the transformed numpy arrays, e.g. do the transform im2arr = np.rollaxis(im2arr, 2, 0) or im2arr = np.transpose(im2arr, (2, 0, 1)) to get the CxHxW format.
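
A small sketch of that axis shuffle and its inverse (reusing the 1.jpg example above):

import numpy as np
from PIL import Image

im2arr = np.array(Image.open('1.jpg'))                # HxWxC, as produced above
chw = np.transpose(im2arr, (2, 0, 1))                 # HxWxC -> CxHxW (channels first)
hwc = np.transpose(chw, (1, 2, 0))                    # CxHxW -> back to HxWxC
arr2im = Image.fromarray(np.ascontiguousarray(hwc))   # fromarray wants an HxWxC array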

You need to convert your image to a numpy array this way:

import numpy
import PIL

img = PIL.Image.open("foo.jpg").convert("L")   # "L" mode = 8-bit grayscale
imgarr = numpy.array(img)                      # 2-D array of shape (height, width)

Convert Numpy to PIL image and PIL to Numpy

import numpy as np
from PIL import Image

def pilToNumpy(img):
    # PIL.Image -> numpy array (HxWxC for RGB images)
    return np.array(img)

def NumpyToPil(img):
    # numpy array -> PIL.Image
    return Image.fromarray(img)
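
A quick round-trip check with these helpers (assuming an RGB file named 1.jpg, as in the earlier Pillow example, and the imports above):

img = Image.open('1.jpg')
arr = pilToNumpy(img)          # PIL -> numpy, e.g. shape (height, width, 3)
img2 = NumpyToPil(arr)         # numpy -> PIL
assert img2.size == img.size and img2.mode == img.mode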

The example I have used today:

import PIL
import numpy
from PIL import Image

def resize_image(numpy_array_image, new_height):
    # convert numpy array image to PIL.Image
    image = Image.fromarray(numpy.uint8(numpy_array_image))
    old_width = float(image.size[0])
    old_height = float(image.size[1])
    ratio = float(new_height) / old_height
    new_width = int(old_width * ratio)
    image = image.resize((new_width, new_height), PIL.Image.ANTIALIAS)
    # convert PIL.Image back into a numpy array
    return numpy.array(image)
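
A hypothetical call, assuming a file photo.jpg exists (the file name is just an illustration):

original = numpy.array(Image.open("photo.jpg"))     # load the image as a numpy array
resized = resize_image(original, new_height=240)    # width is scaled to preserve the aspect ratio
print(resized.shape)                                # (240, scaled_width, 3)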

If your image is stored in a Blob format (i.e. in a database) you can use the same technique explained by Billal Begueradj to convert your image from Blobs to a byte array.

In my case, my images were stored in a blob column in a db table:

def select_all_X_values(conn):
    # fetch every image blob stored in the PiecesTable
    cur = conn.cursor()
    cur.execute("SELECT ImageData from PiecesTable")
    rows = cur.fetchall()
    return rows

I then created a helper function to change my dataset into np.array:

from io import BytesIO
from PIL import Image
import numpy as np

def convertToByteIO(imagesArray):
    """
    Converts an array of image blobs (bytes) into a list of numpy image arrays
    """
    imagesList = []

    for i in range(len(imagesArray)):
        # wrap the raw bytes in a file-like object so PIL can decode them
        img = Image.open(BytesIO(imagesArray[i])).convert("RGB")
        imagesList.insert(i, np.array(img))

    return imagesList

X_dataset = select_all_X_values(conn)
imagesList = convertToByteIO(np.array(X_dataset))

After this, I was able to use the byteArrays in my Neural Network.

import numpy as np
import matplotlib.pyplot as plt

plt.imshow(imagesList[0])      # show the first decoded image from the list built above

def imshow(img):
    img = img / 2 + 0.5                          # unnormalize from [-1, 1] back to [0, 1]
    npimg = img.numpy()                          # tensor -> numpy array
    plt.imshow(np.transpose(npimg, (1, 2, 0)))   # CxHxW -> HxWxC for matplotlib
    plt.show()

You can transform the image into a numpy array by passing it through the numpy() function after squishing out the features (unnormalization), as in the imshow helper above.
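
A hypothetical call (torch and the random tensor are assumptions purely for illustration; any object with a .numpy() method, CxHxW layout and values in [-1, 1] would do):

import torch

fake_img = torch.rand(3, 32, 32) * 2 - 1   # made-up CxHxW tensor normalized to [-1, 1]
imshow(fake_img)                           # unnormalizes, converts to HxWxC and displays it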
