
Improving copying bytes from an Image

I have the following minimal code that gets the bytes from an image:

import Image

im = Image.open("kitten.png")
im_data = [pix for pixdata in im.getdata() for pix in pixdata]

This is rather slow (I have gigabytes of images to process), so how could this be sped up? I'm also unfamiliar with what exactly that code is trying to do. All my data is 1280 x 960 x 8-bit RGB, so I can ignore corner cases, etc.

(FYI, the full code is here - I've already replaced the ImageFile loop with the above Image.open().)

You could try

scipy.ndimage.imread()
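
For example, a minimal sketch of how that call could be used, assuming an older SciPy release where scipy.ndimage.imread() is still available (it was later deprecated and removed in favour of imageio):

import scipy.ndimage

# Reads the file straight into a NumPy array; for this data the shape is (960, 1280, 3).
arr = scipy.ndimage.imread("kitten.png")

# Flatten to a single run of channel bytes if a flat sequence is really needed.
flat = arr.ravel()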

If you mean speeding this up algorithmically, I can suggest accessing the files with multiple threads simultaneously (only if the steps of your processing sequence do not depend on each other).

Divide the files logically into a few sections and access each part simultaneously with threads (you have to put your operation inside a function and call it from the threads); see the sketch after the tutorial link below.

Here is a link to a tutorial about threading in Python:

threading in Python
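
A minimal sketch of that suggestion, assuming a hypothetical per-file helper process_image() and that the per-file work is independent (mostly I/O-bound reading and decoding):

from concurrent.futures import ThreadPoolExecutor

import numpy as np
from PIL import Image

def process_image(path):
    # Hypothetical per-file operation: load the image and return its raw channel bytes.
    im = Image.open(path)
    return np.asarray(im).ravel()

paths = ["kitten.png"]  # replace with the real list of image files

# Hand each file to a worker thread and collect the results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_image, paths))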

I solved my problem, I think:

>>> import numpy
>>> [pix for pixdata in im.getdata() for pix in pixdata] == \
...         numpy.ndarray.tolist(numpy.ndarray.flatten(numpy.asarray(im)))
True

This cuts down the runtime by half, and with a bit of bash magic I can run the conversion on the 56 directories in parallel.
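
For reference, the same conversion can be written more compactly with instance methods, and skipping the final tolist() keeps everything inside NumPy; a sketch assuming Pillow and NumPy are installed:

import numpy as np
from PIL import Image

im = Image.open("kitten.png")

# Same flat list of channel values as the list comprehension, written with method calls.
im_data = np.asarray(im).ravel().tolist()

# If a plain Python list is not strictly required, the per-element conversion can be
# skipped entirely by keeping the NumPy array or taking its raw bytes.
im_bytes = np.asarray(im).tobytes()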
