
How to process IDAT data from large PNG images chunk by chunk, in Python

First of all, I'd like to refer to question 29513549, where all respondents agree that the PNG image format was deliberately designed to allow multiple IDAT chunks for larger images, both for reading and for writing. These IDAT chunks contain the actual image data.
My question concerns the reading process. To reconstruct the pixel values, one must first decompress the data with zlib and then apply the inverse filter functions: the uncompressed data store differences from previous values where possible. So far, I have only come across examples that first join all IDAT chunks together, for example in this well-written blog post by Paul Tan, which means that all the data has to be loaded into memory. I guess that is why the documentation of the Python package pypng warns that the read method of the Reader class "may use excessive memory".
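For context, the chunk layer itself is easy to stream: every PNG chunk is a 4-byte length, a 4-byte type, the data, and a CRC, so a generator can hand chunks out one at a time without buffering the whole file. A minimal sketch (the function name `iter_chunks` is my own, and CRC verification is skipped for brevity):

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def iter_chunks(stream):
    """Yield (chunk_type, data) pairs from a PNG stream, one chunk at a time."""
    if stream.read(8) != PNG_SIGNATURE:
        raise ValueError('not a PNG file')
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        length, ctype = struct.unpack('>I4s', header)  # big-endian length + 4-byte type
        data = stream.read(length)
        stream.read(4)  # skip the CRC (not verified in this sketch)
        yield ctype, data
        if ctype == b'IEND':
            break
```

Only one chunk's payload is held in memory at a time, which is what makes the chunk-by-chunk approach below possible at all.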
I don't know much about decompression with zlib. I know it is described here, but it seems complicated, for example because the boundaries between IDAT chunks are arbitrary: it is quite possible for the terminating zlib check value to be split across two IDAT chunks. All the same, I'd like to find a way to decompress the IDAT chunks without loading all the data into memory at once, even if that means each chunk has to be read twice. If there is no way to do this, then the ability to split the image data of a PNG across multiple IDAT chunks would be useless for reading. I'm looking forward to your answers.
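As for the inverse filtering mentioned above, memory is not the obstacle: each of the five PNG filter types (None, Sub, Up, Average, Paeth) references at most the current and the previous scanline, so unfiltering needs only two rows in memory. A rough sketch of the inverse filters, assuming 8-bit samples (`bpp` is bytes per pixel; the helper name is my own):

```python
def unfilter_scanline(filter_type, line, prev, bpp):
    """Reverse one PNG scanline filter. Needs only the previous scanline
    (empty bytes for the first row), never the whole image."""
    out = bytearray(line)
    for i in range(len(out)):
        a = out[i - bpp] if i >= bpp else 0                 # left neighbour
        b = prev[i] if prev else 0                          # byte above
        c = prev[i - bpp] if prev and i >= bpp else 0       # upper-left
        if filter_type == 1:        # Sub
            out[i] = (out[i] + a) & 0xFF
        elif filter_type == 2:      # Up
            out[i] = (out[i] + b) & 0xFF
        elif filter_type == 3:      # Average
            out[i] = (out[i] + (a + b) // 2) & 0xFF
        elif filter_type == 4:      # Paeth predictor
            p = a + b - c
            pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
            pred = a if pa <= pb and pa <= pc else (b if pb <= pc else c)
            out[i] = (out[i] + pred) & 0xFF
    return bytes(out)
```

So the whole pipeline (chunks, inflation, unfiltering) can in principle run with a couple of scanlines of state, which is exactly what the question is after.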

The concatenated contents of the IDAT chunks form a single zlib stream, so you can simply feed the IDAT chunks to zlib's inflater one at a time and take the decompressed data out as it is produced.
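In Python this maps directly onto `zlib.decompressobj`, which keeps the inflater's state between calls, so an IDAT boundary falling in the middle of the zlib check value is handled automatically. A minimal sketch (the generator name is my own):

```python
import zlib

def iter_decompressed(idat_payloads):
    """Incrementally inflate an iterable of IDAT payloads, yielding
    decompressed blocks without ever holding the whole stream in memory."""
    inflator = zlib.decompressobj()
    for data in idat_payloads:
        out = inflator.decompress(data)
        if out:
            yield out
    tail = inflator.flush()  # drain anything still buffered at end of stream
    if tail:
        yield tail
```

Each yielded block can then be split into scanlines and unfiltered on the fly, so no second pass over the chunks is needed.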
