
Is the JPG format good for image processing algorithms?

Most non-professional cameras (phone cameras and webcams) provide lossy JPEG images as output.

While the compression artifacts may not be noticeable to the human eye, the data loss could be critical for image processing algorithms.

If I am correct, what is the general approach you take when analyzing input images? (Please note: using an industry-standard camera may not be an option for hobbyist programmers.)

JPEG is actually an entire family of methods; there are four of them. The most common is the baseline method, based on the Discrete Cosine Transform (DCT). It simply divides the image into 8x8 blocks and computes the DCT of each block, which yields a list of coefficients. To store these coefficients efficiently, they are divided by another matrix (the quantization matrix) and rounded, so that the higher frequencies usually end up at zero. This rounding is the only lossy step in the process, and it is done so that the coefficients can be stored far more compactly than before.
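Here is a minimal sketch of that lossy step, assuming numpy and scipy are available. The 8x8 block of random pixel values is purely illustrative, and the quantization table is the standard JPEG luminance table (real encoders scale it with the quality setting):

```python
# Sketch of the lossy step in baseline JPEG: 2D DCT of one 8x8 block,
# division by a quantization matrix, and rounding.
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (roughly quality 50)
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def dct2(block):
    # 2D DCT-II with orthonormal scaling, applied along both axes
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# One 8x8 block of pixel values, level-shifted to centre around zero
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128

coeffs = dct2(block)
quantized = np.round(coeffs / Q)            # the only lossy step: rounding
reconstructed = idct2(quantized * Q) + 128  # what the decoder gets back

print("max per-pixel error:", np.abs(reconstructed - (block + 128)).max())
```

The rounded coefficients contain many zeros (especially at high frequencies), which is exactly what makes the subsequent entropy coding so effective.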

So, your question is not answered very easily. It also depends on the size of the input: if you have a sufficiently large image (say 3000x2000) stored at a relatively high quality setting, you will have little trouble with artefacts. A small image with a high compression rate might cause trouble.

Remember, though, that an image taken with a camera contains a lot of sensor noise, which by itself is probably far more troubling than the JPEG compression.

In my work I usually converted all images to the PGM format, which is a simple uncompressed format. This ensures that when I process the image in a pipeline fashion, the intermediate steps do not suffer from repeated JPEG compression.
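As an illustration of that workflow, here is a minimal sketch using Pillow (my assumption, not necessarily the tooling used above); the file names are placeholders. The JPEG is decoded exactly once, and every intermediate result is written as PGM, so no further JPEG encoding happens inside the pipeline:

```python
# Decode the camera's JPEG once, then keep intermediates in PGM
# (uncompressed greyscale) so later stages never re-encode to JPEG.
from PIL import Image

img = Image.open("camera_shot.jpg").convert("L")  # greyscale
img.save("stage0.pgm")                            # lossless from here on
```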

Keep in mind that operations such as rotating or scaling an image and then saving it again as JPG cause additional data loss on every iteration.
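A quick way to see this generation loss is to re-encode the same image a few times and measure the drift from the original. The sketch below assumes Pillow and numpy, and "camera_shot.jpg" is a placeholder path; the error typically grows over the first few re-saves:

```python
# Re-encode the same image several times at quality 75 and compare
# each generation against the originally decoded pixels.
import io
import numpy as np
from PIL import Image

original = Image.open("camera_shot.jpg").convert("L")
ref = np.asarray(original, dtype=float)

current = original
for i in range(1, 6):
    buf = io.BytesIO()
    current.save(buf, format="JPEG", quality=75)  # re-encode in memory
    buf.seek(0)
    current = Image.open(buf)
    err = np.abs(np.asarray(current, dtype=float) - ref).mean()
    print(f"generation {i}: mean abs error = {err:.2f}")
```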
