
What is the difference between Binning and sub-sampling in Image Signal Processing?

As far as I know, a CMOS image sensor's ISP (Image Signal Processor) provides several functions. Specifically, I'd like to know the difference between binning and sub-sampling. I think they share the same purpose of reducing image size, but I'm not sure why both functions exist. What is each one for?

Binning and sub-sampling both reduce the image size, as you suspected, but they focus on different things. Let's tackle each one separately.

Binning

Binning in image processing deals primarily with quantization. The closest analogue I can think of is what is known as data binning. Basically, consider breaking up your image into distinct (non-overlapping) M x N tiles, where M and N are the number of rows and columns of a tile, and both should be much smaller than the number of rows and columns of the image.

Within each M x N tile, all of the pixels get replaced with a representative colour. This representative colour can be calculated in many ways; the average is a popular choice. Binning is performed primarily as a data pre-processing technique to reduce the effects of minor observation errors. It effectively reduces the amount of information that represents the image, and so it reduces the image size by reducing the number of unique colours used to represent it.

In addition, binning may also reduce the effect of CMOS sensor noise on the final processed image, but at the cost of a lower dynamic range of colours.
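
As an illustration, here is a minimal NumPy sketch of the tile-averaging idea described above. The function name, the use of the mean as the representative colour, and the divisibility assumption are my own choices for the example; a real ISP does this in hardware.

```python
import numpy as np

def bin_image(img, m, n):
    """Replace each non-overlapping m x n tile with its average colour.
    The output keeps the original dimensions, as described above.
    Hypothetical helper for illustration; assumes the image height and
    width are divisible by m and n."""
    h, w = img.shape[:2]
    out = img.astype(np.float64)
    for r in range(0, h, m):
        for c in range(0, w, n):
            tile = img[r:r+m, c:c+n]
            # Average over the tile (per channel if the image is colour)
            out[r:r+m, c:c+n] = tile.mean(axis=(0, 1))
    return out.astype(img.dtype)

# Example: 2x2 binning of a random 8-bit grayscale image
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
binned = bin_image(img, 2, 2)
print(img.shape, binned.shape)  # both (480, 640): same size, fewer unique values
```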

Sub-sampling

Sub-sampling in image processing mostly deals with image resizing, also called image scaling. The goal is to take an image and reduce its dimensions so that you get a smaller image as a result. Binning keeps the image the same size (i.e. the same dimensions as the original) while reducing the number of colours, which ultimately reduces the amount of space the image takes up; sub-sampling reduces the image size by removing information altogether. Usually, when you subsample, you also interpolate or smooth the image first so that you reduce aliasing.
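
Here is a minimal sketch of "smooth, then decimate". It assumes a single-channel image and uses SciPy's gaussian_filter as the anti-aliasing filter; the helper name and the choice of sigma are assumptions for the example, not a prescribed method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subsample(img, factor):
    """Low-pass filter and then keep every `factor`-th pixel to reduce aliasing.
    A rough sketch for a 2-D (grayscale) image; real resizers use
    better-designed anti-aliasing filters."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma=factor / 2.0)
    return smoothed[::factor, ::factor].astype(img.dtype)

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
small = subsample(img, 2)
print(small.shape)  # (240, 320): unlike binning, the dimensions themselves shrink
```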

Sub-sampling has another application in video processing, especially in MPEG, where video is encoded in YCbCr. Y is the luminance while Cb and Cr are the chrominance pair. We notice changes in luminance more readily than changes in chrominance, so the chrominance is subsampled to reduce the amount of space the video takes up. Specifically, the human visual system has poorer acuity for colour information than for luminance / intensity. Usually, the chrominance planes are filtered and then subsampled to 1/2 or even 1/4 of the resolution of the luminance. Even with a rather aggressive subsampling rate, we don't notice any difference in perceived image quality.
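
As a rough sketch of this idea (not the exact filter any particular codec uses), 4:2:0-style chroma subsampling can be illustrated by averaging each 2x2 block of the chroma planes while keeping the luma plane at full resolution. The helper names below are hypothetical and assume even plane dimensions.

```python
import numpy as np

def chroma_subsample_420(y, cb, cr):
    """Keep full-resolution luma (Y); average each 2x2 block of the
    chroma planes (Cb, Cr). Illustrative only."""
    def pool2x2(plane):
        p = plane.astype(np.float64)
        return ((p[0::2, 0::2] + p[0::2, 1::2] +
                 p[1::2, 0::2] + p[1::2, 1::2]) / 4.0).astype(plane.dtype)
    return y, pool2x2(cb), pool2x2(cr)

y  = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
cb = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
cr = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
y2, cb2, cr2 = chroma_subsample_420(y, cb, cr)
print(y2.shape, cb2.shape)  # (480, 640) (240, 320): each chroma plane carries 1/4 the samples
```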


This is obviously a rather rough introduction to the differences between the two, but I hope it gives you enough for your purposes.

Good luck!
