
Different length of array when cv2.imread receives cv2.IMREAD_GRAYSCALE as argument

import os

import cv2
from tqdm import tqdm

Training_Data = []
IMG_SIZE = 100

def build():
    # Raw string so the backslashes in the Windows path are not treated as escapes.
    directory = r"D:\projects\Machine_learning\Dog_Cat\PetImages"
    CATEGORIES = ["Cat", "Dog"]
    for category in CATEGORIES:
        path = os.path.join(directory, category)
        class_num = CATEGORIES.index(category)
        for img in tqdm(os.listdir(path)):
            try:
                img_array = cv2.imread(
                    os.path.join(path, img),
                    cv2.IMREAD_GRAYSCALE
                )
                # resize takes the image and the target size as separate arguments
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                Training_Data.append([new_array, class_num])
            except Exception:
                pass  # skip unreadable or corrupt images

When I don't pass cv2.IMREAD_GRAYSCALE, it gives an array of a different length:

img_array=cv2.imread(os.path.join(path,img))

X = 74598

Y = 24886

Why does it append three times more elements to Training_Data when cv2.IMREAD_GRAYSCALE is not used?

By default, when you don't pass cv2.IMREAD_GRAYSCALE as an argument, cv2.imread reads your image with three channels (note that OpenCV loads it in BGR channel order, not RGB). When you pass cv2.IMREAD_GRAYSCALE, the amount of data per pixel changes: you only need 1 × 8 bits per pixel instead of 3 × 8.

If you want, you can check the image's depth and channels (in the Python API, inspect img_array.dtype and img_array.shape; the C++ Mat class has depth() and channels() methods for this); it might help you better understand what is happening.
