
Training the model gives ValueError when I add class weights

I am working on multiclass U-Net segmentation and run into a ValueError when training the model on my data. My model segments the images into 4 classes.

Model training code:

from simple_multi_unet_model import multi_unet_model #Uses softmax 

from tensorflow.keras.utils import normalize
import os
import glob
import cv2
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
import tensorflow as tf



#Resizing images, if needed
SIZE_X = 128 
SIZE_Y = 128
n_classes=4 #Number of classes for segmentation

#Capture training image info as a list
train_images = []

directory_path = '/home/Documents/Multiclass/images/'
list_of_files  = sorted( filter( os.path.isfile, glob.glob(directory_path + '*.png', recursive=True) ) )

 

for img_path in list_of_files:
        img = cv2.imread(img_path, 0)       
        img = cv2.resize(img, (SIZE_Y, SIZE_X))
        train_images.append(img)
       
#Convert list to array for machine learning processing        
train_images = np.array(train_images)

#Capture mask/label info as a list
train_masks = []
labels_path =  '/home/Documents/Multiclass/labels/'
list_of_labels = sorted( filter( os.path.isfile, glob.glob(labels_path + '*.png', recursive=True) ) )

 
for mask_path in list_of_labels:
        mask = cv2.imread(mask_path, 0)       
        mask = cv2.resize(mask, (SIZE_Y, SIZE_X), interpolation = cv2.INTER_NEAREST)  #Otherwise ground truth changes due to interpolation
        train_masks.append(mask)
        
#Convert list to array for machine learning processing          
train_masks = np.array(train_masks)

###############################################
#Encode labels... but multi dim array so need to flatten, encode and reshape
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
n, h, w = train_masks.shape
train_masks_reshaped = train_masks.reshape(-1,1)
train_masks_reshaped_encoded = labelencoder.fit_transform(train_masks_reshaped)
train_masks_encoded_original_shape = train_masks_reshaped_encoded.reshape(n, h, w)

np.unique(train_masks_encoded_original_shape)

#################################################
train_images = np.expand_dims(train_images, axis=3)
train_images = normalize(train_images, axis=1)
train_masks_input = np.expand_dims(train_masks_encoded_original_shape, axis=3)

#Create a subset of data for quick testing
#Picking 10% for testing and remaining for training
from sklearn.model_selection import train_test_split
x_train, X_test, y_train, y_test = train_test_split(train_images, train_masks_input, test_size = 0.10, random_state = 0)


print("Class values in the dataset are ... ", np.unique(y_train))  # 0 is the background/few unlabeled 

from tensorflow.keras.utils import to_categorical
train_masks_cat = to_categorical(y_train, num_classes=n_classes)
y_train_cat = train_masks_cat.reshape((y_train.shape[0], y_train.shape[1], y_train.shape[2], n_classes))



test_masks_cat = to_categorical(y_test, num_classes=n_classes)
y_test_cat = test_masks_cat.reshape((y_test.shape[0], y_test.shape[1], y_test.shape[2], n_classes))

   
###############################################################
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
                                                 np.unique(train_masks_reshaped_encoded),
                                                 train_masks_reshaped_encoded)
                                                

print("Class weights are...:", class_weights)


IMG_HEIGHT = x_train.shape[1]
IMG_WIDTH  = x_train.shape[2]
IMG_CHANNELS = x_train.shape[3]


def get_model():
    return multi_unet_model(n_classes=n_classes, IMG_HEIGHT=IMG_HEIGHT, IMG_WIDTH=IMG_WIDTH, IMG_CHANNELS=IMG_CHANNELS)

model = get_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

#If starting with pre-trained weights. 
#model.load_weights('???.hdf5')

history = model.fit(x_train, y_train_cat, 
                    batch_size = 16, 
                    verbose=1, 
                    epochs=100, 
                    validation_data=(X_test, y_test_cat), 
                    class_weight=class_weights,
                    shuffle=False)

I used the following method to define the class weights:

from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
                                                 np.unique(train_masks_reshaped_encoded),
                                                 train_masks_reshaped_encoded)
                                                

print("Class weights are...:", class_weights)

The result is class_weights: 0.276965, 13.5112, 5.80929, 6.97915

I get a ValueError when training the model. How can I resolve it? If you think my approach is not feasible, please suggest a better way to use class weights.

  File "/home/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1185, in _configure_dataset_and_inferred_steps
    if class_weight:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

My understanding of this error message is that numpy does not know whether to evaluate the array as True when any element is truthy, or only when all elements are truthy. A ValueError is therefore raised because the boolean evaluation is ambiguous in this respect.

So, when evaluating an array in a boolean context, you should use a.any() or a.all(), as suggested in the error message itself.

The error most likely occurs somewhere else in your code (not shared?) where you try to evaluate the class weights in a boolean context.
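For illustration, here is a minimal sketch (using the class_weights values reported in the question) of why evaluating a multi-element NumPy array in a boolean context, which is what Keras' internal if class_weight: check does, raises exactly this error:

import numpy as np

class_weights = np.array([0.276965, 13.5112, 5.80929, 6.97915])

try:
    if class_weights:              # ambiguous: the array has more than one element
        pass
except ValueError as e:
    print(e)                       # "The truth value of an array with more than one element is ambiguous..."

# Unambiguous alternatives suggested by the error message:
print(class_weights.any())         # True if any weight is non-zero
print(class_weights.all())         # True only if every weight is non-zero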

I had the same problem, but solved it with this!

You have to zip it into a dictionary.

Try the code below:

from sklearn.utils import class_weight

class_weights = dict(zip(np.unique(train_masks_reshaped_encoded),
                         class_weight.compute_class_weight('balanced',
                                                            np.unique(train_masks_reshaped_encoded),
                                                            train_masks_reshaped_encoded)))
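
For completeness, a sketch of how the resulting dictionary would be passed back into the model.fit call from the question. This assumes the encoded class indices are 0-3 and that your TensorFlow version accepts a class_weight dict for this target shape:

# With the weights reported in the question, the dict looks roughly like:
# {0: 0.276965, 1: 13.5112, 2: 5.80929, 3: 6.97915}

history = model.fit(x_train, y_train_cat,
                    batch_size=16,
                    verbose=1,
                    epochs=100,
                    validation_data=(X_test, y_test_cat),
                    class_weight=class_weights,   # now a dict, not a NumPy array
                    shuffle=False)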
