
InvalidArgumentError: required broadcastable shapes at loc(unknown)

Background

I am completely new to Python and machine learning. I just tried to set up a UNet based on code I found on the internet and want to adapt it, bit by bit, to the case I am working on. When trying to .fit the UNet to the training data, I received the following error:

InvalidArgumentError:  required broadcastable shapes at loc(unknown)
     [[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]

When I searched for it, I got plenty of results, but most of them referred to different errors.

What does it mean, and, more importantly, how can I fix it?

Code leading to the error

The context of the error is as follows: I want to segment images and label the different classes. I set up the directories "trn", "tst", and "val" for training, test, and validation data. The dir_dat() function applies os.path.join() to get the full path to the respective data set. Each of the three folders has subdirectories for each class, labeled with integers. In each of these folders there are some .tif images for the corresponding class.
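For reference, a minimal sketch of what dir_dat() presumably looks like; the base path is an assumption for illustration:

import os

def dir_dat(subset):
    # assumed helper: join a base data directory with the subset name ("trn", "tst", "val")
    return os.path.join("path/to/data", subset)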

I defined the following image data generators (the training data is sparse, hence the augmentation):

import numpy as np
import tensorflow.keras as ks

classes = np.array([ 0,  2,  4,  6,  8, 11, 16, 21, 29, 30, 38, 39, 51])
bs = 15 # batch size

augGen = ks.preprocessing.image.ImageDataGenerator(rotation_range = 365,
                                                   width_shift_range = 0.05,
                                                   height_shift_range = 0.05,
                                                   horizontal_flip = True,
                                                   vertical_flip = True,
                                                   fill_mode = "nearest") \
    .flow_from_directory(directory = dir_dat("trn"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs, seed = 42)
    
tst_batches = ks.preprocessing.image.ImageDataGenerator() \
    .flow_from_directory(directory = dir_dat("tst"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs, shuffle = False)

val_batches = ks.preprocessing.image.ImageDataGenerator() \
    .flow_from_directory(directory = dir_dat("val"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs)

Then I built the UNet based on this example. Here I changed a few parameters to adapt the UNet to the situation (multiple classes), namely the activation of the last layer and the loss function:

layer_in = ks.layers.Input(shape = (imgr, imgc, imgdim))
# convert pixel integer values to float
inVals = ks.layers.Lambda(lambda x: x / 255)(layer_in)

# Contraction path
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(inVals)
c1 = ks.layers.Dropout(0.1)(c1)
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c1)
p1 = ks.layers.MaxPooling2D((2, 2))(c1)

c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p1)
c2 = ks.layers.Dropout(0.1)(c2)
c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c2)
p2 = ks.layers.MaxPooling2D((2, 2))(c2)
 
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p2)
c3 = ks.layers.Dropout(0.2)(c3)
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c3)
p3 = ks.layers.MaxPooling2D((2, 2))(c3)
 
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p3)
c4 = ks.layers.Dropout(0.2)(c4)
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c4)
p4 = ks.layers.MaxPooling2D(pool_size = (2, 2))(c4)
 
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p4)
c5 = ks.layers.Dropout(0.3)(c5)
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c5)

# Expansive path 
u6 = ks.layers.Conv2DTranspose(128, (2, 2), strides = (2, 2), padding = "same")(c5)
u6 = ks.layers.concatenate([u6, c4])
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u6)
c6 = ks.layers.Dropout(0.2)(c6)
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c6)
 
u7 = ks.layers.Conv2DTranspose(64, (2, 2), strides = (2, 2), padding = "same")(c6)
u7 = ks.layers.concatenate([u7, c3])
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u7)
c7 = ks.layers.Dropout(0.2)(c7)
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c7)
 
u8 = ks.layers.Conv2DTranspose(32, (2, 2), strides = (2, 2), padding = "same")(c7)
u8 = ks.layers.concatenate([u8, c2])
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u8)
c8 = ks.layers.Dropout(0.1)(c8)
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c8)
 
u9 = ks.layers.Conv2DTranspose(16, (2, 2), strides = (2, 2), padding = "same")(c8)
u9 = ks.layers.concatenate([u9, c1], axis = 3)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u9)
c9 = ks.layers.Dropout(0.1)(c9)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c9)
 
out = ks.layers.Conv2D(1, (1, 1), activation = "softmax")(c9)
 
model = ks.Model(inputs = layer_in, outputs = out)
model.compile(optimizer = "adam", loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
model.summary()

Finally, I defined the callbacks and ran the training, which produced the error:

cllbs = [
    ks.callbacks.EarlyStopping(patience = 4),
    ks.callbacks.ModelCheckpoint(dir_out("Checkpoint.h5"), save_best_only = True),
    ks.callbacks.TensorBoard(log_dir = './logs'), # log events for TensorBoard
    ]

model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)

Full console output

This is the full output when running the last line (in case it helps to solve the problem):

trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
Epoch 1/5
Traceback (most recent call last):

  File "<ipython-input-68-f1422c6f17bb>", line 1, in <module>
    trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1183, in fit
    tmp_logs = self.train_function(iterator)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in __call__
    result = self._call(*args, **kwds)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 950, in _call
    return self._stateless_fn(*args, **kwds)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 3023, in __call__
    return graph_function._call_flat(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
    return self._build_call_outputs(self._inference_function.call(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
    outputs = execute.execute(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,

InvalidArgumentError:  required broadcastable shapes at loc(unknown)
     [[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]

Function call stack:
train_function

Answers

I ran into this problem when the number of class labels did not match the output shape of the output layer.

For example, if there are 10 class labels and we define the output layer as:

output = tf.keras.layers.Conv2D(5, (1, 1), activation = "softmax")(c9)

then, since the number of class labels (10) does not equal the number of output channels (5), we get this error.

Make sure the number of class labels matches the output shape of the output layer.
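Applied to the code in the question, where classes holds 13 labels, the last layer needs 13 filters rather than 1; a minimal sketch:

n_classes = len(classes) # 13 in the question's setup
out = ks.layers.Conv2D(n_classes, (1, 1), activation = "softmax")(c9)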

I found several issues here. The model is intended for semantic segmentation with multiple classes (which is why I changed the output layer activation to "softmax" and set the "sparse_categorical_crossentropy" loss). Therefore, in the ImageDataGenerators, class_mode must be set to None, and classes must not be provided. Instead, I need to feed in the manually classified images (the masks) as y. I guess a beginner makes a lot of beginner mistakes.
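A minimal sketch of that fix, assuming a parallel folder of mask images (the "trn_masks" directory name and the augmentation settings here are hypothetical); using the same seed keeps images and masks aligned after augmentation:

seed = 42

img_gen = ks.preprocessing.image.ImageDataGenerator(horizontal_flip = True) \
    .flow_from_directory(directory = dir_dat("trn"),
                         class_mode = None, batch_size = bs, seed = seed)

msk_gen = ks.preprocessing.image.ImageDataGenerator(horizontal_flip = True) \
    .flow_from_directory(directory = dir_dat("trn_masks"), # hypothetical mask folder
                         class_mode = None, color_mode = "grayscale",
                         batch_size = bs, seed = seed)

trn_batches = zip(img_gen, msk_gen) # yields (x, y) tuples for model.fit()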

Try checking whether the inputs to the ks.layers.concatenate layers have the same dimensions. For example, for ks.layers.concatenate([u7, c3]), check that the u7 and c3 tensors have the same shape except along the axis passed to ks.layers.concatenate. The default is axis = -1, the last dimension. To illustrate: if you call ks.layers.concatenate([u7, c3], axis = 0), then all axes except the first axis of u7 and c3 must match exactly, e.g. u7.shape = [3,4,5], c3.shape = [6,4,5].
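A quick way to check this rule in isolation (shapes chosen purely for illustration); tf.concat follows the same rule as ks.layers.concatenate:

import tensorflow as tf

u7 = tf.zeros([3, 4, 5])
c3 = tf.zeros([6, 4, 5])

print(tf.concat([u7, c3], axis = 0).shape) # (9, 4, 5): only axis 0 differs
# tf.concat([u7, c3], axis = -1) would raise InvalidArgumentError,
# because the non-concatenation axes differ (3 vs. 6 on axis 0)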

I ran into the same problem because I used an n_classes in the model (for the output layer) that differed from the actual number of classes in the labels/masks array. I see you have a similar problem here: you have 13 classes, but your output layer outputs only 1. The best approach is to avoid hard-coding the number of classes and to pass only a variable (like n_classes) into the model, declaring this variable before calling the model, e.g. n_classes = y_Train.shape[-1] or n_classes = len(np.unique(y_Train)).

Just add a Flatten() layer before the fully connected layer.
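For completeness, a sketch of that tip; it applies to classification heads built from Dense layers rather than to this fully convolutional UNet (n_classes is assumed to be defined as above):

x = ks.layers.Flatten()(c9) # collapse the H x W x C feature maps into a vector
x = ks.layers.Dense(n_classes, activation = "softmax")(x)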

I hit the same error as well. Check all your concatenate and multiply layers, or any layer where a similar element-wise operation happens; this error appears when the shapes do not match and the model cannot perform the operation.

