
ValueError: logits and labels must have the same shape ((None, 1) vs (None, 2))

So I am trying to build a neural network with multiple outputs. I want to recognise gender and age from a facial image, and then add more outputs once this issue is resolved.

Input type = image (originally 200x200, resized to 64x64)
Output type = array (len = 2)

(Pdb) x_train.shape
(18965, 64, 64, 1)
(Pdb) y_train.shape
(18965, 2)
(Pdb) x_test.shape
(4742, 64, 64, 1)
(Pdb) y_test.shape
(4742, 2)

The neural network:

    # Imports needed for the snippet below (TF 2.x / tf.keras)
    import tensorflow as tf
    from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                         MaxPooling2D, Dropout, Dense, Flatten)
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adam

    input_layer = Input((x_train[0].shape))
    conv1 = Conv2D(64,(4,4),activation="relu",strides=(2,2))(input_layer)
    batchnorm1 = BatchNormalization()(conv1)
    maxpool1 = MaxPooling2D((2,2))(batchnorm1)
    drop1 = Dropout(0.3)(maxpool1)
    conv2 = Conv2D(64,(4,4),activation="relu",strides=(2,2))(drop1)
    batchnorm2 = BatchNormalization()(conv2)
    maxpool2 = MaxPooling2D((2,2))(batchnorm2)
    drop2 = Dropout(0.3)(maxpool2)
    conv3 = Conv2D(64,(2,2), activation='relu',strides=(1,1))(drop2)
    dense = Dense(32,activation = 'relu')(conv3)
    flat = Flatten()(dense)
    dense1 = Dense(32,activation = 'relu')(flat)
    dense2 = Dense(32,activation = 'relu')(flat)
    # Two heads, each Dense(1): age regression and binary gender classification
    age_out = Dense(1,activation = 'relu',name = 'age')(dense1)
    gen_out = Dense(1,activation= 'sigmoid', name = 'gen')(dense2)
    def scheduler(epoch, lr):
            if epoch<25:
                return lr
            elif epoch%25==0:
                return lr * 0.5
    learning_rate = tf.keras.callbacks.LearningRateScheduler(scheduler)
    adam = Adam(learning_rate=5e-5)
    model = Model(inputs=input_layer, outputs = [gen_out,age_out])
    # model._set_output_names
    model.compile(loss={'gen':"binary_crossentropy",'age':"mae"},optimizer = adam, metrics=['accuracy'])
    model.fit(x_train,y_train,batch_size=50,validation_data = (x_test,y_test),epochs=100, callbacks=[learning_rate])

Error:

Traceback (most recent call last):
  File "F:\projects\Ultimate Project\age.py", line 86, in <module>
    neural_network()
  File "F:\projects\Ultimate Project\age.py", line 78, in neural_network
    model.fit(x_train,y_train,batch_size=50,validation_data = (x_test,y_test),epochs=100, callbacks=[learning_rate])
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1100, in fit
    tmp_logs = self.train_function(iterator)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\def_function.py", line 871, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\def_function.py", line 726, in _initialize
    *args, **kwds))
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\function.py", line 2969, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\function.py", line 3361, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\function.py", line 3206, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\framework\func_graph.py", line 990, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\eager\def_function.py", line 634, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\framework\func_graph.py", line 977, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\training.py:805 train_function  *
        return step_function(self, iterator)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\training.py:788 run_step  **
        outputs = model.train_step(data)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\training.py:756 train_step
        y, y_pred, sample_weight, regularization_losses=self.losses)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:203 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\losses.py:152 __call__
        losses = call_fn(y_true, y_pred)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\losses.py:256 call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\losses.py:1608 binary_crossentropy
        K.binary_crossentropy(y_true, y_pred, from_logits=from_logits), axis=-1)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\keras\backend.py:4979 binary_crossentropy
        return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    F:\projects\Ultimate Project\env\lib\site-packages\tensorflow\python\ops\nn_impl.py:174 sigmoid_cross_entropy_with_logits
        (logits.get_shape(), labels.get_shape()))

    ValueError: logits and labels must have the same shape ((None, 1) vs (None, 2))

As far as I understand, this error means that the output the neural network produces has shape (None, 1), but my actual output has shape (None, 2). If I instead use this config for the loss, model.compile(loss=["binary_crossentropy","mae"],optimizer = adam, metrics=['accuracy']) with model = Model(inputs=input_layer, outputs = tf.keras.layers.concatenate([gen_out,age_out], axis=-1)), then it runs, but it only gives me a single loss instead of separate losses for the two output layers, and that loss also becomes negative after a few epochs.
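
The mismatch can be seen directly from the model itself, since each head is a Dense(1) layer while the labels are stacked into one array (a minimal check, assuming the model and arrays built above):

    # Each named head ('gen', 'age') produces shape (None, 1),
    # but y_train bundles gender and age into a single (None, 2) array.
    print(model.output_shape)   # [(None, 1), (None, 1)]
    print(y_train.shape)        # (18965, 2) -- one column per target, not yet split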

I have tried my best to explain my problem, so if anything is confusing, I am sorry; let me know and I will add the missing details.

You should also split y_train and y_test, like this:

model.fit(x_train, [y_train[:, 0], y_train[:, 1]], batch_size=50,
          validation_data = (x_test, [y_test[:, 0], y_test[:, 1]] ),
          epochs=100, callbacks=[learning_rate])
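
Equivalently, because the two output layers are named 'gen' and 'age' in the Model, the labels can be passed as a dict keyed by those output names instead of a list; a sketch under the same assumptions (gender in column 0, age in column 1):

    # Same split, but keyed by the output-layer names used when building the Model
    model.fit(x_train,
              {'gen': y_train[:, 0], 'age': y_train[:, 1]},
              batch_size=50,
              validation_data=(x_test, {'gen': y_test[:, 0], 'age': y_test[:, 1]}),
              epochs=100, callbacks=[learning_rate])

The dict form makes the mapping between label columns and output heads explicit, which avoids relying on the order of the outputs list.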
