
Custom keras loss with 'sparse_softmax_cross_entropy_with_logits' - Rank mismatch

I have been writing a Keras model that uses a TensorFlow loss (sparse_softmax_cross_entropy_with_logits), and I ran into this issue. For this loss, the true values should be a tensor of shape (batch_size), and the output of the model will have shape (batch_size, num_classes). I have verified that the model's output has shape (?, num_classes), and I have created a target tensor for the true values, but that does not seem to resolve the issue. Does anybody have ideas on how to fix it? Is there something I am missing? Below is the relevant code.

import tensorflow as tf

def tf_loss(y_true, y_pred):
    # labels should have shape (batch_size,), logits (batch_size, num_classes)
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)

pred = tf.placeholder(dtype=tf.int32, shape=[None])
model.compile(optimizer='adam', loss=tf_loss, target_tensor=pred)

When I look inside the loss function, I find that y_true has shape (?, num_classes) and y_pred has shape (?, ?).

Well, I am extremely embarrassed, but the bug was simply a typo: the argument is target_tensors (plural), not target_tensor. Failing that, the other option would have been to modify the tensor inside the loss function.
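For reference, here is a minimal sketch of both fixes. The small Sequential model and the num_classes/feature_dim values are hypothetical filler added so the snippet is self-contained; only the target_tensors line (and the reshape-based alternative) reflect the actual answer.

import tensorflow as tf
from tensorflow import keras

num_classes = 10  # assumed value for illustration
feature_dim = 32  # assumed value for illustration

# Hypothetical stand-in for the original model: outputs raw logits,
# since sparse_softmax_cross_entropy_with_logits applies softmax itself.
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(feature_dim,)),
    keras.layers.Dense(num_classes),
])

def tf_loss(y_true, y_pred):
    # labels: int tensor of shape (batch_size,); logits: (batch_size, num_classes)
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)

pred = tf.placeholder(dtype=tf.int32, shape=[None])

# The fix: 'target_tensors' (plural) is the recognized compile() argument,
# and it expects a list. With it, y_true inside the loss takes the (?,)
# shape of this placeholder instead of a default (?, num_classes) one.
model.compile(optimizer='adam', loss=tf_loss, target_tensors=[pred])

# The alternative the answer alludes to (a sketch, not spelled out in the
# original post): leave compile() alone and coerce y_true inside the loss.
def tf_loss_reshaped(y_true, y_pred):
    labels = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=y_pred)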

