
Get training data shape inside keras custom loss function

I have written the custom loss function below, where I need to compute a factor by dividing the input dimension by the output dimension.

import tensorflow as tf
from keras import backend as K

def distance_loss(x, y):
    # factor = size of last dimension of x / size of last dimension of y
    x_shape = K.int_shape(x)[1]
    y_shape = K.int_shape(y)[1]
    print(x_shape, y_shape)
    factor = x_shape / y_shape
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)

This is the model architecture:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(32, input_dim=4))
model.add(Dense(64, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='relu'))
opt = Adam(lr=0.001)
model.compile(optimizer=opt, loss=distance_loss, metrics=['accuracy'])

When I run the model.compile line, the custom loss prints

None 2

and throws an error

TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'

I read that the input shape of the training data is only known during the training phase. Is there any way to bypass this issue?

Use K.shape instead. K.int_shape returns the static shape, so any dimension that is not fixed at graph-construction time comes back as None (hence the None 2 print and the NoneType division error). K.shape returns the dynamic shape as a tensor, which is resolved at run time:

def distance_loss(x, y):
    # K.shape yields the dynamic shape as a tensor, so each dimension has a
    # concrete value at run time even when the static shape is None.
    x_shape = K.shape(x)[1]
    y_shape = K.shape(y)[1]
    factor = K.cast(x_shape, x.dtype) / K.cast(y_shape, y.dtype)
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)
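
For completeness, here is a minimal end-to-end sketch showing that the corrected loss compiles and trains, assuming TensorFlow 2.x with the tensorflow.keras API; the random training data and its shapes (16 samples, 4 features, 2 targets) are made up for illustration and simply mirror the model in the question:

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

def distance_loss(x, y):
    # Dynamic shapes: resolved at run time, never None.
    x_shape = K.shape(x)[1]
    y_shape = K.shape(y)[1]
    factor = K.cast(x_shape, x.dtype) / K.cast(y_shape, y.dtype)
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)

model = Sequential()
model.add(Dense(32, input_dim=4))
model.add(Dense(64, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='relu'))
model.compile(optimizer=Adam(learning_rate=0.001), loss=distance_loss)

# Dummy data with the assumed shapes: 16 samples, 4 input features, 2 targets.
X = np.random.rand(16, 4).astype('float32')
Y = np.random.rand(16, 2).astype('float32')
model.fit(X, Y, epochs=1, verbose=0)  # runs without the NoneType error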
