In PyTorch the cross entropy loss function works something like
CrossEntropyLoss(x, y) = H(one_hot(y), softmax(x))
so you can have a linear output layer. Is there a way to do that with tf.keras.Sequential?
I have written this little CNN for MNIST:
import tensorflow as tf
tfkl = tf.keras.layers

model = tf.keras.Sequential()
model.add(tfkl.Input(shape=(28, 28, 1)))
model.add(tfkl.Conv2D(32, (5, 5), padding="valid", activation=tf.nn.relu))
model.add(tfkl.MaxPool2D((2, 2)))
model.add(tfkl.Conv2D(64, (5, 5), padding="valid", activation=tf.nn.relu))
model.add(tfkl.MaxPool2D((2, 2)))
model.add(tfkl.Flatten())
model.add(tfkl.Dense(1024, activation=tf.nn.relu))
model.add(tfkl.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
model.fit(x_train, y_train, epochs=1)
and I would like to have
model.add(tfkl.Dense(10))
as the last layer.
I am trying to implement the ADef algorithm, but the entries of the gradient with respect to the input seem to be too small, and I suspect they would come out right with a linear output layer.
I know there is tf.nn.softmax_cross_entropy_with_logits but I don't know how to use it in this context.
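For reference, `tf.nn.softmax_cross_entropy_with_logits` can be wrapped as a custom Keras loss so the model's last layer stays linear. A minimal sketch, assuming integer labels as in MNIST (the function name is illustrative, not a Keras API):

```python
import tensorflow as tf

# Sketch: use tf.nn.softmax_cross_entropy_with_logits as a Keras loss.
# Expects integer class labels and raw logits from a linear output layer.
def sparse_softmax_ce_from_logits(y_true, logits):
    # Flatten labels to shape (batch,) and one-hot encode to match the logits.
    y_true = tf.reshape(tf.cast(y_true, tf.int32), [-1])
    onehot = tf.one_hot(y_true, depth=tf.shape(logits)[-1])
    # Returns the per-example cross entropy; Keras averages it over the batch.
    return tf.nn.softmax_cross_entropy_with_logits(labels=onehot, logits=logits)
```

This could then be passed as `loss=sparse_softmax_ce_from_logits` in `model.compile`.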
Edit:
Changing
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
to
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
has done the trick.
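With `from_logits=True` and a linear last layer, the input gradient that ADef needs can be taken directly through the model. A minimal sketch of just that gradient step (not the full ADef algorithm; the tiny model here is only for illustration):

```python
import tensorflow as tf

# Sketch: gradient of the logits-based loss with respect to the input image.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def input_gradient(model, x, y):
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)                  # x is a plain tensor, not a variable
        loss = loss_fn(y, model(x))    # model(x) returns raw logits
    return tape.gradient(loss, x)      # same shape as the input batch
```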
Thank you @Moe1234. For the benefit of the community, posting the solution here.
The issue was resolved after changing
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
to
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
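As a quick sanity check, the two formulations agree numerically: applying the default loss to softmax probabilities gives the same value as applying the `from_logits=True` loss to the raw logits (toy values below are arbitrary):

```python
import tensorflow as tf

# Same cross entropy computed two ways: from probabilities and from logits.
logits = tf.constant([[2.0, -1.0, 0.5]])
labels = tf.constant([0])

loss_probs = tf.keras.losses.SparseCategoricalCrossentropy()(labels, tf.nn.softmax(logits))
loss_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(labels, logits)
print(float(loss_probs), float(loss_logits))  # the two values match
```

The logits version is also more numerically stable, since it avoids taking the log of a softmax that may have underflowed.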