
Issue with TensorFlow 2.2: model.fit and model.evaluate

When I want to train my model, TensorFlow does not seem to report the right values (see the screenshot).
I expect to see 21759, not 680.

This has been happening since I changed OS (Fedora 30 XFCE -> Fedora 32 GNOME); on other laptops the issue does not occur.

I am using TF 2.2.
My dataset is built from some CSV files created by tshark (see the screenshot of my dataset).
Here are a few lines of my code:

My model:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, Flatten

model = Sequential()

# Two stacked LSTM layers; input_shape only needs to be given on the first layer.
model.add(LSTM(9, input_shape=dataset[0].shape, activation='relu', return_sequences=True))
model.add(Dropout(0.3))

model.add(LSTM(9, activation='relu', return_sequences=True))
model.add(Dropout(0.3))

model.add(Dense(9, activation='relu'))
model.add(Flatten())

# Two output classes, matching the sparse_categorical_crossentropy loss below.
model.add(Dense(2, activation='softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=1e-4, decay=1e-5)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
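
The actual training call is not shown above; it is roughly the following (x_train and y_train are hypothetical placeholders for the arrays built from the tshark CSVs, 21759 samples in total, and no batch_size is passed, so Keras uses its default of 32):

# Hypothetical fit call -- x_train / y_train stand in for the arrays
# built from the tshark CSV files (21759 samples in total).
# No batch_size is given, so Keras falls back to its default of 32.
history = model.fit(x_train, y_train, epochs=10)
model.evaluate(x_train, y_train)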


Do you have any ideas?

PS: It also happens with this .py file:

import tensorflow as tf

# 100 samples with 2 features each, and alternating labels.
dataset = [[1, 1], [2, 2]] * 50
label = [0, 1] * 50

print(len(dataset))  # 100

model = tf.keras.Sequential([
  tf.keras.layers.Dense(1, activation="relu", input_shape=(2,)),
  tf.keras.layers.Dense(2, activation="softmax")
])
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="sgd",
    metrics=["accuracy"]
)
# No batch_size given, so Keras uses its default of 32.
history = model.fit(dataset, label, epochs=1)

Output:

100
4/4 [==============================] - 0s 611us/step - loss: 0.6578 - accuracy: 0.5000

As Koralp Catalsakal said, it was just a "configuration difference" issue: in TF 2.2 the progress bar counts batches (steps) per epoch rather than samples, so 21759 samples with the default batch size of 32 show up as 680 steps (and the 100 samples above as 4 steps). I just had to set batch_size manually.
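
For reference, forcing the batch size explicitly in the reproduction script above makes the progress bar count individual samples again (a minimal sketch; with batch_size=1 each step processes one sample):

# With batch_size=1 each step is one sample, so the progress bar shows
# 100/100 instead of 4/4 (4 = 100 samples / default batch size of 32, rounded up).
history = model.fit(dataset, label, epochs=1, batch_size=1)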
