model.predict() - TensorFlow Keras gives same output for all images when the dataset size increases?
I have been trying to use a pretrained model (XceptionNet) to obtain a feature vector for each input image in a classification task. But I am stuck, because model.predict() returns unreliable, varying output vectors for the same image when the dataset size changes.

In the code below, `batch` is the data containing the images; for each of these images I want a feature vector, which I obtain with the pretrained model.
batch.shape
TensorShape([803, 800, 600, 3])
Just to make clear that the input images are all different, here are a few of them:
plt.imshow(batch[-23])
plt.figure()
plt.imshow(batch[-15])
My model is as follows:
import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Input, GlobalAvgPool2D

INPUT_SHAPE = (800, 600)

model_xception = Xception(weights="imagenet", input_shape=(*INPUT_SHAPE, 3), include_top=False)
model_xception.trainable = False
inp = Input(shape=(*INPUT_SHAPE, 3))  # INPUT_SHAPE = (800, 600)
out = model_xception(inp, training=False)
output = GlobalAvgPool2D()(out)
model = tf.keras.Model(inp, output, name='Xception-kPiece')
Now the problem shows up in the following outputs:
model.predict(batch[-25:]) # prediction on the last 25 images
1/1 [==============================] - 1s 868ms/step
array([[4.99584060e-03, 4.25433293e-02, 9.93836671e-02, ...,
3.21301445e-03, 2.59823762e-02, 9.08260979e-03],
[2.50613055e-04, 1.18759666e-02, 0.00000000e+00, ...,
1.77203789e-02, 7.71604702e-02, 1.28602296e-01],
[3.41954082e-02, 1.82092339e-02, 5.07147610e-03, ...,
7.09404126e-02, 9.45318267e-02, 2.69510925e-01],
...,
[0.00000000e+00, 5.16504236e-03, 4.90547449e-04, ...,
4.62833559e-04, 9.43152513e-03, 1.17826145e-02],
[0.00000000e+00, 4.64747474e-03, 0.00000000e+00, ...,
1.21422185e-04, 4.47714329e-03, 1.92385539e-02],
[0.00000000e+00, 1.29655155e-03, 4.02751788e-02, ...,
0.00000000e+00, 0.00000000e+00, 3.20959717e-01]], dtype=float32)
model.predict(batch)[-25:] # prediction on entire dataset of 803 images and then extracting the vectors corresponding to the last 25 images
26/26 [==============================] - 34s 1s/step
array([[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00],
[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00],
[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00],
...,
[1.7318112e-05, 3.6561041e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924841e-02, 0.0000000e+00],
[1.7318112e-05, 3.6561041e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924841e-02, 0.0000000e+00],
[1.7318112e-05, 3.6561041e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924841e-02, 0.0000000e+00]], dtype=float32)
There are two problems with this behaviour.

My take on the problem: even though I pass training=False to model_xception and set model_xception.trainable = False, the output is still the same for all inputs. Can anyone help fix this bug?
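One way to pin down whether predict really depends on batch size is to compare per-image feature vectors from a small slice against the same rows of a full-dataset prediction, as the question does above. The sketch below is my own minimal reproduction harness, using a tiny stand-in convolutional model rather than the actual Xception backbone (to avoid the ImageNet weight download); on an unaffected TensorFlow build the two results agree up to float tolerance:

```python
import numpy as np
import tensorflow as tf

# Small frozen stand-in model: conv backbone + global average pooling,
# structurally similar to the Xception feature extractor in the question.
inp = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inp)
out = tf.keras.layers.GlobalAvgPool2D()(x)
check_model = tf.keras.Model(inp, out)
check_model.trainable = False

rng = np.random.default_rng(0)
batch = rng.random((40, 32, 32, 3)).astype("float32")

# Feature vectors for the last 10 images, computed two ways.
feats_small = check_model.predict(batch[-10:], verbose=0)
feats_full = check_model.predict(batch, verbose=0)[-10:]

# On a correct backend these agree up to float tolerance; under the
# tensorflow-macos bug described in the question they would not.
print(np.allclose(feats_small, feats_full, atol=1e-5))
```

If this prints False on your machine, the installed backend (rather than the model definition) is the likely culprit.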
This problem seems to arise because I am using tensorflow-macos, which has a major predict bug: beyond a certain number of input images, the predictions become wrong.

See the actual issue below:
model.predict(batch[-57:])
1/1 [==============================] - 2s 2s/step
array([[0.00000000e+00, 2.56574154e-02, 1.79693177e-01, ...,
2.85670068e-03, 1.08444700e-02, 2.34257965e-03],
[0.00000000e+00, 1.28444552e-03, 0.00000000e+00, ...,
4.11680201e-03, 4.49061068e-03, 1.83695972e-01],
[0.00000000e+00, 2.29660165e-03, 7.84890354e-03, ...,
1.86224483e-04, 1.81426702e-03, 1.54079705e-01],
...,
[0.00000000e+00, 5.16504236e-03, 4.90547449e-04, ...,
4.62833559e-04, 9.43152513e-03, 1.17826145e-02],
[0.00000000e+00, 4.64747474e-03, 0.00000000e+00, ...,
1.21422185e-04, 4.47714329e-03, 1.92385539e-02],
[0.00000000e+00, 1.29655155e-03, 4.02751788e-02, ...,
0.00000000e+00, 0.00000000e+00, 3.20959717e-01]], dtype=float32)
model.predict(batch[-55:])
2/2 [==============================] - 2s 1s/step
array([[0.00000000e+00, 2.29660165e-03, 7.84890354e-03, ...,
1.86224483e-04, 1.81426702e-03, 1.54079705e-01],
[4.94572960e-05, 8.04292504e-04, 5.08825444e-02, ...,
4.58029518e-03, 2.09121332e-02, 5.57549708e-02],
[0.00000000e+00, 1.62312540e-03, 0.00000000e+00, ...,
4.35817856e-05, 2.16606092e-02, 1.30677417e-01],
...,
[0.00000000e+00, 5.16504236e-03, 4.90547449e-04, ...,
4.62833559e-04, 9.43152513e-03, 1.17826145e-02],
[0.00000000e+00, 4.64747474e-03, 0.00000000e+00, ...,
1.21422185e-04, 4.47714329e-03, 1.92385539e-02],
[0.00000000e+00, 1.29655155e-03, 4.02751788e-02, ...,
0.00000000e+00, 0.00000000e+00, 3.20959717e-01]], dtype=float32)
model.predict(batch[-58:])
1/1 [==============================] - 2s 2s/step
array([[5.3905282e-04, 2.8516021e-02, 1.2775734e-03, ..., 5.4674568e-03,
1.7451918e-02, 9.4717339e-02],
[0.0000000e+00, 2.8345605e-02, 1.2786543e-03, ..., 0.0000000e+00,
2.4870334e-03, 1.2716405e-01],
[4.3588653e-03, 8.2868971e-02, 1.8764129e-02, ..., 2.5320805e-03,
5.9973758e-02, 6.9927111e-02],
...,
[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00],
[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00],
[1.7320104e-05, 3.6561250e-04, 0.0000000e+00, ..., 0.0000000e+00,
3.5924271e-02, 0.0000000e+00]], dtype=float32)
It would be very helpful if someone could suggest a fix or workaround that still lets me use TensorFlow on a Mac.

There is also a GitHub issue about this here, still unresolved.
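One workaround sketch, assuming the bug is only triggered by large model.predict() calls: bypass predict entirely and call the model directly on small chunks, concatenating the results. The helper name extract_features and the chunk_size value are my own, not from the original post:

```python
import numpy as np
import tensorflow as tf

def extract_features(model, images, chunk_size=16):
    """Call the model directly on small chunks instead of one large
    model.predict() call, then concatenate the per-chunk features."""
    feats = []
    for start in range(0, len(images), chunk_size):
        chunk = tf.convert_to_tensor(images[start:start + chunk_size])
        feats.append(model(chunk, training=False).numpy())
    return np.concatenate(feats, axis=0)

# Demo with a tiny stand-in model (in the question's code you would
# pass the frozen Xception-based `model` instead).
inp = tf.keras.Input(shape=(32, 32, 3))
out = tf.keras.layers.GlobalAvgPool2D()(tf.keras.layers.Conv2D(8, 3)(inp))
tiny = tf.keras.Model(inp, out)

imgs = np.random.default_rng(1).random((37, 32, 32, 3)).astype("float32")
feats = extract_features(tiny, imgs, chunk_size=10)
print(feats.shape)  # (37, 8)
```

Calling the model as `model(x, training=False)` skips the predict-loop machinery where the tensorflow-macos bug appears to live, at the cost of managing batching yourself.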
This is the correct behaviour, even if the predictions for the same image are not identical:

1.1 Learning function: the identity of the learning process should not vary beyond the range estimated during training (working input sets provided the same output patterns).
1.2 Label mapping at the output layer: significant factors include output sample measurement, proportion, scaling, alignment, contrast, 0-to-1 input data mapping, network type, etc.