
Completely different prediction results from .h5 Keras model and .json tensorflow.js model

So, my model gives me accurate prediction results for test images:

import cv2
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode

import matplotlib.pyplot as plt
face_haar_cascade = cv2.CascadeClassifier('/content/gdrive/My Drive/New FEC Facial Expression/haarcascade_frontalface_default.xml')
from IPython.display import Image
try:
    filename = '/content/gdrive/My Drive/photo-1533227268428-f9ed0900fb3b.jpg'
    img = cv2.imread(filename)

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = face_haar_cascade.detectMultiScale(gray, 1.3, 6)
    print('faces', faces)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        plt.grid(None)
        plt.xticks([])
        plt.yticks([])
        imgplot = plt.imshow(img)
    # Show the image which was just taken.
    # display(Image(filename))
except Exception as err:
    # Errors will be thrown if the user does not have a webcam or if they do not
    # grant the page permission to access it.
    print(str(err))


import cv2
import sys
import numpy as np

imagePath ='/content/gdrive/My Drive/photo-1533227268428-f9ed0900fb3b.jpg'
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faceCascade = cv2.CascadeClassifier('/content/gdrive/My Drive/New FEC Facial Expression/haarcascade_frontalface_default.xml')
faces = faceCascade.detectMultiScale(
   gray,
   scaleFactor=1.3,
   minNeighbors=3,
   minSize=(30, 30)
)

print("[INFO] Found {0} Faces.".format(len(faces)))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    roi_color = image[y:y + h, x:x + w]
    print("[INFO] Object found. Saving locally.")
    cv2.imwrite('/content/gdrive/My Drive/converted Images/faces.jpg', roi_color)

status = cv2.imwrite('faces_detected.jpg', image)
print("[INFO] Image faces_detected.jpg written to filesystem: ", status)
# from skimage import io
from keras.preprocessing import image
img = image.load_img('/content/gdrive/My Drive/converted Images/faces.jpg', color_mode = "grayscale", target_size=(48, 48))
x = image.img_to_array(img)
x = np.expand_dims(x, axis = 0)
x /= 255
show_img=image.load_img('/content/gdrive/My Drive/converted Images/faces.jpg', grayscale=False, target_size=(200, 200))
plt.gray()
plt.imshow(show_img)
plt.show()
if len(faces):
    custom = model.predict(x)
    index = np.argmax(custom[0])
    emotion1 = custom[0][index] * 100
    print(custom)
    print(emotion_label_to_text[index], ' => ', emotion1)
else:
    print('No Face Detected')

This gives good results and the output is correct: the image I fed in was a happy face, OpenCV is used to detect the face and crop it, and the cropped image is then passed to the model, which gives me good results.
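As a quick sanity check before comparing against tf.js, the exact shape and value range that the Keras model receives can be printed (a minimal sketch reusing the cropped-face path from the code above; it adds nothing beyond what the question already shows):

import numpy as np
from keras.preprocessing import image

# Reload the cropped face exactly as in the prediction code above
img = image.load_img('/content/gdrive/My Drive/converted Images/faces.jpg',
                     color_mode="grayscale", target_size=(48, 48))
x = image.img_to_array(img)          # (48, 48, 1) float array
x = np.expand_dims(x, axis=0) / 255  # (1, 48, 48, 1), scaled to [0, 1]

print(x.shape)           # expected: (1, 48, 48, 1)
print(x.min(), x.max())  # expected: values within [0, 1]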

But for the tf.js part, I converted the Keras model to .json with the tfjs converter and wrote the code below for inference:
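For context, the conversion step mentioned above might look roughly like this (a minimal sketch, assuming the tensorflowjs Python package; the .h5 path and output directory are placeholders, not taken from the question):

# pip install tensorflowjs
import tensorflowjs as tfjs
from keras.models import load_model

# Placeholder path for the trained Keras model
model = load_model('/content/gdrive/My Drive/New FEC Facial Expression/model.h5')

# Writes model.json plus binary weight shards into the target folder,
# which is then served (here at http://localhost:8000/models/)
tfjs.converters.save_keras_model(model, 'models')

The Node code below then loads the generated model.json over HTTP: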

    try {
        const canvImg = await canvas.loadImage(
            path.join(__dirname, `images/${req.file.filename}`)
        );
        const image = await cv.imread(path.join(__dirname, `/images/${req.file.filename}`));
        const classifier = new cv.CascadeClassifier(cv.HAAR_FRONTALFACE_ALT2);
        const { objects, numDetections } = classifier.detectMultiScale(image.bgrToGray());
        if (!objects.length) {
            return next({
                msg: 'No face detected'
            })
        } else {
            const model = await tf.loadLayersModel(
                "http://localhost:8000/models/model.json"
            );
            const obj = objects[0]
            const cnvs = canvas.createCanvas(48, 48);
            const ctx = cnvs.getContext("2d");
            ctx.drawImage(canvImg, obj.x, obj.y, obj.width, obj.height, 0, 0, cnvs.width, cnvs.height);
            var tensor = tf.browser
                .fromPixels(cnvs)
                .mean(2)
                .toFloat()
                .expandDims(-1)
                .expandDims(0, 'None')



            const prediction = await model.predict(tensor).data();
            console.log(prediction);
            var emotions = [
                "angry",
                "disgust",
                "fear",
                "happy",
                "sad",
                "surprise",
                        ];
            var index = Object.values(prediction).findIndex(
                (p) => p === Math.max(...Object.values(prediction))
            );
            res.status(200).json(emotions[index])
            fs.unlink(
                path.join(process.cwd(), "./faceDetection/images/" + req.file.filename),
                function(err, removed) {
                    if (err) console.log("file removing err");
                    else console.log("file removed");
                }
            );
        }

    } catch (e) {
        return next(e)
    }

I use opencv4nodejs to detect the face in the image, canvas to crop it (canvas gives me good results for cropping the face region), and tf.js for prediction, but the output gives me the same result every time: of all the keys in the prediction object, one of them gets 1 (in this case fear), and it keeps giving that same result even for the same images I tested in Keras.

Am I doing something wrong when manipulating the tensor?

One possible cause: in Python you "normalize" the image input to [0, 1] with x /= 255. You don't do that in the Javascript code.

The preprocessing in js is different from the one in python.

In python, the image is normalized by dividing by 255.

In js, the image is converted to grayscale by taking the mean over the third axis (mean(2)), but it is never divided by 255. This is what the tensor should look like:

 const tensor = tf.browser.fromPixels(cnvs)
  .mean(2)          // grayscale, matching color_mode="grayscale" in Python
  .toFloat()
  .div(255)         // normalize to [0, 1], matching x /= 255 in Python
  .expandDims(-1)   // restore the channel axis -> (48, 48, 1)
  .expandDims(0)    // add the batch axis -> (1, 48, 48, 1)
