I want to analyze my own images using an SVM trained on the MNIST dataset. How can I preprocess my image so it is accepted by the model?
import argparse
import cv2
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

dataset = datasets.fetch_openml("mnist_784", version=1)
(trainX, testX, trainY, testY) = train_test_split(
    dataset.data / 255.0, dataset.target.astype("int"), test_size=0.33)

ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", type=str, default="3scenes",
    help="path to directory containing the '3scenes' dataset")
ap.add_argument("-m", "--model", type=str, default="knn",
    help="type of python machine learning model to use")
args = vars(ap.parse_args())

# user input image to classify
userImage = cv2.imread('path_to_image/1.jpg')

# preprocess user image
# ...

models = {
    "svm": SVC(kernel="linear"),
}

# train the model
print("[INFO] using '{}' model".format(args["model"]))
model = models[args["model"]]
model.fit(trainX, trainY)

print("[INFO] evaluating image...")
predictions = model.predict(userImage)  # fails until userImage is preprocessed
print(predictions)
MNIST images have the shape 28x28x1: width 28 pixels, height 28 pixels, and one color channel, i.e. grayscale.
Assuming your model takes the same input shape, you can preprocess your image like this:
import cv2

userImage = cv2.imread('path_to_image/1.jpg')
# convert to grayscale
userImage = cv2.cvtColor(userImage, cv2.COLOR_BGR2GRAY)
# resize image to 28x28
userImage = cv2.resize(userImage, (28, 28))
# normalize to [0, 1]; cast first, because imread returns uint8
# and in-place division (/=) would fail on an integer array
userImage = userImage.astype("float32") / 255.0
# note: MNIST digits are light-on-dark; if your photo is dark-on-light,
# you may also need to invert it with cv2.bitwise_not before normalizing
Depending on how large your image is, you may want to select a 28x28 patch manually instead of resizing; otherwise you risk losing image quality and thus information.
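As a minimal sketch of that manual selection, the helper below (a hypothetical function, not part of the original script) center-crops a 28x28 patch from an already-grayscale array, assuming the digit is roughly centered; adjust the offsets for an off-center digit:

import numpy as np

def center_crop_28(gray):
    # center-crop a 28x28 patch from a 2-D grayscale array
    h, w = gray.shape
    top, left = (h - 28) // 2, (w - 28) // 2
    return gray[top:top + 28, left:left + 28]

# example: a 100x60 image yields a 28x28 patch
patch = center_crop_28(np.zeros((100, 60)))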
If your model takes a vector as input, you can use the following to flatten your image before feeding it to the model:
import numpy as np

userImage = np.reshape(userImage, (784,))
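One more detail when you then call predict: scikit-learn estimators expect a 2-D array of shape (n_samples, n_features), so a single flattened image must be reshaped to (1, 784). A self-contained sketch, using random toy data as a stand-in for your trained SVC and your preprocessed image:

import numpy as np
from sklearn.svm import SVC

# toy stand-ins; in your script these come from model.fit(trainX, trainY)
# and from the cv2 preprocessing steps above
rng = np.random.default_rng(0)
model = SVC(kernel="linear").fit(rng.random((20, 784)), np.arange(20) % 2)
userImage = rng.random((28, 28)).astype("float32")

# reshape the single image to (1, 784) before predicting
sample = userImage.reshape(1, 784)
prediction = model.predict(sample)  # array with one predicted label
print("[INFO] predicted:", prediction[0])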