Is a precision/F-score of 10% to be expected when using an SVM as an image classifier?

As my learning project in Machine Learning I am trying to use an SVM (Support Vector Machine) to classify different images of domino tiles. I am basing this project heavily on https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html#sphx-glr-auto-examples-applications-plot-face-recognition-py, which I have re-created and understood, and for which I got a precision/F1 of about 70% (if I recall correctly). I am using much of the same code in my project.

In my project I have 28 different folders, each containing 100 different 100x100 px images of domino tiles (i.e. 2800 images). The domino tiles are photographed with different backgrounds, different zoom and different rotation. These images can be found here: https://www.kaggle.com/wallcloud/photographs-of-28-different-domino-tiles

I have tested:

  • All sorts of combinations of C, gamma and kernels for the SVC, and found the optimal combination (see the grid-search sketch right after this list)
  • PCA with different numbers of components (500 seems to be optimal)
  • Using LabelEncoders (no difference)
  • Different test sizes (0.1 seems to be optimal)
  • Cropping the images (improves the score), applying filters to the images (worsens the score), and converting them to B/W (worsens the score)
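
For reference, the hyper-parameter search looked roughly like this (a minimal sketch; the grid values shown are illustrative rather than the exact ones I ran, and X_train_pca / y_train refer to the variables built in the script further down):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    'C': [1, 10, 100, 1000],        # illustrative values, not the exact grid
    'gamma': [1e-4, 1e-3, 1e-2],
    'kernel': ['rbf', 'linear'],
}
grid = GridSearchCV(SVC(class_weight='balanced'), param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_pca, y_train)      # PCA-reduced training data, as in the script below
print(grid.best_params_, grid.best_score_)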

With all this I still can't get my score to exceed 10%, which is VERY far from what the Scikit-Learn project achieves on the faces.

According to feedback I have received from experienced ML engineers, the data should be sufficient to classify the dominoes. I was suspicious whether SVMs would actually be suitable as image classifiers, but since the Scikit-Learn project uses one, I would assume this SHOULD work as well. I am sure a CNN would work great for this, but that is not my question.

When I output the "eigenfaces" for the domino tiles, they appear as a "motion blur", which seems to have to do with the dominoes being rotated. This could be a potential reason (the Scikit-Learn images of faces are not rotated). I would, however, expect the model to pick up on the dots of the domino tiles better, but that assumption may be erroneous.
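
For reference, this is roughly how I render those "eigenfaces" (a minimal sketch, assuming pca has already been fitted on the 66x66 grayscale crops as in the script further down):

import matplotlib.pyplot as plt

n_show = 12
fig, axes = plt.subplots(3, 4, figsize=(8, 6))
for ax, component in zip(axes.ravel(), pca.components_[:n_show]):
    ax.imshow(component.reshape(66, 66), cmap='gray')  # each principal component reshaped back to a 66x66 image
    ax.axis('off')
plt.show()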

My question is now:

Q: Is my score of 10% to be expected considering the amount and type of data and using an SVM as a classifier - or am I missing out on something crucial?

My Python code

import time
import matplotlib.pyplot as plt
from sklearn import svm, metrics
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
#from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing 
from sklearn.decomposition import PCA
import numpy as np
import os # Working with files and folders
from PIL import Image # Image processing
from PIL import ImageFilter

### 
### Data can be downloaded from https://www.dropbox.com/sh/s5f38k4l2on5mba/AACNQgXuw1edwEb6oO1w3CfOa?dl=0
### 


start = time.time()
rootdir = os.getcwd()

image_file = 'images.npy'
key_file = 'keys.npy'

def predict_me(image_file_name, scaler, pca):
  pm = Image.open(image_file_name)
  pm = pm.resize([66,66])
  a = np.array(pm.convert('L')).reshape(1,-1)
  #a = np.array(pm.resize([66,66]).convert('L')).reshape(1,-1)) # array 66x66
  a = scaler.transform(a)
  a = pca.transform(a)
  return classifier.predict(a)

def crop_image(im, sq_size):
  new_width = sq_size
  new_height = sq_size
  width, height = im.size   # Get dimensions 
  left = (width - new_width)/2
  top = (height - new_height)/2
  right = (width + new_width)/2
  bottom = (height + new_height)/2
  imc = im.crop((left, top, right, bottom))
  return imc 

#def filter_image(im):
  # All filter makes it worse
  #imf = im.filter(ImageFilter.EMBOSS)
  #return imf

def provide_altered_images(im):
  im_list = [im]
  im_list.append(im.rotate(90))
  im_list.append(im.rotate(180))
  im_list.append(im.rotate(270))
  return im_list

if (os.path.exists(image_file) and os.path.exists(key_file)):
  print("Loading existing numpy's")
  pixel_arr = np.load(image_file)
  key = np.load(key_file)
else:
  print("Creating new numpy's")  
  key_array = []
  pixel_arr = np.empty((0,66*66), "uint8")

  for subdir, dirs, files in os.walk('data'):
    dir_name = subdir.split("/")[-1]    
    if "x" in dir_name:
      for file in files:
        if ".DS_Store" not in file:
          im = Image.open(os.path.join(subdir, file))
          if im.size == (100,100):  
            use_im = crop_image(im,66) # Most images are shot from too far away. This removes portions of it.
            #use_im = filter_image(use_im) # Filters image, but does no good at all
            im_list = provide_altered_images(use_im) # Create extra data with 3 rotated images of every image
            for alt_im in im_list:
              key_array.append(dir_name)  # Label each (rotated) image with its directory name
              numpied_image = np.array(alt_im.convert('L')).reshape(1,-1) # Converts to grayscale
              #Image.fromarray(np.reshape(numpied_image,(-1,100)), 'L').show()
              pixel_arr = np.append(pixel_arr, numpied_image, axis=0)
          im.close()

  key = np.array(key_array)
  np.save(image_file, pixel_arr)
  np.save(key_file, key)



# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001, C=10, kernel='rbf', class_weight='balanced') # gamma and C from tests
#le = preprocessing.LabelEncoder()
#le.fit(key)
#transformed_key = le.transform(key)
transformed_key = key


X_train, X_test, y_train, y_test = train_test_split(pixel_arr, transformed_key, test_size=0.1,random_state=7)

#scaler = preprocessing.StandardScaler()

pca = PCA(n_components=500, svd_solver='randomized', whiten=True)
# Fit on training set only.
#scaler.fit(X_train)
pca.fit(X_train)

# Apply transform to both the training set and the test set.
#X_train = scaler.transform(X_train)
#X_test = scaler.transform(X_test)

X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)


print ("Fit classifier")
classifier = classifier.fit(X_train_pca, y_train)
print ("Score = " + str(classifier.score(X_test_pca, y_test)))

# Now predict the value of the domino on the test data:
expected = y_test

print ("Predicting")
predicted = classifier.predict(X_test_pca)

print("Classification report for classifier %s:\n%s\n"
      % (classifier, metrics.classification_report(expected, predicted)))
#print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted, labels  =list(set(key))))
end = time.time()
print(end - start)

Output (the last line being the runtime in seconds)

Score = 0.09830205540661305
Predicting
Classification report for classifier SVC(C=10, cache_size=200, class_weight='balanced', coef0=0.0, decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False):
          precision    recall  f1-score   support

  b'0x0'       0.22      0.44      0.30        27
  b'1x0'       0.24      0.23      0.24        43
  b'1x1'       0.15      0.12      0.13        49
  b'2x0'       0.13      0.15      0.14        34
  b'2x1'       0.16      0.16      0.16        44
  b'2x2'       0.02      0.03      0.03        36
  b'3x0'       0.05      0.06      0.05        36
  b'3x1'       0.05      0.05      0.05        42
  b'3x2'       0.08      0.09      0.08        46
  b'3x3'       0.15      0.16      0.15        50
  b'4x0'       0.15      0.15      0.15        40
  b'4x1'       0.07      0.05      0.06        42
  b'4x2'       0.02      0.02      0.02        41
  b'4x3'       0.09      0.08      0.09        49
  b'4x4'       0.11      0.10      0.11        39
  b'5x0'       0.18      0.12      0.14        42
  b'5x1'       0.00      0.00      0.00        38
  b'5x2'       0.02      0.02      0.02        43
  b'5x3'       0.07      0.08      0.07        36
  b'5x4'       0.07      0.04      0.05        51
  b'5x5'       0.11      0.14      0.12        42
  b'6x0'       0.03      0.03      0.03        37
  b'6x1'       0.07      0.10      0.08        31
  b'6x2'       0.03      0.03      0.03        33
  b'6x3'       0.09      0.07      0.08        45
  b'6x4'       0.02      0.03      0.03        30
  b'6x5'       0.16      0.19      0.17        37
  b'6x6'       0.10      0.08      0.09        36

   micro avg       0.10      0.10      0.10      1119
   macro avg       0.09      0.10      0.10      1119
   weighted avg       0.10      0.10      0.10      1119


115.74487614631653

I think one of the reasons is that you should not give your raw images directly as the input to your SVM classifier, even if you applied a PCA. You should either compute features that describe the shape, contrast and colors of your images and put those into the classifier, or use a CNN. CNNs were made to classify images, and their architectures compute features from the images automatically.
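
As an illustration of the first option (a sketch only, not tuned or tested on this dataset), you could compute HOG descriptors with scikit-image and feed those to the SVM instead of raw pixels; X_train, X_test, y_train and y_test here are the flattened 66x66 grayscale splits from the question's script:

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(flat_images, size=66):
    # One HOG descriptor per image; cell/block sizes are illustrative, not tuned.
    return np.array([hog(img.reshape(size, size),
                         orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2))
                     for img in flat_images])

X_train_hog = hog_features(X_train)
X_test_hog = hog_features(X_test)

clf = SVC(kernel='rbf', class_weight='balanced')
clf.fit(X_train_hog, y_train)
print(clf.score(X_test_hog, y_test))

HOG captures local shape and contrast (the images are grayscale here, so color would need a separate descriptor), which is much closer to "counting the dots" than raw pixel intensities.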
