
My model got 94% accuracy, but I don't think that's right and it doesn't make sense

Hi, I recently made an AI program that classifies flowers. When I ran model.fit_generator it reported 94% accuracy, and when I searched for the record for highest accuracy it was also 94%. That just doesn't sound right: I only started neural networks 2 months ago, and it says I got the same accuracy as a company. Also, I used a TPU from Colab.

import os 
import numpy as np
import pickle 
from tensorflow import keras 
import cv2
import matplotlib.pyplot as plt
import random
from tensorflow.keras.preprocessing.image import ImageDataGenerator
os.getcwd()
data = []
img_size = 175
categories = ["daisy", "dandelion", "rose", "sunflower", "tulip","Lily","aster","blue bonet","Camellia","gardinia","Lavander","Azalea"]
def create_data():
    for category in categories:
        path = os.path.join('image_data/image_data/Train', category)
        class_num = categories.index(category)
        for img in os.listdir(path):
            img_arr = cv2.imread(os.path.join(path, img))
            if img_arr is None:  # unreadable or non-image file
                print('Not valid:', img)
                continue
            try:
                new_arr = cv2.resize(img_arr, (img_size, img_size))
            except cv2.error:
                print('Not valid:', img)
                continue
            data.append([new_arr, class_num])

create_data()
random.shuffle(data)
X=[]
y=[]

for features, labels in data:
    X.append(features)
    y.append(labels)

X = np.array(X).reshape(-1, img_size, img_size, 3)
y = np.array(y)
print('Shape of X: ', X.shape)
print('Shape of y: ', y.shape)
with open('X.pickle', 'wb') as pickle_out:
    pickle.dump(X, pickle_out)

with open('y.pickle', 'wb') as pickle_out_2:
    pickle.dump(y, pickle_out_2)
X = X / 255.0
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
le = LabelEncoder()
y= le.fit_transform(y)
y = to_categorical(y,12)
y.shape
from sklearn.model_selection import train_test_split

X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size = 0.8, random_state= 7)
datagen = ImageDataGenerator(
        featurewise_center=False,  
        samplewise_center=False,  
        featurewise_std_normalization=False, 
        samplewise_std_normalization=False, 
        zca_whitening=False,  
        rotation_range=10,  
        zoom_range = 0.1,
        width_shift_range=0.2,  
        height_shift_range=0.2, 
        horizontal_flip=True, 
        vertical_flip=False) 
datagen.fit(X_train)
model = keras.models.Sequential([keras.layers.Conv2D(32, 5, activation='relu', padding='SAME', input_shape=X.shape[1:]),
                                keras.layers.MaxPooling2D(2),
                                
                                keras.layers.Conv2D(64, 3, activation='relu', padding='SAME'),
                                keras.layers.MaxPooling2D(pool_size=2),
                        
                                
                                keras.layers.Conv2D(96, 3, activation="relu", padding="same"),
                                keras.layers.MaxPooling2D(pool_size=2),
                               
                                keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
                                
                                keras.layers.MaxPooling2D(pool_size=2),
                                
                                keras.layers.Flatten(),
                                
                                keras.layers.Dense(500, activation='relu'),
                                keras.layers.Dropout(0.7),
                                keras.layers.Dense(12, activation='softmax')
                                ])
model.compile(optimizer = keras.optimizers.Adam(learning_rate=0.001), loss = 'categorical_crossentropy', metrics=['accuracy'])
# fit_generator is deprecated in TF2; model.fit accepts the generator from datagen.flow(...) directly
# history = model.fit(datagen.flow(X_train, y_train, batch_size=128), epochs=70, validation_data=(X_valid, y_valid))
history = model.fit(X_train, y_train, batch_size=128, epochs=70, validation_data=(X_valid, y_valid))
model.save("model1.h5")

Could you please check whether I actually got 94% accuracy? Sorry for my English, it is not my first language.

First of all, you need to go through a basic ML course and build up some foundation, so you know what exactly you're trying to achieve here. Getting your model to overfit and reach 99.9% accuracy is pretty straightforward and EASY, but also wrong. I'm pretty sure your code can overfit and give you the accuracy you've mentioned. There's nothing wrong with your code.

I would recommend plotting the validation loss and accuracy; it might help you understand what happened and will definitely tell you whether your model overfits.
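For example, a minimal sketch of those curves, assuming you keep the History object that model.fit returns (e.g. history = model.fit(...)):

import matplotlib.pyplot as plt

# history is returned by model.fit(...); with metrics=['accuracy'] the keys are
# 'accuracy' / 'val_accuracy' in TF2 (older Keras versions used 'acc' / 'val_acc')
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.legend()
plt.show()

A large gap between the training and validation curves is the classic sign of overfitting.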

You can also use model.predict on new, preprocessed images to actually check whether your network performs well.
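A minimal sketch, assuming a hypothetical file 'test_flower.jpg' that was not part of the training data, preprocessed exactly like the training images:

import cv2
import numpy as np

img = cv2.imread('test_flower.jpg')           # hypothetical new image
img = cv2.resize(img, (img_size, img_size))   # same size as the training data
img = img.astype('float32') / 255.0           # same scaling as the training data
img = img.reshape(1, img_size, img_size, 3)   # add the batch dimension

probs = model.predict(img)                    # shape (1, 12), one probability per class
print('Predicted class:', categories[int(np.argmax(probs))])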

I would also recommend splitting your data into three different sets: training, validation, and test.
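For example, a minimal sketch with scikit-learn's train_test_split (the split ratios and random_state values here are just examples):

from sklearn.model_selection import train_test_split

# First hold out a test set, then split the remainder into train/validation
# (roughly 60/20/20 overall).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.25, random_state=7)

# Train with (X_train, y_train) and validation_data=(X_valid, y_valid),
# then evaluate on the untouched test set only once, at the very end:
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)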
