
Accuracy doesn't change over keras training, loss barely decreases

I am trying to train a neural network to solve picross (aka nonogram) puzzles on a 5*5 grid using keras. This means that, ideally, the network would have multiple correct activations for each training case.

I have found a way to randomly generate the training data and initialize the neural network, but when I run it, the network's accuracy never changes and the loss barely decreases:

Epoch 1/100
100000/100000 [==============================] - 13s 133us/sample - loss: 1.6282 - acc: 0.5001

Epoch 2/100
100000/100000 [==============================] - 13s 131us/sample - loss: 1.6233 - acc: 0.5001

Epoch 3/100
100000/100000 [==============================] - 13s 132us/sample - loss: 1.6175 - acc: 0.5001

...

Epoch 99/100
100000/100000 [==============================] - 14s 136us/sample - loss: 1.4704 - acc: 0.5001

Epoch 100/100
100000/100000 [==============================] - 14s 136us/sample - loss: 1.4696 - acc: 0.5001

I am running this in a Jupyter notebook.

I have been told that using 'binary_crossentropy' as the loss function is ideal for this problem, but I don't know how to format the labels of the training data for it. Should they be a list of ones and zeros, a list of numbers, or an array...?

The output layer is 25 neurons, each corresponding to a block on the 5*5 grid. They would have a correct activation of 1 or 0 depending on whether that block is empty or not.
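For reference, here is a minimal sketch of what such multi-label targets for 'binary_crossentropy' typically look like (the values below are made-up example data, not output from the actual generator):

import numpy as np

# Each label is one flattened 5*5 grid: a multi-hot vector of 25
# independent 0/1 values, so N samples give y_train shape (N, 25).
y_example = np.array([[1, 0, 0, 1, 1] + [0] * 20], dtype=np.float32)
print(y_example.shape)  # (1, 25)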

import random
import numpy as np
import tensorflow as tf
from keras.optimizers import SGD

network = tf.keras.models.Sequential()
network.add(tf.keras.layers.Flatten())
network.add(tf.keras.layers.Dense(750, activation=tf.nn.relu))
network.add(tf.keras.layers.Dense(500, activation=tf.nn.relu))
network.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
network.add(tf.keras.layers.Dense(25, activation=tf.nn.softmax))
network.compile(optimizer='SGD',
                loss='binary_crossentropy',
                metrics=['accuracy'])
network.fit(scaled_x_train, y_train, epochs=100, batch_size=50)

I expected the accuracy to increase as the training progressed, even if only slightly, but it stays at whatever value it started with, and the loss function only decreases a little.

Edit: The data fed to the neural network's input is the "hints", scaled down to values between 0 and 1. Here is the code that creates the data:

import random
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x_train = []
y_train = []

for m in range(100000):  # creating a data set with 100000 items in it
    grid = [[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]
    hints = [[[],[],[],[],[]],[[],[],[],[],[]]]

    for i in range(5):
        for j in range(5):
            grid[i][j] = random.randint(0,1)   #All items in the grid are random, either 0s or 1s


    sub_y_train = []
    for z in range(5):
        for x in range(5):
            sub_y_train.append(grid[z][x])

    sub_y_train = np.array(sub_y_train)
    y_train.append(sub_y_train)         #the grids are added to the data set first



    ##figuring out the hints along the vertical axis
    for i in range(5):
        counter = 0
        for j in range(4):
            if grid[i][j] == 1:
                counter += 1
                if grid[i][j+1] == 0:
                    hints[0][i].append(counter)
                    counter = 0
        if grid[i][4] == 1:
            hints[0][i].append(counter+1)
            counter = 0


    ##figuring out the hints along the horizontal axis
    for i in range(5):
        counter = 0
        for j in range(4):
            if grid[j][i] == 1:
                counter += 1
                if grid[j+1][i] == 0:
                    hints[1][i].append(counter)
                    counter = 0
        if grid[4][i] == 1:
            hints[1][i].append(counter+1)
            counter = 0

    for i in range(2):
        for j in range(5):
            while len(hints[i][j]) != 3:
                hints[i][j].append(0)

    new_hints = []
    for i in range(2):
        for j in range(5):
            for k in range(3):
                new_hints.append(hints[i][j][k])

    new_hints.append(5)

    x_train.append(new_hints)    #Once the hints are created and formalized, they are added to x_train


x_train = np.array(x_train)      #Both x_train and y_train are converted into numpy arrays
y_train = np.array(y_train)



scaler = MinMaxScaler(feature_range=(0,1))
scaled_x_train = scaler.fit_transform(x_train)

for i in range(5):
    print(scaled_x_train[i])
    print(y_train[i])
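As a quick sanity check (a sketch, assuming the generation code above ran as-is), the shapes should work out to 2 axes * 5 lines * 3 hint slots + the one appended grid-size value = 31 input features, and 25 binary labels per sample:

print(scaled_x_train.shape)  # expected: (100000, 31) -- 2 * 5 * 3 hints + 1
print(y_train.shape)         # expected: (100000, 25) -- one 0/1 label per cell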

Peteris was right: replacing the 'softmax' activation function with 'sigmoid' on the network's output layer seems to have done it, and the accuracy is now steadily increasing. The network currently reaches a stable accuracy of almost 95%. (Thank you so much, I had been trying to fix this for weeks.)
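For anyone who finds this later, here is a minimal sketch of the fixed model (only the activation swap was confirmed above; the rest mirrors the original code):

import tensorflow as tf

network = tf.keras.models.Sequential()
network.add(tf.keras.layers.Flatten())
network.add(tf.keras.layers.Dense(750, activation=tf.nn.relu))
network.add(tf.keras.layers.Dense(500, activation=tf.nn.relu))
network.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
# sigmoid instead of softmax: each of the 25 cells gets its own
# independent probability instead of competing in one distribution,
# which is what binary_crossentropy expects for multi-label targets
network.add(tf.keras.layers.Dense(25, activation=tf.nn.sigmoid))
network.compile(optimizer='SGD',
                loss='binary_crossentropy',
                metrics=['accuracy'])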

