
sparse_softmax_cross_entropy_with_logits gives worse results than softmax_cross_entropy_with_logits

I am implementing a classic image-classification problem in TensorFlow with 9 classes. I first used softmax_cross_entropy_with_logits as the classifier and trained the network; after some steps it reached about 99% training accuracy.

Then I tried the same problem with sparse_softmax_cross_entropy_with_logits, and this time it did not converge at all (training accuracy stayed around 0.10 to 0.20).

For your information: for softmax_cross_entropy_with_logits I use labels of shape [batch_size, num_classes] with dtype float32, while for sparse_softmax_cross_entropy_with_logits I use labels of shape [batch_size] with dtype int32.
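For reference, here is a minimal sketch of my own (using the TF 1.x API from the question; the toy shapes and values are illustrative) showing that the two ops produce the same per-example loss when the dense labels are the one-hot encoding of the sparse ones:

import numpy as np
import tensorflow as tf

logits        = tf.constant(np.random.randn(4, 9), dtype=tf.float32)  # [batch_size, num_classes]
sparse_labels = tf.constant([0, 3, 5, 8], dtype=tf.int32)             # [batch_size], class indices
dense_labels  = tf.one_hot(sparse_labels, depth=9)                    # [batch_size, num_classes], float32

dense_loss  = tf.nn.softmax_cross_entropy_with_logits(labels=dense_labels, logits=logits)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=sparse_labels, logits=logits)

with tf.Session() as sess:
    d, s = sess.run([dense_loss, sparse_loss])
    print(np.allclose(d, s))  # True: identical per-example cross-entropy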

Does anyone have any idea what might be wrong?

Update:

Here is the code:

def costFun(self):
    # Flatten the labels to shape [batch_size] of class indices, as the sparse op expects.
    self.y_ = tf.reshape(self.y_, [-1])
    return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=self.score_, labels=self.y_))

def updateFun(self):
    return tf.train.AdamOptimizer(learning_rate = self.lr_).minimize(self.cost_)

def perfFun(self):
    # Accuracy: note that tf.argmax(y, 1) assumes one-hot labels (see the fix below).
    correct_pred = tf.equal(tf.argmax(self.score_, 1), tf.argmax(y, 1))
    return tf.reduce_mean(tf.cast(correct_pred, tf.float32))

def __init__(self,x,y,lr,lyr1FilterNo,lyr2FilterNo,lyr3FilterNo,fcHidLyrSize,inLyrSize,outLyrSize, keepProb):

    self.x_            = x
    self.y_            = y
    self.lr_           = lr
    self.inLyrSize     = inLyrSize
    self.outLyrSize_   = outLyrSize
    self.lyr1FilterNo_ = lyr1FilterNo
    self.lyr2FilterNo_ = lyr2FilterNo
    self.lyr3FilterNo_ = lyr3FilterNo
    self.fcHidLyrSize_ = fcHidLyrSize
    self.keepProb_     = keepProb

    [self.params_w_, self.params_b_] = ConvNet.paramsFun(self) 
    self.score_, self.PackShow_      = ConvNet.scoreFun (self) 
    self.cost_                       = ConvNet.costFun  (self) 
    self.update_                     = ConvNet.updateFun(self) 
    self.perf_                       = ConvNet.perfFun  (self) 

Main:

lyr1FilterNo = 32
lyr2FilterNo = 64
lyr3FilterNo = 128

fcHidLyrSize = 1024
inLyrSize    = 32 * 32

outLyrSize   = 9
lr           = 0.001
batch_size   = 300

dropout      = 0.5
x            = tf.placeholder(tf.float32, [None, inLyrSize])
y            = tf.placeholder(tf.int32,   None)
keepProb     = tf.placeholder(tf.float32)  # assumed: this placeholder was missing from the posted snippet

ConvNet_class = ConvNet(x, y, lr, lyr1FilterNo, lyr2FilterNo, lyr3FilterNo, fcHidLyrSize, inLyrSize, outLyrSize, keepProb)
initVar = tf.global_variables_initializer()


with tf.Session() as sess:
    sess.run(initVar)

    for step in range(10000):

        # trData_i / trLabel_i: the current training batch (loading code omitted in the question)
        trData_i  = np.reshape(trData_i,  (-1, 32 * 32))
        trLabel_i = np.reshape(trLabel_i, (-1, 1))

        update_i, PackShow, wLyr1_i, wLyr2_i, wLyr3_i = sess.run(
            [ConvNet_class.update_, ConvNet_class.PackShow_,
             ConvNet_class.params_w_['wLyr1'], ConvNet_class.params_w_['wLyr2'],
             ConvNet_class.params_w_['wLyr3']],
            feed_dict={x: trData_i, y: trLabel_i, keepProb: dropout})

I found the problem, thanks to @mrry's helpful comment. I was actually computing the accuracy incorrectly; "sparse_softmax" and "softmax" produce the same loss (cost) for the same input logits.

To fix the accuracy computation, I changed

correct_pred = tf.equal(tf.argmax(self.score_, 1), tf.argmax(y, 1))

to

correct_pred = tf.equal(tf.argmax(self.score_, 1), y)

because with "sparse_softmax" the ground-truth labels are not in one-hot vector format; they are plain int32 or int64 class indices. (Since my labels were fed with shape [batch_size, 1], tf.argmax(y, 1) always returned 0, which is presumably why the measured "accuracy" hovered near 1/9 ≈ 0.11 for 9 classes.)
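For completeness, a corrected perfFun might look like the sketch below. It assumes self.y_ holds int32 class indices of shape [batch_size]; since tf.argmax returns int64 by default, the labels are cast up so tf.equal sees matching dtypes:

def perfFun(self):
    # Predicted class index per example; tf.argmax returns int64 by default.
    predictions  = tf.argmax(self.score_, 1)
    # Cast the int32 labels to int64 so tf.equal compares matching dtypes.
    correct_pred = tf.equal(predictions, tf.cast(self.y_, tf.int64))
    return tf.reduce_mean(tf.cast(correct_pred, tf.float32))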

