Tensorflow Extracting Classification Predictions
I have a TensorFlow NN model for classifying one-hot-encoded group labels (the groups are mutually exclusive), which ends with (layerActivs[-1] being the activations of the last layer):
probs = sess.run(tf.nn.softmax(layerActivs[-1]),...)
classes = sess.run(tf.round(probs))
preds = sess.run(tf.argmax(classes))
The tf.round is included to force any low probabilities to 0. If all of an observation's probabilities are below 50%, this means no class is predicted. E.g., with 4 classes we could have probs[0,:] = [0.2, 0, 0, 0.4], so classes[0,:] = [0, 0, 0, 0]; preds[0] = 0.
This is obviously ambiguous, because if we have probs[1,:] = [.9, 0, .1, 0] -> classes[1,:] = [1, 0, 0, 0] -> preds[1] = 0 as well.
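The collision can be reproduced outside TensorFlow; NumPy's argmax follows the same convention of returning index 0 for an all-zero row:

```python
import numpy as np

probs = np.array([[0.2, 0.0, 0.0, 0.4],   # no probability reaches 0.5
                  [0.9, 0.0, 0.1, 0.0]])  # clear class-0 prediction
classes = np.round(probs)                 # row 0 rounds to all zeros
preds = np.argmax(classes, axis=1)        # both rows give 0
```

Row 0 predicted no class at all, yet its preds entry is indistinguishable from row 1's genuine class-0 prediction.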
This is a problem when using the built-in TensorFlow metrics classes, because the functions cannot distinguish between no prediction and a prediction in class 0. This is demonstrated by the following code:
import numpy as np
import tensorflow as tf
import pandas as pd
''' prepare '''
classes = 6
n = 100
# simulate data
np.random.seed(42)
simY = np.random.randint(0,classes,n) # pretend actual data
simYhat = np.random.randint(0,classes,n) # pretend pred data
truth = np.sum(simY == simYhat)/n
tabulate = pd.Series(simY).value_counts()
# create placeholders
lab = tf.placeholder(shape=simY.shape, dtype=tf.int32)
prd = tf.placeholder(shape=simY.shape, dtype=tf.int32)
AM_lab = tf.placeholder(shape=simY.shape,dtype=tf.int32)
AM_prd = tf.placeholder(shape=simY.shape,dtype=tf.int32)
# create one-hot encoding objects
simYOH = tf.one_hot(lab,classes)
# create accuracy objects
acc = tf.metrics.accuracy(lab,prd) # real accuracy with tf.metrics
accOHAM = tf.metrics.accuracy(AM_lab,AM_prd) # OHE argmaxed to labels - expected to be correct
# now setup to pretend we ran a model & generated OHE predictions all unclassed
z = np.zeros(shape=(n,classes),dtype=float)
testPred = tf.constant(z)
''' run it all '''
# setup
sess = tf.Session()
sess.run([tf.global_variables_initializer(),tf.local_variables_initializer()])
# real accuracy with tf.metrics
ACC = sess.run(acc,feed_dict = {lab:simY,prd:simYhat})
# OHE argmaxed to labels - expected to be correct, but is it?
l,p = sess.run([simYOH,testPred],feed_dict={lab:simY})
p = np.argmax(p,axis=-1)
ACCOHAM = sess.run(accOHAM,feed_dict={AM_lab:simY,AM_prd:p})
sess.close()
''' print stuff '''
print('Accuracy')
print('-known truth: %0.4f'%truth)
print('-on unprocessed data: %0.4f'%ACC[1])
print('-on faked unclassed labels data (s.b. 0%%): %0.4f'%ACCOHAM[1])
print('----------\nTrue Class Freqs:\n%r'%(tabulate.sort_index()/n))
This outputs:
Accuracy
-known truth: 0.1500
-on unprocessed data: 0.1500
-on faked unclassed labels data (s.b. 0%): 0.1100
----------
True Class Freqs:
0 0.11
1 0.19
2 0.11
3 0.25
4 0.17
5 0.17
dtype: float64
Note freq for class 0 is same as faked accuracy...
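That is no coincidence: argmax over an all-zero one-hot row always yields class 0, so the "accuracy" of fully unclassed predictions collapses to the empirical frequency of class 0. A quick NumPy check, reusing the same seed as the demo above:

```python
import numpy as np

np.random.seed(42)
simY = np.random.randint(0, 6, 100)         # same pretend labels as above
p = np.argmax(np.zeros((100, 6)), axis=-1)  # all-unclassed predictions -> all 0
acc = np.mean(simY == p)                    # equals the class-0 frequency
```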
I tried setting the value of preds to np.nan for unpredicted observations, but tf.metrics.accuracy throws ValueError: cannot convert float NaN to integer; I also tried np.inf, but got OverflowError: cannot convert float infinity to integer.
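Those two messages are Python's own integer-conversion errors rather than anything specific to tf.metrics: the labels must be castable to integers, and neither NaN nor inf is. A minimal reproduction:

```python
# minimal reproduction of the two failures above
try:
    int(float('nan'))
except ValueError as e:
    print(e)            # cannot convert float NaN to integer
try:
    int(float('inf'))
except OverflowError as e:
    print(e)            # cannot convert float infinity to integer
```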
How can I convert the rounded probabilities into class predictions while handling unpredicted observations appropriately?
This has gone long enough without an answer, so I'll post my solution here as the answer. I convert the probabilities into class predictions with a new function that has 3 main steps:

1. Set any probability at or below 1/num_classes to 0
2. Extract the predicted classes with np.argmax(), then
3. Assign any unclassed observations to a uniformly randomly selected class

The resulting vector of integer class labels can be passed to the tf.metrics functions. My function is below:
def predFromProb(classProbs):
    '''
    Take in as input an (m x p) matrix of m observations' class probabilities in
    p classes and return an m-length vector of integer class labels (0...p-1).
    Probabilities at or below 1/p are set to 0, as are NaNs; any unclassed
    observations are randomly assigned to a class.
    '''
    numClasses = classProbs.shape[1]
    # zero out class probs that are at or below chance, or NaN
    probs = classProbs.copy()
    probs[np.isnan(probs)] = 0
    probs = probs*(probs > 1/numClasses)
    # find any un-classed observations
    unpred = ~np.any(probs, axis=1)
    # get the predicted classes
    preds = np.argmax(probs, axis=1)
    # randomly classify un-classed observations
    rnds = np.random.randint(0, numClasses, np.sum(unpred))
    preds[unpred] = rnds
    return preds
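For illustration, a small end-to-end check (the function body is repeated so the snippet runs standalone; which class the last, fully-unclassed row receives depends on the random seed):

```python
import numpy as np

def predFromProb(classProbs):
    # repeated from above so this example is self-contained
    numClasses = classProbs.shape[1]
    probs = classProbs.copy()
    probs[np.isnan(probs)] = 0
    probs = probs*(probs > 1/numClasses)
    unpred = ~np.any(probs, axis=1)
    preds = np.argmax(probs, axis=1)
    preds[unpred] = np.random.randint(0, numClasses, np.sum(unpred))
    return preds

np.random.seed(0)
probs = np.array([[0.70, 0.20, 0.10],   # clear class 0
                  [0.20, 0.20, 0.60],   # clear class 2
                  [0.33, 0.33, 0.34],   # only 0.34 beats 1/3 -> class 2
                  [0.20, 0.30, 0.20]])  # nothing beats 1/3 -> random class
preds = predFromProb(probs)
```

Note that thresholding at 1/p rather than 0.5 lets a plurality winner like the third row still count as a prediction, while a row with no above-chance probability is classed at random instead of silently becoming class 0.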
Disclaimer: the technical posts on this site are licensed under CC BY-SA 4.0; if you repost, please credit this site or link to the original. For any questions, contact yoyou2525@163.com.