
Getting InvalidArgumentError in softmax_cross_entropy_with_logits

I am very new to tensorflow and tried some experiments with the Iris dataset. I created the following model function (MWE):

def model_fn(features, labels, mode):
    net = tf.feature_column.input_layer(features, [tf.feature_column.numeric_column(key=key) for key in FEATURE_NAMES])

    logits = tf.layers.dense(inputs=net, units=3)

    loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(
        loss=loss,
        global_step=tf.train.get_global_step())

    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

Unfortunately, I get the following error:

InvalidArgumentError: Input to reshape is a tensor with 256 values, but the requested shape has 1
 [[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg, Reshape/shape)]]

There seems to be something wrong with the shape of the tensors. However, logits and labels have the same shape, (256, 3), as required by the documentation. Also, both tensors have type float32.
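To see why matching input shapes alone are not enough here: tf.nn.softmax_cross_entropy_with_logits returns one loss value per example, not a scalar. The per-example behavior can be reproduced in plain NumPy (a minimal sketch, assuming a batch of 4 and 3 classes):

```python
import numpy as np

def softmax_xent(labels, logits):
    # Numerically stable softmax cross entropy, computed per example
    # (same contract as tf.nn.softmax_cross_entropy_with_logits).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.2],
                   [1.0, 1.0, 1.0],
                   [3.0, 0.1, 0.1]])
labels = np.eye(3)[[0, 1, 2, 0]]  # one-hot targets, shape (4, 3)

loss = softmax_xent(labels, logits)
print(loss.shape)  # (4,) -- one loss per example, not a scalar
```

Even though labels and logits both have shape (4, 3), the result is a length-4 vector, which is the root of the error below.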


Just for completeness, here is the input function for the estimator:

import pandas as pd
import tensorflow as tf
import numpy as np

IRIS_DATA = "data/iris.csv"

FEATURE_NAMES = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
CLASS_NAME = ["class"]

COLUMNS = FEATURE_NAMES + CLASS_NAME

# read dataset
iris = pd.read_csv(IRIS_DATA, header=None, names=COLUMNS)

# encode classes
iris["class"] = iris["class"].astype('category').cat.codes

# train test split
np.random.seed(1)
msk = np.random.rand(len(iris)) < 0.8
train = iris[msk]
test = iris[~msk]

def iris_input_fn(batch_size=256, mode="TRAIN"):
    def prepare_input(data=None):

        # do mean normalization across all samples
        mu = np.mean(data)
        sigma = np.std(data)

        data = data - mu
        data = data / sigma
        is_nan = np.isnan(data)
        is_inf = np.isinf(data)
        if np.any(is_nan) or np.any(is_inf):
            print('data is not well-formed : is_nan {n}, is_inf: {i}'.format(n= np.any(is_nan), i=np.any(is_inf)))


        data = transform_data(data)
        return data

    def transform_data(data):
        data = data.astype(np.float32)
        return data


    def load_data():
        global train

        trn_all_data=train.iloc[:,:-1]
        trn_all_labels=train.iloc[:,-1]


        return (trn_all_data.astype(np.float32),
                trn_all_labels.astype(np.int32))

    data, labels = load_data()
    data = prepare_input(data)

    labels = tf.one_hot(labels, depth=3)

    labels = tf.cast(labels, tf.float32)
    dataset = tf.data.Dataset.from_tensor_slices((data.to_dict(orient="list"), labels))

    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    return dataset.make_one_shot_iterator().get_next()
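The label pipeline above turns the integer category codes from cat.codes into one-hot float vectors via tf.one_hot. The same transformation can be mimicked in NumPy (a sketch, assuming class codes 0..2 as produced for the three Iris classes):

```python
import numpy as np

codes = np.array([0, 2, 1, 0], dtype=np.int32)  # category codes, as from cat.codes
one_hot = np.eye(3, dtype=np.float32)[codes]    # same result as tf.one_hot(codes, depth=3)
print(one_hot.shape)  # (4, 3)
```

Each row has exactly one 1.0 in the column of its class, which matches the (batch_size, 3) shape the labels tensor has inside model_fn.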

The dataset is from the UCI repository.

I solved the problem by replacing the loss function from the nn module:

loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with the loss function from the losses module:

loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)

or, equivalently, by averaging the per-example losses myself:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

The loss fed into the minimize method of GradientDescentOptimizer has to be a scalar: a single value for the whole batch.

The problem was that I computed the softmax cross entropy for each element of the batch, which yields a tensor of 256 (the batch size) cross entropy values, and tried to feed that tensor into the minimize method. Hence the error message:

Input to reshape is a tensor with 256 values, but the requested shape has 1
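The fix can be verified in isolation: reducing the per-example loss vector to its mean produces the rank-0 scalar that minimize expects (a NumPy sketch, using a hypothetical per-example loss vector in place of the real losses):

```python
import numpy as np

# Hypothetical per-example losses, shape (256,): one value per batch element,
# standing in for the output of tf.nn.softmax_cross_entropy_with_logits.
per_example_loss = np.random.rand(256).astype(np.float32)

# The equivalent of tf.reduce_mean: collapse the batch dimension.
scalar_loss = per_example_loss.mean()
print(np.ndim(scalar_loss))  # 0 -- a scalar, as minimize requires
```

tf.losses.softmax_cross_entropy performs this reduction internally, which is why swapping it in also resolves the error.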
