
How do I access the value of a label in TensorFlow during training/eval?

I'm working on a neural network in TensorFlow with a custom loss function. Based on the label of the input image, I sample a vector (from another data set, corresponding to that label) and take the dot product of the input image's embedding (the softmax preactivation) with the sampled vector. I also negatively sample a mismatching vector whose label conflicts with the input label, along with the embedding of another random training image whose label differs from the current input's. All of these are constructed as same-dimensional tensors, and the custom loss is:

Loss = max(0, input_embedding * mismatching_sample - input_embedding * matching_sample + 1)
       + max(0, random_embedding * matching_sample - input_embedding * matching_sample + 1)
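
For reference, here is the same loss written directly with TensorFlow ops. This is a minimal sketch that assumes the four vectors are already available as same-shaped float tensors; the function name, argument names, and the margin default are illustrative:

import tensorflow as tf

def ranking_loss(input_embedding, matching_sample, mismatching_sample, random_embedding, margin=1.0):
    # dot products via elementwise multiply + reduce_sum
    pos = tf.reduce_sum(input_embedding * matching_sample)              # correct pair
    neg_mismatch = tf.reduce_sum(input_embedding * mismatching_sample)  # input vs. mismatching vector
    neg_embedding = tf.reduce_sum(random_embedding * matching_sample)   # random embedding vs. matching vector
    return (tf.maximum(0.0, neg_mismatch - pos + margin)
            + tf.maximum(0.0, neg_embedding - pos + margin))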

That was mainly background; the issue I'm having is: how do I access the values of the labels corresponding to the input images? I need those label values in order to sample the correct vectors and compute my loss.

I read in the documentation that you can use .eval() to get the value of a tensor by running it in a session, but when I tried this my terminal just hung... I'm already running a session while training my neural network, so I'm not sure whether the problem is opening a second session inside another one and trying to evaluate values that belong to the already-running session. Anyway, I'm completely out of ideas as to how to make this work. Any help would be greatly appreciated!
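
For context on the .eval() behaviour itself: .eval() only evaluates a tensor against the default session (or one passed explicitly via session=), and a second, freshly created session does not reuse anything from the training session. A minimal, self-contained sketch with purely illustrative tensors that are not part of the training graph:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])  # illustrative tensor only
y = tf.reduce_sum(x)

sess = tf.Session()
print(sess.run(y))           # 6.0: evaluate through the session you already have
print(y.eval(session=sess))  # equivalent; .eval() needs an explicit or default session
with sess.as_default():
    print(y.eval())          # also works, because a default session is registered here
sess.close()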

Here's my original attempt that proved problematic:

import numpy as np
import tensorflow as tf

# compute custom loss function using tensors and tensor operations
def compute_loss(e_list, labels, i):
    embedding = e_list[i]  # the current embedding tensor
    label = labels[i]      # the matching label tensor; this is the value I need
    y_index = np.nonzero(label)[0][0]  # this always returns 0, doesn't work :(
    target = get_mnist_embedding(y_index)                            # vector matching the label
    wrong_mnist = get_mismatch_mnist_embedding(y_index)              # negatively sampled mismatch vector
    wrong_spec = get_random_spec_embedding(y_index, e_list, labels)  # random embedding with a different label
    # compute the loss:
    zero = tf.constant(0, dtype="float32")
    one = tf.constant(1, dtype="float32")
    mul1 = tf.multiply(wrong_mnist, embedding)
    dot1 = tf.reduce_sum(mul1)
    mul2 = tf.multiply(target, embedding)
    dot2 = tf.reduce_sum(mul2)
    mul3 = tf.multiply(target, wrong_spec)
    dot3 = tf.reduce_sum(mul3)
    max1 = tf.maximum(zero, tf.add_n([dot1, tf.negative(dot2), one]))
    max2 = tf.maximum(zero, tf.add_n([dot3, tf.negative(dot2), one]))
    loss = tf.add(max1, max2)
    return loss

The issue was that I was not using feed_dict to pass in the sampled values. I was trying to read the value of the input's label tensor while building the loss computation, in order to then sample the other values needed for my loss function.

I have worked around this by precomputing and organizing the matching_sample and mismatching_sample values outside the graph: I define a tf.placeholder for each of these values up front, and pass the actual values in through a feed_dict dictionary when I execute sess.run([train_op, loss, ...], feed_dict=feed_dict), so the loss calculation has access to them.
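
A minimal, self-contained sketch of that workaround (the dimension, placeholder names, and the random stand-in values are illustrative; in the real model input_embedding comes from the network and the other three vectors come from the label-based sampling helpers):

import numpy as np
import tensorflow as tf

embedding_dim = 10  # illustrative size

# Placeholders for everything that is sampled/precomputed outside the graph.
input_embedding    = tf.placeholder(tf.float32, [embedding_dim])
matching_sample    = tf.placeholder(tf.float32, [embedding_dim])
mismatching_sample = tf.placeholder(tf.float32, [embedding_dim])
random_embedding   = tf.placeholder(tf.float32, [embedding_dim])

# The same hinge-style ranking loss, now built only from placeholders.
pos = tf.reduce_sum(input_embedding * matching_sample)
neg1 = tf.reduce_sum(input_embedding * mismatching_sample)
neg2 = tf.reduce_sum(random_embedding * matching_sample)
loss = tf.maximum(0.0, neg1 - pos + 1.0) + tf.maximum(0.0, neg2 - pos + 1.0)

with tf.Session() as sess:
    # At each step the label is known on the Python side, so the matching and
    # mismatching vectors can be looked up with NumPy and fed in; random
    # values stand in for them here.
    feed_dict = {
        input_embedding:    np.random.randn(embedding_dim).astype(np.float32),
        matching_sample:    np.random.randn(embedding_dim).astype(np.float32),
        mismatching_sample: np.random.randn(embedding_dim).astype(np.float32),
        random_embedding:   np.random.randn(embedding_dim).astype(np.float32),
    }
    print(sess.run(loss, feed_dict=feed_dict))

In the actual training loop the same feed_dict is simply extended with the batch inputs and passed to sess.run([train_op, loss, ...], feed_dict=feed_dict) as described above.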
