
How to define a loss function for Deep Dream-like gradient descent on the model input

So I have a trained sequential model (a categorizer) in Python Keras / TensorFlow, and I have some input. I want to optimize the input to maximize the category hit.

import tensorflow as tf
import numpy as np

def dream(input, model, iterations, learning_rate):
    target = model.predict(input)
    var = tf.Variable("var", Input.shape)
    loss = np.linalg.norm(model.predict(var)-goal)
    for i in range(iterations):
        input -= learning_rate * tf.gradients(loss,input)
    return input

However, this doesn't work.

How do I define loss correctly?

It is hard to say exactly what the problem is without seeing the rest of your code, so this is going to be based on some assumptions.

  1. The first is that this is a problem where you intend to learn a variable input image to produce a model output that better fits into one of several categories.
  2. The second is that the model.predict() function is the one specified here.
  3. The third is that goal is an ideal classification of your modified input image into the class into which it was most strongly classified before modification (see the sketch after this list).
  4. The fourth is that the argument image has been instantiated as a variable datatype.
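As a rough illustration of assumption 3 (a minimal sketch; model and image stand in for your trained classifier and a batched input image, neither of which appear in your snippet), the goal could be built like this:

import tensorflow as tf

# class the model currently assigns most strongly to the unmodified image
predictions = model(image)                      # shape (1, n_categories)
target_class = tf.argmax(predictions, axis=-1)  # index of the strongest class
# ideal one-hot classification for that class
goal = tf.one_hot(target_class, depth=predictions.shape[-1])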

If all of this is true, then it seems what you are looking for is a way of defining a loss function that compares a set of logits to a set of labels. In that case, a function such as tf.keras.metrics.categorical_crossentropy() should do that for you. A loss function based on a numpy operation will not work, because numpy lacks the gradient handling required for back-propagation; use tf.* functions only.
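As a minimal sketch of that point (assuming a trained model, a trainable variable image, and a one-hot goal as above), the whole loss has to be built from TensorFlow ops so that gradients can flow back to the input:

import tensorflow as tf

with tf.GradientTape() as tape:
    prediction = model(image)  # use the model as a TF op, not model.predict()
    loss = tf.keras.metrics.categorical_crossentropy(goal, prediction)

# this works because every op above is differentiable by TensorFlow;
# inserting np.linalg.norm(...) in the middle would break the chain
gradients = tape.gradient(loss, image)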

However, if, as you say, "var has shape ()", this is unlikely to work. That is probably because you defined it using Input.shape, while the argument passed to the function is called input (case very definitely matters); note also that in tf.Variable("var", Input.shape) the string "var" is treated as the variable's initial value, which by itself yields a scalar variable of shape ().
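A hypothetical fix for that line (assuming the input argument is a concrete image tensor or array, not a Keras Input layer) would be to initialise the variable from the image that was actually passed in, so that it inherits the image's shape:

import tensorflow as tf

# the variable takes its shape from its initial value
var = tf.Variable(initial_value=input, name="var")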

See below for a rough adaptation of your code into something that seems plausible.


import tensorflow as tf

def dream(image, model, iterations, learning_rate):
    # decide into which class the input should fit: the class the model
    # currently assigns most strongly to the unmodified image
    target_class = tf.argmax(model(image), axis=-1)
    # create the ideal output of the classification (1 for the target class,
    # 0 for all others)
    goal = tf.one_hot(indices=target_class, depth=model.output_shape[-1])
    # iteratively modify the image according to the gradients produced by
    # differentiating the loss with respect to the variable image pixels
    for _ in range(iterations):
        with tf.GradientTape() as tape:
            # cross entropy of the model prediction vs the goal
            # (note the argument order: y_true first, y_pred second)
            loss = tf.keras.metrics.categorical_crossentropy(goal, model(image))
        gradients = tape.gradient(loss, image)
        image.assign_sub(learning_rate * gradients)
    return image

# SETUP
# a, b: image height and width; n_units, n_categories: layer sizes
inputs = tf.keras.Input(shape=(a, b, 3))

# add some layers
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(n_units, activation=tf.nn.relu)(x)
output = tf.keras.layers.Dense(n_categories, activation=tf.nn.softmax)(x)
# link model
model = tf.keras.Model(inputs=inputs, outputs=output)

# create trainable image from a concrete starting image of shape (1, a, b, 3)
image = tf.Variable(original_image, name="variable_image")

# call function
dream(image, model, iterations, learning_rate)

CAVEAT: This code is a hypothetical framework, based on assumptions about the intended purpose of the code. This code is untested and is not intended to be run verbatim.
