I want to use a trained model to modify the input so that it minimizes the loss (rather than updating the trainable variables), in the style of Deep Dream, in TensorFlow 2.0, but I am not having any success.
Say I have a basic NN like the one in the docs:
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
which I train using a simple tf.GradientTape-based function:
@tf.function
def train_step(image, label):
    with tf.GradientTape() as tape:
        predictions = model(image)
        loss = loss_object(label, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
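For completeness, the names loss_object and optimizer are not defined above; a minimal setup in the spirit of the TF 2.0 quickstart (the exact loss and optimizer choices here are assumptions) would be:

```python
import tensorflow as tf

# Assumed definitions matching the quickstart tutorial this model comes from
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
```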
What's the idiomatic way to create a function that instead calculates and applies the gradients to the input images? I assumed it would be as simple as:
def train_step(image, label):
    with tf.GradientTape() as tape:
        predictions = model(image)
        loss = loss_object(label, predictions)
    gradients = tape.gradient(loss, image)
    optimizer.apply_gradients(zip(gradients, image))
However, that doesn't work.
tf.GradientTape.gradient
can only differentiate with respect to a watched tensor. Variables are automatically watched on first access. To differentiate with respect to an arbitrary tensor, you have to explicitly watch it:
>>> x = tf.constant([4.0])
>>> y = tf.constant([2.0])
>>> with tf.GradientTape() as tape:
... tape.watch([x, y])
... z = x * y
...
>>> tape.gradient(z, [x, y])
[<tf.Tensor: id=9, shape=(1,), dtype=float32, numpy=array([ 2.], dtype=float32)>,
<tf.Tensor: id=10, shape=(1,), dtype=float32, numpy=array([ 4.], dtype=float32)>]
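Applied to the question: watch the input image inside the tape, take the gradient of the loss with respect to the image, and update the image directly. Note that optimizer.apply_gradients expects tf.Variable targets, so the simplest route for a plain tensor is a manual gradient-descent step. A sketch, where the step size lr is an assumed hyperparameter:

```python
import tensorflow as tf

def input_step(model, loss_object, image, label, lr=0.01):
    """One gradient-descent step on the input image instead of the weights."""
    with tf.GradientTape() as tape:
        tape.watch(image)                   # image is a plain tensor, so watch it explicitly
        predictions = model(image)
        loss = loss_object(label, predictions)
    gradients = tape.gradient(loss, image)  # d(loss)/d(image), same shape as image
    return image - lr * gradients           # descend on the input, not the variables
```

Alternatively, wrapping the input in a tf.Variable (image_var = tf.Variable(image)) lets you keep the original optimizer.apply_gradients([(gradients, image_var)]) pattern, since variables are watched automatically.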