
How to visualize DNNs dependent on the output class in TensorFlow?

In TensorFlow it is pretty straightforward to visualize filters and activation layers given a single input.

But I'm more interested in the opposite direction: feeding a class (as a one-hot vector) to the output layer and seeing something like the optimal input image for that specific class.

Is there a way to do this, or to run the graph in reverse?

Background: I'm using Google's Inception V3 with 15 classes, and I've already trained the network on a large amount of data to good precision. Now I'm interested in understanding why and how the model distinguishes the different classes.

The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing wrt the parameters of the network, you optimize wrt the input (which has to be a variable with the shape of your input image). Your optimization target is the negative (because you want to maximize, but TF optimizers minimize) logit of your target class. You want to run it with a couple of different initial values for the image.

There are also a few related techniques; if you search for DeepDream and adversarial examples, you'll find a lot of literature.
