
How can I see which variables or operations are running on the CPU in TensorFlow?

I am running a CNN in TensorFlow. I am using tf.device('/gpu:0') to place all of my variables on the GPU, but it seems that some of them are still on the CPU. When I watch GPU utilization while my code runs, it climbs to 100% and then drops to 0%.

I know that if I set config.log_device_placement = True I can see which variables are assigned to which device, but because there are so many variables in my code, I couldn't work out which ones are on the CPU.

So, is there any way to see only the variables that are pinned to the CPU? Or do you have any idea why some of my variables would be pinned to the CPU even though I am using tf.device to assign them to the GPU?

By the way, this issue appeared after I replaced the upsampler (a simple interpolator, tf.image.resize_images) with the following code for upsampling:

import tensorflow as tf

gpu_n = '/gpu:0'  # device the ops below are pinned to


def unravel_argmax(argmax, shape):
    # Convert the flat indices produced by tf.nn.max_pool_with_argmax
    # into (batch, y, x, channel) rows suitable for tf.scatter_nd.
    with tf.device(gpu_n):
        argmax_shape = argmax.get_shape()
        new_1dim_shape = tf.shape(tf.constant(0, shape=[tf.Dimension(4), argmax_shape[0] * argmax_shape[1] * argmax_shape[2] * argmax_shape[3]]))
        batch_shape = tf.constant(0, dtype=tf.int64, shape=[argmax_shape[0], 1, 1, 1]).get_shape()
        # Batch index for every element of argmax.
        b = tf.multiply(tf.ones_like(argmax), tf.reshape(tf.range(shape[0]), batch_shape))
        # Recover row, column and channel from the flattened index.
        y = argmax // (shape[2] * shape[3])
        x = argmax % (shape[2] * shape[3]) // shape[3]
        c = tf.ones_like(argmax) * tf.range(shape[3])
        pack = tf.stack([b, y, x, c])
        pack = tf.reshape(pack, new_1dim_shape)
        pack = tf.transpose(pack)
        return pack


def unpool_layer2x2_batch(updates, mask, ksize=[1, 2, 2, 1]):
    # Scatter each pooled value back to the position recorded in `mask`
    # (the argmax of the pooling op), producing a ksize-times-larger map.
    with tf.device(gpu_n):
        input_shape = updates.get_shape()
        new_dim_y = input_shape[1] * ksize[1]
        new_dim_x = input_shape[2] * ksize[2]
        output_shape = tf.to_int64(tf.constant(0, dtype=tf.int64, shape=[input_shape[0], new_dim_y, new_dim_x, input_shape[3]]).get_shape())
        indices = unravel_argmax(mask, output_shape)
        new_1dim_shape = tf.shape(tf.constant(0, shape=[input_shape[0] * input_shape[1] * input_shape[2] * input_shape[3]]))
        values = tf.reshape(updates, new_1dim_shape)
        ret = tf.scatter_nd(indices, values, output_shape)
        return ret
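For reference, the coordinate arithmetic in unravel_argmax can be checked in plain Python. tf.nn.max_pool_with_argmax encodes each max location in a (B, H, W, C) tensor as the flat index ((b * H + y) * W + x) * C + c; for a single image (b = 0), the code's divisions recover y, x, and the channel. A hypothetical worked example:

```python
# Flat-index round trip for one image (b = 0), with made-up dimensions.
H, W, C = 4, 6, 3
y, x, c = 2, 5, 1
idx = (y * W + x) * C + c       # flat index as produced by the pooling op

y_rec = idx // (W * C)          # matches: argmax // (shape[2] * shape[3])
x_rec = idx % (W * C) // C      # matches: argmax % (shape[2] * shape[3]) // shape[3]
c_rec = idx % C

print(y_rec, x_rec, c_rec)      # recovers the original coordinates
```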

I got this code from here, for unpooling.
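The tf.scatter_nd call at the end of unpool_layer2x2_batch can be sketched in NumPy (hypothetical shapes and values): each pooled value is written back at its (b, y, x, c) argmax position in a zero-initialised tensor of the larger output shape.

```python
import numpy as np

output_shape = (1, 4, 4, 1)
indices = np.array([[0, 0, 1, 0],   # one (b, y, x, c) row per value
                    [0, 2, 3, 0]])
values = np.array([5.0, 7.0])

out = np.zeros(output_shape)
out[tuple(indices.T)] = values      # scatter: every other cell stays zero
```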

You can log device placement via the Session configuration:

  sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
