
How to reduce memory usage?

This is a sample of my code:

import tensorflow as tf

def normalize_3D(input):
    for i in range(input.shape[0]):
        s = tf.concat([tf.reshape(input[i, 9, 0], shape=[1, 1]),
                       tf.reshape(input[i, 9, 1], shape=[1, 1]),
                       tf.reshape(input[i, 9, 2], shape=[1, 1])], axis=1)

        output = input[i, :, :] - s
        output2 = output / tf.sqrt(tf.square(input[i, 9, 0] - input[i, 0, 0]) +
                                   tf.square(input[i, 9, 1] - input[i, 0, 1]) +
                                   tf.square(input[i, 9, 2] - input[i, 0, 2]))
        output2 = tf.reshape(output2, [1, input.shape[1], input.shape[2]])
        if i == 0:
            output3 = output2
        else:
            output3 = tf.concat([output3, output2], axis=0)

    return output3

As in this sample, I use a 'for' loop many times to process data that has only a few batches. However, while running my code, I noticed that it uses a lot of memory and an error message came up. Some of my predictions just show 'nan', and after that the program gets stuck.

Is there any way to reduce this kind of memory usage when I process batch data?

Your function can be expressed in a simpler and more efficient way like this:

import tensorflow as tf

def normalize_3D(input):
    # Reference point 9 for every sample in the batch: shape (batch, 3)
    shift = input[:, 9]
    # Per-sample distance between points 9 and 0: shape (batch, 1)
    scale = tf.norm(input[:, 9] - input[:, 0], axis=1, keepdims=True)
    # Broadcasting subtracts/divides across all points of each sample at once,
    # so no per-sample loop and no repeated tf.concat is needed.
    output = (input - tf.expand_dims(shift, 1)) / tf.expand_dims(scale, 1)
    return output
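For reference, the same broadcasting pattern can be sketched in NumPy to check that the vectorized version matches the original loop. The data here is hypothetical (a random batch with 12 points per sample, so that point index 9 exists); only the shapes matter:

```python
import numpy as np

# Hypothetical data: batch of 2 samples, 12 points, 3 coordinates each.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 12, 3))

# Vectorized normalization, mirroring the TensorFlow answer above.
shift = x[:, 9]                                                   # (batch, 3)
scale = np.linalg.norm(x[:, 9] - x[:, 0], axis=1, keepdims=True)  # (batch, 1)
out = (x - shift[:, None, :]) / scale[:, None, :]                 # (batch, 12, 3)

# Loop version from the question, for comparison.
expected = np.stack([(x[i] - x[i, 9]) / np.linalg.norm(x[i, 9] - x[i, 0])
                     for i in range(x.shape[0])])

print(np.allclose(out, expected))  # True
```

Because the whole batch is handled by one broadcasted subtraction and division, the graph contains a fixed, small number of ops instead of growing with the batch size, which is what made the loop-plus-`tf.concat` version so memory hungry.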

