
Killed error in TensorFlow when I try to load a convolutional pretrained model on Jetson TX1

I have a face recognition model trained on an inception_resnet model.

When I run my TensorFlow code to load the trained model on an Nvidia Jetson TX1, it just outputs "killed". How do I debug this?

What can I do? I think it's a memory problem!

According to this issue, 'killed' on the Jetson means it ran out of memory. It may not be possible to run the inception_resnet model on the TX1.

You can try reducing the batch_size, for example from 32 to 16. This lowers memory consumption, at the cost of longer training time.
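As a rough illustration of that advice, here is a minimal, self-contained sketch with dummy data (the model, shapes, and names are assumptions, not the original face-recognition code); the only point is that lowering batch_size in fit() shrinks the per-step memory footprint:

import numpy as np
import tensorflow as tf

# Dummy data standing in for the real face images (assumed shapes).
x = np.random.rand(256, 160, 160, 3).astype("float32")
y = np.random.randint(0, 10, size=(256,))

# A tiny stand-in network; the real model would be inception_resnet.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(160, 160, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# batch_size=16 instead of 32: half the activation memory per step, twice the steps.
model.fit(x, y, batch_size=16, epochs=1)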

Finally I found the answer!

If you don't set a maximum fraction of GPU memory, TensorFlow allocates almost all of the free memory. My problem was insufficient GPU memory.

You can pass a session configuration.

I set per_process_gpu_memory_fraction in tf.GPUOptions to 0.8 and the problem was solved.

import tensorflow as tf

# Cap this process at 80% of the GPU's memory instead of letting
# TensorFlow grab nearly all of it.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
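Note that this is the TensorFlow 1.x session API. If you are on TensorFlow 2.x, the closest equivalents (a sketch; the exact calls depend on your TF version) live under tf.config:

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Let allocations grow on demand instead of reserving everything up front...
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # ...or, alternatively, set a hard per-process limit in MB (do not combine
    # this with memory growth), e.g. roughly 80% of the TX1's 4 GB:
    # tf.config.set_logical_device_configuration(
    #     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=3200)])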
