
YOLO - TensorFlow works on CPU but not on GPU

I've used YOLO detection with a trained model on my GPU (an Nvidia GTX 1060 3GB), and everything worked fine.

Now I am trying to train my own model with the --gpu 1.0 option. TensorFlow can see my GPU, as at startup it prints these messages: "name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705" "totalMemory: 3.00GiB freeMemory: 2.43GiB"
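For reference, the same device list can be queried directly. This is a minimal sketch assuming TensorFlow 1.x (the version darkflow targets):

    # List every device TensorFlow can see; the GTX 1060 should show up
    # as a GPU entry alongside the CPU.
    from tensorflow.python.client import device_lib

    for device in device_lib.list_local_devices():
        print(device.name, device.device_type, device.memory_limit)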

Anyway, later on, when the program loads the data and tries to start training, I get the following error: "failed to allocate 832.51M (872952320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY"

I've checked whether it tries to use my other GPU (the integrated Intel 630), but it doesn't.

When I run the training process without the --gpu option, it works fine, but slowly. (I've also tried --gpu 0.8, --gpu 0.4, etc.)
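For context, a fractional --gpu value corresponds to capping the share of GPU memory a TensorFlow session may allocate. A minimal sketch of that mechanism in plain TensorFlow 1.x (assuming darkflow forwards the flag to per_process_gpu_memory_fraction, which I have not verified):

    import tensorflow as tf

    # Let the session claim at most 40% of the GPU's memory...
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.4)
    config = tf.ConfigProto(gpu_options=gpu_options)
    # ...and grow allocations on demand rather than grabbing the cap at once.
    config.gpu_options.allow_growth = True

    with tf.Session(config=config) as sess:
        pass  # build and run the training graph here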

Any idea how to fix it?

Problem solved. Changing the batch size and image size in the config file didn't seem to help, as they didn't load correctly. I had to go to the defaults.py file and lower them there to make it possible for my GPU to handle the training steps.
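For anyone hitting the same wall, the change amounts to lowering the defaults that drive per-step memory use. A hypothetical excerpt (the exact define() calls in darkflow's defaults.py may differ):

    # Inside argHandler.setDefaults() in defaults.py: a smaller batch so a
    # training step fits into the 1060's 3 GB. The value is illustrative.
    self.define('batch', 4, 'batch size')  # stock default is larger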

It looks like your custom model uses too much memory and your graphics card cannot support it. You only need to use the --batch option to reduce the batch size and, with it, the memory footprint.
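For example, a training invocation with a smaller batch might look like this (the config and weight paths are placeholders for your own setup):

    flow --train --model cfg/your-model.cfg --load bin/yolo.weights --gpu 0.8 --batch 4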
