Why does my program run significantly faster on my CPU device than on my GPU device?
Why is TensorFlow using my GPU when the device is set to the CPU?
TensorFlow is allocating all of my GPU memory and ignoring my instruction to use the CPU. How can I fix this?
Here is an excerpt from the code of my testprog:
```cpp
Session *session;
SessionOptions opts = SessionOptions();
// force TensorFlow to allocate 0 memory on the GPU
opts.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(0);
opts.config.mutable_gpu_options()->set_allow_growth(false);
// create a session with these settings
TF_CHECK_OK(NewSession(opts, &session));
TF_CHECK_OK(session->Create(graph_def));
// set the device to the CPU
graph::SetDefaultDevice("/cpu:0", &graph_def);
// run an arbitrary model
Status status = session->Run(classifierInput, {output_layer}, {}, &outputs);
TF_CHECK_OK(session->Close());
```
Calling nvidia-smi tells me:
```text
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P4000        Off  | 0000:01:00.0     Off |                  N/A |
| N/A   50C    P0    28W /  N/A |   7756MiB /  8114MiB |     42%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1784    G   /usr/bin/X                                     139MiB |
|    0      3828    G   qtcreator                                       28MiB |
|    0      7721    C   ...testprog/build/testprog                    7585MiB |
+-----------------------------------------------------------------------------+
```
Why does this happen?
Since this question is tagged C++, the solution is:
```cpp
tensorflow::Session *sess;
tensorflow::SessionOptions options;
tensorflow::ConfigProto* config = &options.config;
// disable the GPU entirely
(*config->mutable_device_count())["GPU"] = 0;
// allow TensorFlow to place GPU-pinned nodes on another available device
config->set_allow_soft_placement(true);
```
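For reference, the same two settings can also be written in `ConfigProto` protobuf text format (a sketch, assuming TensorFlow's standard `config.proto` field names — `device_count` is a map field, hence the key/value syntax), e.g. to load via `ReadTextProto` or pass to tools that accept a serialized config:

```
device_count { key: "GPU" value: 0 }
allow_soft_placement: true
```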
See the example here, and my other post on how TensorFlow places nodes.
Edit: There is a GitHub issue about this. You can try:
```cpp
#include <stdlib.h>
setenv("CUDA_VISIBLE_DEVICES", "", 1);
```
or
```cpp
// Note: gpu_options() returns a const reference, so mutating a copy has
// no effect; use mutable_gpu_options() to change the actual config.
auto* gpu_options = config->mutable_gpu_options();
gpu_options->set_visible_device_list("");
```
However, this may give you: `failed call to cuInit: CUDA_ERROR_NO_DEVICE`.
Setting the device parameter to cpu:1 does not prevent TensorFlow from initializing the GPU device. Use this instead:
```python
import tensorflow as tf

# Hide the GPU from TensorFlow and fall back to CPU placement.
session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 0},
    allow_soft_placement=True,
    log_device_placement=False
)
sess = tf.Session(config=session_conf)
```
And, as a last resort:
```shell
alias nogpu='export CUDA_VISIBLE_DEVICES=-1;'
nogpu python disable_GPU_tensorflow.py
```
or
```cpp
setenv("CUDA_VISIBLE_DEVICES", "", 1);
```
Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.