
Print GPU and CPU usage in TensorFlow

I'm running some TensorFlow examples on Google Colab (so I can have a GPU), like this one.

Is there a way to print the CPU and GPU usage, in the code, for every training step, in order to see how the GPU is used and the performance difference between CPU-only and GPU?

In a standard environment I could perhaps use nvidia-smi to track the GPU usage, but in a notebook I can only run one cell at a time.
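Even in a notebook, nvidia-smi can be sampled programmatically rather than interactively. A minimal sketch (my own, not from the question or answers), assuming the standard `--query-gpu` CSV flags, that reads utilization and memory from inside the code:

```python
import shutil
import subprocess

# Query utilization and memory for the GPU via nvidia-smi's CSV mode.
QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader"]

def gpu_usage():
    """Return e.g. '37 %, 812 MiB, 11441 MiB', or None if no driver is present."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver / not a GPU runtime
    return subprocess.run(QUERY, capture_output=True, text=True).stdout.strip()

print(gpu_usage())
```

Calling `gpu_usage()` once per training step (for example, from a training-loop callback) avoids needing a separate cell for monitoring.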

Thanks

Here is a code snippet I found on the Internet. You can run the printm function whenever you want.

# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()

Here is the output from my Google Colab:

Gen RAM Free: 12.8 GB  | Proc size: 155.7 MB
GPU RAM Free: 11441MB | Used: 0MB | Util   0% | Total 11441MB
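humanize.naturalsize is only used here to pretty-print byte counts. For reference, a rough stdlib-only equivalent (an approximation of mine, not the library's exact output) looks like:

```python
def naturalsize(n):
    # Decimal (SI) units with one decimal place, matching the "GB"/"MB" style above.
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if n < 1000:
            return f"{n:.1f} {unit}"
        n /= 1000
    return f"{n:.1f} PB"

print(naturalsize(12.8e9))   # -> 12.8 GB
print(naturalsize(155.7e6))  # -> 155.7 MB
```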

You need to start a thread that prints this for you. While this thread is running, you will see its output only while another cell is running.

The code:

!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os, time
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def worker():
  while True:
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
    time.sleep(6)

import threading
t = threading.Thread(target=worker, name='Monitor')
t.start()
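Two caveats with the loop above are worth noting: it reads `gpu` once at import time (so the printed numbers never change), and the thread has no way to stop. A variant sketch (my own adjustment, with the measurement stubbed out) that re-samples on each iteration and can be stopped cleanly with an `Event`:

```python
import threading
import time

stop_event = threading.Event()
samples = []

def sample():
    # Stand-in for the real measurement: here you would re-query
    # GPU.getGPUs()[0] and print its memory stats, so each iteration
    # sees fresh numbers instead of values cached at import time.
    samples.append(time.monotonic())

def monitor(interval=0.2):
    while not stop_event.is_set():
        sample()
        stop_event.wait(interval)  # returns early once stop_event is set

t = threading.Thread(target=monitor, name="Monitor", daemon=True)
t.start()
time.sleep(0.5)   # ... training would run here ...
stop_event.set()  # stop the monitor cleanly
t.join()
```

Marking the thread as `daemon=True` also ensures it cannot keep the notebook kernel alive on its own.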

The test: (screenshot of the notebook output omitted)
