
How do I keep track of the time the CPU is used vs the GPUs for deep learning?

I'd like to know how much time my script spends on the CPU versus the GPUs while it runs - is there a way to track this?

Looking for a general answer, but if that is too abstract, a solution for this toy example (taken from Keras's multi_gpu_model documentation) would be great.

import tensorflow as tf
from keras.applications import Xception
from keras.utils import multi_gpu_model
import numpy as np
num_samples = 1000
height = 224
width = 224
num_classes = 1000
# Instantiate the base model (or "template" model).
# We recommend doing this under a CPU device scope,
# so that the model's weights are hosted on CPU memory.
# Otherwise they may end up hosted on a GPU, which would
# complicate weight sharing.
with tf.device('/cpu:0'):
    model = Xception(weights=None,
                     input_shape=(height, width, 3),
                     classes=num_classes)
# Replicates the model on 8 GPUs.
# This assumes that your machine has 8 available GPUs.
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop')
# Generate dummy data.
x = np.random.random((num_samples, height, width, 3))
y = np.random.random((num_samples, num_classes))
# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=256)
# Save model via the template model (which shares the same weights):
model.save('my_model.h5')

What you need is to add the Chrome-based CPU/GPU timeline profiling from the TensorFlow API to your Keras model!

Here is an example provided in the TensorFlow issue tracker:

https://github.com/tensorflow/tensorflow/issues/9868#issuecomment-306188267
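
The core idea in that thread is TF 1.x's trace hooks: pass a tf.RunOptions with FULL_TRACE plus a tf.RunMetadata to Session.run, then convert the collected step stats into a Chrome trace. Here is a minimal self-contained sketch of that pattern (the small matmul graph and the timeline.json filename are just placeholders for illustration):

import tensorflow as tf
from tensorflow.python.client import timeline

# A small graph whose op placement (CPU vs GPU) we want to time.
a = tf.random_normal([2000, 5000])
b = tf.random_normal([5000, 1000])
c = tf.matmul(a, b)

# FULL_TRACE records per-op start/end times on every device.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(c, options=run_options, run_metadata=run_metadata)

# Dump the collected step stats as a Chrome trace file.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())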

Here is a more involved example in the Keras issue tracker:

https://github.com/keras-team/keras/issues/6606#issuecomment-380196635
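
Applied to the question's multi_gpu_model code, the same hooks can be threaded through Keras: with the TensorFlow backend, extra keyword arguments given to compile() are forwarded to the underlying Session.run calls. A sketch along the lines of that issue, assuming a Keras/TF 1.x combination that accepts options and run_metadata in compile() (the output filename keras_timeline.json is arbitrary):

import tensorflow as tf
from tensorflow.python.client import timeline

# Collect full per-op timing while Keras trains.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# The TF backend passes these kwargs on to Session.run during fit().
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop',
                       options=run_options,
                       run_metadata=run_metadata)
parallel_model.fit(x, y, epochs=1, batch_size=256)

# run_metadata now holds the step stats of the traced training steps;
# write them out as a Chrome trace.
tl = timeline.Timeline(run_metadata.step_stats)
with open('keras_timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())

Opening the resulting JSON in Chrome at chrome://tracing shows one row per device (/cpu:0, the GPU compute streams, the memcpy streams), so you can read off how long each op spent on the CPU versus the GPUs.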

Finally, here is what the output of this profiling looks like:

https://towardsdatascience.com/howto-profile-tensorflow-1a49fb18073d

