
Configuring GPU in AWS SageMaker with Keras and TensorFlow as backend

I am a newbie to AWS SageMaker. I am trying to set up a model in SageMaker using Keras with GPU support. The Docker base image used to serve the model for inference is given below:

FROM tensorflow/tensorflow:1.10.0-gpu-py3

RUN apt-get update && apt-get install -y --no-install-recommends nginx curl
...

This is the Keras code I'm using inside Flask to check whether Keras can detect a GPU:

import flask
import keras

app = flask.Flask(__name__)

@app.route('/ping', methods=['GET'])
def ping():
    # Log the GPUs visible to Keras through the TensorFlow backend.
    print(keras.backend.tensorflow_backend._get_available_gpus())
    return flask.Response(response='\n', status=200, mimetype='application/json')

When I spin up a notebook instance in SageMaker with a GPU, the Keras code shows the available GPUs. So, in order to access the GPU during the inference phase (in the model container), do I need to install any additional libraries in the Dockerfile apart from the TensorFlow GPU base image?

Thanks in advance.

You shouldn't need to install anything else. Keras relies on TensorFlow for GPU detection and configuration.
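As a sanity check inside the container, you can ask TensorFlow itself which devices it sees, without going through Keras internals. A minimal sketch (TensorFlow 1.x-style, matching the base image above; the helper name is illustrative):

```python
from tensorflow.python.client import device_lib

def available_gpus():
    """Return the names of all GPU devices TensorFlow can see."""
    devices = device_lib.list_local_devices()
    return [d.name for d in devices if d.device_type == 'GPU']

# On a GPU instance this prints something like ['/device:GPU:0'];
# on a CPU-only host it prints an empty list.
print(available_gpus())
```

If this returns an empty list inside the container but not on the host, the problem is usually the container runtime (e.g. missing the NVIDIA runtime), not the Python libraries.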

The only thing worth noting is how to use multiple GPUs during training. I'd recommend passing 'gpu_count' as a hyperparameter, and setting things up like so:

from keras.models import Sequential
from keras.utils import multi_gpu_model

model = Sequential()
model.add(...)
...
if gpu_count > 1:
    # Replicate the model across gpu_count GPUs for data-parallel training.
    model = multi_gpu_model(model, gpus=gpu_count)
model.compile(...)
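SageMaker delivers hyperparameters to the training container as strings in a JSON file, so 'gpu_count' has to be parsed back to an integer. A minimal sketch (the path is the standard SageMaker training-container location; the helper name and the default are assumptions):

```python
import json
import os

# Standard location where SageMaker training containers receive hyperparameters.
# All values in this file arrive as strings.
HYPERPARAM_PATH = '/opt/ml/input/config/hyperparameters.json'

def read_gpu_count(path=HYPERPARAM_PATH, default=1):
    """Read the 'gpu_count' hyperparameter, falling back to a default."""
    if not os.path.exists(path):
        return default
    with open(path) as f:
        params = json.load(f)
    return int(params.get('gpu_count', default))
```

The int conversion matters: comparing the raw string against 1 in `if gpu_count > 1` would raise a TypeError on Python 3.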
