TensorRT increasing memory usage (leak?)
I have a loop where I parse an ONNX model into TensorRT, create an engine and do inference. I make sure I call x->destroy() on all objects and I use cudaFree for each cudaMalloc. Yet, I keep seeing memory usage increase through nvidia-smi over consecutive iterations. I'm really not sure where the problem comes from. The cuda-memcheck tool reports no leaks either.
Running Ubuntu 18.04, TensorRT 7.0.0, CUDA 10.2 and using a GTX 1070. The code, the ONNX file and a CMakeLists.txt are available on this repo.
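For example, one way to watch the growth per iteration (just an illustrative nvidia-smi invocation, polling device memory once per second):
nvidia-smi --query-gpu=memory.used --format=csv -l 1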
Here's the code:
#include <cstdlib> // for atoi
#include <memory>
#include <iostream>
#include <cuda_runtime_api.h>
#include <NvOnnxParser.h>
#include <NvInfer.h>
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        // suppress info-level messages
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
};
int main(int argc, char* argv[])
{
    Logger gLogger;

    // Build the engine once from the ONNX model
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);
    auto config = builder->createBuilderConfig();
    auto parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile("../model.onnx", static_cast<int>(0));
    builder->setMaxBatchSize(1);
    config->setMaxWorkspaceSize(128 * (1 << 20)); // 128 MiB
    auto engine = builder->buildEngineWithConfig(*network, *config);

    // The builder-side objects are no longer needed once the engine exists
    builder->destroy();
    network->destroy();
    parser->destroy();
    config->destroy();
    for (int i = 0; i < atoi(argv[1]); i++)
    {
        auto context = engine->createExecutionContext();

        void* deviceBuffers[2]{0};
        int inputIndex = engine->getBindingIndex("input_rgb:0");
        constexpr int inputNumel = 1 * 128 * 64 * 3;
        int outputIndex = engine->getBindingIndex("truediv:0");
        constexpr int outputNumel = 1 * 128;

        //TODO: Remove batch size hardcoding
        cudaMalloc(&deviceBuffers[inputIndex], 1 * sizeof(float) * inputNumel);
        cudaMalloc(&deviceBuffers[outputIndex], 1 * sizeof(float) * outputNumel);

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        float inBuffer[inputNumel] = {0};
        float outBuffer[outputNumel] = {0};

        // H2D copy, inference and D2H copy, all on the same stream
        cudaMemcpyAsync(deviceBuffers[inputIndex], inBuffer, 1 * sizeof(float) * inputNumel, cudaMemcpyHostToDevice, stream);
        context->enqueueV2(deviceBuffers, stream, nullptr);
        cudaMemcpyAsync(outBuffer, deviceBuffers[outputIndex], 1 * sizeof(float) * outputNumel, cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);

        // Release everything allocated in this iteration
        cudaFree(deviceBuffers[inputIndex]);
        cudaFree(deviceBuffers[outputIndex]);
        cudaStreamDestroy(stream);
        context->destroy();
    }
    engine->destroy();
    return 0;
}
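For reference, assuming a package install of TensorRT, CUDA under /usr/local/cuda, and a hypothetical source file name main.cpp, it can be built roughly like this instead of using the repo's CMakeLists.txt:
g++ main.cpp -o trt_leak -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lnvinfer -lnvonnxparser -lcudart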
Looks like the issue was coming from the repeated IExecutionContext creation, despite destroying it at the end of every iteration. Creating/destroying the context together with the engine (once, outside the loop) fixed the issue for me. Nevertheless, it could still be a bug where context creation leaks a little bit of memory and that leak accumulates over time. Filed a github issue.
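For reference, a minimal sketch of the restructured loop (context hoisted out of the loop; variable names follow the code above):
// Create the execution context once, right after building the engine
auto context = engine->createExecutionContext();

for (int i = 0; i < atoi(argv[1]); i++)
{
    // ... allocate buffers, enqueueV2, free buffers as before,
    //     but reuse the same context every iteration ...
}

// Destroy the context together with the engine
context->destroy();
engine->destroy();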