
Live555 client streaming memory leak

I'm using Live555 to implement a C++ RTSP client for IP cameras. I'm using most of the testRTSPClient code.

I used the Poco library and the Poco::Thread class as well.

In other words, the client for each camera runs in a separate thread that owns its own instance of the Live555 objects (as live555-devel suggests, each thread uses its own UsageEnvironment and TaskScheduler). This avoids shared variables and synchronization issues. It seems to work well and fast.
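For reference, a minimal sketch of what each per-camera thread might create before calling openURL(), assuming the BasicTaskScheduler/BasicUsageEnvironment classes used by testRTSPClient (the helper name createEnvironment is mine):

#include "BasicUsageEnvironment.hh"

// Sketch only (assumed members _myScheduler / _myEnv): every IPCamera creates
// its own TaskScheduler and UsageEnvironment, so no Live555 objects are
// shared across threads.
void IPCamera::createEnvironment()
{
  _myScheduler = BasicTaskScheduler::createNew();
  _myEnv = BasicUsageEnvironment::createNew(*_myScheduler);
}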

My runnable object IPCamera (following the Poco library requirements) has a run method as simple as:

void IPCamera::run()
{
  openURL(_myEnv, "", _myRtspCommand.c_str(), *this); //taken from the testRTSPClient example 

  _myEnv->TaskScheduler().doEventLoop(&_watchEventLoopVariable); 
  //it runs until _watchEventLoopVariable change to a value != 0

  //exit from the run;
}

When run() finishes I call join() to close the thread (by the way, I noticed that if I don't call myThread->join(), the memory is not freed completely).
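A hedged sketch of that stop/join sequence, where setWatchVariable() is a hypothetical setter for _watchEventLoopVariable and the surrounding names are illustrative:

#include <Poco/Thread.h>

// Sketch only: signal doEventLoop() to return, then join the Poco thread so
// its resources are actually released.
void stopCamera(IPCamera& camera, Poco::Thread& thread)
{
  camera.setWatchVariable(1); // hypothetical setter: makes doEventLoop() return
  thread.join();              // without join() the memory is not freed completely
}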

On shutdown, following the requirements in live555-devel, I put this in my code:

 void IPCamera::shutdown() 
 {
    ...
    _myEnv->reclaim(); 
    delete _myScheduler; 
 }
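For context, a hedged sketch of what a fuller shutdown() might look like; _myRtspClient is my name for the client that openURL() creates, and testRTSPClient's shutdownStream() also closes the per-subsession sinks before closing the client:

// Sketch only (member names assumed): close the Live555 client before
// releasing its environment and scheduler.
void IPCamera::shutdown()
{
   Medium::close(_myRtspClient); // as at the end of testRTSPClient's shutdownStream()
   _myRtspClient = NULL;

   _myEnv->reclaim();            // the environment deletes itself once unreferenced
   _myEnv = NULL;

   delete _myScheduler;
   _myScheduler = NULL;
}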

Using Valgrind to detect memory leaks I saw some strange behaviour:

1) Case: run the program, then close it with all the IPCameras running properly.

a) At the end of the program all the destructors are invoked.

b) doEventLoop() returns.

c) The thread is joined (it has actually already terminated because it exited from the run method).

d) _myEnv and _myScheduler are destroyed as shown above.

e) All the other objects are destroyed, including each IPCamera and its associated thread.

-> Valgrind finds no memory leaks. OK.

Now comes the problem.

2) Case: I'm implementing a use case where a Poco::Timer checks every X seconds, with an ICMP ping, whether the camera is alive. If the camera doesn't answer because the network is down, it raises an event (using Poco events) and I do the following (a sketch of the timer wiring follows the list below):

When an IPCamera goes down:

a) I set _watchEventLoopVariable = 1 to exit from the run method;

b) I shut down the client associated with the IPCamera as shown above;

c) I join the thread.
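A hedged sketch of that watchdog wiring, with the class, method, and ping check all illustrative (only Poco::Timer and Poco::TimerCallback are real Poco APIs):

#include <Poco/Timer.h>

// Sketch only: a Poco::Timer fires every periodMs milliseconds and the
// callback would ping the camera, raising a Poco event on failure.
class CameraWatchdog
{
public:
  explicit CameraWatchdog(long periodMs) : _timer(periodMs, periodMs) {}

  void start()
  {
    Poco::TimerCallback<CameraWatchdog> callback(*this, &CameraWatchdog::onTimer);
    _timer.start(callback);
  }

private:
  void onTimer(Poco::Timer&)
  {
    // if (!pingCamera()) raise the "camera down" Poco event here (hypothetical ICMP check)
  }

  Poco::Timer _timer;
};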

I don't destroy the thread, because I would like to reuse it when the network comes back up and the camera works again. In that case: a) I set _watchEventLoopVariable = 0; b) I start the thread again with: myThread->run().
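A hedged sketch of that restart path, assuming the thread is restarted through Poco::Thread::start() once the previous run has been joined (setWatchVariable() is the same hypothetical setter as above):

#include <Poco/Thread.h>

// Sketch only: re-arm the watch variable, then start the same Poco::Thread
// again with the same Runnable.
void restartCamera(IPCamera& camera, Poco::Thread& thread)
{
  camera.setWatchVariable(0); // lets doEventLoop() run again
  thread.start(camera);       // run() will call openURL() and doEventLoop() once more
}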

Valgrind tells me that memory leaks are found: 60 bytes lost directly and about 20,000 bytes lost indirectly in the thread, in H264BufferedPacketFactory::createNewPacket(...), a Live555 class.

SOLVED: I found out that the problem was the tunneling over TCP. In Live555 you can select the kind of protocol. If I select:

 #define REQUEST_STREAMING_OVER_TCP false

I don't have any leak. I ran Valgrind many times to be sure (it is what uncovered the problem in the first place).

If I use TCP, the above problem appears.
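For context, in testRTSPClient this macro is forwarded as the streamUsingTCP argument of RTSPClient::sendSetupCommand() when each subsession is set up, which is what decides whether RTP/RTCP is tunneled over the RTSP TCP connection. Paraphrased from that example:

// Fragment paraphrased from testRTSPClient's setupNextSubsession():
rtspClient->sendSetupCommand(*scs.subsession, continueAfterSETUP,
                             False /*streamOutgoing*/,
                             REQUEST_STREAMING_OVER_TCP /*streamUsingTCP*/);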
