
What is a good way to run experiments for the memory usage of an algorithm in C++?

I have an algorithm A and an algorithm B that are implemented in C++. A uses more space than B in theory, and it turns out this is also the case in practice. I would like to generate some nice graphs to illustrate this. Both algorithms receive an input n, and I would like my experiments to vary over different n, so the x axis of the graph should be something like n = 10^6, 2*10^6, ...

Usually, when it comes to data like time or cache misses, my preferred way of setting up the experiments is as follows. Inside a C++ file I implement the algorithm like this:

#include <iostream>
#include <cstdlib>
using namespace std;

int counters[1000];

void init_statistics(){
   // use some library, for example PAPI (http://icl.cs.utk.edu/papi/software/),
   // to start counting; the results go into the counters array
}

void stop_statistics(){
   // stop counting
}

int algA(int n){
   // algorithm code goes here
   int result = 0; // placeholder: compute the actual result
   return result;
}

int main(int argc, const char * argv[]){
   int n = atoi(argv[1]);
   init_statistics();   // start the statistic counters
   int res = algA(n);
   stop_statistics();   // stop the statistic counters
   cout << res << "\t" << counters[0] << "\t" << counters[1] << endl; // print further counters as needed
   return 0;
}

I would then create a python script that, for different n, calls result = subprocess.check_output(['./algB', ...]). After that, I parse the result string in python and print it in a suitable format. For example, if I use R for the plots, I can write the data to an external file where each counter is separated by a \t.
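A minimal sketch of such a driver script, assuming the compiled binary is called ./algA (matching the snippet above), takes n as its only argument, and prints tab-separated values on one line:

import subprocess

# values of n for the x axis, e.g. 10^6, 2*10^6, ...
ns = [i * 10**6 for i in range(1, 11)]

with open("results.tsv", "w") as out:
    for n in ns:
        # run the experiment binary; it prints the result and the counters
        # separated by tabs on a single line
        output = subprocess.check_output(['./algA', str(n)]).decode()
        fields = output.split()   # result, counters[0], counters[1], ...
        out.write(str(n) + "\t" + "\t".join(fields) + "\n")

The resulting results.tsv can then be read directly by R for plotting.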

This has worked very well for me, but this is the first time that I need data about the space used by the algorithm, and I am not sure how to count this space. One way would be to use valgrind; this is a possible output from valgrind:

==15447== Memcheck, a memory error detector
==15447== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==15447== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==15447== Command: ./algB 1.txt 2.txt
==15447== 
==15447== 
==15447== HEAP SUMMARY:
==15447==     in use at exit: 72,704 bytes in 1 blocks
==15447==   total heap usage: 39 allocs, 38 frees, 471,174,306 bytes allocated
==15447== 
==15447== LEAK SUMMARY:
==15447==    definitely lost: 0 bytes in 0 blocks
==15447==    indirectly lost: 0 bytes in 0 blocks
==15447==      possibly lost: 0 bytes in 0 blocks
==15447==    still reachable: 72,704 bytes in 1 blocks
==15447==         suppressed: 0 bytes in 0 blocks
==15447== Rerun with --leak-check=full to see details of leaked memory
==15447== 
==15447== For counts of detected and suppressed errors, rerun with: -v
==15447== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

The interesting number is 471,174,306 bytes. However, valgrind slows down the execution time a lot, and it doesn't just return this number but this whole block of text. I am also not sure how to parse it, because for some reason when I call result = subprocess.check_output(['valgrind','./algB',...]) from python, the result string only contains the output of ./algB and completely ignores what valgrind prints.
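For what it's worth, valgrind writes its report to stderr rather than stdout, which is why check_output (which only captures stdout) never sees it. A minimal sketch of capturing both streams, using the ./algB invocation shown in the valgrind output above:

import subprocess

# valgrind prints its ==PID== report to stderr, so capture stderr separately
proc = subprocess.run(['valgrind', './algB', '1.txt', '2.txt'],
                      capture_output=True, text=True)
program_output = proc.stdout      # what ./algB itself printed
valgrind_report = proc.stderr     # the ==15447== lines shown above

# e.g. pull out the "total heap usage" line
for line in valgrind_report.splitlines():
    if 'total heap usage' in line:
        print(line)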

Thank you in advance!

memcheck is the tool for finding memory leaks; for memory allocation profiling you should use massif (another tool in valgrind).
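A minimal sketch of driving massif from the same kind of python script, again assuming the ./algB binary from the question; --tool=massif and --massif-out-file are standard valgrind options, and each snapshot in the massif output file records the heap size as a mem_heap_B=... line:

import subprocess
import re

# run the program under massif and write the profile to a fixed file name
subprocess.run(['valgrind', '--tool=massif', '--massif-out-file=massif.out',
                './algB', '1.txt', '2.txt'], check=True)

# the peak heap size over all snapshots is a reasonable "space used" measure
peak = 0
with open('massif.out') as f:
    for line in f:
        m = re.match(r'mem_heap_B=(\d+)', line)
        if m:
            peak = max(peak, int(m.group(1)))

print('peak heap usage (bytes):', peak)

The number obtained this way can be written to the same tab-separated results file as the other counters and plotted against n.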
