I'm working on a project and need a char buffer to send a struct over a network. I was using:
char* buf = new char[sizeof(obj)];
Later, someone told me that I would be leaking memory if I did it that way without deleting the buffer. I debugged my program with Visual Studio 2017 Professional, and my memory usage stayed constant no matter how many times I created this buffer.
My question is this: did C++17 fix the memory-leak issues that the new operator can cause, or is this something specific to the compiler I'm using? Thanks in advance!
Calling new[] without delete[] will indeed leak memory. You might not see it unless you are leaking lots of buffers, depending on how your app's RTL allocates and caches memory under the hood.
Traditionally, a std::vector<char>, or even a std::string, is the preferred choice for a dynamic buffer; let the container manage its own memory for you, e.g.:
std::vector<char> buf(sizeof(obj));
or
std::string buf(sizeof(obj), 0);
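As a sketch of how that fits the original use case (the Obj struct and the to_bytes helper are hypothetical names, not from the question), serializing into a std::vector<char> might look like:

```cpp
#include <cstring>
#include <type_traits>
#include <vector>

struct Obj { int id; double value; }; // hypothetical payload type

// Copy a trivially copyable object into a byte buffer suitable for sending.
std::vector<char> to_bytes(const Obj& obj) {
    static_assert(std::is_trivially_copyable<Obj>::value,
                  "memcpy-based serialization requires a trivially copyable type");
    std::vector<char> buf(sizeof(obj));
    std::memcpy(buf.data(), &obj, sizeof(obj));
    return buf; // the vector releases its storage automatically
}
```

You would then pass buf.data() and buf.size() to whatever send call you are using; no delete is ever needed.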
If you absolutely need new[]/delete[] for some reason, consider using a std::unique_ptr<char[]> instead; let it call delete[] for you, e.g.:
std::unique_ptr<char[]> buf(new char[sizeof(obj)]);
or
auto buf = std::make_unique<char[]>(sizeof(obj));
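A minimal sketch of that pattern in context (the Obj struct and send_obj function are hypothetical): the delete[] happens automatically when the unique_ptr goes out of scope, even on an early return or exception.

```cpp
#include <cstring>
#include <memory>

struct Obj { int id; double value; }; // hypothetical payload type

void send_obj(const Obj& obj) {
    auto buf = std::make_unique<char[]>(sizeof(obj)); // zero-initialized char array
    std::memcpy(buf.get(), &obj, sizeof(obj));
    // ... pass buf.get() and sizeof(obj) to the send call ...
}   // delete[] runs here automatically, even if the send throws
```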
Yep, that looks like a memory leak.
If you don't want to explicitly call delete on dynamically allocated memory, the standard solution is to use a smart pointer: std::unique_ptr frees the memory when its single owner goes out of scope, and std::shared_ptr uses "reference counting" to de-allocate the memory safely once the last owner is gone.
More info on smart pointers: What is a smart pointer and when should I use one?
As for why you didn't see more memory allocated -- what are you measuring? There are generally two "allocations" that happen: (1) your process requests pages of memory from the operating system, and (2) the allocator inside your process hands out pieces of that memory to individual new calls.
(Strictly speaking, this is not specific to your compiler; it depends on the implementation of new. If you're using the standard Visual C++ toolchain, then in that sense, yes, the implementation of new depends on Visual C++. Here's a starting point: https://docs.microsoft.com/en-us/cpp/standard-library/memory?view=vs-2019 )
It's possible that you're measuring #1 -- how much memory is being allocated by the operating system. For instance, sizeof(obj) may be 1 kilobyte, while your allocator may have requested 256K from the operating system. So you have room to perform new and receive allocations within that memory buffer without changing the OS-level memory footprint of your process.
As an experiment, I'd suggest performing successively larger numbers of allocations:
const int alloc_size = 4; // then 5, 6, 7...
for (int i = 0; i < (2 << alloc_size); i++) {
    char* buf = new char[sizeof(obj)]; // deliberately leaked for the experiment
}
and then finding out how many allocations you can perform before your measurement changes.