Do additional function/method definitions increase a program's memory footprint?

In C++, does defining additional methods or functions that don't get used result in a larger memory footprint or slower execution speed?

Basically, I have several utility debugging methods in a class, none of which are required for the normal use of the class. Would it make a difference in terms of memory footprint or speed whether or not these definitions remain if they're never used? For example:

class myClass
{
    public:

        //Something the user of this class would use
        int doSomething() {...}

        //Something used solely to make sure I wrote the class properly
        bool isClassValid() {...}
};

...

myClass classInstance;
classInstance.doSomething();

Your methods generate code, and that code has to exist somewhere. When your executable is built, it will probably exist in your executable. This will increase the size of the executable, increase its load time and may impact its cache behaviour. So the answer is "yes".

Except... It gets muddier.

A good compiler and a good linker can interact such that any code you do not actually use does not get built into your executable. The granularity of this varies, but it can go all the way down to individual functions (and possibly even lower in some languages). If the compiler can tell the linker which functions are never referenced, and the linker is smart enough to discard code at the function level, then the answer changes to "no".

So, in short, the answer is "yes" or "no" depending on a host of factors you'll have to research related to the tools you're using and the platform you're running things on.

Unused methods are generally present in the executable unless you tell the linker to find them and strip them out.

For example, on macOS you can pass -dead_strip to ld to strip out such dead code. If you're on Windows using Visual C++, you can pass /OPT:REF to link.exe (I imagine Visual Studio automatically sets your project up to pass this option in Release builds, but not in Debug builds.)

Note that most OSes don't keep all of a program's code in memory all the time. Since code is constant data, the OS can always load it from the executable file on demand, much as it would load dynamic data from swap. But that doesn't mean unused code is never loaded: the OS loads code not by individual methods but by pages. In other words, it is very hard to predict which parts of your code segment actually end up in memory unless you have very deep knowledge of your OS and the layout of your code segment. The only thing that can be said for sure is that it is perfectly possible for your code to consume less physical memory than its on-disk size.

As for execution speed, I think the answer is no. Extra code may increase the application's load time, but once the code is running, nobody cares how large it is, and size alone has no effect on speed. That is, unless you are near your memory limit and the OS starts to swap a lot, at which point everything becomes very slow.

As the others have already mentioned, the compiler and linker may optimize the unused code out. But it is also something you can do yourself by wrapping your debugging methods in #ifdefs, and doing so is usually recommended.

Determining whether code is unused or not is a fairly hard problem. Just write a hello world program, statically link it with glibc, and use objdump to look at all the junk that ends up in your binary. The vast majority of this code is not used, but it's referenced in ways that make it difficult or impossible for a compiler or linker to optimize it out. Unless, as a library author, you work very diligently to avoid introducing this kind of dependency, unused functions/methods will waste space, and probably lots of it. I suspect it's even harder in C++ than in C.
