
Is it bad to use #include in the middle of code?

I keep reading that it's bad to do so, but I don't feel those answers fully answer my specific question.

It seems like it could be really useful in some cases. I want to do something like the following:

class Example {
    private:
        int val;
    public:
        void my_function() {
#if defined (__AVX2__)
    #include <function_internal_code_using_avx2.h>
#else
    #include <function_internal_code_without_avx2.h>
#endif
        }
};

If using #include in the middle of code is bad in this example, what would be a good-practice approach to achieve what I'm trying to do? That is, I'm trying to provide different implementations of a member function depending on whether AVX2 is available at compile time.

No, it is not intrinsically bad. #include was meant to allow inclusion anywhere. It's just that it's uncommon to use it like this, and it goes against the principle of least astonishment.

The good practices that have developed around includes all assume inclusion at the start of a compilation unit and, in principle, outside any namespace.
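As a small illustration of that last point (the header and names below are hypothetical, not from the question): including a header from inside a namespace wraps every declaration it contains in that namespace, silently changing what the header means.

// constants.h (hypothetical) -- declares one global constant
extern const int max_size;

// consumer.cpp
namespace my_project {
#include "constants.h"   // declares my_project::max_size, NOT ::max_size
}

int read_limit() {
    return my_project::max_size;  // compiles here, but every other file that
                                  // includes constants.h at global scope
                                  // refers to a different entity, ::max_size
}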

This is certainly why the C++ Core Guidelines recommend not doing it, it being understood that they have normal reusable headers in mind:

SF.4: Include .h files before other declarations in a file

Reason

Minimize context dependencies and increase readability.

Additional remarks: How to solve your underlying problem

I'm not sure about the full context, but first of all, I wouldn't put the function body in the class definition. That better encapsulates the implementation-specific details from the class's consumers, who should not need to know about them.

Then you could use conditional compilation in the body, or, much better, opt for some policy-based design, using templates to configure the classes to be used at compile time.
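To make the policy-based suggestion concrete, here is a minimal sketch (all names, and the sum example itself, are illustrative rather than taken from the question): the platform-specific code lives in small policy types with a common static interface, and a class template is configured with one of them at compile time.

#if defined(__AVX2__)
#include <immintrin.h>
#endif

// Portable fallback policy.
struct scalar_policy {
    static int sum(const int* data, int n) {
        int s = 0;
        for (int i = 0; i < n; ++i) s += data[i];
        return s;
    }
};

#if defined(__AVX2__)
// AVX2 policy with the same static interface.
struct avx2_policy {
    static int sum(const int* data, int n) {
        __m256i acc = _mm256_setzero_si256();
        int i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_epi32(acc,
                _mm256_loadu_si256(reinterpret_cast<const __m256i*>(data + i)));
        alignas(32) int lane[8];
        _mm256_store_si256(reinterpret_cast<__m256i*>(lane), acc);
        int s = lane[0] + lane[1] + lane[2] + lane[3]
              + lane[4] + lane[5] + lane[6] + lane[7];
        for (; i < n; ++i) s += data[i];   // scalar tail
        return s;
    }
};
#endif

// The class only knows the policy's interface, not the platform.
template <class SimdPolicy>
class BasicExample {
    int val;
public:
    int my_function(const int* data, int n) const { return SimdPolicy::sum(data, n); }
};

#if defined(__AVX2__)
using Example = BasicExample<avx2_policy>;
#else
using Example = BasicExample<scalar_policy>;
#endif

The single #if now only selects a type alias; the implementations themselves stay in ordinary, separately testable classes.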

I agree with what @Christophe said. In your case I would write the following code.

Write a header commonInterface.h

#pragma once
#if defined (__AVX2__)
    void commonInterface (...) {
        #include <function_internal_code_using_avx2.h>
    }
#else
    void commonInterface (...) {
        #include <function_internal_code_without_avx2.h>
    }
#endif

This way you hide the #if defined in the header and still have readable code in the implementation file:

#include "commonInterface.h"
class Example {
    private:
        int val;
    public:
        void my_function() {
            commonInterface(...);
        }
};
An alternative, shown below, is to move the conditional includes to the top of the file and dispatch to the appropriately named helper inside the member function:

#ifdef __AVX2__
#   include <my_function_avx2.h>
#else
#   include <my_function.h>
#endif

class Example {
    int val;
public:
    void my_function() {
#       ifdef __AVX2__
        my_function_avx2(this);
#       else
        ::my_function(this);   // '::' is needed so the member my_function() does not hide the free function
#       endif
    }
};

Whether it is good or bad really depends on the context. The technique is often used when you have to write a great amount of boilerplate code. For example, the clang compiler uses it all over the place to match/make use of all possible types, identifiers, tokens, and so on. Here is an example, and here is another one.
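The clang usage mentioned here is the classic "X macro" pattern: one .def-style file lists the entries, and it is included several times with a different definition of the macro each time. A rough sketch (the file and names are made up for illustration):

// colors.def (hypothetical) -- the single list of entries
COLOR(Red)
COLOR(Green)
COLOR(Blue)

// colors.h -- includes the list twice, expanding it differently each time
enum class Color {
#define COLOR(name) name,
#include "colors.def"
#undef COLOR
};

inline const char* color_name(Color c) {
    switch (c) {
#define COLOR(name) case Color::name: return #name;
#include "colors.def"
#undef COLOR
    }
    return "unknown";
}

Adding a new entry to the .def file updates the enum and the name table at once, which is the kind of boilerplate reduction clang gets from its *.def files.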

If you want to define a function differently depending on compile-time-known parameters, it is generally seen as cleaner to put the definitions where they belong. You should not split the definition of foo into two separate files and choose the right one at compile time, as it increases the overhead for the programmer (who is often not just you) to understand your code. Consider the following snippet, which is, at least in my opinion, much more expressive:

// platform.hpp
constexpr static bool use_avx2 =
#if defined (__AVX2__)
    true;
#else
    false;
#endif

// example.hpp
class Example {
private:
    int val;
public:
    void my_function() {
        if constexpr (use_avx2) {
            // code of "functional_internal_code_using_avx2.h"
        }
        else {
            // code of "functional_internal_code_without_avx2.h"
        }
    }
};

The code can be improved further by generalizing more, adding layers of abstraction that "just define the algorithm" instead of both the algorithm and the platform-specific weirdness.
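One possible shape for such a layer, as a sketch (the vec8f wrapper and function names are made up): only the lowest layer touches intrinsics, and the algorithm above it is written once against that small abstraction.

// simd.hpp -- the only file that knows about AVX2
#if defined(__AVX2__)
#include <immintrin.h>
struct vec8f {
    __m256 v;
    static vec8f load(const float* p)        { return { _mm256_loadu_ps(p) }; }
    void store(float* p) const               { _mm256_storeu_ps(p, v); }
    friend vec8f operator+(vec8f a, vec8f b) { return { _mm256_add_ps(a.v, b.v) }; }
};
#else
struct vec8f {
    float v[8];
    static vec8f load(const float* p)        { vec8f r; for (int i = 0; i < 8; ++i) r.v[i] = p[i]; return r; }
    void store(float* p) const               { for (int i = 0; i < 8; ++i) p[i] = v[i]; }
    friend vec8f operator+(vec8f a, vec8f b) { vec8f r; for (int i = 0; i < 8; ++i) r.v[i] = a.v[i] + b.v[i]; return r; }
};
#endif

// algorithm.hpp -- "just the algorithm"; no platform-specific code in sight
#include "simd.hpp"

inline void add_arrays(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8)
        (vec8f::load(a + i) + vec8f::load(b + i)).store(out + i);
    for (; i < n; ++i)
        out[i] = a[i] + b[i];   // scalar tail
}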

Another important argument against your solution is that both functional_internal_code_using_avx2.h and functional_internal_code_without_avx2.h require special attention: they do not build without example.h, and it is not obvious without opening the files that they require it. So specific flags/treatment have to be added when building the project, which becomes difficult to maintain as soon as you use more than one such functional_internal_code file.

I am not sure what the bigger picture is in your case, so whatever follows should be taken with a grain of salt.

Anyway: #include COULD happen anywhere in the code, BUT you could think of it as a way of separating code / avoiding redundancy. For definitions, this is already well covered by other means. For declarations, it is the standard approach.

Now, these #includes are placed at the beginning as a courtesy to the reader, who can then catch up more quickly on what to expect in the code that follows, even for #ifdef-guarded code.

In your case, it looks like you want a different implementation of the same functionality. The go-to approach in this case would be to link a different portion of code (containing a different implementation), rather than importing a different declaration.
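A minimal sketch of that link-time approach (file and function names are illustrative): every caller sees one declaration, both implementation files define it, and the build system compiles and links exactly one of them.

// sum.h -- the single declaration all callers see
#pragma once
int sum(const int* data, int n);

// sum_avx2.cpp -- added to the build only when targeting AVX2
#include "sum.h"
int sum(const int* data, int n) {
    // placeholder body; the real file would use AVX2 intrinsics
    int s = 0;
    for (int i = 0; i < n; ++i) s += data[i];
    return s;
}

// sum_generic.cpp -- added to the build otherwise
#include "sum.h"
int sum(const int* data, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += data[i];
    return s;
}

// example.h -- the class itself never mentions AVX2
#include "sum.h"
class Example {
    int val;
public:
    int my_function(const int* data, int n) const { return sum(data, n); }
};

Which of the two .cpp files ends up in the binary is decided purely by the build configuration, so no #ifdef appears in the code at all.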

Instead, if you really want to have a different signature based on your #ifdef, then I don't see a more effective way than having the #ifdef in the middle of the code. BUT I would not consider this a good design choice!

I consider this bad coding; it makes the code hard to read.

My approach would be to create a base class as an abstract interface, create specialized implementations, and then instantiate the one that is needed.

Eg:

class base_functions_t
{
public:
    virtual ~base_functions_t() = default;
    virtual void function1() = 0;
};

class base_functions_avx2_t : public base_functions_t
{
public:
    void function1() override
    {
        // code here
    }
};

class base_functions_sse2_t : public base_functions_t
{
public:
    void function1() override
    {
        // code here
    }
};

Then you can have a pointer to your base_functions_t and instantiate different versions. Eg:

base_functions_t *func;
if (avx2)
{
    func = new base_functions_avx2_t();
}
else
{
    func = new base_functions_sse2_t();
}
func->function1();

As a general rule I would say that it's best to put headers that define interfaces first in your implementation files.

There are of course also headers that don't define any interfaces. I'm thinking mainly of headers that use macro hackery and are intended to be included one or more times. This type of header typically doesn't have include guards. An example would be <cassert>. This allows you to write code something like this:

#define NDEBUG 1
#include <cassert>

void foo() {
    // do some stuff
    assert(some_condition);
}

#undef NDEBUG
#include <cassert>

void bar() {
    assert(another_condition);
}

If you only include <cassert> at the start of your file you will have no granularity for asserts in your implementation file other than all on or all off. See here for more discussion on this technique.

If you do go down the path of using conditional inclusion as per your example then I would strongly recommend that you use an editor like Eclipse or Netbeans that can do inline preprocessor expansion and visualization. Otherwise the loss of locality that inclusion brings can severely hurt readability.
