
Why does this piece of code written using uint8_t run faster than analogous code written with uint32_t or uint64_t on a 64-bit machine?

Isn't it common knowledge that on 64-bit systems math operations on 32/64-bit datatypes run faster than on smaller datatypes like short, due to implicit promotion? Yet while testing my bitset implementation (where the majority of the time is spent on bitwise operations), I found a ~40% improvement using uint8_t over uint32_t. I'm especially surprised because there is hardly any copying going on that would justify the difference. The same thing occurred regardless of the clang optimisation level.

8-bit:

#include <cstdint>

#define mod8(x) x&7
#define div8(x) x>>3

template<unsigned long bits>
struct bitset{
private:
    uint8_t fill[8] = {};
    uint8_t clear[8];
    uint8_t band[(bits/8)+1] = {};

public:
    template<typename T>
    inline bool operator[](const T ind) const{
        return band[div8(ind)]&fill[mod8(ind)];
    }

    template<typename T>
    inline void store_high(const T ind){
        band[div8(ind)] |= fill[mod8(ind)];
    }


    template<typename T>
    inline void store_low(const T ind){
        band[div8(ind)] &= clear[mod8(ind)];

    }
    bitset(){
        for(uint8_t ii = 0, val = 1; ii < 8; ++ii){
            fill[ii] = val;
            clear[ii] = ~fill[ii];
            val*=2;
        }
    }
};

32-bit:

#include <cstdint>

#define mod32(x) x&31
#define div32(x) x>>5

template<unsigned long bits>
struct bitset{
private:
    uint32_t fill[32] = {};
    uint32_t clear[32];
    uint32_t band[(bits/32)+1] = {};

public:
    template<typename T>
    inline bool operator[](const T ind) const{
        return band[div32(ind)]&fill[mod32(ind)];
    }

    template<typename T>
    inline void store_high(const T ind){
        band[div32(ind)] |= fill[mod32(ind)];
    }


    template<typename T>
    inline void store_low(const T ind){
        band[div32(ind)] &= clear[mod32(ind)];

    }
    bitset(){
        for(uint32_t ii = 0, val = 1; ii < 32; ++ii){
            fill[ii] = val;
            clear[ii] = ~fill[ii];
            val*=2;
        }
    }
};

And here is the benchmark I used (it just moves a single 1 from position 0 to the end iteratively):

#include <chrono>
#include <iostream>

int main(){
    const int len = 1000000;
    bitset<len> bs;

    {
        auto start = std::chrono::high_resolution_clock::now();
        bs.store_high(0);
        for (int ii = 1; ii < len; ++ii) {
            bs.store_high(ii);
            bs.store_low(ii-1);
        }
        auto stop = std::chrono::high_resolution_clock::now();
        std::cout << std::chrono::duration_cast<std::chrono::microseconds>((stop-start)).count() << std::endl;
    }
}

Isn't it common knowledge that on 64-bit systems math operations on 32/64-bit datatypes run faster than on smaller datatypes like short, due to implicit promotion?

This isn't a universal truth. As always, it depends on the details.

Why does this piece of code written using uint_8 run faster than analogous code written with uint_32 or uint_64 on a 64bit machine?

The title doesn't match the question. There are no such types as uint_X, and you aren't using uintX_t; you are using uint_fastX_t. uint_fastX_t is an alias for an integer type that is at least X bits wide and that the language implementers deem to provide the fastest operations.

If we were to take your earlier-mentioned assumption for granted, then it should logically follow that the language implementers would have chosen a 32/64-bit type as uint_fast8_t. That said, you cannot assume that they have done so, and whatever generic measurement (if any) was used to make that choice doesn't necessarily apply to your case.

In any case, regardless of which type uint_fast8_t is an alias of, your test isn't a fair comparison of the relative calculation speeds of potentially different integer types:

uint_fast8_t fill[8] = {};
uint_fast8_t clear[8];
uint_fast8_t band[(bits/8)+1] = {};

uint_fast32_t fill[32] = {};
uint_fast32_t clear[32];
uint_fast32_t band[(bits/32)+1] = {};

Not only are the types (potentially) different, but the sizes of the arrays are too. This can certainly have an effect on the efficiency.
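If you want to see which types your implementation actually picked, something along these lines (a minimal sketch, not part of the question) makes it visible:

#include <cstdint>
#include <iostream>

// Print how wide the "fast" and exact-width types really are on this
// implementation. On typical x86-64 Linux/glibc, for example, uint_fast8_t
// is 1 byte while uint_fast32_t is 8 bytes.
int main() {
    std::cout << "uint_fast8_t:  " << sizeof(std::uint_fast8_t)  << " byte(s)\n"
              << "uint_fast32_t: " << sizeof(std::uint_fast32_t) << " byte(s)\n"
              << "uint8_t:       " << sizeof(std::uint8_t)       << " byte(s)\n"
              << "uint32_t:      " << sizeof(std::uint32_t)      << " byte(s)\n";
}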

TL:DR: large "buckets" for a bitset mean you access the same one repeatedly when you iterate linearly, creating longer dependency chains that out-of-order exec can't overlap as effectively.

Smaller buckets give instruction-level parallelism, making operations on bits in separate bytes independent of each other.


One possible reason is that you iterate linearly over bits, so all the operations within the same band[] element form one long dependency chain of &= and |= operations, plus a store and reload (if the compiler doesn't manage to optimize that away with loop unrolling).

For uint32_t band[], that's a chain of 2x 32 operations, since ii>>5 will give the same index for that long.

Out-of-order exec can only partially overlap execution of these long chains if their latency and instruction count are too large for the ROB (ReOrder Buffer) and RS (Reservation Station, aka Scheduler). With 64 operations probably including store/reload latency (4 or 5 cycles on modern x86), that's a dep chain length of probably 6 x 64 = 384 cycles, composed of probably at least 128 uops, with some parallelism for loading (or better, calculating) 1U<<(n&31) or rotl(-1U, n&31) masks that can "use up" some of the wasted execution slots in the pipeline.

But for uint8_t band[], you're moving to a new element 4x as frequently, after only 2x 8 = 16 operations, so the dep chains are 1/4 the length.
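A stripped-down illustration (my own sketch, not the original benchmark) of why shorter chains help: the same number of |= / &= operations, either funnelled through one accumulator (one long serial dependency chain) or spread over four independent accumulators that out-of-order exec can overlap. Results vary by compiler and CPU, and auto-vectorization of the second loop can exaggerate the gap.

#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    constexpr int N = 100000000;
    volatile uint32_t sink = 0;                     // keep the results observable

    auto t0 = std::chrono::steady_clock::now();
    uint32_t a = 1;
    for (int i = 0; i < N; ++i)
        a = (a | uint32_t(i)) & ~uint32_t(i >> 1);  // every op waits for the last
    sink = a;

    auto t1 = std::chrono::steady_clock::now();
    uint32_t b0 = 1, b1 = 1, b2 = 1, b3 = 1;        // four independent chains
    for (int i = 0; i < N; i += 4) {
        b0 = (b0 | uint32_t(i))     & ~uint32_t(i >> 1);
        b1 = (b1 | uint32_t(i + 1)) & ~uint32_t((i + 1) >> 1);
        b2 = (b2 | uint32_t(i + 2)) & ~uint32_t((i + 2) >> 1);
        b3 = (b3 | uint32_t(i + 3)) & ~uint32_t((i + 3) >> 1);
    }
    sink = b0 | b1 | b2 | b3;
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::cout << "one chain:   " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
              << "four chains: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
}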

See also Understanding the impact of lfence on a loop with two long dependency chains, for increasing lengths for another case of a modern x86 CPU overlapping two long dependency chains (a simple chain of imul with no other instruction-level parallelism), especially the part about a single dep chain becoming longer than the RS (scheduler for un-executed uops) being the point at which we start to lose some of the overlap of execution of the independent work. (For the case without lfence to artificially block overlap.)

See also Modern Microprocessors: A 90-Minute Guide! and https://www.realworldtech.com/sandy-bridge/ for some background on how modern OoO exec CPUs decode and look at instructions.


Small vs. large buckets

Large buckets are only useful when scanning through for the first non-zero bit, or filling the whole thing, or something like that. Of course, really you'd want to vectorize that with SIMD, checking 16 or 32 bytes at once to see if there's a non-zero element anywhere in them. Current compilers will vectorize for you in loops that fill the whole array, but not search loops (or anything with a trip-count that can't be calculated ahead of the first iteration), except for ICC which can handle that. Re: using fast operations over bit-vectors, see Howard Hinnant's article (in the context of vector<bool>, which is an unfortunate name for a sometimes-useful data structure).
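For that scan-for-the-first-set-bit use case, a hedged sketch of what wide buckets buy you (assuming C++20 <bit>; the function name is made up here):

#include <bit>
#include <cstddef>
#include <cstdint>

// Scan whole 64-bit words: zero words are skipped with one compare each, and
// the bit position inside the first non-zero word comes from one countr_zero.
std::size_t find_first_set(const uint64_t* words, std::size_t nwords) {
    for (std::size_t w = 0; w < nwords; ++w)
        if (words[w] != 0)
            return w * 64 + std::countr_zero(words[w]);
    return nwords * 64;   // "not found": one past the last bit
}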

C++ unfortunately doesn't make it easy in general to use different sized accesses to the same data, unless you compile with g++ -O3 -fno-strict-aliasing or something like that.

unsigned char can always alias anything else, though, so you could use that for your single-bit accesses, only using uintptr_t (which is likely to be as wide as a register, except on ILP32-on-64-bit ISAs) for init or whatever. Or in this case, uint_fast32_t being a 64-bit type on many x86-64 C++ implementations would make it useful for this, unlike usual, where that sucks: wasting cache footprint when you're only using the value-range of a 32-bit number, and being slower for non-constant division on some CPUs.
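A hedged sketch of that idea (type and member names are made up; it also assumes a little-endian target such as x86-64, where byte-granular bit numbering lines up with word-granular bit numbering):

#include <cstddef>
#include <cstdint>
#include <cstring>

// Keep the storage as wide words so bulk operations work a register (or SIMD
// vector) at a time, but do the single-bit RMWs through unsigned char*, which
// may legally alias any object.
template<unsigned long bits>
struct byte_access_bitset {
    uint64_t words[(bits / 64) + 1] = {};

    void store_high(std::size_t ind) {
        unsigned char* bytes = reinterpret_cast<unsigned char*>(words);
        bytes[ind >> 3] |= static_cast<unsigned char>(1u << (ind & 7));
    }

    void clear_all() { std::memset(words, 0, sizeof(words)); }   // bulk op on wide storage
};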

On an x86 CPU, a byte store is naturally fully efficient, but even on an ARM or something, coalescing in the store buffer could still make adjacent byte RMWs fully efficient. (Are there any modern CPUs where a cached byte store is actually slower than a word store?) And you'd still gain ILP; a slower commit to cache is still not as bad as coupling loads to stores that could have been independent if narrower. This is especially important on lower-end CPUs with smaller out-of-order scheduler buffers.

(x86 byte loads need to use movzx to zero-extend to avoid false dependencies, but most compilers know that. Clang is reckless about it, which can occasionally hurt.)

(Different-sized accesses close to each other can lead to store-forwarding stalls, e.g. a byte store and an unsigned long reload that overlaps that byte will have extra latency: What are the costs of failed store-to-load forwarding on x86?)


Code review:

Storing an array of masks is probably worse than just computing 1u << (n & 31) as needed, on most CPUs. If you're really lucky, a smart compiler might manage constant propagation from the constructor into the benchmark loop, and realize that it can rotate or shift inside the loop to generate the bitmask instead of indexing memory in a loop that already does other memory operations.

(Some non-x86 ISAs have better bit-manipulation instructions and can materialize 1<<n cheaply, although x86 can do that in 2 instructions as well if compilers are smart. xor eax,eax / bts eax, esi , with the BTS implicitly masking the shift count by the operand-size. But that only works so well for 32-bit operand-size, not 8-bit. Without BMI2 shlx , x86 variable-count shifts run as 3-uops on Intel CPUs, vs. 1 on AMD.)

It's almost certainly not worth it to store both fill[] and clear[] constants. Some ISAs even have an andn instruction that can NOT one of the operands on the fly, i.e. implement (~x) & y in one instruction. For example, x86 with BMI1 extensions has andn (gcc -march=haswell).
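A hedged sketch of that (free-function form just for illustration, names made up; the andn claim assumes an x86-64 compiler targeting BMI1, e.g. gcc -march=haswell):

#include <cstddef>
#include <cstdint>

// Compute the mask on the fly instead of loading fill[]/clear[] from memory.
// The clear path is written as x & ~mask, which a BMI1-aware compiler can
// turn into a single andn.
inline void store_high(uint32_t* band, std::size_t ind) {
    band[ind >> 5] |= 1u << (ind & 31);
}

inline void store_low(uint32_t* band, std::size_t ind) {
    band[ind >> 5] &= ~(1u << (ind & 31));
}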

Also, your macros are unsafe: wrap the expression in () so operator precedence doesn't bite you if you use foo[div8(x) - 1]. As in #define div8(x) ((x)>>3), parenthesizing the argument as well so that passing an expression like a+b also expands safely.

But really, you shouldn't be using CPP macros for stuff like this anyway. Even in modern C, just define static const int shift = 3; shift counts and masks. In C++, do that inside the struct/class scope, and use band[idx >> shift] or something. (When I was typing ind, my fingers wanted to type int; idx is probably a better name.)
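For example, a minimal sketch of that macro-free style (names made up here):

#include <cstddef>
#include <cstdint>

// Constants live in class scope, so there are no operator-precedence
// surprises and nothing leaks into other translation units.
template<unsigned long bits>
struct bitset_nc {
    static constexpr std::size_t shift = 5;     // log2 of bits per bucket
    static constexpr std::size_t mask  = 31;    // bits per bucket - 1
    uint32_t band[(bits >> shift) + 1] = {};

    void store_high(std::size_t idx) { band[idx >> shift] |=  1u << (idx & mask); }
    void store_low (std::size_t idx) { band[idx >> shift] &= ~(1u << (idx & mask)); }
    bool operator[](std::size_t idx) const { return band[idx >> shift] & (1u << (idx & mask)); }
};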
