
Conversion of multiplication of int32 to int64

I have the following structures:

struct Value {
    int myData;
    int myAnotherData;
};

struct AnotherStructure {
    unsigned int uiLowData;
    unsigned int uiHighData;
};

AnotherStructure m_AnotherStructure;
Value val;
val.myData = 10;
#define MULTIPLY 36000000000

unsigned __int64 &internalStructure = *(unsigned __int64*)&m_AnotherStructure;
internalStructure  = 0;
internalStructure += ((unsigned __int64)val.myData * MULTIPLY );

My question is: is there any overflow of data in the above case, since we are multiplying an int by a big value? Is the result stored in a temporary value of type unsigned int and only then stored in the 64-bit integer? If not, how is it that there is no overflow?

Thanks

val.myData is cast to unsigned __int64 before the multiplication, because you cast it explicitly, so the product is computed in 64-bit arithmetic, not in a 32-bit temporary. Still, overflow can occur, depending on the value stored in val.myData: the maximum int multiplied by 36000000000 won't fit into 64 bits. And you lose your algebraic sign with the cast, since a negative myData becomes a huge unsigned value.

You could try this instead:

struct AnotherStructure {
    int64_t uiLowData;
    int64_t uiHighData;
};
// signed_128bit_integer is a placeholder, e.g. __int128 on GCC/Clang;
// look into your compiler documentation
signed_128bit_integer &internalStructure = *(signed_128bit_integer*)&m_AnotherStructure;
internalStructure  = 0;
internalStructure += ((signed_128bit_integer)val.myData * MULTIPLY );
