
Strange bitwise operation on a 16-bit integer

I'm looking at a C source file and I found this macro:

#define random ( (float) rand() / (float) ((1 << 31) -1) )

Since in standard ANSI C rand() returns an integer in [0, 32767], I would really appreciate some help understanding what kind of normalization factor the denominator is, because signed integers are 16 bits and the expression does a 31-bit shift.

Thank you very much for your attention. Best regards.

rand does not return an integer in [0,32767] in "ANSI C". §7.20.2:

The rand function computes a sequence of pseudo-random integers in the range 0 to RAND_MAX.

It seems likely that whoever wrote that macro was working on a platform on which RAND_MAX was 2147483647.
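If you want the same normalization without relying on that platform assumption, divide by RAND_MAX itself; a minimal sketch (the macro name random_unit is my own, chosen to avoid clashing with the POSIX random() function):

#include <stdio.h>
#include <stdlib.h>

/* Portable version of the macro's intent: scale rand() into [0.0, 1.0].
   Dividing by RAND_MAX avoids hard-coding the platform's 2147483647. */
#define random_unit ((float) rand() / (float) RAND_MAX)

int main(void)
{
    printf("RAND_MAX here: %d\n", RAND_MAX);
    for (int i = 0; i < 3; i++)
        printf("%f\n", random_unit);
    return 0;
}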


You also seem to be confused about signed integers. int must be at least 16 bits wide, but it is often wider.
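If you are unsure what width your compiler uses, printing INT_MAX from <limits.h> settles it; a quick check:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* INT_MAX is at least 32767 (a 16-bit int), but on most desktop
       platforms it is 2147483647 (a 32-bit int). */
    printf("INT_MAX = %d\n", INT_MAX);
    return 0;
}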

#define random ( (float) rand() / (float) ((1 << 31) -1) )

On a system with 16-bit int, this macro is undefined behavior because of the 1 << 31 expression (1 is of type int).

(C99, 6.5.7p3) "If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined."
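Note that even where int is 32 bits, 1 << 31 overflows the signed type, which 6.5.7p4 also leaves undefined. If you need the constant 2147483647 portably, do the shift on an unsigned fixed-width type; a sketch (the name INT31_MAX is mine, not from the original code):

#include <stdint.h>

/* (UINT32_C(1) << 31) - 1 evaluates to 2147483647 with no undefined
   behavior: the shift happens on an unsigned type that is at least
   32 bits wide, so it never touches a sign bit. */
#define INT31_MAX ((UINT32_C(1) << 31) - 1)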
