
Difference between %d and %.d in C language

I wrote the following code and received blank output:

#include <stdio.h>

int main(void) {

    int y = 0;

    printf("%.d", y);

    return 0;
}

Instead, if I use only %d, I get 0 as the output.

What is the difference between %d and %.d?

Your suggestions are greatly appreciated!

A '.' before the conversion specifier is called the "precision". From the printf man page:

An optional precision, in the form of a period ('.') followed by an optional decimal digit string. Instead of a decimal digit string one may write "*" or "*m$" (for some decimal integer m) to specify that the precision is given in the next argument, or in the m-th argument, respectively, which must be of type int. If the precision is given as just '.', or the precision is negative, the precision is taken to be zero. This gives the minimum number of digits to appear for d, i, o, u, x, and X conversions,

and

d, i

The int argument is converted to signed decimal notation. The precision, if any, gives the minimum number of digits that must appear; if the converted value requires fewer digits, it is padded on the left with zeros. The default precision is 1. When 0 is printed with an explicit precision 0, the output is empty.

So for the example code, %.d means an integer conversion with a precision of zero. Zero precision means a minimum of zero digits. Since the value to be converted is 0, no digits are required and nothing is printed. In contrast, %d has no explicit precision, so the default precision of one applies and 0 is printed.
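The difference is easy to see by bracketing the output. Below is a minimal sketch (the extra format strings and values are illustrative additions, not part of the original question) that also demonstrates a nonzero precision and the '*' form mentioned in the man page quote:

#include <stdio.h>

int main(void) {
    int y = 0;

    printf("[%d]\n", y);      /* default precision 1: prints [0] */
    printf("[%.d]\n", y);     /* precision 0, value 0: prints [] */
    printf("[%.0d]\n", y);    /* same as %.d: prints [] */
    printf("[%.5d]\n", 42);   /* precision 5: left-padded with zeros, prints [00042] */
    printf("[%.*d]\n", 3, 7); /* precision taken from the next argument (3): prints [007] */

    return 0;
}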
