
If I enter a value above 16383 in this C program that converts decimal to binary, it doesn't work. Why?

This is a decimal to binary converter: the user inputs a decimal number and its binary version is printed. It works fine except when the input number (the variable 'a' in the code) is greater than 16383. I'm not quite sure why. Another curious thing is that when 16383 is used as the input, the binary output is just a long series of 1's. Not sure if that's a clue to the answer.

Anyways, here's the code:

#include <stdio.h>
#include <conio.h>

void main()
{
  clrscr();
  int a, x = 1;

  printf("Enter your number in base 10:");
  scanf("%d", &a); // max value: 16383 for some reason??

  /* find the smallest power of two greater than a */
  while (x <= a)
  {
    x = x * 2;
  }

  /* step back to the largest power of two <= a */
  x = x / 2;
  printf("\nBinary version:");

  if (x == 1)
    printf("1");
  else
  {
    /* print one bit per power of two, subtracting as we go */
    while (x >= 1)
    {
      if (a / x == 1)
      {
        printf("1");
        a = a - x;
      }
      else
        printf("0");

      x = x / 2;
    }
  }

  getch();
}

Are you, by any chance, working on a 16-bit machine (where sizeof(int) is 2)?

Because 16383 is 0x3fff. One more is 0x4000, which when doubled here...

while(x<=a)
{
    x=x*2;
}

...would give 0x8000, which would wrap into negative values on a 16-bit machine.

(Just in case you are not familiar with 0x..., that is hexadecimal notation, which makes it easier to see bit patterns.)
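To see the wraparound concretely, here is a minimal sketch (my addition, assuming a compiler with <stdint.h>; int16_t stands in for the 16-bit int of an old machine):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t x = 0x4000;  /* 16384, the value x reaches for an input of 16384 */

    /* On a real 16-bit compiler, x * 2 overflows a signed int.
       Here we mimic that by converting the result back to int16_t,
       which on a two's-complement machine yields -32768 (0x8000). */
    int16_t doubled = (int16_t)(x * 2);

    printf("x     = %d (0x%04x)\n", x, (unsigned)x & 0xffffu);
    printf("x * 2 = %d (0x%04x)\n", doubled, (unsigned)doubled & 0xffffu);
    return 0;
}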


int is a signed type, i.e. it can hold negative numbers. On most modern platforms, the negative values are those with their most significant bit set. That would be 0x8000-0xffff on 16-bit machines, and 0x80000000-0xffffffff on 32-bit machines.

So, ever-larger positive numbers (0x7ffe, 0x7fff) can suddenly become large negative numbers (0x8000). If you're using unsigned types (i.e. unsigned int), you get a similar "wraparound" from "really large" to "zero".

On your machine, 16383 times two is 32766.

But 16384 times two is, thanks to the limited range of numbers that can be represented in 16 bits, actually -32768, at which point your program breaks: once x wraps to a negative value it can never exceed a, so the doubling loop never terminates.
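To watch the doubling loop go wrong, here's a small sketch (again my addition, using int16_t to imitate a 16-bit int; the 20-iteration cap is only there so it doesn't run forever):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t a = 16384;
    int16_t x = 1;
    int steps = 0;

    /* Same doubling loop as the question, but capped at 20 iterations
       so we can watch x wrap instead of looping forever. */
    while (x <= a && steps < 20)
    {
        x = (int16_t)(x * 2);   /* wraps at 0x8000 on two's-complement hardware */
        printf("x = %6d\n", x);
        steps++;
    }
    return 0;
}

The printed sequence climbs 2, 4, ..., 16384, then jumps to -32768 and finally sticks at 0, so the loop condition x <= a stays true forever.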

Check the size of your integers; they may be 16 bits. The largest value a signed 16-bit integer can hold is 32767, so doubling x past 16384 overflows, which is why inputs above 16383 fail, I think.

printf("%i",sizeof(int));
