
Convert integer to binary and store it in an integer array of specified size (C++)

I want to convert an integer to a binary string and then store each bit of that string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the specified array. How do I do this in C++?

Pseudo code:

int value = ????;  // assuming a 32-bit int
int i;

for (i = 0; i < 32; ++i) {
    array[i] = (value >> i) & 1;   // array[0] ends up holding the least-significant bit
}
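A minimal runnable version of that pseudo code might look like the sketch below; the sample value 413523152 and the printing loop are just illustrative assumptions.

#include <iostream>

int main() {
    int value = 413523152;           // example input, assuming a 32-bit int
    int array[32];

    for (int i = 0; i < 32; ++i) {
        array[i] = (value >> i) & 1; // array[0] holds the least-significant bit
    }

    // print the bits back out, most-significant bit first
    for (int i = 31; i >= 0; --i) {
        std::cout << array[i];
    }
    std::cout << '\n';
}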

A more generic version uses output iterators; it writes the most-significant bit first and zero-pads any remaining elements:

#include <climits>    // CHAR_BIT
#include <iostream>
#include <iterator>   // std::begin, std::end

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
         output_iterator first, output_iterator last)
{
    const unsigned number_bits = CHAR_BIT*sizeof(int);
    //extract bits one at a time
    for(unsigned i=0; i<number_bits && first!=last; ++i) {
        const unsigned shift_amount = number_bits-i-1;
        const unsigned this_bit = (number>>shift_amount)&1;
        *first = this_bit;
        ++first;
    }
    //pad the rest with zeros
    while(first != last) {
        *first = 0;
        ++first;
    }
}

int main() {
    int number = 413523152;
    int array[32];
    convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
    for(int i=0; i<32; ++i)
        std::cout << array[i] << ' ';
}

Proof of compilation here

You could use C++'s bitset library, as follows.

#include <iostream>
#include <bitset>
using namespace std;

int main()
{
  int N;                // input number in base 10
  cin >> N;
  int O[32];            // the output array
  bitset<32> A = N;     // A will hold the binary representation of N
  for(int i = 0, j = 31; i < 32; i++, j--)
  {
     // assign the bits one by one, most-significant bit first
     O[i] = A[j];
  }
  return 0;
}

A couple of points to note here. First, the 32 in the bitset declaration tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will still have 32 bits, possibly with many leading zeroes. Second, bitset is a really flexible way of handling binary: you can give it either a string or a number as input, and you can use the bitset as an array or as a string. It's a really handy library. You can print out the bitset variable A with cout << A; and see how it works.
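For example, a short sketch of the string-facing side of std::bitset (the sample values here are just illustrative):

#include <iostream>
#include <bitset>
#include <string>

int main()
{
  std::bitset<32> A(413523152);             // construct from a number
  std::bitset<32> B(std::string("10110"));  // construct from a binary string
  std::cout << A << '\n';                   // all 32 bits, leading zeroes included
  std::cout << A.to_string() << '\n';       // the same bits as a std::string
  std::cout << B.to_ulong() << '\n';        // back to a number: prints 22
  return 0;
}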

You can do it like this:

while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1;   // shifting right by one divides by two
    index++;       // index starts at 0, so result[0] holds the least-significant bit
}
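Wrapped in a complete program, with the assumed declarations made explicit (the input value 19 is just an example):

#include <iostream>

int main() {
    int input = 19;        // example input
    int result[32] = {0};  // unused high bits stay 0
    int index = 0;

    while (input != 0) {
        result[index] = input & 1;  // current least-significant bit
        input >>= 1;                // shifting right by one divides by two
        index++;
    }

    // result[0] is the least-significant bit; print most-significant first
    for (int i = 31; i >= 0; --i)
        std::cout << result[i];
    std::cout << '\n';
}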

As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:

#include <climits>  // CHAR_BIT

// Note: arr[0] will hold the least-significant bit of x
int x = 0xdeadbeef; // your integer?
int arr[sizeof(int) * CHAR_BIT];
for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; ++i) {
  arr[i] = (x & (0x01u << i)) ? 1 : 0; // take the i-th bit
}
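A complete little test of that loop might look like the following; printing most-significant bit first and comparing against std::bitset are just illustrative choices:

#include <climits>
#include <iostream>
#include <bitset>

int main() {
  int x = 0xdeadbeef;
  const unsigned nbits = sizeof(int) * CHAR_BIT;
  int arr[sizeof(int) * CHAR_BIT];

  for (unsigned i = 0; i < nbits; ++i)
    arr[i] = (x & (0x01u << i)) ? 1 : 0;

  // print most-significant bit first and compare with bitset
  for (unsigned i = nbits; i-- > 0; )
    std::cout << arr[i];
  std::cout << '\n' << std::bitset<32>(x) << '\n';  // the two lines should match on a 32-bit int
}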

Decimal to Binary: Size independent

Two ways; both store the binary representation in a dynamically allocated array, bits (MSB first, LSB last).

First Method:

#include <limits.h>   // CHAR_BIT
#include <stdlib.h>   // calloc

int* binary(int dec){
  int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int));
  if(bits == NULL) return NULL;
  int i = 0;

  // conversion: walk from the most-significant bit down to bit 0
  int left = sizeof(int) * CHAR_BIT - 1;
  for(i = 0; left >= 0; left--, i++){
    bits[i] = !!(dec & ( 1u << left ));
  }

  return bits;
}

Second Method:

#include <limits.h>   // CHAR_BIT
#include <stdlib.h>   // calloc

int* binary(unsigned int num)
{
   unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
                      // mask = 1000 0000 ... 0000 (only the most-significant bit set)
   int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int));
   if(bits == NULL) return NULL;
   int i = 0;

   // conversion: test each bit from the most-significant down
   while(mask > 0){
     if((num & mask) == 0 )
         bits[i] = 0;
     else
         bits[i] = 1;
     mask = mask >> 1 ;  // right shift moves to the next lower bit
     i++;
   }

   return bits;
}
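Either version can be used like the sketch below (shown with the second method's signature); the caller owns the returned buffer and has to free it, and the sample value 37 is just illustrative:

#include <climits>
#include <cstdlib>
#include <iostream>

int* binary(unsigned int num);   // either of the two versions above

int main()
{
    int* bits = binary(37);
    if (bits == nullptr) return 1;

    for (std::size_t i = 0; i < sizeof(int) * CHAR_BIT; ++i)
        std::cout << bits[i];
    std::cout << '\n';           // 37 prints as ...000100101

    std::free(bits);             // the caller owns the allocation
    return 0;
}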

I know it doesn't add as many zeroes as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)

// returns an int whose decimal digits spell out the low `Padding` bits of Value,
// e.g. BinToDec(5) == 101
int BinToDec(int Value, int Padding = 8)
{
    int Bin = 0;

    for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
    {
        Bin += ((Value >> (I - 1)) & 1) * Pos;
    }

    return Bin;
}
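A quick check of what it produces (the test values are just illustrative, assuming the function above is in the same file):

#include <iostream>

int main()
{
    std::cout << BinToDec(5) << '\n';     // prints 101
    std::cout << BinToDec(12) << '\n';    // prints 1100
    std::cout << BinToDec(3, 4) << '\n';  // prints 11 (the leading zeroes of "0011" are lost in an int)
    return 0;
}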

This is what I use; it also lets you give the number of bits that will be in the final vector, and fills any unused bits with leading 0s.

#include <vector>

std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;

    // build the binary vector of minimum size, LSB at .begin() and MSB towards .end()
    while (num_to_convert_to_binary > 0)
    {
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }

    // pad the high-order end with 0s up to the requested width
    while ((int)r.size() < num_bits_in_out_vec)
        r.push_back(0);

    return r;
}
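For example, assuming to_binary from above is in the same file (the value 6 and the reversed printing just illustrate the LSB-first layout of the returned vector):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> bits = to_binary(6, 8);   // {0, 1, 1, 0, 0, 0, 0, 0}

    // print most-significant bit first: 00000110
    for (auto it = bits.rbegin(); it != bits.rend(); ++it)
        std::cout << *it;
    std::cout << '\n';
}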
