
Character strings to binary string - why are some characters multi-byte?

This code is supposed to convert character strings to binary strings, but for a few strings it returns a String with 16 binary digits, not 8 as I expected.

public class aaa {
    public static void main(String[] argv) {
        String nux = "ª";
        String nux2 = "Ø";
        String nux3 = "(";
        byte[] bites = nux.getBytes();
        byte[] bites2 = nux2.getBytes();
        byte[] bites3 = nux3.getBytes();
        System.out.println(AsciiToBinary(nux));
        System.out.println(AsciiToBinary(nux2));
        System.out.println(AsciiToBinary(nux3));
        System.out.println("number of bytes :" + bites.length);
        System.out.println("number of bytes :" + bites2.length);
        System.out.println("number of bytes :" + bites3.length);
    }

    public static String AsciiToBinary(String asciiString) {
        byte[] bytes = asciiString.getBytes();
        StringBuilder binary = new StringBuilder();
        for (byte b : bytes) {
            int val = b;
            for (int i = 0; i < 8; i++) {
                binary.append((val & 128) == 0 ? 0 : 1);
                val <<= 1;
            }
            binary.append(' ');
        }
        return binary.toString();
    }
}

For the first two strings, I don't understand why they return 2 bytes, since they are single-character strings.

Compiled here to: https://ideone.com/AbxBZ9

This returns:

11000010 10101010 
11000011 10011000 
00101000 
number of bytes :2
number of bytes :2
number of bytes :1

I am basing this on the code from: Convert A String (like testing123) To Binary In Java

NetBeans IDE 8.1

A character is not always 1 byte long. Think about it: many languages, such as Chinese or Japanese, have thousands of characters. How would you map all of those characters to single bytes?

You are using UTF-8 (one of the many, many ways of mapping characters to bytes). Looking up a character table for UTF-8 and searching for the sequence 11000010 10101010, I arrive at

U+00AA  ª   11000010 10101010

which is the UTF-8 encoding for ª. UTF-8 is often the default character encoding (charset) in Java, but you cannot rely on that. That is why you should always specify a charset when converting strings to bytes or vice versa.
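To make this concrete, here is a small sketch showing how the byte count for the same one-character string depends entirely on the charset you pass to getBytes (the class name is made up for illustration):

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String s = "ª"; // U+00AA, a single char

        // UTF-8 encodes U+00AA as two bytes: 11000010 10101010
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 2

        // ISO-8859-1 (Latin-1) covers U+0000..U+00FF with one byte each
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 1

        // UTF-16BE always uses two bytes per char
        System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length);   // 2
    }
}
```

Passing an explicit charset makes the output deterministic across machines, whereas the no-argument getBytes() uses the platform default and can differ between, say, your IDE and ideone.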

You can understand why some characters take two bytes by running this simple code:

    // integer - binary 
    System.out.println(Byte.MIN_VALUE);             
    // -128 - 0b11111111111111111111111110000000

    System.out.println(Byte.MAX_VALUE);             
    // 127 - 0b1111111

    System.out.println((int) Character.MIN_VALUE);  
    // 0   - 0b0

    System.out.println((int) Character.MAX_VALUE);  
    // 65535 - 0b1111111111111111

As you can see, Byte.MAX_VALUE can be shown with just 7 bits, i.e. one byte (01111111).

If you cast Character.MIN_VALUE to an int, it is 0, and its binary form also fits in one byte (00000000).

But what about Character.MAX_VALUE?

In binary it is 1111111111111111, which is 65535 in decimal, and it needs 2 bytes (11111111 11111111).

So characters, whose values lie between 0 and 65535, need 1 or 2 bytes.
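A short sketch tying this back to the question: a char like ª fits comfortably in one byte as a raw value, yet its UTF-8 encoding still occupies two bytes (the class name is invented for this example):

```java
public class CodePointDemo {
    public static void main(String[] args) {
        char c = 'ª'; // U+00AA, stored in one 16-bit char

        // The raw character value fits in 8 bits:
        System.out.println((int) c);                   // 170
        System.out.println(Integer.toBinaryString(c)); // 10101010

        // But UTF-8 reserves one-byte sequences for U+0000..U+007F,
        // so encoding it still takes 2 bytes:
        byte[] utf8 = "ª".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println(utf8.length);               // 2
    }
}
```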

Hope that helps.
