This is a program in which line 4 generates a compilation error (possible loss of precision; required: char, found: int):
public class test {
    public static void main(String args[]) {
        char c;
        int i;
        c = 'A';      // 1
        char ch = 32; // 2
        i = c;        // 3
        c = i + 1;    // 4
        c++;          // 5
    }
}
In line 2, char ch = 32;, we assign the value 32 to the char variable ch, yet no error is generated. I want to know the difference between these two lines: char ch = 32; and c = i + 1;.
The reason for the error is that char is 2 bytes and int is 4 bytes, and Java won't do an implicit cast where the variable would lose its high-order bits. You have to make an explicit cast from int to char.
In the other direction, assigning an int value to a 'numeric' holder works as long as you don't try to assign something out of range (more than 2^16 - 1 in this case).
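A minimal sketch of both directions (class and variable names are illustrative):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        int i = 'A';            // widening char -> int is implicit
        // char c = i;          // won't compile: possible loss of precision
        char c = (char) i;      // explicit cast keeps only the low 16 bits
        char ok = 65535;        // constant in range (2^16 - 1): compiles
        // char bad = 65536;    // constant out of range: won't compile
        System.out.println(c);  // 'A'
        System.out.println((int) ok);
    }
}
```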
In both cases you are requesting a conversion from a signed 32-bit int into an unsigned 16-bit char. In this line the compiler can verify that the constant 32 fits into a char:
char ch=32;
In this line the compiler only knows you are converting some int value to a char, so it can't be sure the result fits:
c = i + 1;
Java insists on an explicit cast operator whenever it is not 100% certain that there will be no precision loss.
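The "compiler can prove the constant fits" rule also extends to final variables, since they are compile-time constants. A small illustration (assuming Java's constant-expression rules):

```java
public class ConstantDemo {
    public static void main(String[] args) {
        final int k = 32;   // compile-time constant
        char a = k;         // allowed: the compiler proves 32 fits in char
        char b = 'A' + 1;   // constant expression (value 66), also allowed
        int i = 32;
        // char c = i;      // won't compile: i is not a constant expression
        System.out.println(a + " " + b);
    }
}
```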
Loss of precision means that int is the larger type and char the smaller one, so a value that is too large for a char cannot fit into that space. That's why you are getting that error. You need to cast the int to a char; precision may still be lost, because an int doesn't always fit into a char, but if the int value is small enough the loss can be ignored.
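A sketch of what that precision loss looks like when the value does and does not fit:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        int small = 66;
        char c1 = (char) small;       // fits in 16 bits: nothing is lost
        System.out.println(c1);       // 'B'

        int big = 65536 + 66;         // does not fit in 16 bits
        char c2 = (char) big;         // high-order bits are discarded
        System.out.println(c2 == c1); // true: both end up as 'B'
    }
}
```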
Converting shouldn't be confused with casting. In this case you are only casting. Some operations perform an implicit cast and others do not: = does not, so you have to specify the cast yourself, while ++ and += cast implicitly.
For example:
char ch = '0';
char ch2 = 2;    // implicit casting
ch *= 1.1;       // implicit casting
ch2++;           // no casting required
ch2 += 1;        // implicit casting
ch2 = ch2 + 1;   // won't compile, explicit cast required
Use this:
c = (char) (i + 1);
Note the parentheses: (char) i + 1 would cast only i, so the addition would still produce an int and the line still wouldn't compile. Also make sure that your i is in the range of char values.
As defined in the Character class, chars in Java are UTF-16 representations of characters, so char has a size of 16 bits.
In your first assignment, you're assigning 32, which can be stored in 16 bits, to a UTF-16 character, so that's fine.
But when you try to assign an arbitrary int such as i, you're converting 32-bit data to 16-bit data, an operation that can't be implicit.
short a = 32;      // fine
short b = 3243334; // won't compile
char c = 32;       // fine
char d = 3243334;  // won't compile
I am not sure I understand what it is that you want, but if it is just doing this kind of char arithmetic, you can either cast the result back to char or declare i as a char in the first place. Either of the two will work but, from the snippet you show, making i a char probably makes more sense.
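The code snippet this answer refers to is missing from the text; a sketch of the two alternatives it likely means (names are illustrative, not the original author's code):

```java
public class CharArithmetic {
    public static void main(String[] args) {
        // Option 1: keep i as an int and cast the result back to char
        int i = 'A';
        char c1 = (char) (i + 1);

        // Option 2: make the variable a char; += casts implicitly
        char j = 'A';
        j += 1;

        System.out.println(c1); // 'B'
        System.out.println(j);  // 'B'
    }
}
```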
The reason behind it is that the compiler cannot automatically convert a value of a wider (higher) data type to a narrower (lower) one.
Here int is the higher data type and char the lower one.
i + 1 evaluates to an int result, which cannot fit into a char type.
[Image: data-type automatic conversion diagram]
Higher to lower is not possible implicitly; lower to higher is.
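The two directions can be sketched in a few lines (class name is illustrative):

```java
public class WideningDemo {
    public static void main(String[] args) {
        char c = 'A';
        int i = c;          // lower to higher (widening): implicit
        // char d = i;      // higher to lower (narrowing): won't compile
        char d = (char) i;  // narrowing requires an explicit cast
        System.out.println(i + " " + d); // 65 A
    }
}
```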