
Java UTF8 encoding

I have a scenario in which some special characters are stored in a database (Sybase) in the system's default encoding, and I have to fetch this data and send it to a third party in UTF-8 encoding using a Java program.

There is a precondition that the data sent to the third party should not exceed a defined maximum size. Since upon conversion to UTF-8 a character may be represented by 2 or 3 bytes, my logic dictates that after getting the data from the database I must encode it as a UTF-8 string and then split the string. The following are my observations:

When any special character such as a Chinese or Greek character, or any character outside the ASCII range, is encountered and I convert it into UTF-8, a single character may be represented by more than 1 byte.
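For illustration, a small check of how many UTF-8 bytes different characters take (a minimal sketch; the sample characters are arbitrary):

import java.io.UnsupportedEncodingException;

public class Utf8Lengths {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // ASCII characters stay 1 byte each in UTF-8
        System.out.println("A".getBytes("UTF-8").length);   // 1
        // Greek characters typically take 2 bytes
        System.out.println("α".getBytes("UTF-8").length);   // 2
        // Chinese characters typically take 3 bytes
        System.out.println("中".getBytes("UTF-8").length);   // 3
    }
}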

So how can I be sure that the conversion is proper? For the conversion I am using the following:

// store the data fetched from the database in a String
String s = getDataFromTheDatabase();   // pseudocode placeholder for the database fetch

// convert all the data into a byte array with UTF-8 encoding
byte[] b = s.getBytes("UTF-8");

// create a new String, since my split logic works on the String format
String newString = new String(b, "UTF-8");

But when I output this newString to the console I get ? for the special characters.

So I have some doubts:

  • If my conversion logic is wrong, how can I correct it?
  • After converting to UTF-8, how can I double-check whether the conversion is OK, i.e. whether it is the correct message to send to the third party? I assume that if the message is not human-readable after conversion, then there is some problem with the conversion. (A quick round-trip check is sketched below.)
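For the second doubt, one check that is easy to run: encode to UTF-8, decode back, and compare with the original. A minimal sketch, assuming the value has already been read correctly from the database; it verifies the encode/decode round trip itself, not how the third party will interpret the bytes:

import java.io.UnsupportedEncodingException;

public class RoundTripCheck {
    // true if encoding to UTF-8 and decoding back reproduces the original string
    static boolean survivesUtf8RoundTrip(String s) throws UnsupportedEncodingException {
        byte[] utf8 = s.getBytes("UTF-8");
        return s.equals(new String(utf8, "UTF-8"));
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(survivesUtf8RoundTrip("中文 and ελληνικά"));   // expect: true
    }
}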

I would like to have some points of view from the experts out there.

Please do let me know if any further info is needed from my side.

You say you're writing the Unicode to a text file, but that requires a conversion from Unicode.

But a conversion to what? That depends on how you open the file.

For example, System.out.println(myUnicodeString) will convert the Unicode to the encoding that System.out was constructed with, most likely your platform's default encoding. If you're running Windows, then this is likely to be windows-1252.
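If the console itself should receive UTF-8 (and your terminal is configured to display it), you can wrap System.out explicitly. A minimal sketch, assuming a UTF-8-capable terminal:

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8Console {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // wrap System.out in a PrintStream that encodes to UTF-8 instead of the platform default
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println("中文 ελληνικά");
    }
}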

If you tell Java to use UTF-8 encoding when it writes to a file, you'll get a file containing UTF-8:

PrintWriter pw = new PrintWriter(new OutputStreamWriter(new FileOutputStream("filename.txt"), "UTF-8"));
pw.println(myUnicodeString);
pw.close();   // flush and release the file

Please use a hex editor to verify whether your output is correctly formatted UTF-8. There is no other way to tell for sure if what you see is correct or not.
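If you would rather check from within Java than with a hex editor, dumping the UTF-8 bytes in hex works too; a minimal sketch with an arbitrary sample character:

import java.io.UnsupportedEncodingException;

public class HexDump {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] utf8 = "中".getBytes("UTF-8");
        // print each byte as two hex digits; "中" should come out as E4 B8 AD
        for (byte b : utf8) {
            System.out.printf("%02X ", b & 0xFF);
        }
        System.out.println();
    }
}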

And read this if you have not already: http://www.joelonsoftware.com/articles/Unicode.html

Use this for proper conversion - this one is from ISO-8859-1 to UTF-8:

public String to_utf8(String fieldvalue) throws UnsupportedEncodingException {
    // re-interpret the string's ISO-8859-1 bytes as UTF-8
    String fieldvalue_utf8 = new String(fieldvalue.getBytes("ISO-8859-1"), "UTF-8");
    return fieldvalue_utf8;
}

Java strings are Unicode, but not all Java components support full Unicode strings, especially AWT components and lightweight Swing components. So you may have perfectly good strings, but get junk in your console output.

Thanks all for your replies.

As suggested by some of you, I already tried writing it to a text file; however, in the text file I also got ? for my special characters. So I have the following observations:

a) Encoding is a two-fold process: first you convert the string from one encoding to another at the byte level, and then you also need a font that can display the new character set.

b) If we are encoding some string, that means we are encoding the bytes. In the current scenario, I am pasting the double quotes from MS Word and inserting them into a Sybase database. After fetching the data from the DB, I write it to a txt file, where I again get ? for the double quotes; however, if I directly copy the same data from the DB into MS Word or EditPlus, I can see the actual characters. So I am not able to comprehend this problem. As per my understanding, during encoding we should only be concerned about the byte values, which are the real representation, and not the String object which we construct out of these byte arrays. However, if my encoded information is not human-readable, how can the other party validate and read it? (I am guessing they would be reading bytes, but if a special character has been replaced by some junk character like ? during UTF-8 encoding, then isn't that a loss of information?)

I would really appreciate your views on my observations, and what correct approach should I follow from here?
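Tying this back to the original size constraint: one possible approach is to split on character boundaries while counting UTF-8 bytes per code point, and then write each chunk with an explicit UTF-8 writer. A minimal sketch, with the byte limit, sample data, and file name as placeholders:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;   // Java 7+; on older JDKs use the "UTF-8" charset name
import java.util.ArrayList;
import java.util.List;

public class Utf8Splitter {

    // split the input into chunks whose UTF-8 encoding is at most maxBytes,
    // never cutting a character (or surrogate pair) in half
    static List<String> splitByUtf8Bytes(String s, int maxBytes) {
        List<String> chunks = new ArrayList<String>();
        int start = 0;
        while (start < s.length()) {
            int end = start;
            int bytes = 0;
            while (end < s.length()) {
                int cp = s.codePointAt(end);
                int cpChars = Character.charCount(cp);
                int cpBytes = new String(Character.toChars(cp)).getBytes(StandardCharsets.UTF_8).length;
                if (bytes + cpBytes > maxBytes) {
                    break;
                }
                bytes += cpBytes;
                end += cpChars;
            }
            if (end == start) {
                // even a single character does not fit into maxBytes
                throw new IllegalArgumentException("maxBytes too small for character at index " + start);
            }
            chunks.add(s.substring(start, end));
            start = end;
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        String data = "some text with 中文 and ελληνικά";   // stand-in for the database value
        List<String> chunks = splitByUtf8Bytes(data, 10);    // 10-byte budget, just for the demo

        // write each chunk as UTF-8 so the file really contains UTF-8 bytes
        Writer w = new OutputStreamWriter(new FileOutputStream("chunks.txt"), StandardCharsets.UTF_8);
        try {
            for (String chunk : chunks) {
                w.write(chunk);
                w.write('\n');
            }
        } finally {
            w.close();
        }
    }
}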
