
UTF-8 character encoding in Java

I am having some problems getting some French text to convert to UTF-8 so that it can be displayed properly, whether in a console, a text file, or a GUI element.

The original string is

HANDICAP╔ES

which is supposed to be

HANDICAPÉES

Here is a code snippet that shows how I am using the jackcess database driver to read in the Access MDB file in an Eclipse/Linux environment.

Database database = Database.open(new File(filepath));
Table table = database.getTable(tableName, true);
Iterator<Map<String, Object>> rowIter = table.iterator();
while (rowIter.hasNext()) {
    Map<String, Object> row = rowIter.next();
    // convert fields to UTF
    Map<String, Object> rowUTF = new HashMap<String, Object>();
    try {
        for (String key : row.keySet()) {
            Object o = row.get(key);
            if (o != null) {
                String valueCP850 = o.toString();
                // String nameUTF8 = new String(valueCP850.getBytes("CP850"), "UTF8"); // does not work!
                String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");
                String valueUTF8 = new String(valueISO.getBytes(), "UTF-8"); // works!
                rowUTF.put(key, valueUTF8);
            }
        }
    } catch (UnsupportedEncodingException e) {
        System.err.println("Encoding exception: " + e);
    }   
}

In the code you'll see where I want to convert directly to UTF8, which doesn't seem to work, so I have to do a double conversion. Also note that there doesn't seem to be a way to specify the encoding type when using the jackcess driver.

Thanks, Cam

New analysis, based on new information.
It looks like your problem is with the encoding of the text before it was stored in the Access DB. It seems it had been encoded as ISO-8859-1 or windows-1252, but decoded as cp850, resulting in the string HANDICAP╔ES being stored in the DB.
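For illustration, here is a minimal sketch that reproduces that kind of corruption (it assumes windows-1252 on the writing side, as guessed above; ISO-8859-1 would behave the same for É):

import java.nio.charset.Charset;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "HANDICAPÉES";
        // É is byte 0xC9 in windows-1252 (and in ISO-8859-1)...
        byte[] bytes = original.getBytes(Charset.forName("windows-1252"));
        // ...but 0xC9 is '╔' in cp850, so decoding with the wrong charset corrupts it.
        String corrupted = new String(bytes, Charset.forName("Cp850"));
        System.out.println(corrupted); // HANDICAP╔ES
    }
}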

Having correctly retrieved that string from the DB, you're now trying to reverse the original encoding error and recover the string as it should have been stored: HANDICAPÉES. And you're accomplishing that with this line:

String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");

getBytes("CP850") converts the character to the byte value 0xC9 , and the String constructor decodes that according to ISO-8859-1, resulting in the character É . The next line:

String valueUTF8 = new String(valueISO.getBytes(), "UTF-8");

...does nothing. getBytes() encodes the string in the platform default encoding, which is UTF-8 on your Linux system. Then the String constructor decodes it with the same encoding. Delete that line and you should still get the same result.
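If you want to convince yourself of that, here is a tiny sketch (the variable name just mirrors the one in your code):

import java.nio.charset.StandardCharsets;

public class RoundTripDemo {
    public static void main(String[] args) {
        String valueISO = "HANDICAPÉES";
        // Encoding and then decoding with the same charset gives back an equal string.
        String roundTrip = new String(valueISO.getBytes(StandardCharsets.UTF_8),
                StandardCharsets.UTF_8);
        System.out.println(roundTrip.equals(valueISO)); // true
    }
}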

More to the point, your attempt to create a "UTF-8 string" was misguided. You don't need to concern yourself with the encoding of Java's strings--they're always UTF-16. When bringing text into a Java app, you just need to make sure you decode it with the correct encoding.
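For example, decoding a byte sequence at the boundary looks like this (the byte values are hard-coded purely for illustration):

import java.nio.charset.StandardCharsets;

public class DecodeAtBoundary {
    public static void main(String[] args) {
        // The ISO-8859-1/windows-1252 bytes of "HANDICAPÉES".
        byte[] raw = { 'H', 'A', 'N', 'D', 'I', 'C', 'A', 'P', (byte) 0xC9, 'E', 'E', 'S' };
        // Decode with the charset that actually produced the bytes...
        String correct = new String(raw, StandardCharsets.ISO_8859_1);
        System.out.println(correct); // HANDICAPÉES
        // ...not with some other one, which is exactly the cp850 mistake.
    }
}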

And if my analysis is correct, your Access driver is decoding it correctly; the problem is at the other end, possibly before the DB even comes into the picture. That's what you need to fix, because that new String(getBytes()) hack can't be counted on to work in all cases.


Original analysis, based on no information. :-/
If you're seeing HANDICAP╔ES on the console, there's probably no problem. Given this code:

System.out.println("HANDICAPÉES");

The JVM converts the (Unicode) string to the platform default encoding, windows-1252, before sending it to the console. Then the console decodes that using its own default encoding, which happens to be cp850. So the console displays it wrong, but that's normal. If you want it to display correctly, you can change the console's encoding with this command:

CHCP 1252
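Alternatively, on the Java side, you could wrap System.out in a PrintStream that encodes with the console's code page instead of the platform default; this sketch assumes the console really is using cp850:

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class ConsoleEncodingDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Encode output as cp850 so the console decodes it correctly.
        PrintStream out = new PrintStream(System.out, true, "Cp850");
        out.println("HANDICAPÉES");
    }
}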

To display the string in a GUI element, such as a JLabel, you don't have to do anything special. Just make sure you use a font that can display all the characters, but that shouldn't be a problem for French.
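A minimal Swing sketch, just to show there's nothing encoding-specific to do (the frame title and sizes are arbitrary):

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingConstants;
import javax.swing.SwingUtilities;

public class LabelDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            // No encoding work needed: the String is already correct Unicode.
            frame.add(new JLabel("HANDICAPÉES", SwingConstants.CENTER));
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}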

As for writing to a file, just specify the desired encoding when you create the Writer:

OutputStreamWriter osw = new OutputStreamWriter(
    new FileOutputStream("myFile.txt"), "UTF-8");
osw.write("HANDICAPÉES");
osw.close();


String s = "HANDICAP╔ES";
System.out.println(new String(s.getBytes("CP850"), "ISO-8859-1")); // HANDICAPÉES

This shows the correct string value. It means the text was originally encoded with ISO-8859-1 and then incorrectly decoded with CP850 (as pointed out in a comment, CP1252, aka Windows ANSI, is equally possible, since É has the same code point there as in ISO-8859-1).

Align your environment and binary pipelines so that they all use one and the same character encoding. You can't and shouldn't convert between them; you would risk losing information in the non-ASCII range that way.

Note: do NOT use the above code snippet to "fix" the problem! That would not be the right solution.


Update: you are apparently still struggling with the problem. I'll repeat the important parts of the answer:

  1. Align your environment and binary pipelines so that they all use one and the same character encoding.

  2. You cannot and should not convert between them. You would risk losing information in the non-ASCII range that way.

  3. Do NOT use the above code snippet to "fix" the problem! That would not be the right solution.

To fix the problem you need to choose a character encoding X which you'd like to use throughout the entire application. I suggest UTF-8. Update MS Access to use encoding X. Update your development environment to use encoding X. Update the java.io readers and writers in your code to use encoding X. Update your editor to read/write files with encoding X. Update the application's user interface to use encoding X. Do not use Y or Z or whatever at some step. If the characters are already corrupted in some datastore (MS Access, files, etc.), then you need to fix it by manually replacing the characters right there in the datastore. Do not use Java for this.
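For the java.io readers and writers in that list, passing the charset explicitly looks roughly like this (the file names are placeholders):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class Utf8Everywhere {
    public static void main(String[] args) throws IOException {
        // Read and write with one explicit encoding (UTF-8 here) instead of the platform default.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                 new FileInputStream("in.txt"), StandardCharsets.UTF_8));
             BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                 new FileOutputStream("out.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.newLine();
            }
        }
    }
}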

If you're actually using the "command prompt" as user interface, then you're actually lost. It doesn't support UTF-8. As suggested in the comments and in the article linked in the comments, you need to create a Swing application instead of relying on the restricted command prompt environment.

You can specify the encoding when establishing the connection. This approach worked perfectly and solved my encoding problem:

    DatabaseImpl open = DatabaseImpl.open(new File("main.mdb"), true, null,
            Database.DEFAULT_AUTO_SYNC,
            java.nio.charset.Charset.availableCharsets().get("windows-1251"),
            null, null);
    Table table = open.getTable("FolderInfo");
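If you'd rather stay on the public API than use DatabaseImpl, recent jackcess versions expose the same option through DatabaseBuilder; a sketch, assuming the 2.x builder API with setCharset:

import java.io.File;
import java.nio.charset.Charset;

import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;
import com.healthmarketscience.jackcess.Table;

public class OpenWithCharset {
    public static void main(String[] args) throws Exception {
        // Open the MDB with an explicit charset instead of the default one.
        Database db = new DatabaseBuilder(new File("main.mdb"))
                .setCharset(Charset.forName("windows-1251"))
                .setReadOnly(true)
                .open();
        Table table = db.getTable("FolderInfo");
        System.out.println(table.getRowCount());
        db.close();
    }
}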

Using "ISO-8859-1" helped me handle the French characters.
