
Java UTF-8 to ASCII conversion with supplements

We are accepting all sorts of national characters in a UTF-8 string on the input, and we need to convert them to an ASCII string on the output for some legacy use. (We don't accept Chinese and Japanese characters, only European languages.)

We have a small utility to get rid of all the diacritics:

// Normalizes each character (java.text.Normalizer) and keeps only the first byte of its UTF-8 encoding.
public static final String toBaseCharacters(final String sText) {
    if (sText == null || sText.length() == 0)
        return sText;

    final char[] chars = sText.toCharArray();
    final int iSize = chars.length;
    final StringBuilder sb = new StringBuilder(iSize);

    for (int i = 0; i < iSize; i++) {
        String sLetter = new String(new char[] { chars[i] });
        sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFC);

        try {
            byte[] bLetter = sLetter.getBytes("UTF-8");
            sb.append((char) bLetter[0]);
        } catch (UnsupportedEncodingException e) {
            // cannot happen: UTF-8 is always supported
        }
    }
    return sb.toString();
}

The question is how to replace the German sharp s (ß) and other characters that get through the above normalization method, such as Đ and đ, with their supplements (in the case of ß, the supplement would probably be "ss", and in the case of Đ the supplement would be either "D" or "Dj").

Is there some simple way to do it, without a million .replaceAll() calls?

So for example: Đonardan = Djonardan, Blaß = Blass and so on.

We could replace all "problematic" chars with an empty space, but we would like to avoid this to keep the output as similar to the input as possible.

Thank you for your answers,

Bozo

You want to use ICU4J. It includes the com.ibm.icu.text.Transliterator class, which apparently can do what you are looking for.

I am using something like this:

Transliterator transliterator = Transliterator.getInstance("Any-Latin; Upper; Lower; NFD; [:Nonspacing Mark:] Remove; NFC", Transliterator.FORWARD);
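
For reference, here is a minimal, self-contained sketch of how that transliterator could be used (the class and variable names are just for illustration, and the printed results are not asserted because the exact folding depends on the ICU4J version):

import com.ibm.icu.text.Transliterator;

public class TransliteratorDemo {
    public static void main(String[] args) {
        // Compound ID from the line above: convert to Latin, upper- then lower-case
        // (which should expand ß to "ss"), decompose, drop combining marks, recompose.
        Transliterator toAscii = Transliterator.getInstance(
                "Any-Latin; Upper; Lower; NFD; [:Nonspacing Mark:] Remove; NFC",
                Transliterator.FORWARD);

        System.out.println(toAscii.transliterate("Blaß"));
        System.out.println(toAscii.transliterate("Đonardan"));
    }
}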

Here's my converter, which uses Lucene...

// Note: KeywordTokenizer, ASCIIFoldingFilter and TermAttribute here are from the older
// Lucene (3.x) analysis API; TermAttribute/term() were replaced by CharTermAttribute in Lucene 4.
private final KeywordTokenizer keywordTokenizer = new KeywordTokenizer(new StringReader(""));
private final ASCIIFoldingFilter asciiFoldingFilter = new ASCIIFoldingFilter(keywordTokenizer);
private final TermAttribute termAttribute = (TermAttribute) asciiFoldingFilter.getAttribute(TermAttribute.class);

public String process(String line)
{
    if (line != null)
    {
        try
        {
            keywordTokenizer.reset(new StringReader(line));
            if (asciiFoldingFilter.incrementToken())
            {
                return termAttribute.term();
            }
        }
        catch (IOException e)
        {
            logger.warn("Failed to parse: " + line, e);
        }
    }
    return null;
}
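
For completeness, calling the converter might look like this; the wrapper class name AsciiFolder is hypothetical (the fields and process(...) method are the ones above), and the comments only describe what ASCIIFoldingFilter is generally expected to produce:

// Hypothetical wrapper class holding the fields and process(...) method shown above.
AsciiFolder folder = new AsciiFolder();

System.out.println(folder.process("Blaß"));     // ASCIIFoldingFilter is expected to fold ß to "ss"
System.out.println(folder.process("Đonardan")); // and Đ to "D"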

Is there some simple way to do it, without a million .replaceAll() calls?

If you only need to support European, Latin-based languages, around 100 replacements should be enough; that's definitely doable: grab the Unicode charts for Latin-1 Supplement and Latin Extended-A and get the String.replace party started. :-)
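
If you take that route, a single pass over the string with a small lookup table avoids chaining dozens of replaceAll() calls. The sketch below is only an illustration: the class name is made up, the three seed mappings (ß, Đ, đ) come from the question, and the remaining entries would be filled in from the Latin-1 Supplement and Latin Extended-A charts:

import java.util.HashMap;
import java.util.Map;

public final class AsciiSupplements {
    // Partial table; extend with the ~100 entries from Latin-1 Supplement / Latin Extended-A.
    private static final Map<Character, String> SUPPLEMENTS = new HashMap<Character, String>();
    static {
        SUPPLEMENTS.put('ß', "ss");
        SUPPLEMENTS.put('Đ', "Dj");
        SUPPLEMENTS.put('đ', "dj");
        // ... more mappings ...
    }

    public static String replaceSupplements(String input) {
        if (input == null || input.length() == 0) {
            return input;
        }
        StringBuilder sb = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            String supplement = SUPPLEMENTS.get(c);
            if (supplement != null) {
                sb.append(supplement); // multi-character supplement, e.g. ß -> "ss", Đ -> "Dj"
            } else {
                sb.append(c); // leave the rest for the existing diacritic stripping
            }
        }
        return sb.toString();
    }
}

Running replaceSupplements() before the toBaseCharacters() method from the question would turn "Đonardan" into "Djonardan" and "Blaß" into "Blass" in a single scan of the input.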

