How to find out Chinese or Japanese Character in a String in Python?

Such as:

str = 'sdf344asfasf天地方益3権sdfsdf'

Add parentheses () around the Chinese and Japanese characters:

strAfterConvert = 'sdf344asfasf(天地方益)3(権)sdfsdf'

As a start, you can check whether each character falls in one of the CJK-related Unicode blocks; the ranges list in the code below enumerates the relevant blocks.

After that, all you need to do is iterate through the string, check whether each char is Chinese, Japanese or Korean (CJK), and append accordingly:

# -*- coding: utf-8 -*-
ranges = [
    {"from": ord(u"\u3300"), "to": ord(u"\u33ff")},         # CJK Compatibility
    {"from": ord(u"\ufe30"), "to": ord(u"\ufe4f")},         # CJK Compatibility Forms
    {"from": ord(u"\uf900"), "to": ord(u"\ufaff")},         # CJK Compatibility Ideographs
    {"from": ord(u"\U0002F800"), "to": ord(u"\U0002fa1f")}, # CJK Compatibility Ideographs Supplement
    {"from": ord(u"\u3040"), "to": ord(u"\u309f")},         # Japanese Hiragana
    {"from": ord(u"\u30a0"), "to": ord(u"\u30ff")},         # Japanese Katakana
    {"from": ord(u"\u2e80"), "to": ord(u"\u2eff")},         # CJK Radicals Supplement
    {"from": ord(u"\u4e00"), "to": ord(u"\u9fff")},         # CJK Unified Ideographs
    {"from": ord(u"\u3400"), "to": ord(u"\u4dbf")},         # CJK Unified Ideographs Extension A
    {"from": ord(u"\U00020000"), "to": ord(u"\U0002a6df")}, # CJK Unified Ideographs Extension B
    {"from": ord(u"\U0002a700"), "to": ord(u"\U0002b73f")}, # CJK Unified Ideographs Extension C
    {"from": ord(u"\U0002b740"), "to": ord(u"\U0002b81f")}, # CJK Unified Ideographs Extension D
    {"from": ord(u"\U0002b820"), "to": ord(u"\U0002ceaf")}  # CJK Unified Ideographs Extension E (included as of Unicode 8.0)
]

def is_cjk(char):
    # Named `rng` rather than `range` so the builtin is not shadowed.
    return any(rng["from"] <= ord(char) <= rng["to"] for rng in ranges)

def cjk_substrings(string):
    i = 0
    while i < len(string):
        if is_cjk(string[i]):
            start = i
            # Bounds check guards against an IndexError when the string ends in CJK.
            while i < len(string) and is_cjk(string[i]):
                i += 1
            yield string[start:i]
        i += 1

string = u"sdf344asfasf天地方益3権sdfsdf"
for sub in cjk_substrings(string):
    string = string.replace(sub, "(" + sub + ")")
print(string)

The above prints:

sdf344asfasf(天地方益)3(権)sdfsdf

To be future-proof, you might want to keep a lookout for CJK Unified Ideographs Extension E. It will ship with Unicode 8.0, which is scheduled for release in June 2015. I've added it to the ranges, but you shouldn't include it until Unicode 8.0 is released.

[EDIT]

Added CJK compatibility ideographs, Japanese kana, and CJK radicals.

You can do the edit using the regex package, which supports checking the Unicode "Script" property of each character and is a drop-in replacement for the re package:

import regex as re

pattern = re.compile(r'([\p{IsHan}\p{IsBopo}\p{IsHira}\p{IsKatakana}]+)', re.UNICODE)

text = u'sdf344asfasf天地方益3権sdfsdf'   # renamed from `input` to avoid shadowing the builtin
output = pattern.sub(r'(\1)', text)
print(output)  # Prints: sdf344asfasf(天地方益)3(権)sdfsdf

You should adjust the \p{Is...} sequences to cover the character scripts/blocks that you consider to be "Chinese or Japanese".
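
For instance, a hypothetical variant that also wraps Korean runs could add the Hangul script to the character class. This is a sketch following the same Is... naming convention; the IsHangul property name and the sample string are assumptions, not part of the original answer:

import regex as re

# Sketch: the original pattern extended with the Hangul script,
# so Korean runs are wrapped alongside Chinese and Japanese ones.
pattern = re.compile(
    r'([\p{IsHan}\p{IsBopo}\p{IsHira}\p{IsKatakana}\p{IsHangul}]+)',
    re.UNICODE)

print(pattern.sub(r'(\1)', u'sdf344asfasf天地方益3権한글sdfsdf'))
# sdf344asfasf(天地方益)3(権한글)sdfsdf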

From one of the bleeding-edge branches of NLTK, inspired by the Moses Machine Translation Toolkit:

def is_cjk(character):
    """"
    Checks whether character is CJK.

        >>> is_cjk(u'\u33fe')
        True
        >>> is_cjk(u'\uFE5F')
        False

    :param character: The character that needs to be checked.
    :type character: char
    :return: bool
    """
    return any([start <= ord(character) <= end for start, end in 
                [(4352, 4607), (11904, 42191), (43072, 43135), (44032, 55215), 
                 (63744, 64255), (65072, 65103), (65381, 65500), 
                 (131072, 196607)]
                ])
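
A quick sanity check of this predicate (a sketch; the expected output follows directly from the ranges above):

>>> [ch for ch in u'abc天地方益3権' if is_cjk(ch)]
['天', '地', '方', '益', '権']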

For the specifics of the ord() numbers:

class CJKChars(object):
    """
    An object that enumerates the code points of the CJK characters as listed on
    http://en.wikipedia.org/wiki/Basic_Multilingual_Plane#Basic_Multilingual_Plane

    This is a Python port of the CJK code point enumerations of Moses tokenizer:
    https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/detokenizer.perl#L309
    """
    # Hangul Jamo (1100–11FF)
    Hangul_Jamo = (4352, 4607) # (ord(u"\u1100"), ord(u"\u11ff"))

    # CJK Radicals Supplement (2E80–2EFF)
    # Kangxi Radicals (2F00–2FDF)
    # Ideographic Description Characters (2FF0–2FFF)
    # CJK Symbols and Punctuation (3000–303F)
    # Hiragana (3040–309F)
    # Katakana (30A0–30FF)
    # Bopomofo (3100–312F)
    # Hangul Compatibility Jamo (3130–318F)
    # Kanbun (3190–319F)
    # Bopomofo Extended (31A0–31BF)
    # CJK Strokes (31C0–31EF)
    # Katakana Phonetic Extensions (31F0–31FF)
    # Enclosed CJK Letters and Months (3200–32FF)
    # CJK Compatibility (3300–33FF)
    # CJK Unified Ideographs Extension A (3400–4DBF)
    # Yijing Hexagram Symbols (4DC0–4DFF)
    # CJK Unified Ideographs (4E00–9FFF)
    # Yi Syllables (A000–A48F)
    # Yi Radicals (A490–A4CF)
    CJK_Radicals = (11904, 42191) # (ord(u"\u2e80"), ord(u"\ua4cf"))

    # Phags-pa (A840–A87F)
    Phags_Pa = (43072, 43135) # (ord(u"\ua840"), ord(u"\ua87f"))

    # Hangul Syllables (AC00–D7AF)
    Hangul_Syllables = (44032, 55215) # (ord(u"\uAC00"), ord(u"\uD7AF"))

    # CJK Compatibility Ideographs (F900–FAFF)
    CJK_Compatibility_Ideographs = (63744, 64255) # (ord(u"\uF900"), ord(u"\uFAFF"))

    # CJK Compatibility Forms (FE30–FE4F)
    CJK_Compatibility_Forms = (65072, 65103) # (ord(u"\uFE30"), ord(u"\uFE4F"))

    # Range U+FF65–FFDC encodes halfwidth forms of Katakana and Hangul characters
    Katakana_Hangul_Halfwidth = (65381, 65500) # (ord(u"\uFF65"), ord(u"\uFFDC"))

    # Supplementary Ideographic Plane 20000–2FFFF
    Supplementary_Ideographic_Plane = (131072, 196607) # (ord(u"\U00020000"), ord(u"\U0002FFFF"))

    ranges = [Hangul_Jamo, CJK_Radicals, Phags_Pa, Hangul_Syllables, 
              CJK_Compatibility_Ideographs, CJK_Compatibility_Forms, 
              Katakana_Hangul_Halfwidth, Supplementary_Ideographic_Plane]
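
The tuples in CJKChars.ranges are exactly what is_cjk() above hard-codes; a minimal sketch that derives the same predicate from the class instead of the literal numbers (the is_cjk_from_class name is ours, not NLTK's):

def is_cjk_from_class(character):
    # Same membership test as is_cjk(), but driven by CJKChars.ranges.
    code_point = ord(character)
    return any(start <= code_point <= end for start, end in CJKChars.ranges)

assert is_cjk_from_class(u'天')      # CJK Unified Ideograph
assert not is_cjk_from_class(u'a')   # plain ASCII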

Combining the is_cjk() in this answer with @EvenLisle's substring answer:

>>> from nltk.tokenize.util import is_cjk
>>> text = u'sdf344asfasf天地方益3権sdfsdf'
>>> [1 if is_cjk(ch) else 0 for ch in text]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
>>> def cjk_substrings(string):
...     i = 0
...     while i < len(string):
...         if is_cjk(string[i]):
...             start = i
...             while i < len(string) and is_cjk(string[i]):
...                 i += 1
...             yield string[start:i]
...         i += 1
... 
>>> string = "sdf344asfasf天地方益3権sdfsdf".decode("utf-8")
>>> for sub in cjk_substrings(string):
...     string = string.replace(sub, "(" + sub + ")")
... 
>>> string
'sdf344asfasf(天地方益)3(権)sdfsdf'
>>> print(string)
sdf344asfasf(天地方益)3(権)sdfsdf

If you can't use the regex module that provides access to the IsKatakana and IsHan properties, as shown in @一二三's answer, you could use the character ranges from @EvenLisle's answer with stdlib's re module (which does not support \p{...} escapes, hence the explicit code point ranges):

>>> import re
>>> print(re.sub(u"([\u3300-\u33ff\ufe30-\ufe4f\uf900-\ufaff\U0002f800-\U0002fa1f\u30a0-\u30ff\u2e80-\u2eff\u4e00-\u9fff\u3400-\u4dbf\U00020000-\U0002a6df\U0002a700-\U0002b73f\U0002b740-\U0002b81f\U0002b820-\U0002ceaf]+)", r"(\1)", u'sdf344asfasf天地方益3権sdfsdf'))
sdf344asfasf(天地方益)3(権)sdfsdf

Beware of known issues.

You could also check the Unicode category:

>>> import unicodedata
>>> unicodedata.category(u'天')
'Lo'
>>> unicodedata.category(u's')
'Ll'
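
The 'Lo' (Letter, other) category alone is too broad, since it also matches many non-CJK scripts such as Hebrew or Thai letters. A narrower stdlib-only heuristic is to inspect the character's Unicode name. This is a sketch, not from the original answers; looks_cjk is a hypothetical helper:

import unicodedata

def looks_cjk(char):
    # Unified ideographs are named like 'CJK UNIFIED IDEOGRAPH-5929',
    # kana like 'KATAKANA LETTER A'; unnamed characters default to ''.
    name = unicodedata.name(char, '')
    return 'CJK' in name or 'HIRAGANA' in name or 'KATAKANA' in name

print([ch for ch in u'sdf344asfasf天地方益3権sdfsdf' if looks_cjk(ch)])
# ['天', '地', '方', '益', '権']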
